Riverbed https://www.riverbed.com/ Digital Experience Innovation & Acceleration

Elevating Digital Performance with Holistic End-to-End Visibility https://www.riverbed.com/blogs/elevating-digital-performance-with-end-to-end-visibility/ Wed, 04 Sep 2024 12:30:40 +0000 In today’s digital economy, delivering exceptional user experiences is paramount. To achieve this, organizations must have a comprehensive understanding of their digital infrastructure’s performance.

Riverbed’s integrated end-to-end visibility solutions offer a powerful platform to gain unparalleled visibility into application, network, and end-user experiences. By providing real-time insights and actionable intelligence, Riverbed empowers businesses to proactively identify and resolve performance issues, optimize resource utilization, and ultimately, achieve their business goals.

Why Riverbed’s visibility solutions stand out

  • Keep Users Happy: By providing real-time insights into user experience, Riverbed helps organizations identify and address performance issues before users even notice them, ensuring a smooth and satisfying experience.
  • Boost IT Efficiency: Riverbed’s solutions streamline IT operations by automating incident detection and resolution, reducing mean time to repair (MTTR), and optimizing resource utilization.
  • Cut Costs: By identifying performance bottlenecks and optimizing resource allocation, Riverbed helps you lower IT expenses while getting the most out of your infrastructure investments.
  • Reduce Business Risk: Riverbed’s proactive approach to performance management means fewer service outages and less impact from potential disruptions, keeping your business running smoothly.
  • Make Smarter Decisions: The advanced analytics offered by Riverbed provide you with data-driven insights, helping you make informed decisions that drive continuous improvement.

Why choose Riverbed?

Riverbed offers a clear advantage for organizations looking to elevate their digital performance:

  • Comprehensive Visibility: Gain unmatched insight across your entire IT environment, from user devices to applications and networks. This comprehensive view enables organizations to identify and address performance issues at their root cause.
  • Advanced Analytics: Use AI-powered analytics for proactive infrastructure optimization based on performance trends.
  • Seamless Integration: Enjoy hassle-free integration with your existing IT systems, minimizing disruption and maximizing ROI.
  • Scalability: Riverbed’s solutions can scale to meet the needs of organizations of all sizes, from small businesses to large enterprises.
  • Exceptional Support: Benefit from Riverbed’s strong customer support and professional services to ensure you maximize the value of your investment.

How our solutions work

Riverbed’s visibility solutions collect data from multiple sources—user devices, networks, applications, and cloud infrastructure. This data is then processed and analyzed using advanced analytics and AI techniques to identify performance trends and anomalies. The platform alerts your IT team in real-time, providing visualizations that help them quickly identify and fix issues.
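To make the anomaly-detection idea concrete, here is a deliberately simple statistical sketch: flagging samples that sit far from a baseline. This is an invented illustration, not Riverbed’s actual analytics, which apply far more sophisticated AI techniques; the function name, threshold, and data are all made up for the example.

```python
from statistics import mean, stdev

def find_anomalies(samples, z_threshold=2.5):
    """Flag samples that deviate from the baseline mean by more than
    z_threshold standard deviations. A toy stand-in for the platform's
    far more sophisticated AI-driven analytics."""
    mu, sigma = mean(samples), stdev(samples)
    if sigma == 0:
        return []
    return [(i, x) for i, x in enumerate(samples)
            if abs(x - mu) / sigma > z_threshold]

# Response times in ms; the 900 ms spike stands out against the baseline.
latencies = [102, 98, 105, 99, 101, 97, 900, 103, 100, 98]
print(find_anomalies(latencies))  # → [(6, 900)]
```

In a real platform the baseline itself is learned over time and per metric; the point here is only that an anomaly is a deviation from expected behavior, not a fixed threshold.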

chart featuring Riverbed Platform modules

Key components of these solutions include:

  • End-user experience monitoring: Understand how your users are interacting with applications and quickly address any performance hiccups.
  • Network performance monitoring: Keep your network healthy by identifying and resolving bottlenecks.
  • Application performance monitoring: Track how well your applications are performing and tackle any issues impacting user experience.
  • Infrastructure monitoring: Keep an eye on your servers, storage, and other critical IT components to ensure they’re performing optimally.
  • Cloud monitoring: Gain visibility into your cloud-based applications and services, ensuring they’re delivering as expected.
  • AI-powered analytics: Get actionable insights that help you identify performance trends and address issues before they become problems.

Real-world successes

Riverbed’s solutions have been successfully deployed across a wide range of industries, including finance, healthcare, retail, and government. Here are a few examples of how organizations in these sectors are partnering with us to improve their digital performance:

  • Financial services: A global bank used Riverbed to identify and resolve performance issues impacting online trading platforms, resulting in improved customer satisfaction and higher revenue.
  • Healthcare: A large healthcare provider improved the performance of electronic health records (EHR) systems, improving both clinician productivity and patient care.
  • Retail: A major retailer optimized the performance of its e-commerce platform, resulting in increased sales and improved customer satisfaction.

Learn more

Riverbed’s end-to-end visibility solutions offer more than just a way to monitor digital performance—they provide the tools you need to optimize it. With clear visibility, smart analytics, and seamless integration, Riverbed empowers organizations to deliver exceptional user experiences, streamline IT operations, and reduce costs. If you’re serious about improving your digital infrastructure, Riverbed is the ideal partner.

Want to learn more? Let’s connect.

Riverbed Named a Leader in the 2024 Gartner® Magic Quadrant™ for Digital Employee Experience Tools (DEX) https://www.riverbed.com/blogs/riverbed-named-a-leader-in-the-gartner-magic-quadrant-for-dex/ Wed, 28 Aug 2024 15:10:47 +0000 https://live-riverbed-new.pantheonsite.io/?p=82320 The new Gartner® Magic Quadrant™ for Digital Employee Experience Tools (DEX) has been published and Riverbed is honored to be recognized as a Leader in the report. 

Digital employee experience is crucial today and we believe this placement reflects our vision to transform employee experiences, while optimizing performance, increasing productivity, and maximizing investments in critical business applications and employee devices. 

Delivering exceptional digital experience is not about keeping up—it is about staying ahead. Gartner has recognized Riverbed based on its Completeness of Vision and Ability to Execute.

Embracing the future of Digital Employee Experience (DEX) 

In today’s digital-first world, the importance of Digital Employee Experience (DEX) cannot be overstated. Organizations rely on technologies like AI (Artificial Intelligence), cloud solutions, and unified observability to drive business outcomes and to ensure a seamless, efficient, and positive digital experience for employees. Our Riverbed Global DEX Survey shows 92% of IT leaders say unified observability is important for competitiveness and delivering a seamless DEX. 

Digital Employee Experience refers to the sum of all interactions an employee has with digital tools, applications, and platforms they use in their work environments as well as how they feel about their IT service. This includes everything from the responsiveness of their devices to the ease of accessing critical software applications, to overall performance and reliability of the IT infrastructure, as well as measuring employee sentiment through surveys.

According to our Global DEX Survey, business and IT leaders say investing in DEX is among their top priorities for the next five years, meaning DEX is no longer a luxury but a necessity. 

DEX is critical for the modern enterprise 

Productivity: Employees depend on digital tools to perform their jobs. This is especially true for frontline workers, who rely on these tools and mobile devices to engage with customers. Slow or unreliable technology can disrupt workflows, leading to delays. By investing in DEX, organizations can ensure that all employees have the tools they need to perform their best. 

Employee Satisfaction: A seamless and efficient digital experience contributes significantly to employee satisfaction. When employees can effortlessly navigate their digital environments, they are more likely to feel valued and supported by their organization. 

Attracting and Retaining Talent: In a competitive job market, offering superior experience can be a differentiator in attracting and retaining top talent. Modern employees expect their work environment to be technologically advanced and user-friendly.

Cost Efficiency: Addressing DEX issues proactively can prevent larger IT problems, reducing downtime and support costs. A well-managed digital environment minimizes the need for extensive training and reduces the likelihood of errors. 

Business Continuity: In an increasingly remote and hybrid work world, maintaining a high-quality DEX ensures business continuity. Employees must be able to access critical systems and applications from any location, at any time, without disruption. 

The role of Riverbed Aternity in enhancing DEX 

At Riverbed, our AI- and automation-driven platform places the end user at the center to optimize digital experience. Riverbed Aternity is the DEX component of the Riverbed platform, empowering organizations to resolve digital experience issues swiftly and optimize costs by applying advanced ML and AI to the broadest range of telemetry across all IT domains. 

Aternity provides detailed insights into the performance of applications and devices, including Android and iOS mobile devices, from the end user’s perspective. This end-to-end visibility enables IT to identify and resolve performance issues before they impact employees, ensuring a seamless digital experience. We also measure the actual experience of every employee with their applications and devices, and how they feel about their IT service.

With Aternity, IT teams can proactively detect and address potential problems before they escalate into major disruptions, ensuring employees can work without interruption. Our AI-driven analytics also provide IT and business leaders with the data needed to make informed decisions about technology investments, resource allocation, and process improvements. Because Aternity measures the actual sentiment of employees, organizations can make strategic decisions to enhance DEX by understanding the impact of their digital tools on employee performance.

At Riverbed, we understand that investing in DEX is not just about enhancing technology, but about creating a work environment where employees can thrive, innovate, and contribute to the organization’s success. With solutions like Aternity, organizations can make sure employees have a consistent and reliable digital experience, regardless of their location, and realize the full potential of their workforce.

We invite you to download a complimentary copy of the 2024 Gartner Magic Quadrant for Digital Employee Experience Tools (DEX) and see how Riverbed is evaluated among other DEX vendors.

 

 

GARTNER is a registered trademark and service mark of Gartner and Magic Quadrant is a registered trademark of Gartner, Inc. and/or its affiliates in the U.S. and internationally and are used herein with permission. All rights reserved. Gartner does not endorse any vendor, product or service depicted in its research publications, and does not advise technology users to select only those vendors with the highest ratings or other designation. Gartner research publications consist of the opinions of Gartner’s research organization and should not be construed as statements of fact. Gartner disclaims all warranties, expressed or implied, with respect to this research, including any warranties of merchantability or fitness for a particular purpose. 

Unleash Advanced SD-WAN and Cloud Capabilities in Riverbed NetProfiler https://www.riverbed.com/blogs/sd-wan-cloud-capabilities-riverbed-netprofiler/ Tue, 27 Aug 2024 12:15:19 +0000 https://www.riverbed.com/?p=82278 In today’s fast-paced digital landscape, efficient network operations are critical to the success of any business. With the increasing complexity of hybrid and multi-cloud environments, having comprehensive visibility into network performance is essential.

Riverbed’s NetProfiler has long been a trusted solution for network flow analytics, providing organizations with the tools needed to monitor, diagnose, and optimize network performance. The release of NetProfiler 10.27 brings exciting new features and enhancements that continue to expand its capabilities, particularly in the areas of SD-WAN and cloud integration. In this blog, we delve into the key functionalities, highlight the new features, and summarize the benefits of this latest release. 

Key functionalities of NetProfiler 

NetProfiler is a powerful network observability solution designed to provide proactive monitoring, detailed dependency mapping, and broad visibility across hybrid and multi-cloud environments. Utilizing behavioral analytics, NetProfiler can detect performance issues before they impact users, while its comprehensive mapping of application transactions to underlying infrastructure ensures accurate service delivery and streamlined troubleshooting.

The platform’s flexibility allows deployment across on-premises, virtual, and cloud environments, integrating seamlessly with other Riverbed solutions like AppResponse and SteelHead to offer a unified view of network and application performance. 

Highlights of the new release 

The latest release of NetProfiler (version 10.27) introduces several enhancements and new features that cater to the evolving demands of network operations.

One of the most significant updates is the expanded support for SD-WAN, specifically the addition of VeloCloud by VMware and improvements to Cisco Viptela visibility. These enhancements include on-demand vManage configuration polling, better SD-WAN query response times, and support for the bidirectional flows used by VeloCloud. These updates solidify NetProfiler’s leadership position in the SD-WAN monitoring space, addressing competitive pressures and providing valuable insights to customers using these technologies. 

NetProfiler Dashboard
One of the most significant updates to Riverbed NetProfiler is the expanded support for SD-WAN, specifically the addition of VeloCloud by VMware and improvements to Cisco Viptela visibility.

Another notable addition is the integration with Microsoft Azure Blob storage, allowing NetProfiler to consume Azure Virtual Network (VNET) flows. This capability is particularly beneficial for government customers and organizations with significant cloud investments, as it extends the platform’s visibility into major cloud storage solutions, complementing NetProfiler’s existing support for Amazon S3 buckets.  The integration with Microsoft Hyper-V Stack further enhances NetProfiler’s cloud capabilities, enabling improved network analysis, management, and security in environments using Hyper-V, such as Azure Stack HCI. 

NetProfiler Dashboard
Another notable addition to Riverbed NetProfiler is the integration with Microsoft Azure Blob storage, allowing NetProfiler to consume Azure Virtual Network (VNET) flows.

Advance network performance monitoring with NetProfiler 10.27

The NetProfiler 10.27 release represents a significant step forward in network performance monitoring, with enhancements that address key industry trends such as the growing adoption of SD-WAN, multi-cloud environments, and hybrid infrastructures. By expanding support for leading SD-WAN vendors like VeloCloud and Cisco Viptela, integrating with major cloud storage platforms, and strengthening security features, NetProfiler continues to evolve as a comprehensive solution that meets the complex needs of modern network operations. 

For businesses seeking to enhance their network visibility, speed up problem identification, and mitigate cybersecurity risks, NetProfiler’s latest release offers a compelling set of tools. With its proactive monitoring capabilities, robust integration options, and commitment to security, NetProfiler remains an indispensable asset for IT organizations striving to maintain optimal network performance in an increasingly complex digital landscape. 

The new release is set to further cement Riverbed’s position as a leader in network observability, ensuring that organizations can continue to rely on NetProfiler for comprehensive visibility and proactive network management in the years to come. 

Check out the NetProfiler webpage for more details.

Unpacking Riverbed’s Leader Designation in the 2024 GigaOm Radar Report for Network Observability https://www.riverbed.com/blogs/riverbeds-leader-designation-in-gigaom-radar-report-network-observability/ Fri, 09 Aug 2024 12:29:46 +0000 https://www.riverbed.com/?p=82109 In today’s digital era, ensuring seamless network performance and comprehensive visibility is paramount for businesses. As enterprises transition to complex hybrid cloud environments, the demand for sophisticated network observability solutions has surged. The recent GigaOm Radar Report for Network Observability recognizes Riverbed as a Leader in this field, highlighting its robust capabilities and strategic importance for large enterprises.

Riverbed’s position in the market 

Source: GigaOm 2024

Riverbed stands out in the Maturity/Platform Play quadrant of the GigaOm Radar for Network Observability. This position signifies Riverbed’s advanced development and comprehensive solution suite tailored for complex network environments. GigaOm recognized the company’s ability to dynamically discover network elements, analyze traffic patterns, and troubleshoot issues in real time. These capabilities help make Riverbed an indispensable tool for enterprises seeking end-to-end network visibility and performance optimization.

Key capabilities and features

Riverbed excels in several critical areas that are essential for effective network observability:

  1. Dynamic Discovery: Riverbed’s platform can automatically discover network devices and their interconnections, providing a real-time map of the network topology. This capability is crucial for maintaining an up-to-date understanding of the network’s structure and identifying any changes or anomalies promptly.
  2. Traffic Analysis: The ability to monitor and analyze network traffic in detail allows organizations to identify bottlenecks, understand usage patterns, and optimize traffic flow. Riverbed’s traffic analysis tools are sophisticated, offering deep insights that are necessary for proactive network management.
  3. Troubleshooting and RCA (Root Cause Analysis): Riverbed provides advanced tools for diagnosing network issues and determining their root causes. This feature significantly reduces the mean time to recovery (MTTR) and helps prevent recurrence by addressing the underlying problems.
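The dynamic-discovery idea (item 1 above) can be illustrated with a toy sketch: folding discovered link records into an adjacency map of the network topology. The device names and data layout below are invented for illustration and are not Riverbed’s API; real discovery draws on sources like device polling and neighbor protocols.

```python
from collections import defaultdict

def build_topology(links):
    """Fold a list of discovered (device_a, device_b) links into an
    adjacency map: a toy stand-in for dynamic topology discovery."""
    topo = defaultdict(set)
    for a, b in links:
        topo[a].add(b)
        topo[b].add(a)
    return {dev: sorted(neigh) for dev, neigh in topo.items()}

# Invented link records, e.g. from LLDP/CDP-style neighbor data.
links = [("core-sw1", "dist-sw1"), ("core-sw1", "dist-sw2"),
         ("dist-sw1", "access-sw1")]
print(build_topology(links))
```

Rebuilding this map continuously is what lets a platform spot a change (a vanished link, a new device) the moment it appears.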

Analyst perspective

GigaOm praised Riverbed for its comprehensive approach to network observability. The platform’s integration of AI and machine learning capabilities enhances its ability to provide actionable insights, automate routine tasks, and predict potential issues before they impact network performance. This proactive approach not only improves operational efficiency but also aligns network management with business objectives, ensuring that IT investments deliver maximum value.

Key statistics

  • Sector Adoption Score: The companion GigaOm Key Criteria Report for Network Observability graded the network observability sector with an overall adoption score of 4.2 out of 5, indicating strong industry recognition and the critical need for these solutions.
  • Flexible Licensing: Riverbed offers perpetual and subscription-based licenses, tiered licensing based on volume, and a variety of support options with different levels of SLA, which can offer cost savings for businesses with large, complex networks.

Riverbed’s fit with evaluation criteria

The GigaOm report outlines several key criteria for evaluating network observability solutions, including:

  • Functional Capabilities: Riverbed meets and exceeds the fundamental requirements with its robust feature set, including dynamic discovery, traffic analysis, and comprehensive troubleshooting tools.
  • Non-Functional Requirements: These encompass business criteria such as scalability, flexibility, ease of use, ecosystem, and cost. Riverbed scores highly in these areas, offering solutions that are not only technically sound but also aligned with business goals.

View the full report

Riverbed’s position as a Leader in the network observability market is well-deserved, thanks to its advanced capabilities and strategic focus on comprehensive network visibility and security observability. The GigaOm Radar report highlights Riverbed’s strengths in dynamic discovery, traffic analysis, and troubleshooting, making it an ideal choice for enterprises seeking a robust and scalable observability solution.

As businesses continue to evolve and adapt to new technological landscapes, having a reliable network observability solution like Riverbed’s will be crucial. Its ability to provide actionable insights and support proactive decision-making ensures that enterprises can maintain optimal network performance and achieve their business objectives efficiently.

For more detailed insights, you can access the full GigaOm Radar report here.

The Value of Digital Experience Management: A Lesson from the CrowdStrike Global Outage https://www.riverbed.com/blogs/digital-experience-monitoring-lesson-from-crowdstrike-outage/ Thu, 25 Jul 2024 14:52:15 +0000 https://www.riverbed.com/?p=82006 Last week, we experienced one of the largest global IT outages, impacting millions of devices. Businesses worldwide reported IT outages, including the infamous Windows “Blue Screen of Death” errors on their computers due to a defective update from cybersecurity firm CrowdStrike. No industry was immune to this incident, with the outage affecting airlines, banks, businesses, schools, governments, and even some health services facilities across the globe.

Global IT organizations are still recovering, and it could take weeks to fully recover. This incident underscores the critical importance of Observability and Digital Experience Management (DEM) solutions in today’s interconnected world. DEM solutions can provide immense value during global IT outages like the recent CrowdStrike incident.

Key benefits of DEM solutions during global IT outages

During an outage, clear communication with users is crucial. Organizations need to quickly detect and respond to issues to resolve the downtime and disruption. DEM solutions capture user interactions and performance metrics to allow organizations to keep users informed about service statuses and expected resolution times.

By offering insights into system performance and user behavior, DEM solutions help build more resilient IT infrastructures. Comprehensive reporting enables organizations to understand the impact of outages and improve future response strategies, providing valuable data for post-incident analysis and continuous improvement.

Riverbed Aternity: A vital tool for managing global outages

Riverbed Aternity is a prime example of a DEM solution that can be invaluable during global IT outages. Over the past few days, many customers have used Aternity to gain visibility into the impact of the CrowdStrike incident, enabling organizations to take prescriptive actions, fix problems faster, and mitigate the situation.

Aternity swiftly helped customers identify which applications and servers across the enterprise were affected and determined whether the issues were escalating or subsiding. This visibility let IT teams quickly confirm which systems were back to normal, ensuring a smooth and efficient recovery process. Here are a few ways Aternity can help in these types of incidents:

  1. Real-Time Monitoring: Aternity provides real-time monitoring of user experiences and application performance. This can help organizations quickly identify and diagnose issues affecting their systems and devices.
  2. Incident Management: With its detailed analytics and insights, Aternity can assist IT teams in pinpointing the root causes of outages and performance degradation, enabling faster resolution.
  3. User Experience Insights: By understanding how the outage impacts end-users, organizations can prioritize critical issues and ensure that essential services are restored first.
  4. Proactive Alerts: Aternity’s proactive alerting system can notify IT teams of potential issues before they escalate, helping to mitigate the impact of the outage.
  5. Comprehensive Reporting: Detailed reports and dashboards provide visibility into the performance and availability of applications and services, aiding in post-incident analysis and future prevention strategies.

Aternity ensures consistent performance, availability, and continuous operation, even during large-scale disruptions. These capabilities make Riverbed Aternity a powerful ally in managing and mitigating the effects of a widespread IT outage.

Aternity’s ability to track and monitor critical errors

By tracking and monitoring instances of the Blue Screen of Death (BSOD) on Windows devices, Aternity helps IT teams identify and troubleshoot the root causes of these critical system errors, ensuring better stability and performance for end-users.

Aternity tracks BSOD events by monitoring the health and performance of Windows devices in real-time through the following process:

  • Agent Installation: A small agent is installed on each monitored device, collecting data on system performance, application usage, and errors, including BSOD events.
  • Event Logging: When a BSOD occurs, the agent logs the event details, such as the error code, timestamp, and relevant system information.
  • Data Transmission: The collected data is sent to Aternity’s central server, where it is aggregated and analyzed.
  • Dashboard and Alerts: IT teams can view BSOD events on Aternity’s dashboard, which provides visualizations and detailed reports. Alerts can also be configured to notify IT staff immediately when a BSOD occurs.
  • Root Cause Analysis: Aternity helps identify patterns and potential root causes of BSOD events by correlating them with other system and application performance data.
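As a rough illustration of the event-logging and analysis steps above, the sketch below filters generic event records down to bugcheck entries. Windows writes a BugCheck event (ID 1001) to the System log after a crash; the record layout here is an invented stand-in for what a monitoring agent might forward, not Aternity’s actual data model.

```python
def bsod_events(event_records):
    """Filter collected event records down to bugcheck (BSOD) entries.
    Windows logs a BugCheck event with ID 1001 in the System log after
    a crash; the dict layout is an invented illustration."""
    return [e for e in event_records
            if e.get("source") == "BugCheck" and e.get("event_id") == 1001]

records = [
    {"source": "Service Control Manager", "event_id": 7036},
    {"source": "BugCheck", "event_id": 1001,
     "stop_code": "0x00000050", "host": "LAPTOP-042"},
]
for crash in bsod_events(records):
    print(crash["host"], crash["stop_code"])
```

Correlating these filtered events with application and driver-update telemetry is what turns a pile of crash records into a root-cause pattern.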

This comprehensive approach allows IT teams to quickly identify and address the underlying issues causing BSODs, improving overall system stability and user experience.

Assisting with remediation during outages

For those already using Aternity, the impact of software upgrades, such as the CrowdStrike Sensor Platform and CrowdStrike Windows Sensor from version 7.14.18408.0 to 7.14.18410.0, can be closely monitored. IT teams can run remediation scripts to resolve issues, such as:

  1. Booting Windows into Safe Mode or the Windows Recovery Environment.
  2. Navigating to the C:\Windows\System32\drivers\CrowdStrike directory.
  3. Locating and deleting the file matching “C-00000291*.sys”.
  4. Booting the host normally.

In conclusion, the recent CrowdStrike global outage has highlighted the critical importance of Digital Experience Management solutions. Solutions like Riverbed Aternity provide the real-time insights, proactive alerts, and comprehensive reporting needed to manage and mitigate the effects of widespread IT disruptions effectively. As organizations continue to recover, investing in robust DEM solutions will be key to building more resilient IT infrastructures and maintaining service continuity in the face of future challenges.

Enhance Compliance and Accelerate Data Replication with Riverbed SteelHead https://www.riverbed.com/blogs/enhance-compliance-and-accelerate-data-replication/ Mon, 15 Jul 2024 12:37:57 +0000 https://www.riverbed.com/?p=81582 Compliance is at the forefront of every organization’s mind worldwide. And it’s no wonder when regulations are becoming increasingly prevalent and stringent, their associated fines are rising, as are the risks for reputational damage in the case of noncompliance.

Regulations affect businesses across all industries. For example, the Defense Information Systems Agency (DISA) places tough expectations on systems–including a need for 99.9% availability–while healthcare organizations need to be mindful of the Health Insurance Portability and Accountability Act of 1996 (HIPAA) and its data-related rules. Then, there are the Payment Card Industry Data Security Standards (PCI-DSS), developed by major credit card companies to help protect cardholders and their information.

But with regulatory needs soaring, technology advancing, and people becoming ever-dependent on their digital worlds, staying compliant can prove incredibly challenging. Here are three of today’s biggest compliance obstacles, and how Riverbed SteelHead can support businesses of all types, sizes, and locations in overcoming them. 

Too much data 

As technology becomes more advanced and people depend on it more, there’s a significant increase in distributed data creation and interconnected devices and nodes from many applications and locations. This means businesses must obtain greater value from all their data sources.

However, untangling this web of complex and disparate data and getting it to the right place and people quickly enough to meet compliance needs, without impacting normal workflow and business operations, can seem impossible. Especially when some regulations may dictate a strict:

  • Recovery Point Objective (RPO): The maximum amount of data–as measured by time–that can be lost after a disaster, failure, or comparable event.
  • Recovery Time Objective (RTO): The maximum acceptable time that an app, device, network, or system can be down after an unexpected incident like those listed above.
  • Replication and backup time: How often, and how quickly, systems copy and store data.
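To make the RPO idea concrete, here is a small illustrative sketch (the function names and backup schedule are invented) that checks a backup history against an RPO target: the worst-case data loss is the largest gap between consecutive backups.

```python
from datetime import datetime, timedelta

def worst_case_data_loss(backup_times):
    """Given a sorted list of backup timestamps, return the largest gap
    between consecutive backups: the worst-case data loss if a failure
    struck just before the next backup completed."""
    gaps = [b - a for a, b in zip(backup_times, backup_times[1:])]
    return max(gaps)

def meets_rpo(backup_times, rpo):
    return worst_case_data_loss(backup_times) <= rpo

# Hourly backups with one missed run: worst-case loss is 2 hours.
t0 = datetime(2024, 7, 15, 0, 0)
backups = [t0 + timedelta(hours=h) for h in (0, 1, 2, 4, 5)]
print(worst_case_data_loss(backups))           # 2:00:00
print(meets_rpo(backups, timedelta(hours=1)))  # False
```

A single missed replication window, as in the example, is enough to breach a one-hour RPO, which is why replication speed matters as much as schedule.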

Too much latency and congestion 

Things become even more troublesome with the introduction of network limitations, and as data volumes, distance, and latency increase. Organizations often boost bandwidth to try to combat these issues, at significant financial cost, but fruitlessly; it’s a myth that more bandwidth guarantees higher throughput. In truth, latency caps the maximum throughput of a connection, so adding bandwidth beyond that point delivers little benefit. 
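The reason extra bandwidth falls flat is the classic window/round-trip bound: a single TCP flow can move at most one window of data per round trip, regardless of link capacity. A quick back-of-the-envelope calculation makes the point:

```python
def max_tcp_throughput_mbps(window_bytes, rtt_ms):
    """Upper bound on single-flow TCP throughput: one window of data
    per round trip, independent of how fast the link itself is."""
    return (window_bytes * 8) / (rtt_ms / 1000) / 1e6

# A classic 64 KB TCP window across a 100 ms WAN tops out near 5 Mbps,
# whether the link is 10 Mbps or 10 Gbps.
print(round(max_tcp_throughput_mbps(65_535, 100), 2))  # → 5.24
```

Cutting latency, or reducing the data and round trips sent in the first place, moves this ceiling in a way that a bigger pipe cannot.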

One of the most common ways to optimize the speed of a connection is to increase the speed of the link. Still, links can become overloaded if a device tries to send out too much data; this is called congestion. 

Too many tools 

Despite their best efforts, companies are faced with unpredictable, slow, and costly solutions, with shared data becoming stale in transit and arriving later than it’s needed. Some introduce SD-WAN to try and combat the above issues. 

This technology can increase efficiencies by boosting application performance and resiliency; improving network security; and simplifying the WAN architecture. However, its capabilities aren’t always enough. And neither are those of the other industry-standard replication tools or virtual platforms organizations rely on. 

All of this affects productivity and profits, especially when reputation is at stake and eye-watering fines must be paid–which will only add up until organizations fix the root cause of their compliance issues.

How Riverbed SteelHead can help 

Riverbed SteelHead is the number one hybrid network optimization and application performance solution–a patented technology that is part of our Acceleration offering. 

SteelHead enables companies to efficiently distribute data–exceptionally fast and to scale–across multiple clouds and distributed hybrid networks. It’s scalable and flexible, and has been specifically designed to help organizations overcome network speed bumps like latency, congestion, and sub-optimal last-mile conditions. All securely, with market-leading encryption for your complete peace of mind. 

Riverbed SteelHead can assist you on your compliance journey by empowering you to: 

  • Achieve efficient RTO and RPO, and up to 33x faster application performance, whether you're mirroring databases or backing up desktops, servers, hot standby sites, or repositories.
  • Overcome congestion and consistently utilize up to 90% of available bandwidth by modifying the TCP transmission over the WAN with a layered, compound approach that employs HS-TCP or MX-TCP.
  • See 60-90% data reduction and cut the amount of bandwidth you need using scalable data referencing (SDR) and a sophisticated deduplication algorithm built to detect repeatable byte patterns in every payload.
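To illustrate the general idea behind deduplication (detecting repeated byte patterns and replacing them with short references), here is a toy sketch in Python. It uses naive fixed-size chunks and SHA-256 digests purely for illustration; it is in no way Riverbed's patented SDR algorithm:

```python
# Toy payload deduplication sketch (NOT Riverbed's SDR): split data into
# chunks, send a short reference for any chunk the peer has already seen,
# and the raw bytes otherwise.
import hashlib

def dedup_encode(payload: bytes, seen: set, chunk_size: int = 64):
    """Return a list of ('ref', digest) or ('raw', bytes) tokens."""
    tokens = []
    for i in range(0, len(payload), chunk_size):
        chunk = payload[i:i + chunk_size]
        digest = hashlib.sha256(chunk).hexdigest()
        if digest in seen:
            tokens.append(("ref", digest))   # repeat: send a tiny reference
        else:
            seen.add(digest)
            tokens.append(("raw", chunk))    # first sight: send the bytes
    return tokens

seen: set = set()
first = dedup_encode(b"A" * 64 + b"B" * 64, seen)   # two novel chunks
second = dedup_encode(b"A" * 64 + b"B" * 64, seen)  # same data again
print([kind for kind, _ in second])  # ['ref', 'ref']
```

The second transfer sends only two short references instead of 128 bytes, which is the mechanism, at toy scale, behind the 60-90% reductions described above.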

What’s more, the solution works hand-in-hand with SD-WAN, minimizing turns through local connection proxy and look-ahead capabilities; limiting the number of ‘chatty’ connections and conversations that take place over the WAN; and reducing the effect of latency, greatly improving transfer times in the process. 

Plus, it works well with industry-standard replication tools–from AWS and Azure Cloud to IBM, Dell/EMC VMAX, and NetApp SnapMirror–and virtual platforms like Hyper-V, KVM, and VMware.

To learn more about these topics, read our new whitepaper, How Riverbed’s Data Replication and Disaster Recovery can Assist with Compliance Goals. To learn more about how Riverbed SteelHead can play a part in your path to compliance, get in touch with our helpful team.

]]>
Enhancing Efficiency and Patient Care with Three Types of Performance Data https://www.riverbed.com/blogs/enhancing-patient-care-with-performance-data/ Tue, 09 Jul 2024 12:31:58 +0000 https://www.riverbed.com/?p=81476 In the rapidly evolving healthcare landscape, the importance of capturing and analyzing performance data cannot be overstated. With increasing reliance on digital systems and applications, understanding the intricacies of performance data is essential for healthcare providers to maintain high standards of patient care and operational efficiency.

This blog delves into three pivotal types of performance data: device performance, application performance, and user details, and highlights their significance in the healthcare sector. 

Device performance 

Device performance data encompasses metrics related to the health and functionality of devices used within healthcare facilities. This includes monitoring CPU usage, memory consumption, disk space, and battery life. For healthcare providers, ensuring that medical devices and computer systems operate optimally is crucial. Any lag or failure in these devices can directly impact patient care, leading to delays in diagnosis or treatment. By regularly monitoring device performance, healthcare IT teams can proactively address potential issues, ensuring that all equipment is functioning correctly and efficiently. 
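As a rough illustration of the kind of device metric such monitoring collects, here is a minimal free-disk-space check using only the Python standard library. The 10% threshold is an arbitrary example, not a Riverbed or clinical recommendation:

```python
# A minimal device-health sketch: flag a machine whose free disk space
# drops below a threshold. Illustrative only; real monitoring agents
# collect CPU, memory, battery, and many more metrics continuously.
import shutil

def disk_health(path: str = "/", min_free_pct: float = 10.0) -> dict:
    """Report free-space percentage and whether it clears the threshold."""
    usage = shutil.disk_usage(path)
    free_pct = usage.free / usage.total * 100
    return {"free_pct": round(free_pct, 1), "ok": free_pct >= min_free_pct}

status = disk_health("/")
print(status)  # e.g. {'free_pct': 42.3, 'ok': True}
```

Run on a schedule across a device fleet, even a check this simple lets an IT team replace "the workstation froze mid-shift" with a ticket raised before the clinician notices.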

Application performance 

Application performance monitoring involves tracking the behavior and efficiency of software applications used in healthcare settings. This includes response times, error rates, and usage patterns of various applications. EHR systems like Epic and Cerner, practice management systems like FANS, and diagnostic tools like PACS and PowerScribe are crucial software for healthcare providers. Ensuring these applications run smoothly is vital for maintaining uninterrupted patient care and operational workflows. Monitoring application performance helps in identifying and resolving bottlenecks, thus enhancing the overall user experience and reducing downtime.

User details 

The third crucial type of performance data is user details, which provide insights into how healthcare professionals interact with devices and applications. This includes tracking user sessions, usage patterns, and feedback. Understanding user behavior is essential for optimizing the digital experience and identifying training needs. In healthcare, where time and accuracy are paramount, ensuring that staff can efficiently navigate and utilize digital tools is critical. Analyzing user details can highlight areas where systems may need adjustments or where additional training may be required to improve efficiency and satisfaction. 

Monitoring employee sentiment 

An extension of understanding user details is monitoring employee sentiment. Gathering and analyzing feedback from healthcare professionals regarding their digital experience can offer valuable insights. Correlating sentiment data with performance metrics helps in identifying pain points and areas needing improvement. Enhancing employee satisfaction with digital tools not only boosts productivity but also contributes to better patient care.   

Leverage performance data for better healthcare outcomes

For healthcare providers, leveraging these performance data types is indispensable. The Princess Alexandra Hospital NHS Trust serves as a prime example. By implementing Riverbed Aternity Digital Experience Management (DEM), they significantly improved clinician productivity and saved approximately £3 million in IT costs over five years. This comprehensive data analysis allowed the ICT team to pinpoint issues, automate problem resolution, and optimize hardware replacements, ultimately enhancing patient care. 

Healthcare providers can achieve similar results by capturing and analyzing the objective and subjective metrics of patients and clinicians through device performance, application performance, and user details. This will lead to a more efficient, responsive, and patient-centric healthcare delivery system.

]]>
Three Critical CIO Focus Areas: Insights From Gartner Digital Workplace Summit https://www.riverbed.com/blogs/critical-cio-focus-areas-from-gartner-digital-workplace-summit/ Mon, 01 Jul 2024 12:30:45 +0000 https://www.riverbed.com/?p=81308 From June 10-11, Riverbed exhibited at the Gartner Digital Workplace Summit 2024 in London, an event designed to help business and IT leaders discover the insights they need to prepare for digital evolution. With over 850 attendees, 42 exhibitors and 50 sessions covering four research tracks, there was much to discover.

At the conference, stands and speakers explored how today’s companies can:

  • Understand their digital workplace maturity level
  • Learn how to advance their digital employee experience
  • Discover how to leverage GenAI for increased workplace productivity
  • Tackle topics like hybrid working, collaboration tools, and more

As leaders in AI observability, a technology that supports each of these areas, we were well-positioned to advise and empower attendees. Here are three key takeaways from the event and our conversations there.

AI is top of mind at C-level

AI and its adoption are critical components of a CIO’s role–so the implementation and use of these technologies are a discussion for the boardroom. This point was highlighted at the summit.

Gartner shared research from its 2022 Digital Worker Survey, in which 49% of respondents named the CIO as one of the top three executives whose policies and actions have had the most positive influence on their employee experience. This aligned with our own findings that 75% of CIOs are testing or implementing AI projects, and that AI projects top budget spending over the next three years. However, the challenge is implementing AI that works.

Gartner highlighted the importance of breaking down siloed experiences, reducing fragmentation and risk, and prioritizing impactful investments. This is exactly what we do at Riverbed: we use powerful AI and machine-learning technology to achieve these goals automatically, with minimal need for human intervention. We pride ourselves on our library of 170 pre-built, expert-designed, triggered remediations.

The total experience matters most

While the experience you give your customers is paramount, employees need an exceptional one too. However, delivering better user experiences is getting harder with increasing IT complexity, end-user demands, and hybrid work.

Our scalable solutions collect, analyze, automate, and report on real data through a Unified Agent with a single, simple user interface. This means we can deliver actionable insights on user experience at every device, application, and click; immediately diagnose user experience issues, remotely and non-invasively; and pinpoint opportunities to improve the digital experience. What’s more, we tackle the problem of agent fatigue by collecting more data without adding agents via our unified agent manager.

We also offer superior out-of-box experiences for popular business process applications and open support for 35 third-party integrations–including AWS, Google Cloud Platform and Workspace, ServiceNow, and Slack–so every employee’s every experience is monitored and optimized.

Frontline workers need our focus

Gartner’s 2022 Digital Worker Survey found that most employees still only view IT as a support function, with 55% approaching IT for technical support. Yet, IT can’t prevent and resolve critical issues fast enough, with a flood of false-positive alerts, a shortfall of top IT talent, and a lack of tools and precise insights into which notifications matter.

This is especially challenging for frontline workers, whose jobs are essential and whose digital experience can be life-changing – and life-saving. The good news is that we can remedy their issues in the ways we’ve mentioned above, giving them fewer blue-screen moments and more seamless processes that boost their productivity and drive better outcomes. We can also support them across mobile and tablet devices, which they tend to use in their fast-paced, often-physical roles.

With 155 million new company-owned devices entering the market every year, unsurprisingly, this creates gaps in measuring device performance. Enter our mobile device data-collection solution, Aternity Mobile, which gathers over 150 metrics on mobile device, app and network performance. This allows IT to proactively identify digital experience issues on company-provided mobile devices and take action. It works on Apple iOS and Android across the most complete range and broadest coverage of user experience devices: computer, web, mobile, and free-standing kiosks.

At Riverbed, we support businesses in building more digitally mature workplaces using a unique combination of employee experience expertise and state-of-the-art, AI-powered tools. We hope to see you at an event soon, but in the meantime, get in touch with our friendly, helpful team to find out more about what we do and how we can help.

]]>
The Future of Network Management: Advanced Observability and AI-Driven Automation https://www.riverbed.com/blogs/future-of-network-management-observability/ Thu, 27 Jun 2024 12:58:29 +0000 https://www.riverbed.com/?p=81149 In today’s digital landscape, effective network performance monitoring and observability have become crucial for organizations aiming to ensure seamless digital experiences and robust security. With the proliferation of remote work, cloud services, and complex network architectures, traditional monitoring tools often fall short in providing comprehensive visibility and actionable insights.

On May 7, Riverbed unveiled NPM+, a new cloud observability service designed for both edge and cloud network observability, and enhancements to Riverbed IQ, our AIOps-driven automation solution. The EMA Impact Brief 2024 highlights the need for such solutions and discusses advancements in AI-driven automation and network observability solutions that address these challenges.

Here are the highlights of the brief: 

NPM+: Next-gen network observability 

Riverbed NPM+ is a SaaS-delivered observability solution designed to offer granular visibility into network performance from any client or server endpoint. Leveraging the new Riverbed Unified Agent, NPM+ passively monitors IP and TCP connections before and after data gets encrypted, providing IT managers with detailed insights into all network traffic. This includes monitoring remote users, cloud traffic, and traffic within zero trust network access service edges.

Key features include: 

  • Comprehensive Visibility: NPM+ illuminates critical blind spots in network performance, particularly for remote users and cloud-based services. 
  • Granular Monitoring: IT managers can monitor performance by user, system name, application process, and more. 
  • Reduced Administrative Overhead: By utilizing the Riverbed Unified Agent, NPM+ offers a streamlined approach to network performance monitoring, eliminating the need for additional agents and reducing complexity. Unified Agent is an innovative common-agent strategy that streamlines deployment, management, and updates of Riverbed’s agent-based offerings, and it is the only agent management solution to support endpoint monitoring for both end-user experience and network observability. 

Riverbed IQ: AI-driven automation 

Riverbed IQ, an AIOps solution introduced in 2022, has been enhanced with a new runbook editor for no-code/low-code automation. This tool analyzes telemetry from Riverbed’s observability solutions and third-party tools, detects anomalies, correlates events, and conducts root-cause analysis. The runbook editor allows IT operations teams to automate actions based on AI-derived insights through a drag-and-drop interface. Key components include: 

  • Triggers, Actions, and Outputs: The interface is centered on these concepts, enabling IT professionals to build workflows that respond to AI insights. 
  • Prebuilt Integrations: Riverbed IQ includes numerous integrations with third-party systems, extending its capabilities across various platforms and tools. 
  • Proactive Issue Resolution: The automation capabilities of Riverbed IQ enable prompt response to incidents, minimizing downtime and enhancing operational efficiency. 

Addressing modern IT challenges

Riverbed’s updates cater to the needs of modern IT operations teams. According to Enterprise Management Associates (EMA), 87% of IT organizations are allocating budgets to improve network experience monitoring for remote workers. Traditional tools like remote desktop access and endpoint monitoring fail to provide comprehensive network visibility. Riverbed’s NPM+ and Aternity endpoint monitoring solutions, leveraging the new Unified Agent, offer a streamlined approach with less administrative overhead. 

Moreover, 57% of network teams expressed a desire for AIOps solutions with low-code interfaces to build runbook automation. Riverbed IQ directly addresses this need, supporting intelligent alerting, escalations, and self-healing networks, making it a valuable asset for IT operations seeking efficiency and reliability. 

Learn more 

Riverbed’s introduction of NPM+ and enhancements to Riverbed IQ represent significant strides in network observability. These tools address critical blind spots, enhance AI-driven automation, and provide comprehensive visibility and control over modern, dynamic network environments. By integrating these advanced solutions, organizations can ensure efficient and reliable network performance, improve security, and drive operational efficiency. As networks continue to evolve, Riverbed’s cutting-edge solutions and innovative approach establish us as a leader in delivering seamless digital experiences and robust network observability.

The future of network management lies in comprehensive, intelligent observability, and Riverbed is at the forefront of this transformation. Download the EMA impact brief here for more information. 

]]>
More Observability, More Opportunity: Ten Steps to Banking Tech Success https://www.riverbed.com/blogs/steps-to-banking-tech-success/ Mon, 24 Jun 2024 12:26:20 +0000 https://www.riverbed.com/?p=81236 Since the mid-2000s, we’ve seen profound transformation in banking–from the establishment of FinTech and the emergence of crypto assets, blockchains and programmable Web3 over the last decade to the rise of artificial intelligence (AI) more recently. In fact, it’s estimated that AI could contribute up to $4 trillion to GDP growth over the next several years.

Across financial services, technology like machine learning is already being applied from the back office to the front, as it is used to replace human judgement in underwriting, assess risk in the payments industry, provide customers with conversational interfaces in mobile banking apps, and more.

The question is, how can banks keep up, get ahead, and capitalize on these exciting new advancements? Here are our top tips.

Meet your peers

Today, chains like Ethereum, Polygon and Solana have over 1.3 billion addresses–the equivalent of a bank account–and thousands of developers are working on open-source ways to extend financial software, right now as you read this.

Essentially, FinTechs and neobanks have taken on the biggest names in the business, the likes of Visa and MasterCard, and they’re winning. An estimated $4 billion in capital flows into the FinTech sector each month, and the decentralization of the finance industry is happening before our eyes. So, even the most established banks shouldn’t become complacent, and must embrace the opportunity to adapt.

Partner up

However, this seismic change can’t be tackled alone. Transformation begins with collaboration, and banks must find the right technology partners if they want to stay competitive.

Relationships work both ways, too, as tech giants lean on financial firms to offer their customers integrated banking solutions. For example, Apple Card was created in partnership with Goldman Sachs, and Amazon connects pools of capital to underwrite the commerce that takes place on its platform.

Explore new platforms

In addition to the right partner, you need the right platform: one that’ll empower you to improve your visibility into, and understanding of, what banking with you looks like for your customers. This is because digital experience is everything.

In Riverbed’s Global Digital Employee Experience (DEX) survey, 98% of FSI-leader respondents agreed DEX is important to remain competitive.

Keep your customers ‘appy’

So, let’s look at the user’s perspective for a moment. To them, this industry advancement is fantastic. They no longer need to distinguish between payments, banking, and investing, as these are all integrated, usually in one easy-to-use, quick-to-navigate app or website.

They can send money to friends and family through social media messaging apps. They can save through payment apps. They can invest in treasuries through savings apps. They can oversee all their accounts, budgets, incomings, outgoings and more through financial management apps. Remember, if they can’t do all this on your app–preferably facilitated by a friendly robo-advisor–they’ll switch to a competitor, where they can.

Delve into your data

What lies at the heart of creating actionable insights and delivering these relevant digital experiences? Data.

It’s never been more vital to collect, analyze and extract actionable insight from full-fidelity data across all your platforms. To have great AI outcomes, you need great data.

The key is understanding what ‘normal’ looks like to you, so you can avoid falling below this benchmark and know when you’re achieving above it.
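One simple, hypothetical way to turn "normal" into a benchmark is to keep a rolling baseline of a metric and flag readings that drift too far from it. The three-sigma threshold and sample latencies below are illustrative choices, not Riverbed's method:

```python
# Baseline-and-deviation sketch: a reading is anomalous if it sits more
# than k standard deviations from the recent mean. Illustrative only.
import statistics

def is_anomalous(history, value, k: float = 3.0) -> bool:
    """Flag `value` when it deviates more than k std-devs from the baseline."""
    if len(history) < 2:
        return False  # not enough data to define "normal" yet
    mean = statistics.fmean(history)
    stdev = statistics.stdev(history)
    return stdev > 0 and abs(value - mean) > k * stdev

latencies = [101, 99, 100, 102, 98, 100, 101, 99]  # ms, a steady baseline
print(is_anomalous(latencies, 100))  # False: within the normal band
print(is_anomalous(latencies, 250))  # True: well above the benchmark
```

Production platforms use far more sophisticated models, but the principle is the same: establish the benchmark first, then judge every new reading against it.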

Invest in intelligent monitoring

By partnering with an AI observability leader and implementing their platform to monitor your data (that’s all the previous steps sorted), you can begin to establish that ‘normal’, discover the experiences of your employees and customers, and make improvements – all based on fact, rather than your best guesswork.

Automate, automate, automate

The best platforms in the market will even have integrated tools, powered by AI and machine learning, that automate those improvements. By automatically performing tasks like creating logical shortcuts to cut complexity, addressing lower-level service desk issues, and fixing bugs, organizations can boost employee productivity and fine-tune apps. So, when new functionality does go live, it delivers superior experiences for everyone.

Review your resources and make smart choices

By gaining insight into what resources you have, which you actually use, where you need more, and how often they truly need updating or upgrading, you can make smarter choices and work more efficiently and cost-effectively.

For example, consider:

  • Right-sizing hardware – do you really need all those laptops, or would VDI spaces do the trick?
  • Software – how many of those licenses actually have users assigned to them?
  • Storage – how much data are you honestly using?
  • And maintenance – do all your devices need replacing at once, or are some still performing well?

These can all have a significant impact on your IT investment.

Monitor trends and patterns today and into the future

With all the data you are monitoring, AI can be used not only to see what’s going on today, but also to model what might happen tomorrow, and how you can stay on top of it.

Let’s say, for example, customers regularly visit your banking app on payday to spend, save and invest their cash. You can predict this pattern, allocating additional bandwidth towards the end of the month so network performance stays smooth.

Protect your budget and our planet

Investing in, and releasing, new tech can be costly. But in turn, these technologies can allow you to save money and cut your carbon footprint. By carrying out all the steps above–using data-driven insight to right-size, increase productivity and predict trends–you can improve ROI while using fewer resources.

Riverbed’s AI-powered observability solutions are transforming the digital financial landscape by providing seamless and secure user experiences across every channel–integrating with other technologies, and incorporating AI and machine learning, to provide proactive remediation, reduce costs and cut carbon impact. Get in touch with our expert team, so we can work together to bring your bank into the future.

]]>
Riverbed AppResponse Advanced TCP Metrics Has You Covered https://www.riverbed.com/blogs/riverbed-appresponse-advanced-tcp-metrics/ Thu, 13 Jun 2024 12:13:07 +0000 https://www.riverbed.com/?p=79264 Imagine peering into the intricate dance of data packets that power your critical applications. With Riverbed AppResponse, this vision becomes reality. Its advanced TCP metrics feature unlocks a wealth of insights, transforming network troubleshooting from a guessing game into a data-driven science. But how exactly does this translate to tangible business benefits?

Let’s delve into the key advantages and explore how AppResponse empowers you to solve real-world use cases.

Main advantages of Riverbed AppResponse TCP metrics

Riverbed AppResponse allows for faster data-driven analysis:

  • Granular Visibility: Gain unprecedented detail into the behavior of TCP connections. Metrics like retransmissions, round-trip times (RTTs), and window sizes paint a vivid picture of network performance, pinpointing bottlenecks and inefficiencies with laser precision.
  • Proactive Problem Solving: Don’t wait for user complaints to identify issues. AppResponse issues proactive alerts based on TCP metrics, empowering you to anticipate and address problems before they impact user experience or business continuity.
  • Application-Centric Insights: Correlate TCP metrics with specific applications, allowing you to isolate performance issues affecting individual applications rather than struggling with network-wide troubleshooting.
  • Faster Root Cause Analysis: The intuitive interface and insightful visualizations enable you to drill down to the root cause of performance problems quickly and efficiently, saving valuable time and resources.
  • Data-Driven Decision Making: Backed by concrete data on TCP behavior, you can make informed decisions about network optimizations, capacity planning, and resource allocation, ensuring optimal application performance and user experience.
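As a concrete (and entirely hypothetical) example of the kind of data-driven comparison these metrics enable, the sketch below computes per-application retransmission rates from invented connection counts; the field names are made up for illustration and are not AppResponse's schema:

```python
# Hypothetical per-application retransmission comparison. A high rate
# points at loss or congestion on that application's path.

def retransmission_rate(packets_sent: int, retransmits: int) -> float:
    """Retransmitted packets as a percentage of all packets sent."""
    if packets_sent == 0:
        return 0.0
    return retransmits / packets_sent * 100

connections = [
    {"app": "ERP",  "sent": 10_000, "retx": 12},   # invented sample data
    {"app": "VoIP", "sent": 8_000,  "retx": 640},
]
for c in connections:
    print(f'{c["app"]}: {retransmission_rate(c["sent"], c["retx"]):.2f}%')
# VoIP's 8.00% stands out as the connection worth drilling into first
```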

Solving business use cases with AppResponse

Business outcomes drive today’s investment decisions. With Riverbed AppResponse, you can:

  • Proactively address application performance degradation: Identify and address performance issues before they impact users, preventing lost productivity and revenue. Instead of tediously poring over a trace file looking for a TCP window size issue, you can quickly see where and when the breakdown in communications happened, speedily troubleshoot the application slowdown, and get your users back to productive work quickly and easily.
  • Optimize application performance for remote users: Identify and address network issues impacting geographically dispersed users, ensuring a seamless and productive experience regardless of location.
  • Troubleshoot VoIP and video conferencing issues: Gain deep insights into jitter, latency, and packet loss, ensuring smooth and reliable communication for critical collaboration tools.
  • Pinpoint bottlenecks in cloud migrations: Monitor TCP metrics to identify performance bottlenecks during cloud migrations, ensuring a smooth transition with minimal disruption.
  • Demonstrate ROI for network investments: Quantify the impact of network improvements on application performance with concrete data, justifying investments and demonstrating value to stakeholders.
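To make the TCP window problem mentioned above concrete: a receiver advertising a window of 0 bytes tells the sender to stop transmitting, and finding those stalls in a raw trace is exactly the tedium a tool automates. A simplified sketch, with invented sample data, of what that detection amounts to:

```python
# Zero-window stall detection sketch: scan a connection's advertised
# receive-window samples for stretches where the window sat at 0 bytes.
# Sample data below is invented for illustration.

def zero_window_stalls(samples):
    """samples: list of (timestamp_s, advertised_window_bytes) tuples.
    Returns (start, end) pairs where the window stayed at zero."""
    stalls, start = [], None
    for ts, win in samples:
        if win == 0 and start is None:
            start = ts                      # stall begins
        elif win > 0 and start is not None:
            stalls.append((start, ts))      # window reopened
            start = None
    if start is not None:
        stalls.append((start, samples[-1][0]))  # still stalled at end
    return stalls

samples = [(0.0, 65535), (1.0, 8192), (2.0, 0), (3.0, 0), (4.0, 16384)]
print(zero_window_stalls(samples))  # [(2.0, 4.0)]
```

A two-second stall like the one above is invisible in bandwidth graphs but immediately obvious once the window metric itself is tracked.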

No more sifting through terabytes of data manually 

Riverbed AppResponse makes it easy to diagnose Zero Window issues.

By harnessing the power of TCP metrics with Riverbed AppResponse, you gain an invaluable ally in optimizing network performance and ensuring a seamless user experience. This translates to increased productivity, improved business continuity, and a competitive edge in today’s digital landscape. So, take control of your network and empower your business with the power of data-driven insights.

]]>
Satellite, Remote, and Cloud: Our Connectivity Insights from GITEX Africa https://www.riverbed.com/blogs/connectivity-takeaways-from-gitex-africa/ Mon, 10 Jun 2024 12:52:00 +0000 https://www.riverbed.com/?p=80946 GITEX Africa took place in Marrakech from May 29th to 31st, bringing together technology giants, startups and investors from over 130 countries, plus an estimated 50,000 visitors. Hosts and attendees were united in their mission to integrate Africa into the global AI economy. Riverbed was thrilled to join one of its leading regional distributors, Starlink, to showcase our Riverbed Application Acceleration portfolio with great success.

Engaging in meaningful discussions about Riverbed’s role in Africa’s emerging tech landscape with attendees from South Africa, Nigeria, Ghana, and other parts of the continent, we identified three key takeaways from this extraordinary event.

1. Satellite connectivity is key

Africa’s technology infrastructure is rapidly developing, yet it has some way to go to match the speed and standards common in the Western world. As operating from cities isn’t always possible, realistic, or necessary, many African companies, particularly those in remote or rural areas, rely heavily on satellite connectivity.

But even large organizations with urban locations depend on the technology. For example, one major bank we spoke to has sites in Rabat, Casablanca, and Marrakech with an MPLS link, and still uses satellite connectivity for their remote branches to transfer data. After all, it’s secure and functional. However, it can be made stronger, faster, more reliable, and more cost-effective by adding Riverbed’s Application Acceleration solutions.

Latency can be high when data is transferred over a satellite connection across distant locations, and it rises even further as the amount of data and the number of connected devices increase. Organizations often boost bandwidth to try and remedy these issues, at high financial cost. Still, it’s a myth that more bandwidth guarantees higher throughput; in reality, latency caps maximum throughput, so extra bandwidth often goes unused.

Another way companies try to overcome their latency issues is by installing tools designed for this purpose. These can be unpredictable, slow and costly, leading to data becoming stale in transit and arriving later than it’s needed.

Enter Riverbed’s Application Acceleration portfolio, which empowers companies to quickly and efficiently distribute data across satellite connections. It’s scalable and flexible, and has been specifically made to help firms overcome network speed bumps like latency, congestion, and sub-optimal last-mile conditions. All securely, with market-leading encryption for complete peace of mind.

With Acceleration, businesses can make double-digit improvements in data transfer speed. Unsurprisingly, many GITEX Africa attendees were interested in employing the platform primarily for this purpose.

2. The cloud hasn’t arrived… yet

At global events, cloud technology is often a hot topic, but this was not the case at GITEX Africa, as it hasn’t reached much of the continent yet. However, with rapid technological advancements across Africa, the cloud is expected to arrive soon, and businesses must be ready to seize this opportunity. Riverbed can support companies with local optimization and is prepared to help migrate data and streamline traffic to, from, and within the cloud or any network, on- or off-premises.

Several attendees were keen on future-proofing their operations, recognizing that the cloud is a significant development likely to emerge soon.

3. Remote connectivity must be considered

One topic that did prove a common conversation starter was the use case of data center disaster recovery (DCDR). Organizations in Africa are consistently looking to improve latency and connectivity through their disaster recovery and backup. This is critical when you have data replicating between the data center and the disaster recovery site, with remote workers connecting to data centers through laptops.

At Riverbed, we offer two-step support in this area. First, we address data replication and flow between the company’s sites. Second, we install client accelerators on agents’ laptops, speeding up and securing systems no matter which network they’re connected to.

Thanks to GITEX Africa attendees for meeting with us at Starlink’s stand to learn more about Riverbed Unified Observability and Acceleration solutions.

In conclusion, big things are happening in the African market. Connectivity, the cloud, and remote working were all discussed in initial conversations, and we look forward to talking to visitors more in the coming weeks and months about their goals in these areas.

Whether you attended GITEX Africa or are working within the continent and interested in optimizing your operations with a market-leading Application Acceleration portfolio, get in touch. We would be delighted to support your ongoing journey.

]]>
Riverbed Unified Agent Simplifies Agent Management https://www.riverbed.com/blogs/riverbed-unified-agent-management/ Thu, 23 May 2024 12:35:38 +0000 https://www.riverbed.com/?p=80368 Is your Service Desk team suffering from agent fatigue due to the constant effort required to qualify, install, and manage your agent fleet? Are agent incompatibility issues causing collision challenges? Are your users’ digital experiences hindered by too many agents on their devices?

Consider this: Riverbed polled 40,074 Mac devices and found an average of 31 agents per device. Some of these agents take three minutes or more to boot, and they average just under one crash per month per device.

Does this sound familiar? What if you could cut through the alert fatigue, overcome staffing shortages, and reduce the number of software agents you need to manage?

A single Unified Agent solution

Riverbed Unified Agent is an essential element of the Riverbed Observability and Optimization Platform. It was built from day one to be a single agent solution for deploying and managing Riverbed agent-based modules, as well as select third-party offerings.

Unified Agent provides a combination of selectable services. Today, these services include:

  • Aternity EUE for end-user experience monitoring of device and application performance
  • Aternity Digital Assistant for polling user sentiment
  • NPM+ Core for monitoring TCP network and application performance (beta)

Additional modules will be coming soon.

Simply deploy Unified Agent once, then load the desired modules onto the devices you choose. This results in massive scalability and efficiency with less effort.

Riverbed Unified Agent simplifies agent deployment and management.

Unified Agent makes it easy to deploy, update, and manage agent modules. Deploy it once and get automatic updates of both the agent and its modules (or opt to update them manually; it’s your choice). Easily add or disable agent modules and see the deployment status of every module.

Additional benefits of using a single agent include:

  • A single installation process
  • A single point of management for enabling module features
  • One-time validation of agent security
  • Easy addition or disabling of agent modules
  • Built-in governance to protect customer assets
  • Automatic updates of modules
  • Support for third-party modules certified by Riverbed

In short, Unified Agent enhances IT efficiency, reduces costs, and improves user experience.

Selectable modules support full-stack observability

Unified Agent future-proofs your agent strategy by providing immediate access to a library of selectable modules. These modules are controlled through a single SaaS-based management console. They capture full-fidelity data across the spectrum and direct key metrics to either Riverbed Aternity or the new Riverbed NPM+ cloud network observability service.

To learn more about Riverbed Unified Agent, click here.

]]>
Mobile DEX: The Next Frontier for Front-Line Employee Experience https://www.riverbed.com/blogs/mobile-dex-for-front-line-employee-experience/ Wed, 08 May 2024 11:35:30 +0000 https://live-riverbed-new.pantheonsite.io/?p=80085 Mobile DEX is truly the next frontier in ensuring excellent digital experiences for your front-line employees. Whether it’s an Amazon delivery person, a nurse at the Mayo Clinic, a Hertz representative at the car rental return garage, or a field service agent at USAA, they rely on mobile devices to do their jobs. And supply chain, healthcare, car rental and insurance are hardly the only industries that provide their employees with mobile devices.

Gartner projects that businesses will spend $61.5B on mobile devices in 2024, up 1.4% from 2023, according to Gartner Market Data Book 1Q24. Gartner also projects that companies will purchase 155 million mobile devices, including phones and tablets, for their workforce in 2024.

The scary part about Mobile DEX

Digital Employee Experience Management, or DEX, is a hot topic right now. Every observability vendor touts their abilities to gather telemetry on device and application performance and usage, associate that with employee sentiment and organizational context, and then employ AI or ML techniques to proactively identify and resolve employee issues, while providing insights into better performance and changes in behavior.

The Gartner DEX Tool Model. Source: Gartner Market Guide for DEX Tools, October 2023

Here’s the thing: most of these vendors are focused on ensuring excellent DEX for employees who use laptops, PCs, or thin-client devices. They don’t have the ability to ensure a positive Mobile DEX. With 155 million mobile devices in use, that’s a pretty sizeable blind spot, especially since these mobile users are the point people–they’re the ones interacting directly with customers, patients, or citizens. So, if IT lacks visibility into their Mobile DEX, they really have no idea whether issues are affecting revenue, productivity, satisfaction, or even healthcare outcomes.

Mobile DEX for corporate-owned Android and iOS devices

For front-line workers and other employees who rely on mobile devices for their jobs, poor digital experience negatively affects productivity and customer service. IT is responsible for the digital experience of these employees, just as they are for employees who use laptops and PCs, but they lack visibility to proactively identify and resolve issues affecting the full range of mobile apps and devices used by the workforce.

Solving key challenges in assuring mobile digital experience

Riverbed’s Mobile DEX solution, Aternity Mobile, enables IT teams to proactively identify digital experience issues on Android and iOS mobile apps and devices and take prescriptive, targeted actions, improving employee productivity, customer service and business results. Aternity Mobile provides a comprehensive view of mobile app and device performance across Android and iOS for multiple vendors and enables IT to improve employee experience by engaging with them to get feedback and send contextual help.

Only Aternity, the digital experience solution of the Riverbed Platform for unified observability and optimization, provides a cohesive view of digital employee experience throughout their day, even as they switch between devices. Watch this video to see how it works:

Riverbed fills the Mobile DEX gap

Mobile DEX is a gap that most management solutions don’t address. Enterprise Mobility Management solutions don’t provide enough visibility into actual app and device performance. They can tell which mobile apps have been deployed to mobile devices, but they can’t see their performance. They know the device specifications, but they can’t do detailed device or network monitoring. Agent-based DEX solutions can’t instrument Android or iOS apps, so traditional DEX vendors can’t understand the performance of the majority of mobile devices being used–only Windows devices. And mobile SDKs can only monitor native mobile apps owned by the organization. Vendors of specialized mobile devices, like Zebra, have some mobile DEX capabilities, but only for their mobile devices, not others.

Riverbed Aternity is the only digital employee experience solution that provides a unified view of actual employee experience across every type of device.

Proactively identify and resolve mobile issues

Aternity Mobile gathers more than 150 metrics on mobile device, app and network performance that enable IT to proactively identify and resolve digital experience issues. Unlike other solutions, Aternity gathers this performance data across Android and iOS for multiple device vendors, including rugged mobile devices and free-standing mobile kiosks.

With Aternity Mobile, IT can identify problems with hardware and battery health, device configuration or network connection and proactively take action to improve employee productivity and customer service.

Detect and resolve device health issues affecting productivity and satisfaction.

Detect and resolve individual device issues

With Aternity Mobile, IT can drill down into a specific user’s device to identify and resolve issues affecting productivity. Aternity Mobile enables IT to analyze device health metrics such as storage, RAM, CPU and battery strength and drain rate. It also provides telemetry on signal strength and health of Wi-Fi and cellular networks used by the employee. Analyzing usage patterns of apps and websites enables IT to ensure compliance with corporate usage policies.

Track device & network health, Wi-Fi and cellular usage and signal strength, and mobile app usage for a full picture of performance.

Monitor mobile app performance to ensure employee productivity

Aternity Mobile also monitors usage and crashes for every corporate mobile app used by the workforce. Aternity provides detailed information such as the traffic generated by each app, the start and stop time of the app, and the domains the users were accessing with their mobile device, to ensure the mobile device is being used only for company-approved apps. This provides IT with deep insight into how mobile app performance and usage affects productivity and security.

Track mobile app usage across the enterprise to identify performance and security anomalies.

Improve employee engagement with bi-directional communication 

With Aternity Mobile, IT can send contextual and personalized information to employee mobile devices to gather feedback on service quality issues and to provide guidance on ways employees can improve their mobile app and device performance. Aternity Mobile enables IT to proactively inform users of outages, provide information upon app installation/first use, or based on user location, and send warnings when corporate policy usage limits are about to be reached.

Measure sentiment to improve employee engagement.

Learn more

Now with Mobile DEX, Aternity is the only digital employee experience solution that provides a unified view of actual employee experience, for every enterprise app running on any type of device – laptops, PCs, virtual and mobile–for Windows, macOS, Android, iOS and Chromebook. With Aternity, digital workplace leaders gain insights into the digital experience of their entire workforce, no matter where they work, to ensure employees are productive and engaged.

To learn more about Aternity, please visit: riverbed.com/aternity-mobile

Or better yet, watch the replay of our Global Webcast where we cover this, along with other new capabilities from our recent product launch.

]]>
Riverbed Unwraps New AI-Powered Platform, Expands Observability and Intelligence Solutions https://www.riverbed.com/blogs/riverbed-unwraps-new-ai-powered-platform/ Tue, 07 May 2024 07:00:56 +0000 https://www.riverbed.com/?p=79428 Today, Riverbed unveiled one of our biggest launches in Riverbed history! It includes new observability and intelligence products and a revolutionary AI-powered Observability and Acceleration platform designed to enhance IT operations and improve digital experiences. With a focus on providing actionable insights, the new platform addresses the challenges posed by the ever-increasing complexity of IT environments.

As we all know, ITOps teams face the daunting task of managing vast amounts of data and alerts without sufficient context or actionable insights. The Riverbed Platform empowers IT professionals by streamlining cross-domain data analysis and correlation to reduce the number of alerts your team must triage, and then automating diagnosis and remediation.

The Riverbed Platform

The innovative Riverbed Platform approach combines observability and acceleration modules with enabling technology that supports accurate data collection and analysis, and our integration library of pre-built expert remediations, automations, and application integrations that automate problem identification and resolution. More specifically:

  • Platform modules collect high-fidelity data across the entire IT stack, including digital experience, infrastructure, network, cloud, and application observability and application acceleration solutions. Key metrics from across Riverbed Observability modules are ingested into our powerful AI automation service to identify service-impacting events and automate diagnosis and remediation.
  • The enabling technology layer supports accurate data collection using our Edge Collector, Riverbed Unified Agent and the Riverbed Data Store, while capabilities like Topology Viewer, AI, Automation and dashboards ensure accurate analysis of this data.​
  • The integrations library gives our customers access to out-of-the-box third-party integrations. Pre-built application integrations make it easy to incorporate popular third-party software into automation workflows, including ITSM, business process, business productivity, and security solutions, while low-code graphical workflow processes let IT build or customize remediations, automations, and integrations for their specific IT environment.

With the ability to integrate data sources into a single view, the Riverbed Platform makes it easy to deliver precise answers that keep IT running. In short, the Riverbed Platform offers the means not only to cope with the evolving digital landscape but to thrive in it, relieving IT of the burden of manually collecting and analyzing data from across IT systems.

The Riverbed Platform consists of three tiers: data collection modules that feed the AIOps engine; enabling technology that assists in data collection and analysis; and the integration library that delivers built-in integrations and remediations.

Only Riverbed delivers full-stack observability  

Modern IT and cloud environments are highly complex and dynamic, creating a significant need for observability solutions that leverage AI automation. Similarly, organizations are facing more digital experience challenges as they increasingly rely on mobile devices for employees to do their jobs. According to Samsung, 61% of organizations provide “corporate owned” mobile devices to a portion of their workforce.

Today’s announcement significantly expands Riverbed’s observability capabilities to include monitoring end user experience on mobile devices; support for integrated overlay and underlay visibility for popular SD-WAN solutions; and new cloud monitoring capabilities.

The new observability capabilities include:

  • Riverbed Aternity Mobile makes employees more productive and improves business results by enabling IT teams to proactively identify digital experience issues on enterprise-provided mobile devices to enable prescriptive, targeted remediation actions. No other DEM vendor supports these capabilities.
Aternity Mobile identifies chronic issues with mobile devices and apps by tracking trends in mobile device health over time.
  • Riverbed NPM+ is a new cloud observability service that overcomes traditional network blind spots created by remote work, public cloud, and encrypted architectures. Riverbed NPM+ ensures holistic network observability by extending visibility to previously unmonitored network locations. By collecting decrypted data at every user and server endpoint (including Kubernetes environments), NPM+ fills the visibility gaps caused by encrypted tunnels in Zero Trust environments.
  • Riverbed Unified Agent is an innovative common agent strategy to streamline deployment, management, and updates of Riverbed’s agent-based offerings. Using selectable modules, it helps IT reduce agent fatigue and agent conflict. The only agent management solution to support endpoint monitoring for both end user experience and network observability, Unified Agent helps realize the value of Riverbed’s AI-ready telemetry, delivering intelligence, observability and a seamless experience anywhere, for anyone.
  • Riverbed NetProfiler adds support for VeloCloud SD-WAN and Cisco SD-WAN (formerly Viptela). It integrates overlay and underlay views for clearer troubleshooting of SD-WAN health and performance issues.

New Intelligence 

AI is the next big transformation in IT and a number one priority of IT leadership. The Riverbed Platform is well positioned to help customers build successful AI and automation strategies with easy deployment and implementation. Our AIOps service enables IT teams to apply AI across their observability tools and embed AI-driven automation to increase IT efficiencies, scale to increased workloads, and reduce the time and cost of problem identification and remediation.

The cornerstone of the Riverbed Platform is Riverbed IQ, a SaaS-delivered AI automation service. This 2.0 release enhances automation by enabling workflow processes to be scheduled or run on demand, while custom tags support more detailed prioritization. The new Integration Library lets customers easily incorporate third-party data into their troubleshooting workflows using ready-made sub-flows. Finally, direct integration of Riverbed AppResponse enriches IQ’s analytics and correspondingly improves automated diagnosis.

Also new is Intelligent Service Desk by Aternity, which increases service desk and call center efficiency and availability. Unlike other DEM solutions that offer a multitude of remediation scripts designed to address narrow use cases, Aternity sets itself apart with an AI-driven intelligent service desk that troubleshoots and resolves recurring device issues before they are raised as tickets. Using customizable workflow processes, Aternity replicates advanced investigations by correlating end-user impact with real-time granular performance data to identify incident root cause. Aternity dynamically mimics expert decision-making by integrating user sentiment into its remediation workflows using composable actions. Its flexible logic employs interactive feedback at optimal engagement levels to resolve simple and complex issues. For unresolved issues, a ticket is routed to the right level with the necessary context for swift resolution.

View this Intelligent Service Desk video to learn more:

For more information, please watch the launch webcast with Riverbed CEO Dave Donatelli and CTO Richard Tworek!

]]>
Is Your Organization Ready for Teams 2.0? https://www.riverbed.com/blogs/is-your-org-ready-for-teams/ Wed, 17 Apr 2024 12:17:30 +0000 https://www.riverbed.com/?p=79073 With over one million organizations worldwide using Microsoft Teams, it stands out as one of the most popular messaging platforms, enabling real-time collaboration and communication among employees.

Given the popularity of Teams, Microsoft has significantly enhanced it with the rollout of Teams 2.0, now generally available. Teams 2.0 promises to be faster, more efficient, and more user-friendly than its predecessor. As organizations migrate to the latest version of Teams, the question arises: how do you ensure a successful rollout across potentially thousands of endpoints while maintaining service quality? The answer lies in AI-driven automation.

Riverbed is a leader in unified observability, and our Aternity solution provides intelligent automation and AI-driven insights into digital experiences for employees and customers across all endpoints and devices. The introduction of Aternity’s Intelligent Service Desk has been a gamechanger, enabling organizations to dramatically reduce service desk tickets and enhance employee satisfaction. In fact, these capabilities played a crucial role in Riverbed’s successful migration from Slack to Teams 2.0.

Below, let us explore our own use case. 

Embracing digital transformation with real world impact

Like many organizations today, Riverbed’s IT team faces increasing incident volumes and complexity. Coupled with a dispersed, multi-geographical, hybrid workforce, it becomes challenging for IT to schedule time to meet with employees.

Riverbed has adopted the strategic approach of a “shift-left service desk,” focusing on real people, real devices, and real-life scenarios. This strategy ensures the best employee experience during any digital transformation by empowering end users and allowing IT to focus on higher support tiers while eliminating mundane tasks.

Riverbed made the business decision to migrate to Microsoft Teams 2.0 based on its enhanced speed, efficiency, and user-friendliness. With a more streamlined migration experience, Riverbed aimed to make the user experience equally seamless. 

Riverbed Aternity Intelligent Service Desk provides AI-enabled detection, troubleshooting, and logic-driven remediation without human intervention. This puts employees in the driver’s seat by allowing them to run their own remediation scripts. With this core capability, Riverbed was able to automatically detect any issues employees might face while undergoing the migration.

Simplifying IT intervention with a step-by-step approach

Riverbed’s IT team has seen tremendous value in automating the discovery of issues, thereby preventing resource bottlenecks. The process is straightforward:

  1. Identify which employees are experiencing issues with Teams 2.0.
  2. Generate a pop-up alert for these employees.
  3. Allow employees to execute a runbook script.

The most common issue with the migration to Teams 2.0 was the need to clear the Teams cache, which could grow to several gigabytes. By identifying who was having an issue, Riverbed could automatically send a pop-up suggesting clearing the cache. If the user selected “yes,” the remediation would run to clear the cache and restart Teams—all within 20 seconds. This procedure was fine-tuned not to run if the employee was in an active Teams meeting, thereby enhancing the overall experience and minimizing disruptions.
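To make the idea concrete, a cache-clearing remediation step like the one described above can be sketched in a few lines of Python. This is a hypothetical sketch, not Aternity’s actual runbook: the package path below is an assumption for new Teams on Windows, and a production script would also check for an active meeting before running anything.

```python
import os
import shutil
from pathlib import Path

# Assumed cache location for new Teams on Windows; verify this path
# for your Teams build before using anything like this in production.
TEAMS_CACHE = Path(os.environ.get("LOCALAPPDATA", "")) / "Packages" / \
    "MSTeams_8wekyb3d8bbwe" / "LocalCache"

def clear_cache(cache_dir: Path) -> int:
    """Delete everything under cache_dir and return the bytes reclaimed."""
    if not cache_dir.exists():
        return 0
    reclaimed = sum(f.stat().st_size for f in cache_dir.rglob("*") if f.is_file())
    for child in cache_dir.iterdir():
        if child.is_dir():
            shutil.rmtree(child, ignore_errors=True)
        else:
            child.unlink()
    return reclaimed
```

The pop-up prompt, the meeting-aware triggering, and the Teams restart are handled by the remediation platform; the sketch shows only the cache-clearing step itself.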

Riverbed could automatically send a pop-up to users to suggest clearing the cache and, if Yes, remediation would run to clear the cache and restart Teams.

To date, 63% of Riverbed employees who encountered an issue with Teams 2.0 successfully executed the remediation, leading to fewer IT tickets and less troubleshooting required by the user.

Microsoft provides a variety of management tools for Teams, including Call Analytics, Call Quality Dashboard, and more. With its integration to Teams, Aternity provides similar capabilities, but also enables IT to resolve other troubleshooting issues, such as those caused by anti-virus software, rogue processes, or by the performance of peripherals such as headsets. In addition, capabilities within Aternity—like automated remediation, employee sentiment, and DXI—apply equally to Teams as they do for any other business critical application. 

Aternity prioritizes Teams like a business-critical application—because it is!

This real-life example underscores the value that Riverbed Aternity brings to customers using the Microsoft 365 portfolio. The Aternity agent proactively identifies user experience and performance issues, reducing costs and improving service by empowering the user to decide when to run the remediation.

Common use cases include:

  • Service Desk: Measure actual employee experience—do users experience issues across all apps, or just with Teams?
  • End User Services: Reduce problem MTTR by correlating Teams call quality with the performance of the underlying device.
  • Teams Owner: Gain cross-company insights—compare your Teams performance to the market to identify areas for improvement.
  • IT Executives: Focus on continuous improvement—determine where to invest for the greatest impact by tailoring targets against market benchmarks.

Riverbed Aternity enhances Microsoft’s monitoring capabilities by providing actual employee experience for every application in the portfolio. Aternity monitors end user experience for every enterprise application on any physical, virtual, or mobile device.

Download the Aternity solution brief to learn more about how Digital Experience Management can rapidly troubleshoot and validate changes in Microsoft Office, Windows, and Teams.

]]>
Our Top Takeaways from the Official UK Government Global Security Event https://www.riverbed.com/blogs/takeaways-from-the-official-uk-government-global-security-event/ Wed, 03 Apr 2024 12:30:50 +0000 https://www.riverbed.com/?p=78966 Last month, Riverbed attended Security & Policing, the official Government global security event, hosted by the Home Office.

The Riverbed team had the opportunity to engage with senior decision-makers and policy developers across the UK Government, plus many of those who work directly on the front line. This included experts across UK Defence & Security Exports, Border Force, the Joint Maritime Security Centre, Digital, Data & Technology and related departments, as well as over 350 other suppliers.

The theme of the event was “Collaboration. Innovation. Resilience.” Across the three days, stands were visited, talks were heard, insights were shared, conversations were had, and immersive live demonstrations were eagerly experienced. Here are our most significant takeaways from it all.

There are many ways to work in tech

At Riverbed, we are proud to provide industry-leading IT solutions that empower exceptional digital experiences. But the kind of technology we offer is totally different to some of the advancements we saw on other exhibitors’ stands. It was exciting and eye-opening to explore the innovations from companies that craft weapons, convert police cars, design tactical workwear, and more.

Yet today, each of these organisations is supported by their IT infrastructure; without robust and reliable back-end systems, employees can’t effectively do their jobs, risks can’t be mitigated, and research and development can’t happen at the pace the industry demands. It felt inspiring to speak to representatives from these kinds of organisations who we could, or do, support–knowing we’re contributing to the production of tools and equipment that keep our country safe.

Collaboration is on the rise

In the past, we’ve noticed many technology companies working in closed-off ways, wanting to keep customers in their ecosystem without losing them to competitors. But this attitude seems to be shifting, and it was refreshing to see so many major vendors collaborating to create end-to-end solutions that address all customers’ needs.

We’re proud to be included in this number, as we continuously become more and more open, incorporating third-party data into our platforms to generate the best outcomes for both our customers and peers. We were also delighted that the Government and Home Office were keen to chat with us, as they were looking for ideas and inspiration to improve the way they work.

Response times are of critical importance

When working alongside our customers, it’s often the engineering teams we liaise with–the people who run things, not necessarily those who use things. The event gave us an ideal opportunity to talk to the end users working on the front line, who actually utilise the systems we’re enlisted to transform.

It was fascinating to hear about their challenges and goals, and how their perceptions and experiences can differ from those of the engineering teams. Primarily, responsiveness is top of mind for this group–they don’t think about how smart or snazzy a tool is, as long as it allows them to enter a situation, search for what or who they need, and send an update in an instant. After all, in an emergency situation or warzone, every second counts.

Stand visitors were pleased to hear that Riverbed can help staff at all levels and in all roles. We’ve helped many customers in this sector reduce costs, which can then be rerouted to capacity and technology improvements; reduce latency to accelerate system response times; enhance end-user experiences; and mitigate the need for ‘war rooms’ to resolve issues. Moreover, we’ve empowered other customers to significantly improve their response times, enabling employees to keep colleagues in the loop more quickly and reliably than ever.

While AI can be seen as a threat, it’s also a force for good

Artificial intelligence was, unsurprisingly, one of the event’s hottest discussion points. It’s clear that the technology brings about unprecedented threat in areas such as terrorism, fraud, and economic- and cyber-crime. The Alan Turing Institute gave a particularly interesting talk entitled ‘The Rise of Deepfakes’, exploring how easily and convincingly images and video can now be manipulated, with potentially dangerous consequences.

However, the tech’s positive impact can’t be ignored, as was discussed in a thought-leadership panel our team attended: ‘Applications of Artificial Intelligence in National Security and Policing’. At Riverbed, we use AI and machine learning to automatically find and fix issues in organisations’ networks, often before the end-user even knows there’s been a problem. This enables security and defence companies to break free from blue screens, become more productive, respond to situations faster, and focus on what really matters: protecting people and saving lives.

Seeing high-energy stunts will always be awesome

There’s no doubt that the event provided a great platform for us to share our solutions with those who really need them, and to learn even more about a sector in which we’re deeply embedded.

Still, some of the most awe-striking moments were the live demonstrations: the Hampshire & Isle of Wight Fire and Rescue Service showcased the latest equipment and skills for fighting the most severe fires and performing rescues from complex car crashes, and the ADS Special Interest Groups presented interactive Counter Threat demonstrations of the UK’s premier capabilities in C-EO, CBRN drones, and counter-drones.

If you work in the Public Sector and find yourself constantly fighting fires across your IT infrastructure, get in touch. We’d be thrilled to move you to a place where you can focus on innovating rather than dealing with your ongoing IT challenges.

]]>
Three Signs Your Client’s SSL/TLS Traffic Needs Optimization https://www.riverbed.com/blogs/three-signs-your-clients-ssl-tls-traffic-needs-optimization/ Wed, 27 Mar 2024 23:03:17 +0000 https://www.riverbed.com/?p=78898 A recent Forrester report found that 60% of technology and business decision-makers planned to prioritize improving their digital employee experiences (DEX) to attract and retain talent and promote employee engagement. This statistic aligns with Riverbed’s 2023 Global DEX Survey research, which found that 91% of respondents reported plans to provide more advanced digital experiences in the next five years, especially to meet the demands of younger, “digital native” employees. This brings us to a crucial aspect of providing a great digital experience: access to applications.

With today’s distributed, hybrid workforces, ensuring reliable access to applications is business-critical for most organizations. When application performance is poor, it can negatively impact employee productivity, leading to revenue loss and a decline in customer satisfaction and brand reputation. Few, however, consider that their unoptimized wide area network (WAN) traffic secured by Secure Socket Layer (SSL) or Transport Layer Security (TLS) protocols may be to blame for poor application performance and other issues in their network.

WAN acceleration—the technologies and methods used to improve the efficiency of data transfers between centralized data centers and remote locations—is the same regardless of whether the traffic uses SSL/TLS protocols for encryption. Many organizations, however, miss the classic signs that their secured WAN traffic needs optimization. To help start conversations with organizations about WAN acceleration for their SSL/TLS-encrypted traffic, we’ve outlined three signs that their traffic needs optimization.

Sign #1: High network latency

Network latency is the time it takes a data packet to travel from one place in the network to another. Ideally, this would happen at the speed of light, but in the real world, cabling, network equipment traversal, and signal loss hamper transfer speed.

For example, an increase in the number of “hops,” or devices that a packet must travel through, and poor application server performance can cause high latency.

Here are a few signs of high latency:

  • Websites load slowly or not at all.
  • Applications freeze or stall.
  • Interruptions in video and audio streaming.

Ideally, your clients should track latency metrics for all their applications across the network. However, this can be difficult: in 2022, organizations had an average of 130 software-as-a-service (SaaS) applications in use, and for most teams, manually tracking latency across 130 applications isn’t possible.

If left unchecked, high latency can hamper employee productivity, making it difficult to collaborate across departments and teams and ultimately impacting job satisfaction and company revenue.
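If full tracking isn’t feasible, even a small script can sample latency on a schedule and surface the worst offenders. The sketch below is a rough illustration, not a monitoring recommendation: it times a TCP handshake per application endpoint, and the 200 ms threshold is an assumption you would tune per application.

```python
import statistics
import time
from socket import create_connection

def measure_latency_ms(host, port=443, timeout=2.0):
    """Time one TCP handshake to an application endpoint, in milliseconds."""
    start = time.perf_counter()
    try:
        with create_connection((host, port), timeout=timeout):
            return (time.perf_counter() - start) * 1000.0
    except OSError:
        return None  # record unreachable hosts as failed samples

def flag_high_latency(samples_by_app, threshold_ms=200.0):
    """Return {app: median_ms} for apps whose median latency exceeds the threshold."""
    flagged = {}
    for app, samples in samples_by_app.items():
        good = [s for s in samples if s is not None]
        if good:
            med = statistics.median(good)
            if med > threshold_ms:
                flagged[app] = med
    return flagged

# In practice, samples would be collected on a schedule, e.g.
# samples_by_app[app].append(measure_latency_ms(host))
```

Note that this measures only connection setup, not full application response time, but run regularly across a large SaaS portfolio it at least highlights which applications deserve a closer look.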

Sign #2: Network Congestion

Network congestion occurs when the transmitted data exceeds the network’s processing capacity. It’s similar to cars in traffic. When the amount of traffic, or data, doesn’t have enough lanes for travel, or transmission, it backs up.

Congestion can slow data transfer and cause packet loss, and there can be many causes behind network congestion. Common causes include overactive devices and applications that use too much bandwidth, poor network configuration, and outdated devices and switches.

If the congestion continues unchecked, it can result in downtime when the network and its applications are unavailable to users. Downtime is expensive, with estimates landing at approximately $5,600 per minute.

Besides unplanned downtime, signs of network congestion include:

  • Increased error rates across the network.
  • Slow data transfer speeds.
  • Frequent application crashes.

Sign #3: Growing data usage and needs

Organizations are using more data. From artificial intelligence (AI) tools to growing data usage in analytics, monitoring, and cybersecurity, it is inescapable. In fact, 46% of organizations in a worldwide survey said they use big data analytics in research.

Earlier, we compared network traffic to traffic we experience on the road: Imagine what would happen if you dramatically increased the number of cars on your four-lane highway. Most companies are investing in big data—larger, more complex data sources—without considering how the increase in data will affect their network traffic.

Getting your clients to consider WAN optimization before, or early in, the process of growing data usage can equip their networks to handle the increase in traffic without causing issues.

How to optimize WAN traffic

Many organizations see the signs and still struggle to optimize their SSL/TLS encrypted traffic. Optimizing WAN traffic encrypted with these protocols requires network teams to decrypt, optimize, and re-encrypt the traffic to ensure it remains secure. The process requires them to collect and manage certificates and private keys. With companies managing hundreds of applications, each with specific certificates, private keys, and expiration dates, the process can be challenging. Because of the challenge, many organizations won’t attempt optimization, or they’ll begin the process but not complete it. This is where you can help.

Riverbed offers WAN acceleration for SSL/TLS encrypted traffic that simplifies the process. You can recommend multiple deployment options that streamline certificate management with a solution that acts as a trusted man-in-the-middle to optimize traffic securely. The results? Organizations can see up to a 99% reduction in bandwidth consumption and a 10-times acceleration of SSL/TLS encrypted traffic.

Want to learn more about Riverbed’s acceleration solution with SSL/TLS optimization? Download the white paper.

]]>
Accelerating Pharma Innovation: How Unified Observability Powers IT Efficiency https://www.riverbed.com/blogs/unified-observability-powers-it-efficiency-in-pharmaceuticals/ Wed, 27 Mar 2024 12:35:51 +0000 https://www.riverbed.com/?p=78863 In the fast-paced world of pharmaceuticals, every second counts. When it comes to bringing lifesaving drugs to market, scientists and research technicians must feel confident that the tools they use daily are helping them accelerate (and not slow down) progress.

From clinical trials to delivering breakthrough medicines, R&D is the heartbeat of the industry. However, technology challenges can affect productivity, reputation, and ultimately, patient outcomes.

To drive long-term sustainable growth, leading pharmaceutical companies are diving into unified observability: bringing together data, insights, and actions to enable IT teams to improve the digital experience for scientists and technicians. Because time saved is time invested.

Create more time for R&D 

Optimizing the digital experience is imperative for scientific research programs. If lab technicians can’t access their laboratory information management system (LIMS) or bioinformatics tools when they need to, research activities are delayed, increasing R&D downtime and impacting operations.

With unified observability, companies improve their application log-in times by as much as 58% and provide a digital experience that gives time back to scientists, while IT teams can ensure that every second saved is a second invested in pushing the boundaries of scientific discovery.

Unlock revenue potential from business-critical IT systems 

By strategically optimizing their IT environment, pharmaceutical companies can minimize errors, improve application performance, and unlock the full potential of their systems. This optimization leads to streamlined operations and increased productivity across the business.

Ultimately, by allowing staff to do what they do best, pharmaceutical companies accelerate the delivery of drugs into the hands of patients, which translates into tangible revenue growth.

Allow your IT teams to focus on high-value projects 

Investing in automated, AI-driven remediation solutions becomes paramount in the shift from transactional to transformational IT. Technical teams can move past doing repetitive tasks to focusing on value-added projects that keep high-performing talent happy and productive.

With up to 70% reduction in MTTR for service desk professionals, unified observability allows pharmaceutical companies to focus on the adoption of new technologies, research, and development processes to improve business and advance growth.

Build brand trust by reducing risk and focusing on quality and sustainability 

There is a need for pharma teams to drive greater focus on consumer experience and communication to avoid loss of influence and authority. Protecting the business against risks through data integrity and a resilient technology foundation is not an option–it’s a necessity.

In addition, increased focus on environmental sustainability and supply chain optimization has become an integral part of all leading pharmaceutical companies. Unified observability allows IT teams to become part of the solution by monitoring CO2 emissions from IT assets, implementing smart device refreshes to reduce unnecessary waste, and lowering energy consumption from egress traffic in cloud-based systems.

Unlock the future of Pharma 

In the quest for innovation and efficiency, Riverbed Unified Observability is fueling digital transformation for pharmaceutical companies. It is not just about improving IT infrastructure; it’s about empowering scientists, technicians, and every individual contributing to R&D. Time is money, and with Riverbed Unified Observability, every moment saved is an investment in the future of pharmaceutical excellence. To explore how Riverbed can transform your IT environment and elevate your business, learn more here. 

]]>
Making the Most of Automation: How To Maximize Investment and Accelerate MTTR https://www.riverbed.com/blogs/making-the-most-of-automation/ Wed, 20 Mar 2024 12:32:25 +0000 https://www.riverbed.com/?p=78436 Artificial intelligence (AI) has become one of the biggest buzzwords of the last decade, as businesses spanning every sector are seeking ways to innovate and transform their operations. The crucial question they must ask is: what outcomes do businesses want to achieve by adopting AI? To fully harness its potential, AI must perform meaningful actions that deliver tangible benefits.

One significant application is automation, which IBM defines as “the use of technology to perform tasks where human input is minimized.” This is particularly appealing to IT teams and the C-suite for several reasons.

Why automate?

Automation enhances IT delivery by automating manual processes that traditionally required human input. It also enables organizations to gain observability over their infrastructure, applications, and devices, facilitating improved content processing and management; workflow streamlining; data-driven decision-making; cost optimization; network performance enhancement; and proactive incident management.

Perhaps the most desired benefit is this last one: self-healing. As IT leaders take a more prominent role in the boardroom and face increasing scrutiny around digital transformation efforts–efforts that profoundly impact how business value is created and delivered–self-healing brings countless advantages.

It can reduce unplanned outages and eliminate performance issues from user journeys, elevating the customer experience. It can help businesses overcome forecasting challenges, optimizing resources to cut software and infrastructure overcapacity. And it can drive the development of higher-quality applications, reducing testing needs so organizations can bring products to market faster.

However, the path to self-healing isn’t as simple as installing an automation solution and waiting for something to happen, as many presume–only to be disappointed when it doesn’t work as planned. To reach an optimum level of maturity and realize their goals, companies should ensure several key elements of the automation process are secured and understood.

It starts and ends with data

Automation is not only a result of AI but also a driver. Any intelligent tech is only as smart as the data you feed it, and to maximize its potential, this data must have clarity, integrity, and fidelity. Plus, it must be fully extracted across all your domains–from your infrastructure and networks to your apps and logs–to build a complete picture and accurate machine-learning model. After all, correlation based on incorrect or fragmented data is nothing more than coincidence.

Take, for example, the financial management apps currently growing in popularity. These connect to your mobile banking accounts and give insight into what you’re spending, where and when. If, let’s say, you have a current account, savings account, and debit and credit cards, it’s critical the app pulls in data from each of these to provide the whole reality of your expenses and lifestyle. Then, you can see what can be improved upon and which spending habits need to change.

This may seem unwelcome news, as so many organizations battle legacy systems that can’t communicate with each other and disparate data stored in silos. Ironically, the ultimate outcome of automation–observability–needs to be present to begin with to unify this fragmented data and get outdated tech talking. Luckily, there are solutions on the market, like Riverbed’s unified observability portfolio, which can do exactly this while unlocking all the advantages of automation.

Building a baseline

The first step on any business’s journey to self-healing is harvesting quality data, and plenty of it. Next, it’s imperative to set a baseline to understand and record what’s usual across systems and devices, so anomalies can be identified and addressed. It’s akin to going to the doctor for a blood test; without keeping accurate and timely medical records, all they could tell you was whether they’d found anything of note in that one particular sample, regardless of how well or unwell you were feeling. By examining your record and taking samples over time, healthcare professionals can detect deviations in data points, identifying if anything is amiss based on your unique baseline. They seek out one-off or recurring problems and prescribe medication to get you back to full health.

A baseline tends to be established based on mathematical machine-learning formulas, which aren’t one-size-fits-all. Application data, network data, end-user data and infrastructure data are all different, and should be treated and tracked as such. That’s why, for over 20 years, Riverbed has carried out packet analysis that gives us the flexibility to use the best formulas and data science in the best possible places, driving the incident correlations businesses need.
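To make the idea concrete, the simplest possible baseline is a mean and standard deviation learned per metric, with new samples flagged when they stray several deviations out. This minimal sketch assumes a three-sigma rule and illustrative login-time samples; as noted above, real baselines apply different formulas to application, network, end-user, and infrastructure data.

```python
import statistics

def build_baseline(history):
    """Learn what's 'usual' for one metric from historical samples."""
    return statistics.mean(history), statistics.stdev(history)

def is_anomaly(sample, baseline, n_sigma=3.0):
    """Flag samples more than n_sigma standard deviations from the learned mean."""
    mean, stdev = baseline
    return abs(sample - mean) > n_sigma * stdev

# Example: login times (seconds) for one application
login_baseline = build_baseline([0.95, 1.00, 1.05, 1.00, 1.00])
```

A sudden 1.3-second login would be flagged against this baseline, while a 1.05-second login would not; the same sample could be perfectly normal for a different application with a different baseline.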

Next stop: self-healing

Once an effective baseline is established, automation can finally start. But to automate the healing process, businesses must first automate the detection process, introducing scripts that alert to incidents and where they’re coming from. This helps avoid false positives and human errors while allowing for the correlation of individual issues, identifying bigger problems and their roots to accelerate mean time to detect (MTTD) or mean time to know (MTTK). Then, finally, the self-healing process can begin.

After streamlining data collection, baseline setting, and incident detection and correlation, most of the hard work is done. Now, automation scripts can be used to speed up mean time to fix (MTTF) or mean time to repair (MTTR)–addressing situations before users complain and driving ongoing optimizations.

In summary, to receive the highest quality output from any automation system, it’s vital that businesses take the time and make the investment to give AI models the greatest possible input. Businesses must prioritize comprehensive problem management and continuous system feedback to achieve reliable self-healing capabilities.

Do you have the robust data and effective machine-learning mechanism you need to achieve foolproof self-healing? If not, Riverbed can help. Get in touch with us today to learn how.

]]>
Free Up Your Clinicians To Focus on What Really Matters: The Patient Experience https://www.riverbed.com/blogs/unified-observability-for-clinician-and-patient-experience/ Fri, 15 Mar 2024 12:54:52 +0000 https://www.riverbed.com/?p=78431 Now more than ever, effective and efficient healthcare IT is critical for clinicians and patients alike, as tech is increasingly relied upon to deliver life-changing outcomes. Yet, organisations worldwide face a myriad of challenges, including:

  • Maintaining seamless IT infrastructure performance.
  • Ensuring optimal utilisation of critical applications and medical devices.
  • Facilitating a productive environment for healthcare professionals.

AIOps is reshaping cost efficiency within healthcare organisations, revolutionising the way healthcare is delivered, and setting new standards for operational excellence. All the while, it supports a spend-to-save initiative in the healthcare sector, allowing organisations to fully utilise their investments and achieve more with less.

Here’s how these tools can help healthcare organisations overcome the aforementioned obstacles by providing real-time insights into the performance of critical systems, applications, and devices–enabling leaders to make informed decisions that drive cost efficiency, elevate productivity, and enhance experiences.

Maximise hardware and asset refresh budgets

Spending funds efficiently and with care can maximise impact on Trusts. Kent Community Health NHS Foundation Trust (KCHFT) is one of England’s largest NHS community health providers, serving a population of about 1.4 million and overseeing more than 5,000 staff and hundreds of applications and hardware assets. KCHFT prides itself on being responsive to its patients’ needs and sought to gain end-to-end visibility into its application and hardware performance. To achieve this, KCHFT’s IT team deployed unified observability, offering up-to-the-minute insight into device usage and overall health.

This approach allows the Trust to replace, refurbish, recycle or remove hardware based on its remaining life rather than physical age. Darren Spinks, Head of IT Operations at Kent Community Health NHS Foundation Trust, shared: “Riverbed showed us we wouldn’t need to replace 42% of our 1,784 devices aged five years or older. This has meant that we’ve already returned our investment.” Additionally, these platforms pinpoint the exact usage of licences, allowing Trusts to establish which are truly required; two NHS Trusts have demonstrated they could save between £130k and £230k on their software licence costs.

Reduce service desk tickets and free up time to improve patient care

Within the NHS, there’s a growing need to “shift left” in the service desk using intelligent automation and self-healing. Unified observability meets this need by combining full-fidelity cross-domain data, machine learning, correlation and more for faster problem-solving. As a result, Trusts can accelerate incident response, minimise security risks, populate trouble tickets with supporting data, guide remediation of desktop issues, and automate recovery actions.

It initially took time for KCHFT to determine the root cause of performance problems, so it saw the benefit of this first-hand. Spinks told us: “Riverbed’s solution alerted us hours before our service desk received a call regarding the incident. Now, we don’t need to wait for a user to tell us there’s a problem. It informs us of the issue, its impact, and the implications regarding time and cost.”

Another UK customer, The Princess Alexandra Hospital NHS Trust (PAHT), serves a local population of around 350,000 people and provides a full range of general acute, outpatient, and diagnostic services. It’s gone from receiving 629 service desk tickets outside its SLA to consistent single figures using auto-remediation, which it employs around 10,000 times a month. This ensures that all staff members feel supported and more productive, freeing up their time to focus on what really matters: patient care.

Improve experiences for clinicians and patients

In today’s hybrid world, every interaction matters, and healthcare organisations must measure and manage digital experiences (DEX) for both staff and the people they serve–then strive to make them more compelling. Riverbed identifies and optimises DEX hot spots, creates efficient pathways to streamline workflows for clinicians, practitioners, nurses, and consultants, and empowers all employees to deliver the quality of service and responsiveness patients expect and deserve.

PAHT’s ICT team found that 947 hours were lost each quarter due to unresponsive blue screens, negatively impacting patient care. Riverbed’s unified observability platform has reduced this to 211 hours. Jeffrey Wood, Deputy Director of ICT at the Trust, explained: “After deploying Riverbed’s unified observability solutions, we reduced the number of application crashes (by hang) by nearly 50%. We’ve saved almost 700 hours, around 28 days per month, which we’re effectively giving back to clinicians.”

Furthermore, many applications used by staff were legacy-based, unstable, and challenging to maintain due to the age of hospital devices–65% of which were seven years old or older, and 85% of which were PC desktops rather than more mobile laptops. Riverbed’s Aternity solution provided full visibility across the end-user experience, uncovering the impact of outdated devices on their infrastructure. It also empowered them to proactively fix issues before clinicians even notice, boosting work rate by 25%, staff satisfaction, and patient outcomes.

Stay sustainable

Unified observability solutions can help organisations reduce their carbon footprint while cutting costs and meeting long-term sustainability goals like achieving net zero. In March 2023, PAHT required the carbon equivalent of 4,000 trees to offset idle user devices. With our data and automation tools, they’ve reduced this to around 733 trees, realising £200k in-year savings on electricity bills in the process. Altogether, these could deliver more than £1.2 million in savings to the board.

Overall, the ICT team at PAHT will see a £3 million saving over a three-to-five-year period thanks to Riverbed’s unified observability, while improving outcomes–now spending just under 40% of their time fighting fires, down from 85-90%. On average, our NHS customers experience a 58% acceleration in application login times, a 40% average reduction in IT spend on hardware refresh, and a 70% reduction in Mean Time to Resolution (MTTR).

Trust us to help you achieve similarly incredible results–driving in-year savings, elevating productivity, and enhancing experiences for everyone. Reach out today to learn how. Together, we can unleash the full potential of your infrastructure and unlock the power of your investments, from server room to operating room to boardroom.

]]>
Three Signs Your SSL/TLS Traffic Needs Optimization https://www.riverbed.com/blogs/signs-your-ssl-tls-traffic-needs-optimization/ Wed, 06 Mar 2024 13:45:27 +0000 https://www.riverbed.com/?p=76503 A recent Forrester report found that 60% of technology and business decision-makers planned to prioritize improving their digital employee experiences (DEX) to attract and retain talent and promote employee engagement. This statistic aligns with Riverbed’s 2023 Global DEX Survey research, which found that 91% of respondents reported plans to provide more advanced digital experiences in the next five years, especially to meet the demands of younger, “digital native” employees. This brings us to a crucial aspect of providing a great digital experience: access to applications.

With today’s distributed, hybrid workforces, ensuring reliable access to applications is business-critical for most organizations. When application performance is poor, it can negatively impact employee productivity, leading to revenue loss and a decline in customer satisfaction and brand reputation. Few, however, consider that their unoptimized wide area network (WAN) traffic secured by Secure Sockets Layer (SSL) or Transport Layer Security (TLS) protocols may be to blame for poor application performance and other issues in their network.

WAN acceleration—the technologies and methods used to improve the efficiency of data transfers between centralized data centers and remote locations—is the same regardless of whether the traffic uses SSL/TLS protocols for encryption. Many organizations, however, miss the classic signs that their secured WAN traffic needs optimization. To help organizations start the process of WAN acceleration for their SSL/TLS-encrypted traffic, we’ve gathered three signs that your traffic needs optimization.

Sign #1: High network latency

Network latency is the time it takes a data packet to travel from one place in the network to another. Ideally, this would happen at the speed of light, but in the real world, cabling, network equipment traversal, and signal loss hamper transfer speed.

For example, an increase in the number of “hops,” or devices that a packet must travel through, and poor application server performance can cause high latency.

Here are a few signs of high latency:

  • Websites load slowly or not at all.
  • Applications freeze or stall.
  • Interruptions in video and audio streaming.

Ideally, your organization should track latency metrics for all your applications across the network. However, this can be difficult. For example, in 2022 organizations had an average of 130 software as a service (SaaS) applications in use. For most teams, manually tracking latency for 130 applications isn’t possible.

If left unchecked, high latency can hamper employee productivity, making it difficult to collaborate across departments and teams and ultimately impacting job satisfaction and company revenue.

Sign #2: Network Congestion

Network congestion occurs when the transmitted data exceeds the network’s processing capacity. It’s similar to cars in traffic. When the amount of traffic, or data, doesn’t have enough lanes for travel, or transmission, it backs up.

Congestion can slow data transfer and cause packet loss, and there can be many causes behind network congestion. Common causes include overactive devices and applications that use too much bandwidth, poor network configuration, and outdated devices and switches.

If the congestion continues unchecked, it can result in downtime when the network and its applications are unavailable to users. Downtime is expensive, with estimates landing at approximately $5,600 per minute.

Besides unplanned downtime, signs of network congestion include:

  • Increased error rates across the network.
  • Slow data transfer speeds.
  • Frequent application crashes.

Sign #3: Growing data usage and needs

Organizations are using more data. From artificial intelligence (AI) tools to growing data usage in analytics, monitoring, and cybersecurity, it is inescapable. In fact, 46% of organizations in a worldwide survey said they use big data analytics in research.

Earlier, we compared network traffic to traffic we experience on the road: Imagine what would happen if you dramatically increased the number of cars on your four-lane highway. Most companies are investing in big data—larger, more complex data sources—without considering how the increase in data will affect their network traffic.

Considering WAN optimization prior to, or early on, in the process of growing data usage can equip your network to handle the increase in traffic without causing issues.

How to optimize your WAN traffic

Many organizations see the signs and still struggle to optimize their SSL/TLS encrypted traffic. Optimizing WAN traffic encrypted with these protocols requires network teams to decrypt, optimize, and re-encrypt the traffic to ensure it remains secure. The process requires them to collect and manage certificates and private keys. With companies managing hundreds of applications, each with specific certificates, private keys, and expiration dates, the process can be challenging. Because of the challenge, many organizations won’t attempt optimization, or they’ll begin the process but not complete it.
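Even the inventory half of the problem, knowing which of hundreds of certificates are about to lapse, rewards a little automation. In this minimal, hypothetical sketch, the application names and the 30-day warning window are assumptions:

```python
from datetime import date, timedelta

def expiring_soon(cert_expiry, today, window_days=30):
    """cert_expiry maps application name -> certificate expiry date.

    Returns the applications whose certificates expire within the
    warning window, so renewals can be scheduled before traffic breaks.
    """
    cutoff = today + timedelta(days=window_days)
    return sorted(app for app, expiry in cert_expiry.items() if expiry <= cutoff)

# Example inventory (illustrative application names and dates)
inventory = {
    "erp": date(2024, 4, 1),
    "mail": date(2025, 1, 1),
    "crm": date(2024, 3, 20),
}
```

A check like this doesn’t optimize anything by itself, but keeping certificate and key metadata in one tracked inventory is what makes the decrypt-optimize-re-encrypt workflow manageable at scale.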

Riverbed offers WAN acceleration for SSL/TLS encrypted traffic that simplifies the process. Companies can choose from multiple deployment options that streamline certificate management with a solution that acts as a trusted man-in-the-middle to optimize traffic securely. The results? Organizations can see up to a 99% reduction in bandwidth consumption and a 10-times acceleration of SSL/TLS encrypted traffic.

Want to learn more about Riverbed’s acceleration solution with SSL/TLS optimization? Download the white paper.

]]>
The Power of Riverbed Aternity’s Intelligent Service Desk https://www.riverbed.com/blogs/the-power-of-riverbed-aternity-intelligent-service-desk/ Mon, 04 Mar 2024 13:46:51 +0000 https://www.riverbed.com/?p=77267 As we step into the year 2024, advancements in artificial intelligence have led us to the era of near fully self-driving cars, marking a significant milestone in how technology can transform daily life. This progress beckons a parallel evolution in IT Service Desks: the time is ripe for automation to play a crucial role in detecting and remedying issues in end-user devices.

Enter Riverbed Aternity’s Intelligent Service Desk—a game-changer designed to propel your IT Service Desk into a new world of:

  • Significantly Reduced IT Costs
  • Lower Mean Time to Repair (MTTR)
  • Enhanced IT Productivity
  • Streamlined Automated Remediation

The cornerstone of Aternity’s Intelligent Service Desk is its ability to trigger low-code runbooks upon any alert. These runbooks, with their drag-and-drop interface, allow for the definition of troubleshooting logic through nodes packed with pre-built code, enabling you to:

  • Navigate complex decision paths effortlessly.
  • Make external calls for swift remediation actions.
  • Retrieve third-party data seamlessly.

Let’s delve into a couple of real examples which showcase what this new capability can help your teams achieve.

Remedy low disk space

As trivial as it sounds, running low on disk space can be quite debilitating and drastically affect end-user productivity, comparable to the frustration of slow boot times. In the runbook shown below, each of the colorful nodes contains pre-defined code for various functions such as external API calls, decision logic, visualization and more—and it works like a flow chart.

Riverbed Aternity Intelligent Service Desk - Low Disk Space Runbook

Once an alert triggers this runbook, execution starts. Next it does the obvious: promptly identifies potential space savings by clearing the usual suspects like temporary files and emptying the recycle bin. If that gives back substantial free space, then the lower path from the decision branch executes the remediation logic through a web call and displays the result as part of the runbook output. Conversely, if the initial cleanup does not free enough space and manual investigation is needed, the runbook calls an API to list the top few files and folders by size from the device (via the Aternity REST API), automatically generates a ServiceNow incident, and provides the top files and folders by size in the incident description. That way, when the technician looks at it, the next steps become obvious.

Aternity Intelligent Service Desk - Low Disk Space Output
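For illustration, the decision flow described above can be expressed in ordinary code. The sketch below is a Python analogue of the flowchart, not Aternity’s low-code runbook format; the 5 GB threshold, the file list, and the ticketing callback are stand-ins for the product’s pre-built nodes.

```python
MIN_FREE_GB = 5  # assumed threshold; a real runbook would make this configurable

def low_disk_runbook(free_gb_after_cleanup, files_by_size, open_incident):
    """Decide the next step after clearing temp files and the recycle bin.

    free_gb_after_cleanup: free space (GB) once the usual suspects are cleared
    files_by_size: {path: size_gb} for the device's largest files
    open_incident: callback standing in for ServiceNow ticket creation
    """
    if free_gb_after_cleanup >= MIN_FREE_GB:
        # Lower path of the decision branch: cleanup freed enough space
        return "remediated"
    # Otherwise escalate, attaching the top files by size so the
    # technician's next steps are obvious from the incident description
    top_files = sorted(files_by_size, key=files_by_size.get, reverse=True)[:3]
    open_incident("Low disk space persists after cleanup", top_files)
    return "escalated"
```

A real runbook also performs the cleanup itself and pulls the file list from the device; here both are inputs so the decision logic stays visible.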

Resolve application startup problems

Consider the case of CAD applications, where large files frequently move between the user’s device and a SaaS back-end, posing unique challenges. Here is an example from an actual scenario faced by one of our customers, which was experiencing a flood of issues with its native Windows CAD application hanging and abruptly crashing.

Aternity Intelligent Service Desk - CAD Application Hang


This runbook springs into action upon detecting an application crash event in the user’s Windows event log. Upon executing, the runbook first looks for file timeouts (obtained via a call to the Aternity API). If it sees timeouts, it runs a traceroute to typical destinations for the SaaS backend hosts and conducts a speed test to check whether the user’s bandwidth could be a culprit.

With conclusive evidence of timeouts, the runbook compiles the results from traceroute, speed test, and file timeouts into a ServiceNow incident for further action. If no timeouts are detected, the runbook can send a notification to the user (similar to below) to ask their permission to automatically open a ticket on their behalf.

Aternity Intelligent Service Desk - Runbook Triggered User Prompt
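The same triage logic can be summarized in a compact sketch. Again, this is an illustrative Python analogue rather than an actual runbook; the diagnostic and ticketing callables are stand-ins, not Aternity APIs.

```python
def crash_runbook(timeouts, run_traceroute, run_speed_test, open_incident, prompt_user):
    """Triage an application crash event from the Windows event log.

    timeouts: file-transfer timeouts observed for the app (empty if none)
    run_traceroute / run_speed_test: network diagnostics to gather evidence
    open_incident / prompt_user: stand-ins for ticketing and user notification
    """
    if timeouts:
        # Conclusive evidence: bundle all diagnostics into one incident
        open_incident({
            "timeouts": timeouts,
            "traceroute": run_traceroute(),
            "speed_test": run_speed_test(),
        })
        return "incident"
    # No timeouts detected: ask the user before opening a ticket on their behalf
    prompt_user("We detected a crash. May we open a ticket on your behalf?")
    return "prompted"
```

The design point is that evidence gathering (traceroute, speed test) runs only on the branch where it is relevant, so the incident a technician receives already contains the context they would otherwise collect by hand.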

Empower your IT Service Desk

Gone are the days of manual troubleshooting, triaging, and remedying every service desk ticket. With the power of Riverbed Aternity’s Intelligent Service Desk, innovative low-code runbooks take on the tedious work of triage and troubleshooting, delivering insightful results. To explore more about how the Intelligent Service Desk can transform your IT operations, please visit our website.

]]>
Key Benefits of Aternity’s Intelligent Service Desk for Digital Employee Experience https://www.riverbed.com/blogs/key-benefits-of-intelligent-service-desk-for-dex/ Thu, 29 Feb 2024 13:00:13 +0000 https://www.riverbed.com/?p=77342 In today’s rapidly evolving digital landscape, enterprises face the constant challenge of enhancing digital employee experiences while optimizing their IT operations. The surge in workspace applications and heightened service expectations among digital nomads has led to a critical rise in incident volume and complexity. Overwhelmed Service Desks struggle with ticket loads, causing inefficient resource allocation, inconsistent IT service, and increased costs. Short-staffed IT teams often focus on non-impactful monitoring events, prolonging issue resolution and increasing error rates. Traditional automated solutions often fall short, offering limited capabilities and narrowly focused remediation scripts. Furthermore, without streamlined user feedback, employee frustration goes unnoticed, preventing IT from gaining a complete understanding of the situation.

Enter Aternity’s Intelligent Service Desk, a game-changer in the realm of IT automation. Aternity’s AI-powered Intelligent Service Desk proactively addresses recurring device issues before they become tickets. Using its LogiQ Engine and customizable runbooks, Aternity replicates advanced investigations by correlating end-user impact with real-time performance data to pinpoint incident root causes. Aternity dynamically models expert decision-making and integrates sentiment surveys into its remediation workflows, resolving issues without human intervention.

Here are eight compelling reasons why your enterprise should embrace this transformative technology:

Prevent incidents

With its AI-enabled issue detection and correlation, Aternity’s Intelligent Service Desk proactively identifies application and device issues before they escalate into full-blown incidents. Aternity employs the right combination of remediation actions, decision-making and user feedback to effectively resolve an issue before a ticket is raised. This prevents service disruptions, keeping your workforce productive while eliminating costs associated with raising a ticket.

Improve AI outcomes

With Aternity’s full-fidelity telemetry, embedded AI and intelligent automation, IT can expect superior outcomes in incident resolution. Many DEX tools lack the granularity required to pinpoint underlying issues accurately. To make the most of AI models, companies need DEX platforms that can ingest and correlate large amounts of data across devices, applications, and the network. Furthermore, effective AI/ML models require data that is centralized, complete, granular, and stable to map dependencies and build contextual models. With its ability to process high-fidelity data, Aternity delivers intelligence and precision for remediation.

Intelligently ticket with your ITSM tools

Aternity seamlessly integrates with existing IT Service Management (ITSM) tools, such as ServiceNow. For any unresolved issues that are more complex or nuanced, Aternity will create, escalate and route a ticket with the right priority to the right team. By feeding user-centric and dynamic insights directly into tickets, Aternity streamlines the ticketing process, significantly reducing time associated with manual diagnostics while ensuring swift resolution.

Empower human ingenuity

Human decision fatigue is a major challenge due to the overwhelming volume of tasks and information. AI offers a solution by enhancing decision-making through intelligent automation and insights. By automating repetitive, low-value tasks, organizations can reduce decision fatigue, empowering Digital Workplace teams to proactively address digital experience issues and expedite decision-making. With its Intelligent Service Desk capabilities, Aternity frees up time for employees so they can focus on innovation and creativity.

Improve the voice of the user

By integrating user feedback into its Intelligent Service Desk workflows, Aternity ensures that the voice of the user is heard. Effective response to user feedback is paramount in driving positive DEX outcomes. Traditional feedback mechanisms often suffer from inefficiencies, with critical insights getting lost in the noise of irrelevant data. Sentiment surveys enable organizations to correlate and streamline user feedback processes. By prioritizing and resolving issues based on user feedback and impact, Aternity improves user happiness.

Improve energy efficiency

As part of its Sustainable IT capabilities, Aternity offers automation and actionable insights for managing energy consumption and carbon emissions at both the individual and organizational levels. By proactively addressing device issues and optimizing performance, Aternity helps improve energy efficiency across your enterprise. With Aternity Intelligent Service Desk, enterprises can automate power settings on devices based on consumption patterns or the user’s profile.
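As a rough illustration of how a power-settings policy might be derived from consumption patterns and a user’s profile, consider the sketch below. The plan names, thresholds, and profile labels are all invented for the example and are not actual Aternity settings.

```python
# Illustrative power-plan policy driven by observed usage and user profile.
# All names and thresholds here are hypothetical.

def choose_power_plan(avg_idle_minutes_per_day, profile):
    """Map observed idle time and the user's profile to a power plan."""
    if profile == "power_user":
        return "high_performance"   # latency-sensitive users keep full power
    if avg_idle_minutes_per_day > 120:
        return "aggressive_sleep"   # long idle stretches: sleep sooner
    return "balanced"

print(choose_power_plan(30, "power_user"))   # high_performance
print(choose_power_plan(200, "analyst"))     # aggressive_sleep
print(choose_power_plan(45, "analyst"))      # balanced
```

In practice such a rule would be evaluated against telemetry aggregated per device, with the chosen plan pushed out through endpoint management tooling.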

Reduce IT costs

By automating and resolving recurring issues, Aternity helps reduce IT costs significantly through incident prevention. Aternity’s Intelligent Service Desk capabilities have helped enterprises save more than $10 million annually through a reduction in ticket volume, proactive outreach, and a decrease in manual tasks.

Implement a VIP Service Desk

With Aternity’s customizable runbooks and self-service remediation capabilities, enterprises can implement a VIP Service Desk tailored to the needs of valuable users. By delivering higher service levels and personalized support based on user status or location, Aternity helps enhance the digital experience for VIP users, driving increased loyalty and satisfaction.

By leveraging intelligent automation, AI-driven insights, and seamless integration with ITSM tools, Aternity Intelligent Service Desk empowers organizations and Service Desk teams. This technology sets a new standard for proactive, user-centered IT service, driving efficiency, reducing costs, and ultimately fostering a more productive, satisfied workforce. In a world where digital agility and resilience are paramount, Aternity’s Intelligent Service Desk is essential for enterprises aiming to thrive in the competitive digital landscape.

]]>
The Future of Banking, Where Experience Is Everything https://www.riverbed.com/blogs/the-future-of-banking-experience/ Wed, 28 Feb 2024 13:27:45 +0000 https://www.riverbed.com/?p=78300 The banking industry is facing its biggest transformation, driven by changing consumer expectations, a work-from-home culture, and born-in-the-cloud start-ups offering slick online-only services. Firms are navigating a new “always-on” relationship with customers and a much-needed move to open banking. Simultaneously, increasingly stringent regulations require banks to have even tighter control of their finances, security, and data.

To remain relevant, win new customers, and increase revenue streams, banks must redefine all they know. That means maintaining and modernizing core internal processes and ramping up digitalization to meet changing customer expectations and increase profitability. Read on to discover how banks can combine data, insights, and actions to run operations more efficiently, enabling faster, more effective decision-making that drives transformational initiatives to improve the omni-channel experience for employees and customers—all powered by Riverbed Unified Observability.

Reimagine your banking systems—and run your bank better

The technology that quite literally puts banking in consumers’ hands also threatens the traditional banking models. To realize the full potential of their banking systems–reducing errors, improving application performance, and making staff more effective–these firms must reinvigorate branches and refine the services they deliver.

Primarily, this involves removing barriers between tellers and customers, introducing self-service kiosks, and supercharging their online and mobile offerings. However, a reliance on tech requires systems to operate as smoothly and reliably as possible. This is challenging when faced with huge volumes of data and legacy infrastructure that anchor banks to the past instead of propelling them into the present.

Thanks to unified observability’s automated remediation and self-healing capabilities, banks can easily resolve problems across their entire architecture before customers notice a disruption, thereby safeguarding the business’s reputation and building all-important trust among consumers. In fact, one of our clients used these tools to proactively reduce their support tickets by 20-30% a month.

By detecting and fixing problems faster, banks can achieve strategic optimizations, increase efficiencies, accelerate mean time to repair, and reduce time spent on non-critical issues. What’s more, they can eliminate alert fatigue, significantly reducing the number of notifications staff must go through, thereby lowering stress levels and turnover rates. This is critical during a time of talent and skills shortages.

Banks can use unified observability to simplify their setup, streamline operations and directly cut costs, too, by identifying under-utilized tech and optimizing networks, devices, and apps that are relied on the most. Another Riverbed client saved an estimated £600,000 on each application they examined using our technology. One more bank found they could avoid a $16M upgrade by maximizing the hardware they already had.

Reimagine modernization—and change your bank faster

Today’s consumers crave convenience, generally favoring the digital banking experience only apps and online portals can provide. However, even as banks’ physical footprints shrink, the in-person experience is still crucial and shouldn’t be underestimated.

With unified observability, banks can empower the in-branch experience by providing live reporting of ATMs and colleague operations, the same automated remediation tools mentioned earlier, change validation, and more. A banking client of ours found their remaining branches were working disparately, delivering different levels of teller service in each location. Using our tools, they could identify this and stabilize the situation to offer one consistent experience. Another went from having 1,000 employees working from home to 22,000 in a mere four weeks. They used our platform to gain visibility into who was working where and when, and to see which offices they could close–reducing costs without losing productivity.

As consumers move between online and mobile banking, physical branches, contact centers, and ATMs, it’s vital that banks create a tech environment where each interaction is efficient and fit for future-ready, customer-centric, omni-channel banking. Unified observability platforms ensure that every app, across every device, within every channel, consistently delivers and exceeds the level of service necessary to drive business rather than hinder it.

Reimagine the banking journey—and perfect the experience

Banks must optimize every customer touchpoint, whatever their requirement, moving from transactional to transformational and ensuring exceptional experiences. This means empowering their staff to unlock new revenue opportunities and improve customer support, with AI-driven end-to-end visibility across the omni-channel experience.

Unified observability allows organizations to transform disparate data into intelligent user insights, making these the driving force behind increased revenue streams. Many of our clients focus heavily on digital experience improvement, utilizing unified observability to identify potential upgrades to networks and devices, compare employee experiences and productivity levels, and track how satisfied each business unit is before, during, and after implementing changes, such as new software versions.

Reimagine security and sustainability—to adapt to the future

Increasing dependence on the cloud, SaaS, and shadow IT, along with the permanent shift to hybrid work, is evolving the regulatory landscape and expanding the range of security and compliance threats within the banking industry. Meanwhile, banks must meet stringent regulations to operate legally, maintain financial stability, manage risks effectively, protect their reputation, and build trust among customers and stakeholders.

We use our unified observability technology to help businesses build brand strength, underpinned by monitoring performance. They can ensure compliance and identify and address vulnerabilities simply and innovatively, by modernizing existing core banking applications with artificial intelligence and automation, transforming branch networks, adding governance and control tools, and basing decisions on essential data. Our solutions can audit app usage and even capture and store every packet and flow, aiding in threat-hunting investigations and analysis into suspicious network behavior.

As well as being security-conscious, today’s banks are increasingly focused on sustainability to stay relevant in a world where consumer loyalty is fleeting and making a difference to the planet means more than making a profit.

Unified observability can support sustainability strategies and guide banks on their path to net zero in many ways. One way it helps is by identifying devices and infrastructure with high energy consumption, alerting staff to the resource they’re using and reminding them to power down while their devices are idle. It can even remotely turn off energy-draining hardware, like laptops, when not in action.

Experience the future

By fusing data, insights, and actions, Riverbed helps banks enable faster, more effective decision-making and continuously improve the omni-channel experience for employees and customers. Get in touch today to learn more about how our unified observability solutions can empower you to unlock the future of banking, where experience is everything.

]]>
Propelling Airline Operations and Experiences to New Heights with Unified Observability https://www.riverbed.com/blogs/propelling-airline-operations-with-unified-observability/ Wed, 21 Feb 2024 13:11:31 +0000 https://www.riverbed.com/?p=77156 Today’s airlines operate in more locations and serve more passengers in more ways than ever before. In fact, according to an Oliver Wyman analysis, the global commercial aviation fleet is expected to expand by 33%, to more than 36,000 aircraft by 2033. This expansion signifies an increase in mission-critical applications, alerts, and complex data.

This data can be leveraged to deliver the exceptional experiences customers and staff demand–but only if it’s captured, monitored, and used to its full potential. If it isn’t, it can create more than just a headache, making it difficult to identify and address issues, which can affect passengers and your reputation.

While countless tools are available on the market to help you tame and transform this data into something useful, very few of them are comprehensive and holistic, giving observability across your entire infrastructure. Riverbed steps in with its unified observability platform, highlighting qualified and actionable events before they require reactive measures. Here are a few ways Riverbed empowers airlines to do more with data, focus on the broader picture, and achieve superior outcomes.

Enhancing the customer and employee experience

Your passengers expect easy and efficient experiences from the moment they choose to fly with you. Everything from booking tickets on your website to checking in on their smartphone, dropping bags at your kiosk and enjoying in-flight entertainment must be seamless and satisfying. Meet these needs, and they’ll already be looking forward to their next trip with you–fail to, and they’ll simply travel with someone else in the future.

Riverbed can help you meet and exceed the most challenging customer demands by optimizing connectivity and application delivery for reliable reservation systems, ticket processes and check-in, and real-time flight updates.

Employees, too, are driven by their experiences. Their job satisfaction decreases, and they become more likely to leave when they are burdened with outdated IT systems, frequent downtime, and poor data management. Riverbed’s automation and self-healing capabilities free people from fighting fires so they can focus on what really matters, while its user-friendly dashboards make data easy to understand–and services and projects a breeze to prioritize.

Providing unmatched visibility and insight

For smooth operations, even during cloud migrations, you need real-time visibility across all your IT services and your entire infrastructure, applications, and back-end systems. One of the United States’ major airlines was having trouble gaining this across common-use areas–that is, areas of airports that aren’t owned or leased by airlines, where pilots and other staff go to check manifests, schedules, route changes, incidents, and more. Often, connections would drop between the airport and the airline, offering no reason why–or instructions on what staff needed to do next.

Riverbed IQ, part of our Unified Observability Platform, has empowered the airline to bridge the gap, identifying problems and providing workarounds to deliver team members the information they need to do their jobs effectively. In the future, we’ll work together to move to an AI operations platform–using automation to speed things up, provide even more insight, eliminate the need for war rooms, and lower mean time to recovery.

One of the airline’s IT leaders said: “Riverbed IQ is helping my team realize a self-healing network. Using the built-in features of runbooks and AI/ML, we can reduce the number of alerts and capitalize on our automation processes to perform corrective actions to the network before users experience impact. Riverbed IQ is a game changer in monitoring and unified observability.”

Maintaining the performance of critical apps

As we touched on earlier, your apps are more plentiful and vital than ever. One airline we work with has between 1,200 and 2,000 apps – all of which are needed day-to-day – from flight-planning systems and online ticketing to call-center monitoring.

We drove digital transformation and cloud growth for another major US airline, which was tied down by legacy data centers and struggling to innovate as a result. Using Riverbed AppResponse and Riverbed NetProfiler tools, the organization was able to gain full telemetry and proactive problem-solving across their critical reservations system–keeping it running and running well. Since deployment, the airline’s been able to:

  • Set a baseline for performance, and see when this isn’t being met
  • Map application transactions and become more process-oriented
  • Carry out network performance analysis and optimization using both up-to-the-minute and historical data
  • Troubleshoot issues and remedy them automatically, at the root
  • Integrate our telemetry with customer incident management tools and processes
  • Receive real-time alerts that really matter, reducing notification fatigue
  • Achieve all this across every one of its cloud and hybrid environments

Boosting sustainability

The aviation industry is under increasing pressure to reduce its carbon footprint–depending less on jet fuel and improving waste management. Riverbed captures actual performance data from all applications and devices, translating it into actionable environmental insights to help you reduce your carbon emissions and protect our planet.

This could mean identifying hotspots in energy consumption, or printers and the number of pages they’ve produced, and automatically minimizing their impact. It may involve finding idle devices that should be shut down–notifying the user to do so and powering off remotely if they don’t respond. Or it might look like quantifying the carbon footprint of user activities, such as sending emails. The possibilities are countless, each supporting you on your journey to net zero while improving your reputation with stakeholders, passengers, and the media.
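The notify-then-power-off escalation described above could be modeled as a simple policy function. This is a minimal sketch under assumed thresholds and action names; none of it reflects Riverbed’s actual implementation.

```python
# Hypothetical escalation path for an idle device: leave it alone, nudge
# the user, then power off remotely only if the nudge goes unanswered.

def idle_device_action(idle_minutes, user_responded,
                       notify_after=60, shutdown_after=120):
    if idle_minutes < notify_after:
        return "none"               # device recently active: do nothing
    if idle_minutes < shutdown_after:
        return "notify"             # remind the user to power down
    # past the shutdown threshold: force off only without a user response
    return "notify" if user_responded else "remote_shutdown"

print(idle_device_action(30, False))    # none
print(idle_device_action(90, False))    # notify
print(idle_device_action(180, False))   # remote_shutdown
print(idle_device_action(180, True))    # notify
```

Keeping the forced shutdown behind both a longer threshold and a lack of user response is what makes this kind of automation safe to run unattended.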

There’s no doubt that Riverbed’s solutions are changing the game for airlines, elevating digital experiences and putting organizations’ data to work for them. Get in touch with our team now and find out how we can empower you to satisfy the people who matter, stay out of the news, streamline your operations, and speed past the competition.

]]>
Colonel (Retired) Joseph Pishock’s Insights for Unlocking Cybersecurity Manoeuvrability https://www.riverbed.com/blogs/insights-for-unlocking-cybersecurity-manoeuvrability/ Tue, 20 Feb 2024 23:54:15 +0000 https://www.riverbed.com/?p=77269 The organisation responsible for special operations effects around the globe shouldn’t struggle to deliver a simple email to desktops.

“Charting cyberspace is all about people, processes and technology.”

COL(R) Joseph Pishock close up presenting MilCIS AU

Colonel (Retired) Joseph Pishock knows more than most about the topic of Cybersecurity Manoeuvrability. Presenting to a packed-out auditorium at MilCIS 2023 in Canberra, he discussed the challenges of managing a regulated and compliant DODIN network that had had “25 years of building one thing on top of another.”

Pishock spent 25 years in the United States Army before becoming the Director of Global Networks & Services in US SOCOM (Special Operations Command) in August 2020. At the time, Pishock considered cyberspace effectively uncharted, guided only by Visio diagrams (that may or may not have been accurate) that made it difficult for his team to troubleshoot effectively.

Pishock’s military experience meant he was very familiar with the saying, “move, shoot, communicate”, which is all about being in control and manoeuvring on the battlefield. However, when it came to supporting SOCOM, Pishock discovered that although he had a “wall of plasma”, he had no control. He knew he’d have to delve deeper into why a simple email couldn’t be delivered in a timely manner and why this had become a problem in the first place.

When it comes to managing cyberspace, there are three main considerations:

  1. People
  2. Processes
  3. Technology

Pishock had highly intelligent people from top universities (including MIT and Columbia) in his team. But he realised that while they were great at reading checklists and following processes, they lacked the right tools and data to make decisions. True manoeuvrability would be impossible without this, which led to a collaboration with Riverbed to begin the process of mapping cyberspace.

“It’s important to provide the right tools to the right people and empower them to make decisions.”

Pishock and his team worked closely with Riverbed to deploy its software and hardware, including Network & Infrastructure Performance Management, to instrument SOCOM applications and services. To create visibility across the network and empower decision-making, data visualisation was key. It was important that information be as consumable as possible for the intended audience, as different data means different things to different teams.

Infrastructure tooling collected detailed information about the network to build out a visual representation of the assets and the connections between them – known as a service map. Sensors also recorded temperature and other environmental data so that trends could be established.

Network tooling enabled SOCOM to visualise where communications were moving across the network and how efficiently it could do it. SOCOM could now also see who was consuming each service, where the service was, and how it was performing.

On September 23, 2022, Hurricane Ian threatened Tampa, Florida, where SOCOM headquarters is located. Being just 11 feet (3.3 metres) above sea level and 100 feet (30.5 metres) from the coast, the threat required a complete site evacuation and a move to COOP (Continuity of Operations) locations, including hotels and alternate bases to keep people safe. As if all of this wasn’t enough, Pishock faced the challenge of supporting a live mission and maintaining services with the data centre located about two feet (600mm) above seawater.

In a crisis there are always single points of failure, and in this case, it came down to a rat chewing through an air-conditioning system power cord in the on-premises data centre. The air-conditioning failed and the temperature in the data centre rose to a dangerous level. It was not safe to send people into an evacuated site during a hurricane, so a decision had to be made as to which services were essential for the live mission and which could be turned off.

Fortunately, they were about six months into the Riverbed deployment, and Pishock felt in control for the first time. He was able to see which services were located where, and his team were able to determine which services could be turned off to slow the steadily rising temperature. They had a map of cyberspace and were able to save the data centre infrastructure from damage caused by overheating, successfully supporting the mission during the crisis.

Riverbed’s expertise ensured a speedy implementation that worked the first time. Pishock faced some initial pushback from team members who saw change as a threat to their roles; he addressed this by creating a sense of security and camaraderie amongst his staff and building a blameless culture. He leveraged the Burke-Litwin model for organisational change to embed the solution into the fabric of SOCOM and ensure that it was maintained and supported into the future.

“The team at Riverbed blended into the project and became integral to the success of the project.”

After the hurricane, Pishock focused on further improving services within SOCOM and enhancing the organisation’s ability to support customers, which included end-user experience management. Device Mobility became a priority to understand what delays occurred from the point a CAC card was inserted into the laptop to opening the first email.

Prior to the end of the session, Pishock fielded questions about data and customer centricity versus network centricity and how shared infrastructure can help reduce the amount of technology that needs to be sent into the field during a mission.

“Email is not a crisis. Make the real crisis the new crisis.”

Colonel (R) Pishock featured in Australia’s Defence Connect Podcast, where he discussed his experience of moving past the linear concept of PACE in order to see well enough to actually manoeuvre through cyberspace.

An article was also published after an interview with Colonel (R) Pishock.

]]>
Unified Observability Drives Efficiency, Growth, and Sustainability for Insurance Firms https://www.riverbed.com/blogs/unified-observability-for-insurance-firms/ Wed, 14 Feb 2024 13:19:07 +0000 https://www.riverbed.com/?p=76971 At Riverbed, we have years of experience working with some of the world’s biggest insurance companies, giving us insider knowledge of the industry’s toughest challenges, key drivers, and pain points. We’ve noticed a trend: organizations are striving to enhance efficiency and increase agility, with technology at the core. But indecision, shrinking budgets, and internal skills shortages often push these businesses towards outsourcing their network and application management.

However, there’s a simple way to maintain control of, and access to, your infrastructure: unified observability solutions like the portfolio from Riverbed. With its innovative automation and self-healing capabilities, unified observability makes managing your tech in-house easier than ever. The platform pulls together big data from all your digital touchpoints, prioritizing tasks to minimize manual processes, mitigate alert fatigue, and accelerate mean time to repair.

Here’s how it’s empowering insurance firms like yours to achieve their most critical goals, right now:

Revolutionizing the customer experience

Every insurance company wants to be the go-to brand for customers, and front of mind when prospects are choosing who to protect them. To achieve this, it’s crucial to equip your staff with the tools they need to understand and connect with the people you serve and their behaviors and habits. Essentially, improving the customer experience depends on enhancing the employee experience: allow staff to do their jobs seamlessly, effectively, and productively, and they’ll be satisfied and better equipped to fulfill the evolving digital preferences of customers.

Perhaps a customer wants to pay less for health insurance, as their smartwatch data proves they live an active lifestyle, or maybe another only drives for work and needs to pause car insurance fees over the weekend.

Whoever your customer is and whatever their unique situation, they’ll also expect perfect uptime and availability of your services–on whichever connected devices they access them, from intelligent home devices to the standard smartphone. And if they have any queries or problems, they’d ideally like to deal with you through a clued-up chatbot rather than hanging around on hold for hours.

The chances are you already have the necessary data to meet these seemingly impossible demands. You just have too much of it, stored in disparate silos, and no way of making sense of it all. Enter Riverbed Unified Observability, which uses AI and machine learning to instantly and automatically show you this information on easily understandable single-pane dashboards. This means you can focus less on manual processes and customer service woes and more on strategy, tailoring your products to real desires rather than your best guess.

One of our global reinsurance customers used Riverbed for exactly that, saying: “Now that we have Riverbed’s Digital Experience Management solution, we have already seen benefits in terms of gaining insights we can act upon to improve delivery of services to our customers.”

Enhancing efficiencies to compete and grow

Today, your competitors aren’t the long-established, tried-and-trusted insurance brokers of old. They’re the snappy FinTech start-ups taking the market by storm–born in the cloud, without legacy tech or the burden of premises holding them back. To keep up, you’ll need to make changes, like moving critical applications to the cloud and reducing your physical footprint (which can also help cut your carbon footprint).

With Riverbed, you can become more streamlined than ever before, putting you not just on the same footing as these responsive newcomers but also ahead of the game. By capturing, analyzing, and giving up-to-the-minute insight on every action taken by employees and customers– then automatically creating shortcuts, remedying issues, and highlighting or even switching off underutilized machines–the platform takes care of the everyday so you can look after what really matters, like planning for your future growth. It also works effortlessly across hybrid environments, supporting you as you transition to the cloud.

The reinsurance customer mentioned earlier also used Riverbed to integrate cloud provider data with endpoint metrics, providing accurate customer experience information and identifying CPU processing issues across 80 locations. They said: “Riverbed’s Digital Experience solution allowed us to identify issues quicker, troubleshoot and validate them. Sometimes, issues seem fixed, but from a customer perspective, the problem persists. With Riverbed’s solutions, we are able to collaborate with other IT departments and vendors to get the issue resolved more effectively.”

Becoming more sustainable

All businesses must become more eco-friendly, not just to protect our planet but to uphold their reputation, too. Customers want to use companies they believe in, support, and trust, especially when it comes to insurance. By proving you care about the environment, you show you care about the people within it, too. It’s about driving genuine, meaningful change–such as leading on climate action, forging lasting community ties, operating sustainably, and reaching that net-zero milestone in the not-too-distant future.

Unified observability can help you reduce the amount of office space and server-room storage you need by supporting your transition to the cloud and hybrid working. It can give insight into unused or underperforming devices, allowing you to sweat the assets–recycling or refurbishing them only when they reach the end of their working life. It can also tell you when staff have stepped away from laptops for a prolonged period without shutting down, nudging them to do so with a notification or powering off remotely. As well as providing undeniable environmental benefits, these actions can save you money. One of our UK-based insurance customers is aiming for cost savings of £750 million by the end of 2024–a reality made possible with Riverbed’s Unified Observability and Acceleration portfolios.

Meanwhile, our global reinsurance customer used this function in Riverbed’s Unified Observability Platform to extend the lifecycle of its machines from three years to four. They said: “Moving forward, Riverbed can help us make decisions on hardware replacement, whether to replace laptops or desktops entirely, or whether to replace individual hardware components. This has the potential to increase the effective use of our finite resources and target customers who really need our assistance.”

To learn how your insurance company can maximize existing investments, delight staff and customers, embrace digital transformation, and remain competitive now and in the future, get in touch with our dedicated team now.

]]>
Overcoming Three Key Digital Retail Challenges with Unified Observability https://www.riverbed.com/blogs/overcome-digital-retail-challenges-with-unified-observability/ Thu, 08 Feb 2024 13:30:51 +0000 https://www.riverbed.com/?p=76842 It’s no secret that the retail industry has always been highly competitive. But in recent years, it’s faced new and unique challenges, including:

  • Rapidly evolving consumer behavior: The COVID pandemic has shifted customers from physical stores to online platforms, demanding exceptional digital experiences. Even with brick-and-mortar stores back in operation, consumers continue to expect seamless service across websites and apps. According to Riverbed’s 2023 Global Digital Employee Experience (DEX) survey, which polled 1,800 business and IT decision-makers across 10 countries and seven industries, 96% of retail leaders believe that offering a seamless digital experience is crucial (with 59% considering it critically important) for competitive advantage.
  • Technological advancements: Artificial intelligence. Augmented reality. Cloud. SaaS. Virtualization. The pace of technology has never been faster, spurred by the rise of hybrid work and escalating customer expectations. It can seem impossible for retailers to know what to invest in and where they’ll see the best return on investment, without adding to an already-complex technology stack.
  • Financial constraints: It’s expensive to run a retail store. Let alone several, let alone several hundred. It’s even costlier when staff can’t work to their full potential or productivity due to shortages and skills gaps. Despite this, teams are still expected to reduce costs through IT optimization while satisfying both customers and employees.

If these challenges sound familiar, you might have explored or invested in digital tools to address them. Yet, deploying more (and more complex) solutions can introduce new issues. Below, we outline three common digital hurdles and how Riverbed Unified Observability can help.

Operational inefficiencies

Better tech doesn’t necessarily mean better organization, and retailers often struggle with operational inefficiencies that can impact productivity and profitability. These inefficiencies can stem from factors like supply chain complexities, poor inventory management, and disjointed communication among employees.

Unified observability steps in as a comprehensive monitoring and analytics platform, providing real-time insights into operational performance. By capturing and analyzing data from end-user devices, applications, and networks, it helps identify bottlenecks, streamline processes, and optimize resource allocation, leading to improved operational efficiency.

End-User experience gaps

You know you must deliver seamless, personalized experiences to attract and retain customers. But you’ve also got to provide your employees with these exceptional experiences, and 95% of retail leaders recently surveyed believe they’ll need to provide more advanced digital experiences as new generations of employees enter the labor market.

This is a crucial point for you to consider, as Millennial and Gen Z employees comprise a larger portion of the retail workforce than other industries. 61% of survey respondents in retail said there would be a disruptive and reputational impact on their company if digital natives’ experience expectations weren’t met; they also claim that 73% of younger-generation employees would consider leaving the company if they didn’t have a seamless digital experience. Less than ideal, considering staff shortages are rife in the industry (39% agree)–and costly to your business.

Unified observability aids in keeping customers and staff–and keeping them happy–by monitoring the end-user experience. It collects data on application responsiveness, transaction times, and user interactions for everyone using your systems at each end, enabling retailers like you to identify and proactively address performance issues. This comprehensive visibility and data into your user journeys help retailers optimize their digital channels, enhance website and app performance, boost staff productivity and satisfaction, and ultimately provide an exceptional experience, no matter the end-user.

Optimizing IT infrastructure

The more technology, the more complexity. Maintaining a robust and agile IT infrastructure is vital for retailers to support their operations effectively–but managing a highly complex IT environment can be challenging, especially when it comes to identifying and resolving performance bottlenecks.

The Riverbed Unified Observability Platform offers a holistic approach to IT infrastructure monitoring. By analyzing the performance of network infrastructure, servers, and databases, it enables retailers to detect and diagnose issues in real-time, reducing outages and guaranteeing uninterrupted operations. This proactive monitoring and troubleshooting capability allows IT teams to resolve problems before they impact critical systems or disrupt the customer experience.

As the retail landscape continues to evolve, retailers must embrace innovative solutions to overcome challenges and stay competitive. By leveraging Riverbed’s monitoring and analytics capabilities, retailers can transform their businesses and thrive in the dynamic retail landscape of today and tomorrow. Get in touch with us today to learn more, view our retail infographic for more DEX survey findings, or visit our retail web page.

]]>
How Modern Enterprises Can Securely Optimize SMB Traffic with Riverbed Acceleration https://www.riverbed.com/blogs/optimize-smb-traffic-riverbed-acceleration/ Tue, 30 Jan 2024 13:57:13 +0000 https://www.riverbed.com/?p=76466 In the fast-paced world of modern enterprises, employee productivity relies heavily on seamless access to files, irrespective of their location. File sharing is a cornerstone of collaboration, especially for organizations dealing with data-intensive applications like computer-aided design (CAD), computer-aided manufacturing (CAM), or managing extensive client documents. Slow file transfers can cripple employee experience and hinder productivity, making efficient solutions crucial.

The role of Server Message Block (SMB) protocol

Server Message Block (SMB) is a popular protocol for network file-sharing, providing secure remote access to files, printers, and devices. The challenge arises when SMB encounters Wide Area Network (WAN) latency, which degrades network and application performance. As organizations increasingly operate with decentralized workforces and distributed resources, optimizing file sharing becomes paramount. This is where WAN acceleration becomes a key player, reducing latency, optimizing bandwidth, and ultimately enhancing employee productivity.
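To see why latency, rather than bandwidth, often dominates chatty protocols like SMB, consider a toy model. The function and numbers below are purely illustrative, not Riverbed measurements:

```python
# Toy model (illustrative only): total time for a chatty file transfer
# is dominated by protocol round trips once WAN latency grows.
def transfer_time_s(file_mb, round_trips, rtt_ms, bandwidth_mbps):
    """Serialization time on the wire plus the cumulative cost of round trips."""
    serialization = (file_mb * 8) / bandwidth_mbps   # seconds to push the bits
    chatter = round_trips * (rtt_ms / 1000.0)        # seconds spent waiting on RTTs
    return serialization + chatter

# Same 100 MB file, 2,000 SMB round trips, on a 1 Gbps link:
lan = transfer_time_s(100, 2000, 0.5, 1000)   # ~1.8 s at 0.5 ms RTT
wan = transfer_time_s(100, 2000, 100, 1000)   # ~200.8 s at 100 ms RTT
```

The bandwidth is identical in both cases; only the round-trip time changes, which is exactly the term WAN acceleration attacks by reducing the number of round trips.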

Why WAN acceleration for SMB protocol?

WAN acceleration involves technologies that enhance data transfer efficiency between centralized data centers and remote locations across a WAN. Common performance killers like network congestion and high latency can significantly impact application performance, leading to decreased productivity and revenue loss.

The challenge of SMB WAN acceleration

Despite the clear need for optimizing SMB traffic, many organizations face challenges due to the protocol’s security features, such as signing and encryption. Optimizing SMB traffic requires decrypting and re-encrypting data. Security rules and regulations often complicate this interaction, hindering deployment and impacting application performance.

Optimizing SMB traffic with Riverbed solutions

Riverbed Acceleration solutions offer a way forward for organizations seeking to optimize SMB traffic securely. Riverbed SteelHead, Client Accelerator, and Cloud Accelerator deliver fast, secure application performance across distributed enterprises:

  • SteelHead: A vital component of Riverbed Acceleration solutions, SteelHead optimizes and accelerates network traffic and application performance, reducing bandwidth consumption by up to 99% and accelerating SMB traffic up to 40 times, even when signed or encrypted.
  • Client Accelerator: Extending SteelHead’s capabilities to remote employees, Client Accelerator operates on end-user laptops, providing access to critical applications with speed and security, irrespective of location.
  • Cloud Accelerator: Ensuring quick and secure delivery of workloads, Cloud Accelerator optimizes cloud-based file and workload transfers, accelerating migration, meeting SLAs for cloud backup, reducing data egress costs, and stabilizing cloud workload performance.

Riverbed’s SMB acceleration deployment models

To ensure full latency and bandwidth optimization, Riverbed offers three deployment models that cater to different security postures and IT structures:

  • Model 1: SteelHead directly connected to Active Directory
    • Simple configuration with widget support for each appliance.
    • Supports both Kerberos and NTLM authentication.
  • Model 2: Deploy the Riverbed WinSec Controller
    • Offers better security posture with a Tier 0 appliance.
    • Compliant with the Microsoft Enterprise Access Model.
    • Supports greater latency (up to 110 ms) between WinSec Controller and domain controller(s).
  • Model 3: Domain Independent SteelHead Kerberos Only (DISKO)
    • Does not require SteelHead appliance to join the domain.
    • Exclusively supports Kerberos authentication.

Riverbed customers find success with SMB optimization

Many of Riverbed’s customers have found success in optimizing SMB file sharing and have gained significant improvements in efficiency and cost savings. An engineering firm experienced a remarkable 86% increase in download speeds, coupled with a drastic reduction in file transfer times, from 28 minutes to less than 4 minutes.

Similarly, Quarles & Brady, a legal service provider, witnessed a transformation in their document transfer speeds, which were cut down from minutes to mere seconds. This acceleration in performance was complemented by a simplification of their technological infrastructure, leading to reduced expenditures on bandwidth and operational fees, further showcasing the substantial impact of Riverbed’s SMB optimization solutions.

The benefits of Riverbed WAN acceleration

Riverbed WAN acceleration solutions employ advanced optimization techniques, including transport streamlining, data streamlining, and application streamlining to improve the performance of protocols and applications over WANs dramatically. These technical benefits translate into increased employee satisfaction, IT savings, and improved application performance.
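The data-streamlining idea can be sketched as chunk-level deduplication: split the byte stream into chunks, fingerprint each one, and transmit only chunks the far side has not already seen. This is a hypothetical simplification for illustration, not Riverbed’s actual algorithm:

```python
import hashlib

def dedup_send(data, chunk_size, seen):
    """Return the number of bytes that would cross the WAN after deduplication."""
    sent = 0
    for i in range(0, len(data), chunk_size):
        chunk = data[i:i + chunk_size]
        digest = hashlib.sha256(chunk).hexdigest()
        if digest not in seen:
            seen.add(digest)       # far side now holds this chunk
            sent += len(chunk)     # new chunk must be transmitted in full
        # otherwise only a short reference to the stored chunk is sent
    return sent

seen = set()
payload = b"hello world" * 1000
first = dedup_send(payload, 64, seen)    # only unique chunks cross the WAN
repeat = dedup_send(payload, 64, seen)   # a repeated transfer sends nothing new
```

Real implementations add reference tokens, persistent chunk stores, and content-aware chunk boundaries, but the principle is the same: repeated data patterns are replaced with small references.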

Choose a best-in-class solution for optimizing SMB traffic

SMB traffic optimization is no longer an option but a necessity for modern distributed enterprises.

Riverbed provides a comprehensive solution, offering multiple deployment models based on security needs and helping organizations achieve faster SMB transfer speeds and significant data reduction. The benefits extend to increased employee satisfaction, reduced IT costs, and improved application performance.

Want to learn more about Riverbed’s WAN acceleration solutions for SMB optimization? Download our latest white paper.

]]>
Get “Stuff” Done Faster: Unleash the Power of Digital Experience Management https://www.riverbed.com/blogs/get-stuff-done-faster-unleash-the-power-of-digital-experience-management/ Tue, 09 Jan 2024 18:05:45 +0000 https://www.riverbed.com/?p=76443 In this crazy fast-paced digital world, businesses are always on the hunt for ways to save cash and get stuff done faster.

Fortunately, there’s a supercharged solution to do just that: Digital Experience Management (DEM).

DEM isn’t just about making employees and customers happy. It’s also an absolute powerhouse when it comes to slashing costs and cranking up efficiency. Buckle up as we dive further into the wild world of DEM and its impact on businesses.

Wrapping Our Heads Around Digital Experience Management

First things first, let’s get a grip on what DEM is. Think of it like a secret weapon that makes digital interactions with customers and employees bloody brilliant. It’s about making the technology support the human at exactly the time the human needs it—and leaving everyone grinning from ear to ear.

The Dynamic Duo: Cost Reduction and Efficiency

When it comes to Getting Stuff Done in business, cost reduction and efficiency go hand in hand. Cost reduction is all about squeezing those expenses and making your finances happy, particularly when “Money’s too tight to mention.” Efficiency, on the other hand, is about getting stuff done with minimal time and resources. To conquer the business world, it’s imperative to find the sweet spot between these two bad boys.

DEM Saves the Day: Slashing Costs Like a Superhero

Now, here’s where DEM is the caped crusader who saves the day. By unleashing the power of DEM, businesses can optimize resources and cut down on operational expenses. How? By automating boring, repetitive tasks that eat up time and money.

And you know what? Those savings don’t have to go into the corporate coffers: they can be used to fund growth and innovation—and that can help improve the digital experience of customers and employees.

DEM Supercharges Efficiency: Faster Than a Speeding Bullet

Hold onto your hats because DEM can rev up your efficiency like nobody’s business. With the right DEM tools and platforms, you can streamline your processes, eliminate those pesky bottlenecks, and get things done faster than a speeding bullet. Say goodbye to wasting time and hello to supercharged productivity. And guess what? DEM also promotes teamwork, communication, and lightning-fast data-based decision-making.

Unleashing the Full Power: Winning Strategies for DEM Domination

To make the most of DEM’s mind-blowing potential, you do need a game plan. Here are some tips to conquer the DEM game:

First, choose the right DEM tools that match your needs and goals:

  • What business-critical apps are the lifeblood of your business?
  • Do you only need DEM for IT use cases?
  • Are there roles for DEM to play outside of IT?
  • What processes are the Kryptonite?

Like the perfect superhero suit, it’s got to fit right!

Second, don’t forget to keep tabs on how DEM is performing. Track those digital experience indexes and listen to what your users have to say. That way, you can keep fine-tuning your DEM strategies and creating value to conquer the digital universe.

Conclusion: Unleashing the Power for Global System Integrators and Channel Partners, Because Why Settle for Less?

We’ve journeyed through the wild world of Digital Experience Management (DEM), and now it’s time to talk about the glorious benefits that Global System Integrators (GSIs) and Channel Partners can reap when they deliver this mind-blowing outcome to their customers.

Hold onto your hats because this is going to be epic!

  1. Rise Above the Rest: By including DEM as part of your arsenal, you can soar above the competition because you’ll become the go-to expert, guiding businesses towards jaw-dropping digital experiences. Who needs ordinary when you can be extraordinary?
  2. Enduring Loyalty: When you deliver the power of DEM, you’ll become the heroes your customers need. The result? Unbreakable loyalty and a partnership that’s stronger than Vibranium.
  3. Cash Flow Explosion: DEM opens a treasure chest of revenue opportunities. Businesses are hungry for DEM expertise, and those who deliver it become the masters of their domain, unlocking new consulting and cross-selling opportunities, not to mention long-term contracts. Cha-ching!
  4. Superhero Squad: By embracing DEM, you can form unbeatable alliances with other vendors and providers, breaking down the traditional IT Silos to form a league of extraordinary IT superheroes.
  5. Thought Leaders Unite: Take the spotlight and shine like the stars you are. With your DEM expertise, you can become the thought leaders in the industry. Share your wisdom, showcase your success stories, and guide others towards the path of DEM greatness. The world needs your insights, and you’re here to deliver!
]]>
Five Tips to Implement Unified Observability for Mission-Critical Defense https://www.riverbed.com/blogs/observability-for-mission-critical-defense/ Tue, 09 Jan 2024 13:19:33 +0000 https://www.riverbed.com/?p=76060 US Special Operations Command (USSOCOM) has access to some of the best minds and talent in the world. Therefore, you would expect their enterprise network to be designed for the same level of maneuverability, command, and control as their military forces on the battlefield.

However, when Colonel (Retired) Joseph Pishock assumed the role of Director of Networks and Services at USSOCOM in the summer of 2020, he encountered a starkly different reality. USSOCOM’s enterprise network, the fourth largest in the US Department of Defense, serving over 80,000 personnel spread across more than 20 time zones, was plagued by frequent interruptions. Basic services, like delivering emails to headquarters, were sluggish and unpredictable. The network’s performance suffered due to legacy systems and the disparate needs of its users. 

Why network visibility matters when it’s mission-critical 

What Colonel Pishock realized was that USSOCOM lacked the necessary visibility into its own network. IT faced the incredibly complex task of connecting personnel with data, devices, applications, and communications across non-classified, secret, and top-secret networks. However, without a clear understanding of network dependencies, boundaries, relationships, and authorities, troubleshooting became a time-consuming, reactionary process. Consequently, Colonel Pishock turned to Riverbed for a solution that could provide the unified observability required for mission success.

Together, USSOCOM and Riverbed built a command-and-control structure capable of delivering real-time updates to authorized personnel. Previously, leaders only received information on network performance from the previous day. Thanks to Riverbed’s involvement, USSOCOM now has full and immediate visibility. Personnel in the help desk now share the same insights as network engineers, enhancing collaboration and problem-solving.

Colonel Pishock shared his insights at the 2023 Military Communications and Information Systems (MilCIS) Conference and Expo in Canberra, Australia. Drawing from the success of Riverbed’s partnership with USSOCOM, he offered five tips for how technology companies should approach US Defense.

Five tips to achieve visibility for mission-critical defense  

1. Don’t get stuck on laptops and desktops–include mobility as well

When it comes to enterprise networks, it’s easy to make laptops and desktops the focus. But Colonel Pishock’s advice is to pay equal attention to mobility. Because they aren’t centrally managed, cell/mobile phones can be a point of vulnerability. Riverbed’s Aternity provides visibility into the end-user experience across desktops, laptops, and mobile devices. With it, you can identify whether the cause of a delay lies in the network, the device, or an app’s back end, and diagnose and fix issues impacting UX. This is critical to optimizing the productivity of command staff, who rely heavily on mobile devices.

2. Partnerships with professional services are essential 

Perhaps the most difficult lesson learned was that government organizations, no matter how qualified, can’t handle everything internally. Initially, USSOCOM wanted to create a bespoke solution to solve its network issues but became its own worst enemy. After a failed in-house deployment delayed the project by six months, Colonel Pishock advocated for USSOCOM to partner with Riverbed professional services to tap into their network expertise.

3. Government needs to take an active leadership role  

Breaking down the barriers to support professional services throughout implementation was a weekly occurrence. Rather than leave the project to the IT teams, uniformed and civilian government leaders and operators needed to take an active role. This was crucial in getting Riverbed’s solution into the hands of personnel who would actually use it. Training was tailored for different users—creators, doers, and watchers—because they all utilized the same tool for different purposes.

4. Orient everyone around a real problem and create a baseline to measure success 

To create a baseline for performance, Colonel Pishock said it is important to set up a vignette. For this project, he kept it simple: how long does it take from the time you insert your ID (CAC) card to a functional Outlook? Internal teams had no idea what the start-up sequence was, or which systems talked to which. It exposed a lack of understanding of the dependencies and interdependencies within the SOCOM network. Working hand in glove with experts from Riverbed, Colonel Pishock was able to streamline, simplify, and establish a baseline (one minute). The baseline was updated every time a modification was made. Now, USSOCOM can actively monitor the network and make proactive changes to optimize performance.
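The baselining idea behind the vignette can be sketched in a few lines. Everything here is hypothetical for illustration: the sample timings, the median baseline, and the 20% regression tolerance are assumptions, not USSOCOM’s actual method:

```python
from statistics import median

def flag_regressions(samples_s, baseline_s, tolerance=0.2):
    """Return login timings that exceed the baseline by more than `tolerance` (20%)."""
    limit = baseline_s * (1 + tolerance)
    return [s for s in samples_s if s > limit]

# Hypothetical "CAC insertion to functional Outlook" timings, in seconds:
timings = [58, 61, 59, 95, 60]
baseline = median(timings)               # 60 s, matching the one-minute baseline
slow = flag_regressions(timings, baseline)
```

The point of the vignette is the same as the sketch: once a baseline exists, every change to the environment can be measured against it instead of debated.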

5. Ensure your solution integrates with existing cyber tools

Chances are, Defense already uses other technology solutions. Any technology company approaching Defense needs to find a way to integrate their solution with existing tools. Don’t create a situation where there is a complicated divestment decision to be made because Defense is often locked into multi-year agreements. The ideal solution can be integrated with existing systems, setting Defense up for divestment in the future, which is a more realistic goal.

Mission success depends on providing personnel with real-time data and insights to make fast, informed decisions. As the digital infrastructures that support defense grow in complexity, closed networks become barriers, rather than facilitators, of data flows.

Therefore, having clear visibility across these landscapes is not just important–it’s mission critical. Riverbed can help you monitor network needs and adapt in real time–supporting operational continuity and mission delivery. Find out more about what Riverbed Unified Observability portfolio can do for defense organizations today.

]]>
What a $28 Billion Deal Indicates about the Future of Observability https://www.riverbed.com/blogs/tech-acquisition-future-of-observability/ Mon, 08 Jan 2024 13:34:07 +0000 https://www.riverbed.com/?p=76063 Observability hit the headlines again after Cisco’s intention to acquire Splunk was announced. The $28 billion deal is Cisco’s largest ever purchase and is the second largest tech acquisition of 2023 so far. This agreement marks a big strategic change for Cisco as it continues its move from hardware to software.

But there’s a bigger story here that should interest every IT decision maker: observability is both a very real and a complex problem to solve, and Cisco, like many others, is attempting to solve it.

Why Cisco bought Splunk

Observability has its roots in the mid-20th century tracking of satellites, rockets, and aircraft. These new technologies were monitored via telemetry, the remote collection of data. As devices and networks have grown more and more complex, telemetry has become more sophisticated, and in the 1990s, the term “observability” began to be used to describe the data-driven measurement of an IT network’s state. Since then, the rise of cloud-based networks, mobility, and hybrid work patterns has seen its importance soar further still.

Splunk specializes in logging, security information and event management (SIEM), and machine data analytics. It has carved out a significant market with customers including Coca-Cola and Intel, as well as Cisco itself. The deal aims to diversify Cisco away from its networking equipment business, which has been slowing. This year has seen Cisco make a key shift in its monitoring solutions by moving resources from AppDynamics to ThousandEyes, and Splunk is an important acquisition for them. According to Cisco, the acquisition strengthens its security and observability capabilities, two of the most important areas for its customers.

What the deal means for observability

The deal comes at an exciting time for observability. In July, web analytics company New Relic was taken private in a $6 billion deal. Meanwhile, in September, Riverbed announced additions to their suite that enable users to track energy efficiency across networks.

Generative AI is going mainstream, and observers noted that Cisco’s acquisition announcement used the word “AI” a whopping eight times. Yet while the deal may be on-trend, how successful it will be in the long run remains to be seen. The deal has been approved by both boards, but regulatory hurdles still need to be cleared. Perhaps a bigger problem is how Cisco manages the overlap between its products and sales channels–and whether that distraction impedes its ability to execute.

The future of observability

Other shifts in the industry continue. Alongside the rise of AI, thought leaders have a new favorite acronym. MELT (Metrics, Events, Logs, and Traces) sums up the resources that can be combined to give a holistic view of a digital ecosystem. Some experts are seeing traces beginning to overtake logs in importance. Grafana’s open-source eBPF solution enables the tracing of applications without instrumentation. Meanwhile, the OpenTelemetry (OTEL) framework is increasingly used across the industry and can enhance infrastructure, application, and end-user monitoring. At Riverbed, we believe all telemetry sources are important to see the full picture.
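As a toy illustration of how MELT signals complement each other, correlating metrics, events, logs, and traces within one time window gives a single view of an incident. This is not a real OpenTelemetry pipeline; the signal data and field names below are invented:

```python
# Hypothetical MELT signals, with "ts" as seconds since the incident began:
telemetry = [
    {"kind": "metric", "ts": 0,  "body": "cpu=97%"},
    {"kind": "log",    "ts": 1,  "body": "request timeout"},
    {"kind": "event",  "ts": 2,  "body": "pod restarted"},
    {"kind": "trace",  "ts": 30, "body": "checkout span ok"},
]

def correlate(signals, start, window):
    """Collect all signals, regardless of kind, that fall inside one time window."""
    return [s for s in signals if start <= s["ts"] < start + window]

incident = correlate(telemetry, 0, 10)   # metric + log + event; the later trace is excluded
```

Each signal type alone tells a partial story (the metric shows saturation, the log shows the symptom, the event shows the remediation); only combining them across silos yields the holistic view the MELT acronym describes.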

The future of observability arguably belongs to whoever can bring all these telemetry streams of data together best and present the results in a way that is consistent and shareable. While individual models have their uses, a mature unified observability solution offers a view of the entire digital ecosystem as one cohesive vehicle. Although many claim to do this, few actually do.

This is a specialty of Riverbed, whose observability suite has transformed the world’s largest organizations and how they deliver applications and services to both their customers and digital workforce. By bringing data together and using AI-powered automation, Riverbed IQ can aid problem-solving, empower decision-making, and keep users productive. It also supports NetProfiler, NetIM and Riverbed’s real-time digital experience management via Aternity. Riverbed’s open observability suite offers a richness that sets it apart from the rest.

Surveys reveal a change in workforce expectations, with a focus on creating digital employee experiences powered by automation and informed by actionable insights. An acquisition of this magnitude shows the criticality of observability and emphasizes Riverbed’s vision and position as a leader.

]]>
The Evolution of Network Monitoring https://www.riverbed.com/blogs/the-evolution-of-network-monitoring/ Thu, 04 Jan 2024 16:29:29 +0000 https://www.riverbed.com/?p=76282 It’s not nearly as boring as you may think

“Oh my!” I hear you cry… that is a very boring topic. But let me make a radical argument: a journey through monitoring is a journey through the entire evolution of IT.

For decades—from the mainframe era to now—network monitoring has played a critical role in ensuring the smooth functioning and security of computer networks, adapting along the way to ever-increasing complexity and scale. And my own career matches this history.

Mainframes

I started work in 1988 as a trainee mainframe operator, the last of the breed. The traditional mainframe was near the end of its over 30-year run. Computer networks were centralized hub-and-spoke affairs, with a large mainframe at the centre.

Network monitoring primarily focused on system performance metrics and error detection within these limited-scale environments. We identified issues manually with basic monitoring tools such as system logs and rudimentary diagnostic utilities.

Client-Server

As my career advanced, I moved into client-server, with organizations deploying networks of interconnected servers and workstations. This architecture created a significant shift in network monitoring as it expanded to include traffic analysis, performance monitoring, and fault management. Simple network management protocol (SNMP) became widely adopted for gathering data from network devices. Administrators could now remotely monitor and manage network elements, paving the way for centralized monitoring solutions.

The Internet Revolution

In the mid-1990s, I moved out of internal support roles and into a client-facing mix of pre- and post-sales support. This coincided nicely with the explosion of the internet and vast interconnected networks. Monitoring solutions had to evolve to handle vast amounts of data and more diverse network topologies. We could monitor network traffic in real-time with network probes and packet sniffers, enabling administrators to identify potential bottlenecks, security threats, and anomalies.

Distributed and Virtualised environments

In the early 2000s, I took a job in the new area of WAN Optimisation, as “employee #1” in the company’s new UK offices, before moving into product management and strategy. This coincided with the development of distributed, virtualised environments, which introduced yet another set of monitoring challenges. With dynamic and elastic infrastructures, traditional monitoring tools struggled. In response, we adopted more sophisticated solutions, such as network flow analysis, which provided insights into traffic patterns, bandwidth utilization, and application behaviour.

Cloud and Software Defined Networking (SDN)

As I moved through product management into strategy, hosting, virtualisation, and cloud computing–coupled with the emergence of early software-defined WAN (SD-WAN) and software-defined networking (SDN)–challenged our network monitoring once again. Monitoring tools had to adapt to infrastructures lacking traditional physical boundaries. SD-WAN and SDN allowed administrators to manage and configure network resources dynamically, necessitating monitoring solutions that could keep pace with these new environments.

Big Data and Analytics

After a brief period working in Radio Access Networking and IoT, I was lured back to the wonderful world of Observability, largely because the rise of big data and analytics had a profound impact on network monitoring. With networks generating enormous volumes of data, monitoring solutions now need to leverage machine learning algorithms and artificial intelligence to identify patterns, detect anomalies, and predict potential issues proactively. Cross-silo data ingestion and real-time analytics enable help-desk operators, support engineers, and administrators to make fast, accurate, data-driven decisions, enhancing network performance and security.

Security-Centric Monitoring

There is one area in which, believe it or not, I’ve not been directly involved: as cyber threats become more sophisticated and pervasive, network monitoring has evolved to include intrusion detection systems (IDS), security information and event management (SIEM) platforms, and behaviour analytics to detect and mitigate potential security breaches.

It’s never boring

I continue to approach each day working in IT with a sense of wonder and amazement. IT is not “boring” or “run of the mill.” It is constantly changing, evolving, and improving, as we can see from the evolution of network monitoring, which has necessarily mirrored the rapid advancements in computer networking itself.

From basic performance monitoring in early centralized systems to complex analytics-driven Unified Observability solutions in today’s distributed and virtualized networks, network monitoring has become indispensable for ensuring network availability, performance optimization, and robust security. As networks continue to evolve, network monitoring will undoubtedly keep pace, leveraging emerging technologies to meet the ever-growing demands of the digital age.

Creating a Sustainable Device Lifecycle Management Practice https://www.riverbed.com/blogs/sustainable-device-lifecycle-management/ Tue, 02 Jan 2024 13:02:19 +0000 https://www.riverbed.com/?p=76198

E-waste is quickly piling up. According to the United Nations University Global E-waste Monitor, e-waste is the fastest-growing waste stream in the world—with mobile phones and PCs making up nearly 10% of that total stream.

End-user devices, in particular, require organizations’ attention when it comes to mitigating environmental impact: Gartner reports these devices constitute a majority of IT’s carbon footprint.

Poor device lifecycle management is a major contributor to that footprint. There are several preventative, proactive practices organizations can take to extend the current lifecycle of their devices—which optimizes their use and curbs their environmental impact. In fact, 83% of business leaders report that successful sustainability initiatives create significant short- and long-term value for their organization.

IT teams may worry making changes to their device lifecycle management processes can result in downtime, inefficiencies, and performance issues. However, robust device lifecycle management actually enables higher productivity and better performance by mitigating the waste of time, resources, and actual physical devices. Here’s how organizations can ramp up their device lifecycle management to improve their environmental, social, and governance (ESG) outcomes.

Reduce intake: conduct a comprehensive device inventory audit

Organizations can use existing inventory to minimize their contribution to e-waste and optimize their current resources. By conducting a comprehensive device inventory audit, IT leaders can gain the visibility they need into the devices they have, which helps prevent unnecessary device additions or performance-affecting device reductions. More specifically, an inventory audit can help leaders:

  • Track lifecycle stages. Audits can catalog devices by their lifecycle stage, informed by purchase date, warranty status, and maintenance history. This helps IT leaders gain a more accurate understanding of how usable or up-to-task each device is, which helps teams maximize its lifetime use. This can make the difference between throwing away a perfectly good device because it’s “old” and increasing an “old” device’s lifetime value, which preserves money and resources.
  • Optimize device usage. Understanding device inventory allows organizations to assess device usage. IT leaders can easily reallocate underutilized devices, prevent unnecessary new purchases, extend the lifespan of existing assets, and reduce e-waste.
  • Streamline device budgeting. When IT leaders know all the devices in their inventory, what they’re capable of, and where they are in their lifecycle, they can forecast future device needs with greater accuracy. This allows for better budget allocation and prevents overspending on unnecessary resources, which also reduces an organization’s carbon footprint.

Inventory audits also reframe corporate attitudes around older devices, and therefore, mitigate waste by extending their lifetime use. For starters, older devices are not necessarily useless devices. Based on Gartner’s research, while most organizations still set three to four-year refresh cycles for employee laptops, organizations have found that only a small fraction of those devices have performance metrics that would justify replacement within that time frame. Extending their life span represents millions of dollars in potential cost savings.
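To make this concrete, here is a minimal sketch of a performance-first audit rule. The inventory fields (`purchased`, `health_score`), the four-year refresh cycle, and the health cutoff are all illustrative assumptions, not drawn from any specific asset-management product:

```python
from datetime import date

# Hypothetical inventory records; field names and values are illustrative.
devices = [
    {"id": "LT-014", "purchased": date(2019, 6, 1), "health_score": 88},
    {"id": "LT-102", "purchased": date(2022, 1, 15), "health_score": 41},
    {"id": "LT-233", "purchased": date(2020, 3, 9), "health_score": 74},
]

def audit(device, today=date(2024, 1, 1), refresh_years=4, min_health=50):
    """Flag a device for replacement only when performance justifies it,
    not merely because it has aged past a calendar refresh cycle."""
    age_years = (today - device["purchased"]).days / 365.25
    if device["health_score"] < min_health:
        return "replace"  # genuinely underperforming
    if age_years > refresh_years:
        return "retain (past refresh cycle, still healthy)"
    return "retain"

for d in devices:
    print(d["id"], "->", audit(d))
```

The point of the sketch is the ordering of the checks: measured health is consulted before calendar age, so an "old" but healthy laptop stays in service.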

Repair and retain: Fix devices when you can

Organizations should err on the side of repair instead of throwing devices away. Unilaterally getting rid of devices when they get “too old,” even when they still have considerable lifetime use in them, wastes resources, IT support time (because IT teams need to replace them), and money. 

In fact, an overwhelming majority of older devices could have their lifetime use extended with simple repairs and maintenance. As such, IT leaders should institute priorities around:

  • Hardware performance insights. IT leaders can utilize digital experience management (DEM) to optimize device lifecycle management. DEM can help organizations focus on actual device performance as opposed to a static calendar timeline, which helps extend device lifespan.
  • Flexible life span policies. Treat each and every device as a unique circumstance. Organizations should avoid adopting one-size-fits-all life span policies that risk throwing away usable existing devices. IT leaders can instead manage life span policies with DEM data on actual device performance.
  • Employee-focused energy reduction. How an employee treats and maintains their devices is a major factor in device reliability. Organizations can mitigate device wear and tear by instituting policies around employee use, including battery preservation, power management settings, and sleep settings.

Restructure and recycle: Bake sustainability into device selection and procurement

Incorporating sustainability goals into procurement strategies can build a stronger foundation of device lifecycle management by uplifting environmental priorities from the beginning. IT leaders can incorporate sustainability into their procurement processes by:

  • Seeking out vendors that ship devices in responsible packaging.
  • Ensuring devices have specific ecolabel certifications, such as 80 PLUS, Energy Star, and EPEAT.
  • Initiating tests to compare the energy efficiency of different device models.
  • Sourcing devices from responsible providers with commitments to sustainability.

Organizations can also leverage third-party assistance to collect and evaluate data needed to assess their vendors’ ESG performance to ensure they’re meeting ESG goals. They should also ensure that their vendors are engaging supply chains with similar priorities around sustainability to enable multi-level reduction in e-waste.

Improving device lifecycle management from the ground up

Incorporating sustainability initiatives throughout device lifecycle management can be helpful in reducing e-waste and optimizing performance. 

Few strategies can match the impact of implementing an eco-conscious mindset from the very beginning of every device lifecycle. IT leaders with greater sustainability ambitions can take device lifecycle management to the next level by keeping sustainability top of mind during procurement, defining employee responsibilities for energy conservation, and, of course, gaining and maintaining visibility of all devices for better insights into their utilization. Those three strategies combined shrink carbon footprints and maximize ESG outcomes.

Find out how to make sustainable IT a reality by checking out our white paper, The Role of Unified Observability in Sustainable IT.

What Does SEC T+1 Rule Mean for IT Teams in Financial Services? https://www.riverbed.com/blogs/sec-t1-rule-for-it-teams-financial-services/ Wed, 20 Dec 2023 13:53:07 +0000 https://www.riverbed.com/?p=76037

The financial services industry is no stranger to change, and the SEC T+1 rule is no exception. Effective from May 28, 2024, the new regulation will reduce the settlement time for U.S. securities transactions from two business days to just one.

In this blog, we delve into the SEC T+1 rule, explore how Riverbed’s Network Observability solutions can help IT network teams in meeting the associated challenges, and provide guidance for current Riverbed customers to prepare for the SEC T+1 changes.

A quick summary of the SEC T+1 rule

SEC T+1 is a rule amendment that will shorten the settlement cycle for broker-dealer transactions in securities from two business days after the trade date to one. The SEC believes this will benefit investors and reduce risk in securities transactions.

There are other nuances of this rule amendment that address processes and record keeping requirements but for the network teams at financial services institutions, cutting the allotted time to settle a transaction in half will have the most impact.

Challenges for IT network teams

The new T+1 rule puts a lot of additional pressure on IT and network teams at financial services organizations to ensure their networks can handle the increased network demands and data processing that will come along with the shortened transaction processing window.

This means it’s critical that financial services organizations have broad and deep visibility into their network so they can proactively identify and quickly resolve network performance issues. This visibility is also crucial for adhering to T+1 requirements, answering questions like “how much traffic is being consumed?” and “how is traffic being prioritized?”

Riverbed NetProfiler and AppResponse can help address those challenges. Riverbed NetProfiler provides network flow analytics that can quickly diagnose network issues before they impact performance. Meanwhile, Riverbed AppResponse offers the robust network and application analytics needed to shorten the mean time to repair network issues.

NetProfiler and AppResponse customers

For existing Riverbed customers using Riverbed NetProfiler and AppResponse, it’s important to note that adapting to the new SEC T+1 rule may increase data generation and stretch the limits of your NetProfiler and AppResponse capacity. To ensure continued network observability and data retention, now is a good time to double-check your existing licensed capacity and system storage.

Determining if NetProfiler is oversubscribed

You can determine your flow status by going to the NetProfiler or Flow Gateway ADMINISTRATION > System Information link at the top of the screen, and then clicking on SYSTEM. The video below provides an overview and you can read this blog, Determining If NetProfiler Is Oversubscribed, for more detail.

Checking AppResponse capacity

To understand how much additional packet and analysis horsepower remains in your appliance, head over to Administration > Traffic Diagnostics in the appliance web UI. This built-in insight is packed with critical charts of the hardware and software components that power the packet capture and analysis capabilities. On the “General” tab, the bottom four charts will indicate if you have reached your AppResponse capacity.

If you are seeing packet drops in any of these charts you should investigate how much traffic is being fed to the appliance by visiting “ADMINISTRATION” > “Capture Jobs/Interface” page. This page will list all the network capture hardware cards (or software/virtual interfaces) that are installed on the appliance along with their rated link speed.

Once you are familiar with all the capture cards installed and their link speeds, head back to the “Traffic Diagnostics” insight, where the top two charts, titled “Throughput” and “Packet Rate,” show how much traffic is going through the installed interfaces. Each interface must only be fed traffic below its line rate at all times. If these charts (which go back seven days) show traffic spikes that surpass the line rate for an interface, work with the infrastructure feeding traffic to AppResponse to spread the packet load across the other interfaces. In some cases, you may need to add another AppResponse to handle the peak rate of traffic.
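The line-rate check described above can be sketched in a few lines. The interface names, line rates, and throughput samples below are illustrative, not pulled from any real AppResponse export:

```python
# Hypothetical per-interface throughput samples (Gbps), e.g. read off a
# seven-day traffic-diagnostics chart; names and values are illustrative.
line_rate_gbps = {"mon0": 10.0, "mon1": 10.0}

samples = {
    "mon0": [6.2, 7.8, 10.4, 9.1, 11.0],
    "mon1": [3.0, 4.5, 2.8, 5.1, 4.9],
}

def oversubscribed(samples, line_rates):
    """Return interfaces whose observed throughput ever exceeded line rate,
    along with the offending readings."""
    return {
        iface: [g for g in readings if g > line_rates[iface]]
        for iface, readings in samples.items()
        if any(g > line_rates[iface] for g in readings)
    }

print(oversubscribed(samples, line_rate_gbps))
```

In this made-up data, `mon0` exceeded its 10 Gbps line rate twice, which is the signal to spread its packet load across other capture interfaces.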

The video below provides an overview of the process for both NetProfiler and AppResponse customers in detail:


While the SEC T+1 rule amendment will benefit investors and reduce risk in securities transactions, it comes with some challenges for network IT teams at financial services organizations. Network Observability solutions can help provide these teams with the in-depth network visibility needed to address these challenges by providing proactive identification of network performance issues and faster mean time to resolution. It’s critical that existing Riverbed customers evaluate their current usage levels to ensure they are prepared to handle the increased network demands from SEC T+1.

Learn more about how Riverbed can help financial services organizations.

Measuring More Than Sentiment with XLAs https://www.riverbed.com/blogs/measuring-sentiment-with-xlas/ Tue, 19 Dec 2023 13:57:17 +0000 https://www.riverbed.com/?p=75793

Imagine you’re at the supermarket, and there’s been an update to the company’s loyalty app. You open it up to find the barcode that registers your account has moved. Now, instead of it being on the home screen, you need to click into another section to find it.

Mildly annoyed, you finish your shop and head to the checkout. It takes you three more seconds than usual to grab that barcode and complete your transaction. No big deal, right?

Wrong. As you walk away, a queue of 10 customers forms behind you, each taking those three extra seconds swiping to their barcodes. That’s 30 seconds added to the cashier’s processing time. Multiply it by the 1,000 shoppers the store might see in an hour, and an entire 50 minutes is wasted.

At head office, the IT team’s high-fiving. The update’s rolled out, the interface looks fantastic, and everything’s running as planned. If only they knew what was going on for the employees and customers actually using their app.

And they could, had they implemented XLAs.

What are XLAs?

As their name suggests, experience-level agreements (XLAs, sometimes known as ELAs) are a variation of service-level agreements (SLAs) that focus on the end-user experience. Whether that end-user is your employees, your customers, or both, the way they engage with–and feel about–your solution is critical.

And XLAs aren’t simply an exercise in gathering sentiment. The most effective agreements combine the telemetry of your service with the feedback you receive on it and the emotional impact of it.

Essentially, you must ask yourself three questions: how reliable is my service? How well does it perform? And how do my users perceive it?
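One hedged way to picture this is a weighted roll-up of the answers to those three questions. The weights and 0–100 scales below are assumptions for illustration, not a published XLA standard:

```python
# Illustrative XLA roll-up: each component is normalized to 0-100, and the
# weights are hypothetical, to be agreed per service and per business.
def xla_score(reliability, performance, sentiment, weights=(0.4, 0.3, 0.3)):
    """Blend service telemetry (reliability, performance) with measured
    user sentiment into a single experience score."""
    w_r, w_p, w_s = weights
    return round(w_r * reliability + w_p * performance + w_s * sentiment, 1)

# A service can meet its telemetry SLAs (high reliability and performance)
# and still score poorly when users rate the experience badly.
print(xla_score(reliability=99.0, performance=95.0, sentiment=40.0))
```

The design point is that no single component can mask the others: stellar uptime cannot rescue a score dragged down by unhappy users, which is exactly the gap between SLAs and XLAs.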

Why are XLAs important?

So, you’ve got performance management software, and you’ve set telemetry SLAs that are always met. Your IT team knows what it’s doing, and it makes sure your users can access the systems they need, when they need them. But does it know what’s going wrong once those users get into those systems, and what’s going right?

You obviously can’t ask every single employee and every individual customer how they’re feeling. Still, you can give them a platform for their thoughts–then combine this with your telemetry data to find trends, remedy issues, and make the people you serve happier and more productive.

This combination is key. Because if your employees consistently and collectively rate one of your systems negatively, you may assume it’s because it isn’t working to standard–when in reality, it’s just taking everyone too long or requiring too much effort to do what they need to get done.

It’s all about shifting your IT experts’ mindsets to journey-based services rather than services that perform their function and nothing more. It’s about considering the time it takes people to access the systems they need, how the platform looks and feels to use, and what behaviors users display.

It’s about having the data-based insight to make informed decisions on prioritizing investments, identifying skills gaps, and improving policies and processes. And it’s about adding the relevant shortcut if it takes 10,000 clicks to process an order.

How can I implement XLAs?

The good news is that there are digital experience management tools that can gather all the information you need–qualitative and quantitative–in one easy-to-find, simple-to-interpret dashboard.

Aternity’s sentiments capability goes beyond basic surveys, allowing you to use human-defined or department-agreed thresholds to establish what a positive end-user experience looks like for your business and your service at a granular level. For example, completing a process in three seconds might feel too slow for some users, too fast for others, and perfect for a few.

Not only can tools like Aternity flag specific issues like this, but they can also measure input across every stage of the delivery chain. Think telemetry data from your entire IT platform, feedback from your employees, sentiments from your customers, and everything in between.

So, you get a complete picture of how different elements perform in different scenarios throughout different journeys for different people–and what those people would ideally like you to do about it. And sometimes, it’s an action as small as moving your loyalty app’s barcode back to where it used to be.

In today’s increasingly digital, ever-demanding world, speedy response times and slick branding aren’t enough. To achieve your goals, stay competitive, keep your employees, and satisfy your customers, you need to transform and innovate. All based on authentic user experiences rather than IT’s assumed outcomes.

XLAs are one powerful metric that can empower you to do exactly that. Get in touch with us today and discover solutions that assess not just how well your systems are working, but how well they’re working for the people that matter.

The Benefits of AIOps in Network Management https://www.riverbed.com/blogs/aiops-in-network-management/ Mon, 18 Dec 2023 13:24:09 +0000 https://www.riverbed.com/?p=74086

IT organizations are improving network management capabilities through the integration of artificial intelligence (AI) and machine learning (ML). A recent report by Enterprise Management Associates, AI-Driven Networks: Leveling Up Network Management, sheds light on this approach of utilizing AI/ML in IT operations solutions, commonly known as AIOps.

AIOps combines big data and machine learning techniques to support IT operations functions. Its primary aim is to improve root cause analysis, enable predictive insights, and automate responses, all while significantly reducing mean-time-to-resolution (MTTR) and elevating the digital experience.

Top five AIOps use cases, according to EMA

EMA asserts that confidence in AIOps remains high, with nearly 92% of organizations believing AI/ML-driven network management can lead to better business outcomes. In fact, 40% of organizations have already integrated AI/ML technology into nearly all aspects of their network management processes.

Drivers of AIOps adoption

The top priority for using AI/ML is network optimization. Organizations are looking for ways they can tune the network to best meet specific business needs. What’s worth noting is that IT executives are increasingly placing their faith in AI/ML techniques to facilitate this critical endeavor. Additionally, other important use cases for larger organizations include automated troubleshooting, intelligent alerting and escalations, and predictive capacity management.

Top benefits of AIOps-driven networks

Most organizations apply AI/ML and AIOps to network management via their network management and network infrastructure solutions. Although domain-agnostic AIOps products such as Moogsoft and Big Panda exist, they are somewhat less prevalent in network management use cases.

Top five benefits of AIOps, according to EMA

AIOps offers significant advancements to monitoring the network. The biggest opportunity is network optimization. The network operates at its best when AI/ML identifies and correlates events in real-time, resulting in a smoother overall system. The report also indicates benefits in network agility, security, and resiliency.

Riverbed NetIM adds AI/ML techniques to improve results

With the addition of dynamic thresholding in Riverbed NetIM infrastructure monitoring, all Network Observability products support AI/ML techniques. Instead of setting and tuning static per-device thresholds for utilization, memory, and CPU, NetIM now automatically and continuously updates historical performance baselines for these metrics to identify significant changes in behavior. As a result, it significantly reduces “noise” stemming from non-actionable alerts and minimizes the ongoing maintenance of manual threshold tuning.
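As a rough sketch of the underlying idea (not Riverbed’s actual algorithm), a dynamic threshold can be derived from each device’s own recent history, for example as the baseline mean plus a few standard deviations:

```python
from statistics import mean, stdev

def dynamic_threshold(history, k=3.0):
    """Derive an alert threshold from recent history instead of a static
    per-device setting: baseline mean plus k standard deviations."""
    return mean(history) + k * stdev(history)

# Illustrative CPU-utilization samples (%); values are made up.
cpu_history = [22, 25, 21, 24, 23, 26, 22, 24]
threshold = dynamic_threshold(cpu_history)

new_sample = 45
if new_sample > threshold:
    print(f"anomaly: {new_sample}% exceeds dynamic threshold {threshold:.1f}%")
```

Because the threshold is recomputed from a rolling history, a device that normally idles at 23% CPU alerts at 45%, while a device that normally runs hot would not, without anyone hand-tuning either.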

For a deeper dive into Riverbed NetIM IT infrastructure monitoring, click here. To explore the myriad of benefits and applications of AIOps, download our ebook today.

NetIM Health Sunburst: Easy Discovery of Poor Device Performance https://www.riverbed.com/blogs/netim-health-sunburst-device-performance/ Tue, 12 Dec 2023 13:45:27 +0000 https://www.riverbed.com/?p=74404

The Riverbed NetIM Health Sunburst automatically calculates your overall health score so you can see at a glance how your infrastructure is performing. Instantly identify infrastructure health and availability gaps, then drill into the worst-performing areas for fast root cause analysis.

Immediately identify device hot spots 

The NetIM Health Sunburst automatically identifies infrastructure hot spots that are impacting network and application performance. It isolates data by country, region, and city or by sites. This level of visibility enables fast root cause analysis by supporting fast drill down into a list of worst performing devices.

Device Health Sunburst shows worst performing devices by country, region, city.

The Sunburst uses color coding (orange, yellow, green) to highlight areas that need improvement, so you can act fast to make appropriate changes. It helps:

  • Provide an immediate picture of overall infrastructure health and the factors that contribute to it to prioritize remediation efforts.
  • Identify infrastructure hot spots with color-coded health scores to speed up problem investigation.
  • Drill down into problem areas to identify poorly performing devices.

The new sunburst health visualization provides an easy alternative to the geographic heatmap and geo-topology visualization options.

Health Sunburst lets you drill into a region to see the worst-performing devices.

How Health Sunburst works

A device’s geographic location data, which can be set in the Device Manager, is used to aggregate by Country, Region, and City. The size and color of the slices in the sunburst are based on the relative number of devices and the worst device health, respectively. When you mouse over a slice, you get a summary of the devices in the slice. Clicking on a slice provides the list of devices in the slice.
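The aggregation logic can be sketched as follows. The device records and field names are hypothetical, and this illustrates the grouping idea rather than NetIM’s implementation:

```python
from collections import defaultdict

# Hypothetical device records with geographic metadata and a 0-100 health
# score; names and values are illustrative.
devices = [
    {"name": "rtr-lon-1", "country": "UK", "region": "England", "city": "London", "health": 35},
    {"name": "sw-lon-2",  "country": "UK", "region": "England", "city": "London", "health": 90},
    {"name": "rtr-par-1", "country": "FR", "region": "IDF",     "city": "Paris",  "health": 72},
]

def aggregate(devices):
    """Group devices by (country, region, city); slice size is the device
    count and slice severity is the worst (lowest) health in the group."""
    slices = defaultdict(list)
    for d in devices:
        slices[(d["country"], d["region"], d["city"])].append(d["health"])
    return {
        path: {"size": len(scores), "worst_health": min(scores)}
        for path, scores in slices.items()
    }

print(aggregate(devices))
```

Keying the slice color off the worst health rather than the average means one failing router in London cannot be hidden by its healthy neighbors, which is what makes the hot spots pop out.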

Alternatively, you can use device site membership and site hierarchy to aggregate and display health by site with the separate but related Site Health Sunburst visualization panel.

NetIM for comprehensive infrastructure monitoring

Riverbed NetIM is a holistic solution for discovering, mapping, monitoring, and troubleshooting your IT infrastructure. It captures infrastructure topology, detects performance and configuration changes, maps application paths over the network, diagrams your network in real-time, and helps troubleshoot infrastructure problems and plan for capacity changes.

NetIM is built on a modern, containerized architecture for scalability, ultra-high performance, and cloud deployment for operational agility. As an integrated component of the Riverbed NPM platform, customers can manage infrastructure issues in the context of overall performance health.

For more information on Riverbed NetIM, click here.

Conquer Prolonged Boot and Login Times with Riverbed Aternity https://www.riverbed.com/blogs/prolonged-boot-login-times-alluvio-aternity/ Mon, 11 Dec 2023 13:26:17 +0000 https://www.riverbed.com/?p=75755

In the realm of modern technology, prolonged boot times on end-user devices are an undeniable reality for any organization’s digital estate. Unfortunately, the true impact of these boot times is often underestimated. Drawing from my 15 years as a consultant specializing in end-user-experience monitoring, I’ve observed firsthand the impact that slow boot and login times can have on users’ productivity, morale, and even their willingness to install crucial security updates.

Proactively measure boot times across the digital estate

Security updates are imperative, yet the forced reboots accompanying these patches are sometimes perceived by users as a significant hindrance to their daily productivity. Riverbed Aternity steps in with a dedicated boot time dashboard designed for troubleshooting. Prior to using the Riverbed Aternity boot time dashboards, slow boot and login times were only investigated when users noticed a decline in their individual boot and login times and reached out for assistance.

With Riverbed Aternity, IT can easily view and troubleshoot boot times from a 30K foot view.

With Riverbed Aternity, you gain valuable insights into addressing slow boot times through “out of the box” dashboards. These empower you to accurately pinpoint the causes of delays, whether they stem from outdated drivers, burdensome startup tasks, or other underlying issues.

Deep dive into phases of the boot process

A standout feature of Riverbed Aternity is its ability to provide a detailed history of recent boots and logins for individual users. This feature is especially beneficial for Service Desk technicians, enabling them to analyze a user’s boot process in-depth. They can precisely identify the stage, from Power-On to the end of the login sequence, where the delay occurs. If necessary, tickets can be assigned to the correct resolver team.

Drill down into root cause of slow boot times with Riverbed Aternity.

With Riverbed Aternity, getting a handle on the entire boot-to-login process is just a few clicks away. Dive deeper into the details of each phase of the boot process, identifying sluggish drivers, services, startup processes, or group policies. Determine which department has the worst average boot time, or even which laptop model ensures the quickest user logins.

Aternity DXI: Identify boot problems quickly

All problems get worse at scale, and in an end-user device footprint spanning a few thousand devices, identifying boot problems calls for a dependable single metric. Aternity’s Digital Experience Index (DXI) offers a broader perspective by scoring the average boot time across all users in your environment. Compare this metric with the average boot times of other Aternity customers for a valuable benchmark. Such a comparison is crucial, as it helps you understand the relative impact of boot and login times in your environment compared to others.

View the DXI metric comparison with industry benchmarks.
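As a simplified illustration of rolling a fleet up into one comparable number (the boot times and benchmark below are made up, and this is not the actual DXI formula):

```python
# Illustrative fleet-wide boot-time roll-up; times (seconds) and the peer
# benchmark are hypothetical.
boot_times_sec = {"alice": 48, "bob": 95, "carol": 62, "dave": 210}

fleet_avg = sum(boot_times_sec.values()) / len(boot_times_sec)
benchmark_avg = 75  # e.g., a peer-group average used for comparison

print(f"fleet average boot time: {fleet_avg:.0f}s (benchmark {benchmark_avg}s)")

# Surface the outliers that drag the average up, for targeted triage.
slowest = sorted(boot_times_sec, key=boot_times_sec.get, reverse=True)[:2]
print("slowest users:", slowest)
```

The single average answers "are we worse than our peers?", while the sorted outlier list answers "where do we start fixing it?", which is the triage workflow the dashboards support.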

Learn more

With all of these available dashboards and troubleshooting workflows, make sure your IT teams are properly equipped to handle boot and login problems, regardless of the scale of your end-user device infrastructure. Learn more about this topic on Riverbed.com.

The Next Generation Workforce Demands Sustainable IT https://www.riverbed.com/blogs/next-generation-workforce-demands-sustainable-it/ Wed, 06 Dec 2023 13:16:40 +0000 https://www.riverbed.com/?p=75666

PwC reports that Millennials and Gen Z currently comprise 38% of the workforce—a number predicted to jump to 58% by 2030. However, Millennial and Gen Z workers have some critical differences from their Baby Boomer counterparts that require organizations to make major shifts in their tech status quo. Namely, both generations list environmental friendliness and sustainability as top priorities, especially when choosing where they work.

51% of Gen Z U.S. business students stated they’d accept less pay to work at an environmentally responsible company—which means companies would do well to invest in sustainable IT for the new workforce. Implementing workflow optimization and automation is the best way for organizations to achieve the workplace that the next generation demands by improving digital employee experience (DEX) and integrating sustainable IT practices into daily operations.

How sustainability provides workers with the digital employee experiences they crave

Even if they’re not outright saying it, the Millennial and Gen Z workforces crave a positive digital employee experience (DEX), and meeting sustainability expectations is critical to providing it. According to Deloitte, 50% of Gen Z workers and 46% of Millennials are currently pushing their employer to drive change on environmental issues.

Here are a few ways sustainable IT can enhance DEX:

  • Improve engagement and morale. Gen Z and Millennial employees are a values-driven generation. When they feel that their values align with their organization’s, it can significantly increase overall engagement and morale. 
  • Enable better workflows with efficient and reliable technology. Sustainable IT inherently aligns with adopting more modern, efficient technologies and processes. When workflows are optimized and automated to mitigate unnecessary repetitive tasks, it can significantly reduce the chances of frustration or burnout. 
  • Enhance communication and collaboration. Sustainable IT practices promote the adoption of tools and platforms that help mitigate unnecessary and redundant human intervention, streamlining communication and fostering better teamwork.

The challenges of integrating sustainable IT practices

While integrating sustainable IT practices is necessary for organizations that hope to retain the next generation of talent, moving toward sustainability presents challenges that highly impact workflow productivity. These challenges include:

Data volume and velocity

Matillion and IDG found that organizations experience data growth of 63% per month on average. More IT systems are connected to a single company than ever, creating a massive volume of data at high velocities. Managing and processing this data manually, in real time, without overwhelming teams and creating performance bottlenecks is virtually impossible.

Data consistency and quality

Organizations must often process data from diverse and disparate sources, which makes it difficult to standardize company-wide data aggregation, collection, and analysis. More likely than not, there will be inconsistent data formats, missing values, and errors seriously impacting the accuracy and integrity of any data collected—which then forces teams to sink time and energy into identifying and remediating mistakes.

Resource and expertise constraints

Data moves fast in modern digital ecosystems, so workers must move even faster. However, it can be challenging to build the infrastructure and expertise needed to aggregate data (which isn’t always accurate or high-quality) from disparate sources—and then mine that data for insights. While investing in certain tools, personnel, and training can help mitigate resource strain, companies with tighter budgets may still experience significant bottlenecks here. Additionally, more is not always better. Simply adding tools or solutions to a tech stack doesn’t necessarily guarantee greater productivity.

These issues all negatively impact a major value of Millennial and Gen Z workers: convenience. Our 2023 Global Digital Employee Experience Survey Report found that 68% of Millennial and Gen Z employees are likely to go elsewhere if their employer’s DEX, which includes convenience and ease of use, does not meet their standards.

How workflow optimization and automation can help sustainability and DEX

While balancing IT device performance and reducing environmental impact might seem like unrelated ideas, they are in fact deeply intertwined. These goals can functionally support each other when approached with this critical mindset: Sustainable IT is better IT. 

Here are a few places where workflow optimization and automation can help organizations meet sustainability goals and ideals:

Energy efficiency

IT workflow automation can significantly optimize energy usage (i.e., reduce it when it’s not needed). With the proper automation, teams can schedule maintenance tasks during off-peak hours or even turn off devices, lights, and other energy sources when they’re not in use. Organizations can also leverage AI to enable energy-efficient algorithms and other automated processes that reduce overall power consumption, shrinking their carbon footprint and lowering operational costs.
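As an illustration, the scheduling logic described above can be sketched in a few lines of Python. The off-peak window, function names, and actions here are assumptions for demonstration only, not part of any Riverbed product or API:

```python
from datetime import time, datetime

# Hypothetical off-peak window -- values are illustrative assumptions.
OFF_PEAK_START = time(22, 0)   # 10 PM
OFF_PEAK_END = time(6, 0)      # 6 AM

def is_off_peak(now: datetime) -> bool:
    """Return True if 'now' falls in the overnight off-peak window."""
    t = now.time()
    return t >= OFF_PEAK_START or t < OFF_PEAK_END

def plan_action(device_idle: bool, now: datetime) -> str:
    """Decide what an automation agent should do with a device."""
    if is_off_peak(now):
        return "run_maintenance"   # defer heavy tasks to cheap, low-demand hours
    if device_idle:
        return "power_off"         # idle outside off-peak hours: save energy
    return "leave_running"

# Example: an idle device at 2 PM should be powered off, not serviced.
print(plan_action(device_idle=True, now=datetime(2024, 1, 8, 14, 0)))  # power_off
```

The same decision function could be driven by real telemetry (usage sensors, calendars) rather than hard-coded inputs.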

Resource optimization

The right workflow optimization and automation strategies solve experience and resource gaps. Workflow optimization inherently involves streamlining processes by reducing unnecessary steps and mitigating the need for human intervention—which can also help eliminate bottlenecks. Optimization and automation are most efficiently applied here when they remove the need for humans to carry out repetitive tasks that machines could do. When organizations standardize these automation procedures, IT systems can utilize their existing resources more efficiently, thereby reducing waste and excess energy expenditure.

Automated data validation and real-time data cleansing

When organizations automate data validation, they can identify and rectify data discrepancies or anomalies with rigor and timeliness. This ensures higher data accuracy, significantly reducing unnecessary human intervention when vetting data for quality and relevance. Additionally, automated workflows can address real-time data cleansing and enrichment. This helps identify and rectify data inconsistencies with greater accuracy and speed, reducing the environmental impact of operational inefficiencies.
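A minimal sketch of what automated validation and cleansing can look like follows; the field names, units, and rules are illustrative assumptions, not a real product schema:

```python
import re

# Required fields for a toy telemetry record -- an illustrative assumption.
REQUIRED = ("device_id", "kwh")

def validate(record: dict) -> list:
    """Return a list of problems found in a telemetry record."""
    problems = [f"missing:{f}" for f in REQUIRED if record.get(f) in (None, "")]
    kwh = record.get("kwh")
    if isinstance(kwh, (int, float)) and kwh < 0:
        problems.append("negative:kwh")   # flag a physically impossible reading
    return problems

def cleanse(record: dict) -> dict:
    """Normalize inconsistent formats (e.g. '3,5 kWh' vs '3.5')."""
    out = dict(record)
    kwh = out.get("kwh")
    if isinstance(kwh, str):
        # Strip units, normalize decimal commas, convert to a number.
        digits = re.sub(r"[^0-9.,]", "", kwh).replace(",", ".")
        out["kwh"] = float(digits) if digits else None
    return out

rec = cleanse({"device_id": "atm-17", "kwh": "3,5 kWh"})
print(rec["kwh"], validate(rec))   # the cleansed value passes validation
```

Running rules like these on every record as it arrives is what removes the need for humans to vet data by hand.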

New generations, new expectations, new technology

Millennials are about to enter their prime earning years, while Gen Z prospects flood the workforce in droves. As both generations actively seek out organizations that meet their values (and are willing to leave those that don’t), companies will need to live up to expectations of sustainability.

Implementing workflow optimization and automation in the right instances can significantly reduce environmental impact by streamlining inefficient processes. With automation enabling sustainable IT practices, companies can eliminate unnecessary human intervention, creating a more positive, productive, and environmentally friendly environment for all generations of workers.

Want to learn more about the key to implementing sustainable IT? Check out our white paper, The Role of Unified Observability in Sustainable IT, to take a deeper dive.

]]>
How FSI Orgs Can Exceed ESG Goals with Unified Observability https://www.riverbed.com/blogs/fsi-orgs-exceed-esg-goals/ Tue, 05 Dec 2023 13:06:14 +0000 https://www.riverbed.com/?p=75657 Sustainability is one of the biggest buzzwords in the world right now, including in the financial services and insurance (FSI) sector. There’s good reason, too. With increasing social awareness, financial enterprises recognize their pivotal role in contributing to a cleaner, greener world. Stakeholders are looking for meaningful eco-friendly initiatives aligned with organizational core values.

So, what does sustainability mean for your FSI business, and how can you navigate the stringent environmental, social, and governance (ESG) goals you’ve set? Riverbed Solutions Engineer, Jaspreet Sandhu, sheds light on this in our video on the topic: “Sustainability is all about balancing the needs of our people, planet, and profit to succeed both environmentally and financially.”

Sustainability is a top priority for businesses worldwide. A recent report by Forrester reveals that 51% of the Fortune Global 200 have set carbon-neutral or net-zero aims. Despite this commitment, many are struggling to get there, as data accuracy creates the biggest obstacle in ESG reporting.

To help overcome these hurdles, the suite from Riverbed empowers FSI organizations to:

  • Satisfy stakeholders with exceptional digital experiences.
  • Protect the planet with energy-saving functionality.
  • Save money while aligning with stakeholder expectations.

Scaling hardware and software to cut e-waste (and cost)

With siloed systems and disparate data, it can be hard to get a full picture of what hardware and software are being used the most, least, and not at all within your FSI business. Industry-leading unified observability tools like the suite from Riverbed gather and correlate large amounts of granular data on machine and license usage from internal and external sources–everything from employee PCs to ATMs.

They then transform this into actionable insights and workflow automation to drive effective decision-making, allowing you to end licenses that aren’t needed and recycle or remove hardware nobody would miss. This can add up to enormous savings in both physical and carbon footprint. After all, data centers alone account for almost 4% of the entire world’s energy usage (and no doubt cost you a fair amount to manage).

One success story comes from Tate & Lyle, which realized substantial savings by using Riverbed Aternity. By freeing up expensive, unused software licenses, the company can now allocate the correct number to its user community, delivering excellent return on investment.

Replacing machines based on performance, not time

If you have a policy of replacing hardware every two, three, four, or five years, the chances are you’re getting rid of machines that don’t need updating or upgrading yet–and dealing with lag and blue-screening from some that may be ready to retire earlier, leading to productivity losses and employee dissatisfaction. By deploying Riverbed, you can access up-to-the-minute insights and predictions on the performance of every single device in your fleet, displayed in easy-to-read dashboards–enabling you to replace hardware based on how much life it has left in it, rather than how long its life has been.
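The idea of replacing by performance rather than age can be sketched as a simple ranking. The metric names, weights, and cutoff below are illustrative assumptions, not Aternity’s actual scoring model:

```python
# Toy "replace by performance, not age" ranking.
def health_score(boot_seconds: float, crashes_per_month: float) -> float:
    """Higher is healthier: penalize slow boots and frequent crashes."""
    return 100 - min(boot_seconds, 120) * 0.5 - crashes_per_month * 10

def replacement_list(fleet: dict, cutoff: float = 50) -> list:
    """Return device IDs whose health score falls below the cutoff."""
    return sorted(d for d, (boot, crashes) in fleet.items()
                  if health_score(boot, crashes) < cutoff)

fleet = {
    "pc-old-but-fine": (25, 0),     # 5 years old, still healthy: keep it
    "pc-new-but-bad": (110, 3),     # 2 years old, lagging and crashing
}
print(replacement_list(fleet))      # only the poorly performing device
```

Note that age never appears in the score: an old machine that boots fast and never crashes stays in service.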

Riverbed Aternity allowed Kent Community Health NHS Foundation Trust’s IT team to make better-informed decisions about their investments in this way. “We have revised our asset refresh plan based on device performance,” says Darren Spinks, Head of IT Operations at the trust. “Aternity showed us we wouldn’t need to replace 42% of our 1784 devices aged five years or older. This has meant that we have already returned our investment in Riverbed Aternity.”

The UK team at Energy firm EDF uses Aternity in a similar way. Donna Lloyd, the company’s Senior Enterprise Manager of Platforms & Enablement, elaborates: “When we upgraded to Windows 10, we looked across our estate to see where our slowest machines were, so we could target those replacements first. When we replaced them, we used performance statistics to assess and demonstrate the positive impact.”

Resolving problems before they even begin

Another multifaceted benefit of using a platform like Riverbed is being able to proactively resolve issues, saving time, money, and a whole lot of stress.

Coming back to the Kent Community Health NHS Foundation Trust, the Aternity solution provides the organization with auto-remediation actions. These reduce service desk calls and automatically resolve IT-related problems–before users even become aware there’s an issue. What’s more, the IT team can monitor the power consumption of its endpoints and adjust energy usage accordingly to help reduce its carbon footprint. This supports the NHS’s sustainability agenda of becoming the world’s first net-zero carbon health service.

Reminding staff to power down from afar

One sneaky way energy peaks (and bills) can creep up on you is when staff spend time away from their PCs or laptops without shutting down. And one small-but-mighty way Riverbed can help is by monitoring device usage, then sending those employees alerts when it’s time to power off–for example, when they’re heading for lunch or winding down for the day. If they ignore these notifications, Riverbed can even allow you to shut their machines down remotely, so you can be a sustainability hero from afar.
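The alert-then-shutdown escalation described above can be sketched as a small state check. The idle thresholds and action names are illustrative assumptions, not actual Riverbed settings:

```python
from datetime import datetime, timedelta

# Hypothetical escalation thresholds -- illustrative assumptions.
IDLE_ALERT = timedelta(minutes=30)    # nudge after 30 idle minutes
IDLE_SHUTDOWN = timedelta(hours=2)    # remote shutdown after 2 idle hours

def idle_action(last_input: datetime, now: datetime) -> str:
    """Escalate from no action, to an alert, to a remote shutdown."""
    idle = now - last_input
    if idle >= IDLE_SHUTDOWN:
        return "shutdown"
    if idle >= IDLE_ALERT:
        return "alert"
    return "none"

lunch = datetime(2024, 1, 8, 12, 0)
print(idle_action(lunch, datetime(2024, 1, 8, 12, 45)))  # 45 idle minutes: alert
```

In practice, the "last input" timestamp would come from endpoint activity telemetry rather than a hard-coded value.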

Keeping a general eye on your carbon footprint

In conclusion, Riverbed has many features and functions that can help you improve sustainability by simply being more aware of what’s going on in your business, knowing what you can do about it, and having the power to take those steps.

The Princess Alexandra Hospital NHS Trust implemented Aternity with the aim of cutting costs and its carbon footprint. Not only is the organization projected to save £2.5 to £3 million over a five-year period–it’s also able to interrogate data more intelligently and see where general improvements can be made, leading to sustainability wins across the board.

Jeffrey Wood, Deputy Director of ICT at the trust, says, “We now have a single source of truth for clinician experience, improved application performance, reduced ICT spend and at the same time we have reduced our carbon footprint. Riverbed Aternity has exceeded my expectations on all fronts.”

We’ve specially collated a dedicated sustainability resource page for companies like yours, which you can visit here. Once you’ve explored it, get in touch with our team, and let’s talk about your FSI organization’s specific, unique ESG goals–as well as how we can empower you to meet and even exceed them.

]]>
Five Takeaways About the Future of AI from Gartner IT Symposium https://www.riverbed.com/blogs/five-takeaways-about-the-future-of-ai-from-gartner-it-symposium/ Tue, 21 Nov 2023 13:04:58 +0000 https://www.riverbed.com/?p=75539
Charbel Khneisser, VP Solutions Engineering, EMEA, shares Riverbed’s Unified Observability solutions at Gartner IT Symposium

It’s hardly news that AI is the hottest trend in the tech world. Most of us have dabbled with ChatGPT, become a designer for the day using Midjourney, or taken advantage of the many applications we hardly even consider artificial intelligence–from unlocking our phone with our face to performing a smart Google search.

But having just attended and exhibited at the Gartner IT Symposium in Barcelona, where there was a huge gathering of CIOs and IT executives, it’s clear that AI is revolutionizing the digital sphere in more ways than we could have imagined. And with that comes new opportunities and new risks. Here are some of the things we discovered about AI at the event.

AI is changing the game, and our everyday

If you’re familiar with Gartner, you’ll know about their Hype Cycle methodology. As it says on the Gartner website, ‘clients use Hype Cycles to get educated about the promise of an emerging technology within the context of their industry and individual appetite for risk.’ Gartner has curated several Hype Cycle iterations around AI, including generative AI: the technology that powers the likes of the aforementioned ChatGPT.

The organization’s research has shown four primary use cases for AI:

  • Everyday Internal use in a company’s back office
  • Everyday External use in the front office
  • Game-changing External use, helping businesses become more forward-looking via innovative products and services
  • Game-changing Internal use, supporting arduous regular activities and boosting core capabilities.

According to Gartner, it’s critical that companies start considering where they want to be as an organization–and, therefore, where they’d like to use AI. Because when you know the use case, or use cases, that would align with your business’s future strategy and goals, you can apply the technology more effectively and gain the biggest return on your investment.

At Riverbed, we use AI in our unified observability solutions across all four use cases. We provide our partners with scripted automation and proactive issue detection and remediation, both internally and externally, so they can have greater control and productivity in their everyday. And we offer customer and employee journey analysis, looking at behaviors and trends to drive innovation–including product releases–so they can change the game in their industry.

AI works best when you’re open

As well as AI, digital experience management (DEM) was named one of the top five areas organizations should be focusing on – and an area that AI can help accelerate and enhance. We’re an expert in this field, and as we’ve already touched upon, unified observability solutions like ours can empower companies to better understand their employees and customers, improving their experiences using data-driven insight instead of guesswork. But did you know Riverbed’s platforms are open, seamlessly integrating with any IT setup and infrastructure to complement and enhance–not replace–third-party tools a business has already deployed? Attendees who came to our stand at the Gartner Symposium were particularly impressed with this.

One visitor commented, “I’m procuring an APM solution right now. When I found out you could integrate with it, I felt relief that my decision won’t impact the results I can achieve with Riverbed.” Another added, “Riverbed can help me capitalize on and protect my investments, which is important to me and my business.” For us, offering an Open Suite means being able to leverage open APIs to collect or feed data and telemetry from/to third-party tools. This is achieved through the adept use of AI and ML capabilities, resulting in actionable insights. By integrating emerging technologies and tools, we provide our clients with the reliability, resilience, remediation, and more necessary to deliver exceptional experiences to their customers.

AI is creating regulatory conundrums

Imagine a world where you can open a bank account, claim benefits, secure a new job, or access healthcare at the swipe of a screen. Or how about a future where you go to buy an item of clothing online, and the company already knows all your measurements, so every garment you purchase is crafted to be a perfect fit? While this may sound futuristic, fun, and functional–removing the prospect of the tedious returns process–it’s terrifying to consider how much data would need to be gathered, stored, and shared about you for this to happen.

For this reason, governments will need to play a major role in operational excellence, citizen engagement, and driving outcomes. They will need to regulate the use of AI and take the lead on identity use cases: identity for compliance, identity for personalization, identity for fraud detection, and more. To double down on these use cases, they must facilitate confidentiality and compliance so that people feel confident engaging with these new ways of living and working. To that point, behavioral and customer analytics solutions–like the ones Riverbed can provide–will add enormous value in understanding and benchmarking user behaviors, journeys, sentiment, and experience. Ultimately, we’re about to see the birth of a new era of AI data culture and security, and we’re ready for it.

AI is only as good as what it’s given (for now)

While AI is already starting to help us in our daily lives, you only have to give Midjourney or DALL·E 3 a few prompts to see that they’re not foolproof (why do the hands always look so strange?). There are many reasons these models aren’t quite clever enough to outsmart us yet – and one of those is that, for now, the tech is only as intelligent as the information it’s given.

That’s where reinforcement learning through human feedback (RLHF) comes in. The RLHF approach involves creating a machine-learning model and continuing to educate it by asking for human input. This could mean, for example, getting people to score an AI chatbot’s response using various criteria. How funny was the chatbot? How natural sounding? How informative?

The ‘reinforcement learning’ (RL) part means AI agents are trained on reward and punishment mechanisms. So, they’re rewarded for correct moves, punished for wrong ones, and therefore incentivized to hit the mark every single time.

Then, onto the ‘human feedback’ bit. This is where real-life annotators compare various outputs from the AI agent and pick those they prefer–essentially, responses that are closer to the intended goal. By combining traditional RL with plenty of human input, AI will evolve, completing tasks with ever-improving accuracy.
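The reward-and-preference loop described above can be sketched in a few lines. This is a minimal, Bradley-Terry-style illustration of learning from pairwise human preferences; the response names and the 0.1 learning rate are assumptions for demonstration, not any production RLHF pipeline:

```python
import math

# Candidate chatbot responses and their learned reward scores.
responses = ["curt", "helpful", "rambling"]
score = {r: 0.0 for r in responses}

# Simulated annotator feedback: "helpful" is always preferred.
comparisons = [("helpful", "curt"), ("helpful", "rambling")] * 50

for winner, loser in comparisons:
    # Probability the current scores already rank the winner higher.
    p = 1 / (1 + math.exp(score[loser] - score[winner]))
    # Reward the preferred response and penalize the other (a gradient step).
    score[winner] += 0.1 * (1 - p)
    score[loser] -= 0.1 * (1 - p)

best = max(score, key=score.get)
print(best)   # the preference-trained scores favor "helpful"
```

In a full RLHF system, a reward model trained this way then guides the reinforcement-learning updates of the chatbot itself.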

AI is all about the human-machine relationship

It’s a tale that’s been told in endless sci-fi movies for countless years: humans want to be more like computers, while computers want to be more human. In truth, it’s the relationship between mortal and machine we need to get right so we can both progress. Soon, for example, there’ll be a machine-customer market, where robots can purchase consumables for themselves so they can operate for us to the best of their ability. It’s vital that CIOs own these new relationships, having awareness and influence over AI-ready principles, data, and security.

Riverbed’s team of experts at Gartner IT Symposium in Barcelona

At Riverbed, we like to keep things interactive–so on our stand at the Gartner Symposium, we hosted two games: our own branded Pac-Man and golf putting. One customer, while playing golf and scoring well, commented that they’d never even been on a ‘green’. They then joked that in the future, they’d likely be able to download software to their brain and learn everything they needed to perform like this in any new sport.

But was this statement a joke, or could it be an accurate prediction? If Gartner’s insights are anything to go by, we could be looking at the latter.

To learn more about how Riverbed’s solutions use AI to proactively predict and remedy issues, build useful shortcuts, save companies time and money, and give customers the experiences they’ve come to expect, chat to us today.

]]>
Five Ways Riverbed’s Portfolio Drives Retail Performance https://www.riverbed.com/blogs/five-ways-alluvio-drives-retail-performance/ Fri, 17 Nov 2023 13:32:20 +0000 https://www.riverbed.com/?p=75504 In today’s competitive retail landscape, delivering exceptional customer and employee experiences and optimizing operational efficiency are critical for success. In our recent 2023 Riverbed Global Digital Employee Experience (DEX) Survey, which polled 1,800 IT and business decision-makers across 10 countries and seven industries, 93% of the retail leaders who responded agreed IT is more responsible for driving business innovation now than it was three years ago. Still, 89% claim that slow-running systems and applications and outdated technology are directly impacting the growth and performance of their organization.

Retailers like you need to join the 90% who plan to accelerate digital experience adoption and implementation by seeking out advanced tools that provide real-time insights, enable proactive monitoring, and empower data-driven decision-making. This is especially true when preparing for a busier Black Friday, Cyber Monday, and Singles Day, when you’ll undoubtedly see traffic explode.

In simple terms, this means implementing unified observability solutions: the tools that give you a complete picture of every process, system, and level of the technology stack in simple-to-read, easy-to-understand dashboards. Ninety-two percent of retail survey respondents agree there must be greater investment in unified observability solutions that provide actionable insights for better employee and customer digital experiences.

This is where Riverbed’s unified observability product suite can help. Here are five of the many ways it can:

Support real-time performance monitoring

Customer expectations have never been higher–and your competitors have never been in easier reach. That’s why it’s more important than ever that your digital infrastructure performs flawlessly, delivering the exceptional digital experiences consumers demand.

Riverbed provides real-time performance monitoring, enabling retailers to promptly track and optimize their systems. One organization using Riverbed improved page load time by 30%, leading to a 32% rise in customer engagement (that is, the number of customers completing transactions).

Enhance customer satisfaction

It’s impossible to meet customers where they are without… well, knowing where they are. Gaining a deep insight into customer interactions and behaviour allows you to see what shoppers want, need and value the most; where things are going right; and where experiences could be improved.

By analyzing user journey data, optimizing website flows, and personalizing experiences, retailers can meet and exceed customer expectations, leading to improved brand loyalty and higher customer retention rates. In fact, our retail customers have experienced up to a 30% increase in customer satisfaction scores.

Reduce mean time to repair

Few things are more frustrating or costly than downtime and performance issues. But having intelligent monitoring capabilities enables proactive issue detection and resolution. Don’t just take it from us, either; the Head of IT Operations at our customer Halkbank said: “With automated alerts, mean time to resolution is almost at zero. We can see an anomaly as it happens and resolve it before it impacts service. This actionable insight ensures optimum performance and a great customer experience.”

By proactively identifying and addressing performance bottlenecks, retailers can maximize revenue opportunities and provide uninterrupted service to their customers–moving from reactive to proactive problem-solving.

Optimize store operations

Your success depends on the efficiency of your store, both online and in your brick-and-mortar branches. Research shows that in 2023, the omni-channel retail experience is in high demand, especially as holiday shoppers skate between sites, apps, and physical stores to purchase gifts and goodies. This has made IT more complex for retailers as they scramble to provide the same premier service across every touchpoint. But unified observability can make things simpler and smoother; 98% of retail leaders agree it’s important (58% say critically important) to stay competitive and deliver seamless user experiences.

Riverbed helps retailers monitor and optimize various aspects of operations, including point-of-sale (POS) systems, inventory management, and employee productivity. This allows retailers to enhance the overall shopping experience and drive profitability while boosting employee satisfaction and freeing up staff members’ time to work on more strategic tasks or complete training. This would be particularly useful, given that 39% of retail survey leaders believe they’re understaffed, while 38% have enough employees but not enough with the key skills to do their jobs.

Empower data-driven decision-making

Data-driven decision-making is a competitive advantage in the retail industry. Riverbed equips retailers with powerful analytics, user journey data and reporting features, so they can make informed decisions, optimize business strategies, and stay ahead of the competition.

In an anonymous, impartial Gartner Peer Spot review, one of our customers told us: “Riverbed enables us to see exactly what users see as they engage with applications. So rather than the user complaining, we get to know in advance and will see what the hiccups are. We can correlate the user experience. It makes troubleshooting easy.”

In short, Riverbed’s unified observability product suite is a game-changer for the retail industry, equipping businesses with the tools to transform their performance and drive success. Offering real-time performance monitoring, enhanced customer satisfaction, proactive issue resolution, optimized store operations, and data-driven decision-making, it truly revolutionizes how retailers operate.

Get in touch with our team today, and let’s chat about how Riverbed can empower the experience for your retail business.

]]>
How FSI Orgs Can Achieve More Revenue, Less Risk with Unified Observability https://www.riverbed.com/blogs/fsi-achieve-revenue-unified-observability/ Tue, 14 Nov 2023 13:19:47 +0000 https://www.riverbed.com/?p=75313 Is your Financial Services and Insurance (FSI) organization still analyzing its telemetry ad hoc? If so, it’s unlikely you’re giving your customers or employees the best experience possible. With that comes risk, potential damage to your reputation, and cost inefficiencies–all issues you can’t afford to ignore in your industry, especially when competitors are just a tap, swipe, or job application away.

In our 2023 Global Digital Employee Experience (DEX) Survey, which polled 1,800 global IT and business decision-makers across 10 countries and seven industries, we found that 98% of FSI leaders agreed that delivering an exceptional DEX is essential to remain competitive, with 62% describing it as ‘critically important.’

With a continuous influx of data from various sources, from online banking and ATMs to call centers and retail branches, and data scattered across siloed systems, it’s impossible to manually monitor each transaction and identify issues (or successes). Not to mention, this manual data review also incurs significant costs, both in terms of money and productivity.

Matters are made even harder when long-standing employees leave or retire, taking years or sometimes decades of knowledge with them. The Millennial and Gen Z workers filling their roles won’t stand for inefficient processes or outdated tech, either; they are digital natives accustomed to using the best tools for the job. Failure to meet their expectations may result in a costly and time-consuming hiring loop and a significant skills shortage.

But don’t just take it from us–the FSI leaders we surveyed believe 69% of employees would consider leaving the company if adequate DEX was not provided, and 68% say failing to meet digital expectations would disrupt operations, affecting reputation, productivity, and organizational performance. Plus, if you’re anything like the 84% of FSI decision-makers we surveyed who’ve acknowledged the increasing relevance of IT within the C-suite, you’ve undoubtedly taken a more prominent role in the boardroom over the last few years. This has been accelerated as the pandemic has pushed the world towards hybrid working, and tech and data have started to be seen as a strategic driver. With these challenges, the responsibility falls on IT leaders to answer to the broader business.

This is where unified observability solutions come into play: intelligent tools that can liberate you from the complexities of your infrastructure, reduce risk, enhance your reputation, cut costs, and retain your most valuable talent. Unified observability is already considered essential in the industry; 94% of FSI leaders in our survey believe that greater investment in unified observability solutions, providing actionable insights for better employee and customer digital experiences, is necessary. Riverbed’s Unified Observability product suite is differentiated and unique in the industry.

Here’s how you can use this technology to empower exceptional digital experiences for everyone across every touchpoint:

Diagnose problems in the customer journey

It can be tough to look forward when you’re constantly fighting fires. This is something that one of Turkey’s largest and longest-established banks, Halkbank, was all too familiar with. When the COVID-19 pandemic hit, and customers began using more digital channels overnight, the organization’s mobile banking platform needed to scale to handle more than double the volume of traffic–growing from one million mobile customers to 2.5 million in a short timeframe.

“If mobile banking went down for even a few hours, customers wouldn’t be able to access their accounts or process transactions,” explains Namık Kemal Uçkan, Head of IT Operations at Halkbank. “Our goal is to provide 100% availability for all services, so we need a solution that helps us be proactive rather than reactive when it comes to network management.”

Today, the bank uses Riverbed’s Network Observability solution to monitor critical services across its network and data center through Riverbed Portal, which consolidates and displays data in user-friendly dashboards. It identifies performance issues across over 40 business-critical applications before they impact end users.

Examine user trends

As well as proactively pointing out problems, powerful unified observability solutions like Riverbed’s will examine trends like transaction types and patterns, seek out lengthy processes employees or customers are having to follow, and create shortcuts automatically. So, not only can everyone enjoy the peace of mind that your systems will always work, but they can also rest assured that they’ll work in the quickest, easiest and most logical way, saving them time and you money.

What’s more, having visibility of these trends allows you to work more strategically and make smarter decisions, driven by data–knowing which areas of the business to invest more in and which to scale back, without the usual associated risk. As a simple example, if fewer customers in a certain locale are using ATMs and more are opting to head into a branch, you can remove underused machines and divert their maintenance costs to hiring more in-store staff or offering additional in-person services.

This doesn’t just apply to hardware, either. Riverbed’s unified observability tools provide a unique perspective on users’ actual application usage. That means you can find underutilized licenses and software and reclaim or uninstall where you see fit, saving a considerable amount of IT costs in the process.

Meet compliance regulations

By troubleshooting poorly performing transactions and flagging potential anomalies, Riverbed’s Unified Observability portfolio will allow you to meet increasingly stringent compliance regulations. They’ll also crawl for suspicious web activity, keeping your people safe while they surf.

This is another feature Halkbank put into action, as Uçkan explains, “We were excited to use new features such as SSL certificate monitoring to help keep data secure when browsing the web.”

Identify the slowest machines in your fleet

In our recent research, 88% of FSI leaders say slow-running systems and applications, plus outdated technology, are directly impacting the growth and performance of their organization. Yet often, in businesses of all kinds, hardware is replaced based on how long it’s been in operation rather than by how well it’s working. ATMs are no exception, and neither are the many machines you no doubt use at head office.

The best unified observability solutions will analyze your physical tech’s speed and efficiency, including transaction performance, so you can update and replace where it’s truly needed. This cuts waste and gives employees and customers a consistent and reliable experience. The same principle applies where software is concerned. Riverbed’s Aternity Digital Experience Management solutions can cost-justify and measure the impact of strategic IT projects like cloud mobility and data-center transformation, along with routine changes like OS and application upgrades.

In conclusion, if you’re looking to optimize your hybrid infrastructure while ensuring fast, agile and secure delivery of any application, over any network, to users anywhere, explore our website to learn more about Riverbed’s Unified Observability product suite.

]]>
Overcome Data Collection Hurdles to Empower Sustainable IT https://www.riverbed.com/blogs/overcome-data-collection-hurdles-sustainable-it/ Mon, 13 Nov 2023 13:45:56 +0000 https://www.riverbed.com/?p=75391 In the quest to reach sustainability goals, organizations are discovering a powerful ally in their IT departments. IT can play a pivotal role in curbing resource and energy consumption, thereby reducing carbon emissions, minimizing e-waste, and shrinking an organization’s environmental footprint. These sustainability efforts not only benefit the planet but also contribute to a healthier bottom line.

However, the path to implementing sustainable IT is fraught with challenges, and one of the most pressing is the issue of data collection. In this blog, let’s delve into the complexities of data collection in the context of sustainable IT and introduce a compelling solution: Unified observability, bolstered by Aternity Digital Experience Management (DEM).

The data collection dilemma

Effective data collection is the linchpin of sustainable IT solutions. Data empowers informed decision-making by providing insights into resource consumption and environmental impact. Also, it facilitates benchmarking, identifies optimization opportunities, and ensures the efficient allocation of resources for impactful sustainability initiatives. Additionally, organizations can leverage data to promote transparency, behavioral change, and compliance while enabling continuous improvement in the pursuit of greener IT practices.

Without robust data collection, the path to sustainable IT would lack direction and the means to measure and enhance environmental impact. However, collecting data is challenging for several reasons, including:

  • Data fragmentation: Data spread out across an array of platforms complicates the process of consolidating data into a unified and coherent format.
  • Compatibility issues: Cloud-based and on-premises systems often use different technologies and standards, making collection hard.
  • Data security and privacy concerns: Different data sources may have varying levels of security measures and privacy regulations, complicating collection.
  • Data volume and velocity: Managing and processing large amounts of data in real time can overwhelm infrastructure and lead to performance bottlenecks.
  • Data consistency and quality: Data from diverse sources may not always adhere to the same standards of consistency and quality.
  • Resource and expertise constraints: Building the infrastructure and expertise needed to aggregate data from various sources can be resource-intensive.
  • Scalability: Scalability challenges emerge when trying to accommodate the growing number of data sources and the increasing volume of data they generate.
  • Vendor lock-in: Vendor-specific data formats and APIs can make it difficult to extract data for aggregation or to switch to alternative solutions, limiting flexibility.

Unified observability to the rescue

Unified observability is a game-changing solution to the challenge of data collection. It offers a comprehensive and real-time perspective on IT systems, facilitating informed decision-making regarding environmental impact. Here’s how it works:

  • Comprehensive data foundation: Unified observability platforms meticulously collect granular, timestamped, and complete records of every event across the IT infrastructure. This data forms the bedrock for accurate decision models tied to sustainable IT initiatives.
  • Actionable insights: These platforms deliver user-centric, actionable insights with relevant context to the right stakeholders, enabling organizations to identify areas with the most significant impact potential.
  • Intelligent automation: Unified observability platforms leverage AIOps to provide expert decision-making and automation, resolving issues proactively before they escalate into incidents. This streamlines sustainable IT initiatives, enhancing operational efficiency and reducing the carbon footprint.

Practical applications of unified observability for sustainable IT

Unified observability isn’t just a theoretical concept; it yields tangible benefits for sustainable IT initiatives. The solution promotes energy efficiency by delivering granular insights into applications and infrastructure interaction. This empowers businesses to pinpoint inefficiencies, redundancies, and areas of over-provisioning. Also, real-time data analysis informs decisions about workload consolidation and virtualization, leading to reduced energy consumption.

The key to leveraging unified observability to drive sustainable IT lies within DEM platforms. Such solutions examine performance data and user feedback so organizations can gauge the environmental impact of routine tasks, establish sustainability benchmarks, and inspire employees to participate in sustainability initiatives.

Enhancing sustainable IT with Aternity DEM and prebuilt energy efficiency dashboards

Riverbed Unified Observability includes a mighty sidekick in Aternity Digital Experience Management (DEM)—a solution that aggregates insights based on application and device performance data, human reactions, and benchmarking across industry peers.

Aternity DEM now features an energy efficiency dashboard that offers valuable insights by gathering and correlating detailed telemetry data from various devices. This dashboard provides a clear view of device uptime and energy-related metrics, enabling IT teams to pinpoint areas where avoidable energy consumption can be reduced. Additionally, it allows for the measurement of carbon footprint at both individual and organizational levels. By measuring uptime, IT organizations can identify opportunities to educate employees about conserving energy during idle device times.

Key features include:

  • Computation of essential environmental metrics such as device uptime, electricity usage, carbon emissions, and electricity expenses.
  • Granular breakdown of metrics by device usage duration, geographical location, power plan, business unit, and more.
  • The flexibility to customize calculation parameters to align with specific objectives and operational requirements. This customization empowers organizations to leverage Aternity as a robust tool for embracing and advancing sustainable IT practices.
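The metrics listed above follow directly from device telemetry. As a rough sketch of the arithmetic involved (the emission factor and tariff below are illustrative placeholders, not Aternity's actual defaults), electricity usage, carbon emissions, and cost can all be derived from uptime and average power draw:

```python
def device_energy_metrics(uptime_hours, avg_power_watts,
                          grid_kg_co2_per_kwh=0.4,   # illustrative grid emission factor
                          tariff_per_kwh=0.15):      # illustrative electricity price
    """Back-of-the-envelope versions of the dashboard metrics:
    electricity usage (kWh), carbon emissions (kg CO2), and cost,
    computed from device uptime and average power draw. A real
    deployment would substitute regional emission factors and
    local tariffs, per the customization described above."""
    energy_kwh = uptime_hours * avg_power_watts / 1000.0
    return {
        "energy_kwh": energy_kwh,
        "co2_kg": energy_kwh * grid_kg_co2_per_kwh,
        "cost": energy_kwh * tariff_per_kwh,
    }

# Example: a device up 100 hours at an average draw of 50 W
metrics = device_energy_metrics(100, 50)
```

Summing these per-device figures across a business unit or geography is what enables the granular breakdowns the dashboard provides.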

Drive positive environmental impact

Aternity DEM is at the forefront of driving positive environmental change. With its real-time insights powered by unified observability, Aternity helps organizations overcome data collection challenges to promote more energy-efficient operations. As a result, we see that sustainable IT isn’t just good for the environment, it’s good for business. Check out our white paper, The Role of Unified Observability in Sustainable IT, to take a deeper dive.

]]>
Five Universal Perspectives from GITEX Global, the World’s Biggest Tech Event https://www.riverbed.com/blogs/five-perspectives-from-gitex-global/ Thu, 09 Nov 2023 13:03:13 +0000 https://www.riverbed.com/?p=75311 From October 16-20, Riverbed was proud to exhibit at GITEX Global, the world’s biggest tech show, held at the prestigious Dubai World Trade Centre. This year’s theme, ‘The Year to Imagine AI in Everything,’ set the stage for innovative discussions around groundbreaking technologies. Our primary goal for participating in this event was to empower organizations in attendance to deliver exceptional digital experiences with industry-leading Unified Observability and Acceleration solutions.

Riverbed booth at GITEX Global 2023 event
At GITEX 2023, Riverbed's team of experts had the privilege of engaging with the leading companies in the region.

To achieve this, we engaged in enlightening conversations with IT leaders from businesses spanning countless industries. Whether they were energetic start-ups or established multi-million-dollar enterprises, what struck us most was that their fundamental challenges and priorities were the same.

With that in mind, these are our top five takeaways from the event, shedding light on the key insights and trends that emerged:

1. End-user experience is key

When your competitors are just a click away, the one thing that'll universally, unequivocally set you apart–and drive true business and revenue growth–is the end-user experience. By that, we don't simply mean the customer experience, but that of your employees, too. After all, keeping staff is just as important and valuable as building brand loyalty and advocacy externally.

However, it can be difficult to see the full picture of how satisfied your workforce and customer base are when you don’t have insight into their everyday experiences, pain points, and journeys. Sure, you can analyze small data samples to see how things are going, but that’s hardly representative. Sending out surveys to gauge sentiment isn’t the most productive move, either; by the time you’ve gathered feedback, it’s too late, and people have already made up their minds about you and your services.

2. Throwing money at it just won’t work

A common approach to overcoming this obstacle is making enormous investments in the latest and greatest tech. But even with the most modern, state-of-the-art architecture and the highest bandwidth, the end-user experience can still be subpar. The performance of your applications lies less in their response times, and more in how you define and build the ecosystem that drives them.

That encompasses all levels of the tech stack–and requires you to think about everything from your users, network and infrastructure to your database, storage, and UI. More complexity means more likelihood of failure, and if you can’t pinpoint where that failure’s happening, you can’t fix it. This lack of insight is not only detrimental to your customer experience, but it also creates security and compliance risk, as you can’t see where potential breaches or anomalies are occurring.

3. No two businesses are alike

Sadly, you can’t fix these problems with an off-the-shelf product. Even though your organization may be running the exact same infrastructure and applications as others, your end-user experience–and the behavior of your apps–can be worlds apart. This is because your workflow and business logic will always be different, and this is no doubt the area on which you spend the majority of your time.

The Covid pandemic has only created more disparity in how companies operate, accelerating the uptake of hybrid working and forcing more IT teams to move their architecture to the cloud. While every business faces the same challenge in empowering their people to work effectively and efficiently from wherever, whenever, the level at which this is possible and encouraged–and the tech and tools that need to be made remotely available–varies massively.

4. The simpler the solution, the better

Things become even more convoluted when the majority of today’s businesses–from huge corporations to small enterprises–have more than five different tools doing either the same job, or with overlapping features. For instance, they may have network traffic monitoring solutions and network devices analyzing things separately, or app and server monitoring platforms working in tandem (but not necessarily together).

With no unification, correlation, or communication between any of these tools, time and investment are wasted, and employees are plagued with alert fatigue as endless notifications pop up and ping. Plus, in the boardroom, when questions are asked–like, “What’s the cause of that issue?”, “Where’s it happening?”, “What’s it impacting?” and “Is just one service or network affected, or all of them?”–leaders can’t give a comprehensive answer.

These silos are brilliant for covering individual teams, or team players, but not for protecting the company’s reputation or revenue. Or leaders from the scrutiny of the wider business.

5. Unified Observability can give you the edge

So, organizations are looking for a simple, connected solution that’ll help them proactively resolve performance issues by providing state-of-the-art monitoring, end-to-end visibility, and intelligent analysis for their critical services across all tiers. That means unified observability–and the smartest of these solutions use AI and machine learning, not just to identify issues but to solve them proactively and automatically.

This includes solutions from Riverbed, which are completely scalable and uniquely tailored to each organization. Riverbed’s tools have been specifically designed to provide the optimum end-user experience by finding and remedying problems at their very root, throughout the entire tech stack–before hardware and software are even rolled out, updated, or upgraded. No more symptomatic alerts creating hassle and headaches.

Moreover, these solutions provide data-driven insight that allows you to look forward and work strategically instead of purely fighting fires. For example, they can find trends and patterns in the end-user journey to create shortcuts that make everyone’s lives easier or assess where machines or licenses are going unused to drive cost efficiencies.

See you next year!

We enjoyed every moment of our week at GITEX. It was a privilege engaging with the leading companies in the region and helping them understand how IT can eliminate data silos and alert fatigue, improve decision-making, and deliver seamless, secure digital experiences with Riverbed. If you missed it, here is a look back at our week:

Thanks to everyone who stopped by our booth. If you’d like to connect with our team, book your free, no-obligation one-to-one demo with one of our experts today. We look forward to hearing from you—and seeing you at GITEX 2024.

]]>
Take Network Monitoring to Its Full Potential with Riverbed Professional Services https://www.riverbed.com/blogs/network-monitoring-professional-services/ Thu, 12 Oct 2023 12:14:55 +0000 https://www.riverbed.com/?p=74968 In today’s hyper-connected world, networks serve as the backbone of every organization, facilitating seamless communication and data transfer.

However, maintaining a robust and secure infrastructure is no small feat. With the increasing complexity of networks and the ever-evolving threat landscape, organizations are turning to monitoring solutions–like Riverbed Unified Observability–to ensure their IT suite runs smoothly and securely.

And while we pride ourselves on Riverbed being a comprehensive platform with an intuitive user interface that empowers IT teams and makes their jobs simpler, it’s easy to miss out on the full potential of this powerful portfolio. That’s where Riverbed Professional Services comes in.

Why choose a network monitoring system?

Network monitoring solutions are critical tools that allow organizations to gain real-time insights into the performance, availability, and security of their network infrastructure. These solutions collect data from various network devices–such as routers, switches, firewalls, and servers–and provide administrators with a holistic view of network health.

Monitoring solutions can be used to:

  • Proactively detect issues, identifying and addressing problems before they impact productivity or cause downtime
  • Improve security, scouring networks for anomalies to prevent breaches and unauthorized access
  • Optimize resources based on real-time data, leading to better overall performance
  • Analyze historical data, which can be valuable for trend analysis and capacity planning
  • Meet regulatory compliance requirements by providing detailed reports on network activity

Why add Professional Services?

Deploying any network monitoring solution, including Riverbed Unified Observability, involves several complex tasks. These include hardware and software installation, deploying virtual instances, configuration, integration with existing network infrastructure, and ongoing maintenance.

Riverbed is also highly personalized to each organization’s unique setup and needs. So, certain elements need to be tinkered with and tailored to boost their output and effects. All of this can be tricky without the relevant in-house resource–and for so many businesses, their only internal IT support comes in the form of security staff, engineers, and a help desk.

Luckily, Riverbed Professional Services is here to help. The service sees our experts work in collaboration with customers, exploring their pain points and current architecture before deploying the necessary solutions in a way that works for them, solves their biggest challenges, drives valuable digital transformation, and provides tangible return on investment moving forward.

Professional Services encompasses a wide range of activities that are crucial for the optimum deployment of any Riverbed solution, including Riverbed Unified Observability, Riverbed Network Observability, and Riverbed WAN Optimization. These activities comprise:

  • Assessment and planning: Our Professional Services professionals start by assessing an organization’s network environment, goals, and specific monitoring needs. They then create a tailored deployment plan to meet these objectives.
  • Hardware and software selection: Choosing the right hardware and software network monitoring components is critical for the success of any monitoring solution. Our network pros can evaluate and recommend the best-fit options for the organization, based on specific requirements and budget.
  • Installation and configuration: Once monitoring components are selected, our consultants handle the installation and configuration of the solution, ensuring it integrates seamlessly with existing infrastructure.
  • Customization: Every organization has unique monitoring needs. Our experts can customize the solution to monitor specific devices, protocols, and performance metrics.
  • Integration: In many cases, monitoring solutions must integrate with other elements–like ticketing systems and security information and event management (SIEM) platforms. Our consultants ensure smooth integration to minimize downtime and maximize value.
  • Training and knowledge transfer: Proper training is essential to empower in-house IT teams to use their monitoring solution effectively. Professional Services provides training and hosts knowledge transfer sessions to ensure the organization can get the best out of its network, today and tomorrow.
  • Ongoing support and maintenance: Finally, our experts offer ongoing support and maintenance services, including updates, patches, and troubleshooting assistance. They also help organizations adapt the Riverbed monitoring solution to their evolving network needs.

Maximize your investment with Riverbed Professional Services

A global logistics company deployed Riverbed Network Observability to monitor business-critical applications and services, troubleshoot performance bottlenecks across the company, and gain end-to-end visibility across its IT environment. The organization opted to use Riverbed Professional Services, and is seeing the benefit. Five years ago, 20% of its revenue was digital; now, it’s over 90%. As the company continues on its digital journey, having a scalable and reliable network infrastructure is critical.

In conclusion, network monitoring solutions, like those delivered by Riverbed, are indispensable for modern organizations seeking to maintain a stable and secure network. But deploying and customizing these solutions requires a deep understanding of network architecture and the specific needs of the organization. By engaging Professional Services, organizations can ensure their monitoring solutions are tailored to their unique requirements and deployed effectively–proactively managing their networks, enhancing security, and optimizing performance. This ultimately leads to increased efficiency and competitiveness in the digital landscape, so it’s well worth the investment.

The Riverbed portfolio is constantly being developed and improved to support the latest network security standards and provide visibility into today’s rapidly evolving, complex IT environments. With the help of Professional Services, it can take your organization even further–fast and fuss-free. Visit our website to learn more.

]]>
Achieving Sustainable IT with Riverbed Aternity https://www.riverbed.com/blogs/sustainable-it-with-alluvio-aternity/ Wed, 11 Oct 2023 15:17:08 +0000 https://www.riverbed.com/?p=74711 Sustainable IT focuses on reducing the environmental impact of your technology landscape. Embracing sustainability in IT benefits both the environment and the financial performance of companies that adopt it.

Riverbed’s Unified Observability portfolio is ready to tackle the energy efficiency and sustainability demands of modern businesses. With its focus on Sustainable IT, the Riverbed portfolio aims to support energy-conscious practices and meet energy reporting requirements worldwide.

Sustainability consciousness is on the rise

In an era of heightened environmental consciousness, corporations face increasing pressure to decrease their carbon footprint and adhere to sustainability regulations like the European Green Deal and the international Greenhouse Gas (GHG) Protocol.

In early September 2023, the state of California introduced landmark legislation, SB 253, which mandates environmentally conscious disclosure obligations for thousands of U.S. public and private companies. The EU had already announced similar requirements as part of the Corporate Sustainability Reporting Directive (CSRD) in early January 2023.

With growing adoption of these disclosure laws worldwide, Riverbed Aternity’s Sustainable IT dashboards have been developed with flexible configuration and customization to be adaptable to local needs. Watch this video to see how Aternity provides your IT teams with the full suite of tools to tackle Sustainable IT requirements and practices:

How Riverbed Aternity’s energy efficiency solution helps

Aternity equips your IT teams with a comprehensive suite of tools to address Sustainable IT requirements and best practices. This includes out-of-the-box dashboards that compile essential energy data, automation workflows, built-in surveys, and notifications for end-users.

Riverbed Aternity has introduced a new “Sustainability” category of dashboards, with new dashboards added regularly as they become available.

Riverbed Aternity Sustainability Dashboards

Now, let’s explore an example “Energy Efficiency” dashboard of a company with a global workforce, including remote employees. This dashboard vividly illustrates how seemingly innocuous power settings on Windows laptops can significantly impact power consumption across the board. It also provides a clear understanding of the nuanced details of the energy demands of user equipment.

Riverbed Aternity Energy Efficiency Dashboard

Looking at all the user devices in this company collectively, it’s apparent that around 46% of the time in a month, the devices consume power despite users not actively interacting with them. Aternity refers to this time as “Inactive.”

All Devices Inactive Versus Active


Let’s delve into how this 46% “Inactive” time was calculated:

An hour of uptime is considered “Inactive” when Aternity detects no keystrokes or mouse movements and the screen or monitor is either in sleep mode or locked while the device continues to consume full power. Any keyboard or mouse activity during the hour makes it count as “Active.” Because this calculation of idle time is deliberately conservative, actual power wastage could easily be higher. In some tools, “Inactive” time is also called “Idle Time.”
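The classification rule above can be sketched in a few lines. This is a simplified illustration, not Aternity's actual implementation; the field names and sample structure are hypothetical:

```python
def classify_hour(keystrokes, mouse_events, screen_locked_or_asleep):
    """Classify one hour of uptime per the rule described above:
    any keyboard or mouse activity makes the hour Active; no input
    while the screen is asleep or locked (but still drawing power)
    makes it Inactive. No input with the screen awake is counted
    as Active, which keeps the estimate conservative."""
    if keystrokes > 0 or mouse_events > 0:
        return "Active"
    if screen_locked_or_asleep:
        return "Inactive"
    return "Active"

def inactive_percentage(hours):
    """Percentage of uptime hours classified as Inactive,
    given hourly telemetry samples (hypothetical schema)."""
    labels = [classify_hour(h["keys"], h["mouse"], h["locked"]) for h in hours]
    return 100.0 * labels.count("Inactive") / len(labels)
```

Applying this across a fleet's hourly telemetry yields figures like the 46% fleet-wide inactive time shown in the dashboard.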

Optimize energy conservation with Balanced Power Plan

Looking into the details of Windows devices, especially those using the “Balanced Power Plan”–a Windows out-of-the-box setting for reducing power consumption–it’s evident that these devices collectively sit idle almost 53% of the time while still consuming power, an even higher “Inactive” percentage than the fleet-wide figure.

Balanced Power Plan Inactive Time
Balanced Power Plan Inactive Time

In summary, “Inactive” time closely relates to what would traditionally be perceived as “Idle Time” for these end-user devices. Fine-tuning the device power plans could result in saving kilowatt-hours of electricity by suspending or shutting them down when they are “Inactive.” However, are they genuinely doing nothing?

Collect information with Sentiment Surveys 

There may be legitimate cases where long idle times are expected.

Aternity’s Sentiment Surveys are seamlessly integrated into the product, offering an effective way to poll a selective group of users or the entire digital estate. These surveys let administrators combine survey responses with the energy efficiency, Sustainable IT, or performance data Aternity collects to build a comprehensive view of which areas of the digital estate need attention. Aternity offers a range of out-of-the-box survey templates, and users can also create their own from scratch.

Aternity Sentiment Survey
Aternity Sentiment Survey

These sentiment survey templates help raise awareness, build understanding of employee behavior, and drive cultural change.

Take action with remediation scripts

Once insights are gleaned from Aternity dashboards, Aternity provides remediation scripts and end-user notifications to implement configuration changes, such as updating the Windows registry or customizing a device power plan. Aternity also informs end-users of this activity through notifications. There are various out-of-the-box remediation scripts available, and administrators can create custom scripts from scratch if needed.

Learn more

Riverbed recognizes the growing need for sustainability in IT worldwide. Riverbed Aternity empowers customers to achieve their sustainability goals by offering curated energy-focused dashboards and user-conscious workflows, including automated remediation, to take control of energy expenditure. To learn more about how Riverbed can assist with Sustainable IT, please visit Riverbed’s Sustainable IT page.

]]>
Three Key Considerations for Boosting Your Agency’s Next FITARA Score https://www.riverbed.com/blogs/improve-your-agency-fitara-score-with-alluvio/ Thu, 28 Sep 2023 12:46:18 +0000 /?p=22275 The latest FITARA scorecard is out. Whether your agency is satisfied with its grade (or not), there’s a good chance you’re already thinking about ways to improve before the next scorecard. Riverbed can help you improve many of your metrics—particularly in the category of Portfolio Review.

Portfolio Review: What it means to succeed

FITARA grades agencies on their performance across seven areas tied to how they buy and manage their IT. One of these areas is Portfolio Review—also known as PortfolioStat—which assesses how well agencies are reviewing their IT portfolios to save costs, increase efficiency, and reduce waste and duplication. As part of their review, agencies must also demonstrate how their IT investments align with their mission and business functions.

An agency’s PortfolioStat grade is based on its ratio of IT cost savings to total IT budget over the last three years. The higher the ratio, the higher the grade—but FITARA also scores on a curve, so that the eight agencies with the highest ratios receive A’s, the next six receive B’s, and so on. This means that to raise your grade, you not only need to do better than before—you also need to keep pace with other high-performing agencies.
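The curved grading described above can be made concrete with a short sketch. The A and B band sizes (8 and 8+6) come from the text; the sizes of the remaining bands are assumptions for illustration:

```python
def portfoliostat_grades(agencies):
    """Rank agencies by their ratio of 3-year IT cost savings to
    total IT budget, then grade on a curve: the top 8 ratios get
    an A, the next 6 a B, and so on. Band sizes past B are
    hypothetical; each agency is a dict with 'name', 'savings',
    and 'budget' keys (an assumed schema)."""
    ranked = sorted(agencies,
                    key=lambda a: a["savings"] / a["budget"],
                    reverse=True)
    bands = [("A", 8), ("B", 6), ("C", 6), ("D", 6)]  # C/D sizes assumed
    grades, i = {}, 0
    for grade, size in bands:
        for a in ranked[i:i + size]:
            grades[a["name"]] = grade
        i += size
    for a in ranked[i:]:  # anything below the defined bands
        grades[a["name"]] = "F"
    return grades
```

The sort on the savings-to-budget ratio is the key detail: an agency can cut costs in absolute terms and still drop a grade if peers improve their ratios faster.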

Whatever your most recent grade, there are steps you can take now to make measurable progress by the next scorecard—and equip your agency for continuous improvement.

Top 3 ways to raise your PortfolioStat score

 1. Know what you have

Before you can make decisions about your IT environment, you need to understand everything that’s in it: end users, endpoint devices, applications, tools, network devices, and so on.

Many organizations have massive environments, and getting an accurate inventory is no small task. To do this, you need firm, quantifiable data. If you rely on assumptions, estimates, or anecdotal information, you’ll never discover things you didn’t know you had. And if you try to conduct your audit manually, it will be daunting, time-consuming and not necessarily exact.

But with the right system recording every component in the environment, agencies can audit at speed and with high accuracy. Not only will this inform your long-term planning, but it can also help you make immediate improvements. For example, you could quickly identify duplicate technologies, devices over a certain age, or devices with low usage. This will make it easier to eliminate tools you don’t need and instantly eliminate costs.

Riverbed Unified Observability solutions can help your agency do this by providing fast, full visibility to your entire environment. It allows you to quickly discover and document every asset in your IT environment – from infrastructure to applications. This gives you the quantifiable data you need to make timely, effective decisions with zero blind spots.


2. Know who’s using it and how often

Once you know the assets you have and where they are, you need to understand their usage. Specifically, which ones are being used, how many people are using them, and how frequently.

This is key to rationalizing your investments and determining whether they align with your mission; if it’s not being used, it doesn’t align with your mission. These insights also help you identify potential wasted costs by spotlighting investments that are either underutilized or not serving a purpose, or if multiple tools are being used to achieve the same thing.

Riverbed’s IT asset cost reduction capabilities can help you measure the utility of every asset in your portfolio. We have the unique capability to see traffic across the entire environment—allowing you to understand which resources are being used, by whom, and how often. From there, you can quickly identify assets that are redundant or unnecessary, which ones could be better utilized, and even which ones are delivering the most value.


3. Know how it’s performing

Next, you need to understand the health and performance of your network, apps, and endpoints. This will help you save costs and improve the end-user experience by enabling you to fix what needs to be fixed and replace only what needs to be replaced.

This means identifying the true root cause of an issue so you can take informed action to solve it instead of wasting time and money on the wrong solution. For example, if UX is poor due to slowness, you need to know if that slowness stems from the network, an application, the device, or possibly even a multi-factor situation. Without solid performance data, an organization may spend money replacing a device, or waste time implementing a fix that only partially addresses the issue, because the real cause was something else. But with end-to-end performance monitoring, you know exactly which component needs attention and how it may be impacting other parts of your infrastructure.

You should also validate the performance of your existing assets to avoid arbitrary replacements. Every replacement decision you make should rely on performance data—not perceptions or assumptions. For example, don’t assume a device needs to be replaced solely because of its age. An agency may have a policy to replace devices every five years, but if those devices are working fine and you have quantifiable data showing them to be healthy, there’s no need to invest in new ones and significant cost can be deferred.

Riverbed can help your agency monitor and measure performance of every component in your infrastructure—allowing you to make cost-effective decisions and solve issues faster. Unlike other tools, we cover the entire infrastructure with one platform. Others may require multiple point solutions to monitor the individual pieces of your environment—delivering disparate data for network, applications, and devices that you’d have to correlate on your own. Meanwhile, Riverbed captures and correlates all performance data from all domains and puts it into context. This gives you a complete, current picture of performance end-to-end, so you can quickly identify root causes and effects.

Unlike its competitors, Riverbed uses full-fidelity data with no sampling. When other tools sample, by definition, they’re not collecting all the data. But Riverbed captures all packets and all data flows to provide the most granular and accurate insights possible.

Remember, you can’t improve if you don’t know your starting point

To make and demonstrate improvement, you need current, complete, and quantifiable data to understand your current IT landscape. This will give you an accurate starting point, so that every decision is informed and every outcome is measurable.

Riverbed supports precision-guided IT service delivery, and we can help you establish that starting benchmark. From there, we can help you gain insights you need to make the most informed decisions about which assets to keep, consolidate, replace, remove or invest in.

Outcomes should drive investment

Consider the goals of FITARA in general and the goals of Portfolio Stat specifically: cost savings and avoidance, efficiency, compliance, and clear alignment with the mission. With these in mind, let the data tell you where to invest—and invest only in what will help you achieve these outcomes.

Riverbed uniquely supports all those outcomes in a single suite of products—allowing your agency to achieve compliance at least cost. Whether your agency is already using Riverbed or you have yet to experience its benefits, we welcome the opportunity to help you leverage its capabilities to their full potential to add near-immediate value to your portfolio review.

Want to increase your score? Contact us to learn more!

]]>
The State of DEX: Navigating Next-Gen Expectations, Challenges, and Strategies https://www.riverbed.com/blogs/dex-survey/ Tue, 26 Sep 2023 12:39:35 +0000 https://www.riverbed.com/?p=74540 The Riverbed Global Digital Employee Experience Survey
In the Riverbed Global Digital Employee Experience Survey, 1,800 leaders share views on user expectations, hybrid work, IT and obstacles & strategies for DEX.

In the current landscape, delivering an excellent digital employee experience (DEX) is more important than ever. DEX plays a critical role in an organization’s day-to-day operations, and enterprises must invest in this area as a key business strategy. In fact, outdated technology, a driving factor of poor DEX, is costing American organizations over a trillion dollars in lost productivity.

While the case for improving an organization’s DEX is clear, it’s often an uphill battle. Digital infrastructures and tech stacks are more complex with hybrid workforces, combinations of cloud and on-prem services, and a mix of modern and legacy technology. Companies must also navigate the higher digital expectations of the next generation workforce. This shift has essentially led to chief information officers (CIOs) playing a key role in talent management and retention too, a reflection of the growing importance of IT departments in business innovation.

In the Riverbed Global Digital Employee Experience (DEX) Survey 2023, Riverbed polled over 1,800 global IT and business decision-makers to better understand generational expectations for the digital experience and obstacles and strategies in delivering an outstanding DEX. Let’s examine the key findings:

Digital experience expectations are higher for Millennials and Gen Z

The DEX survey found that 91% of decision-makers believe they’ll need to provide more advanced digital experiences to meet the needs of younger employees, and 89% say younger employees place increased pressure on IT resources. Failure to meet the DEX needs of younger generations can result in business disruption or reputation damage, according to 63% of those surveyed. Additionally, leaders say that 68% of Millennial and Gen Z employees would consider leaving a company if their digital experience demands are not met.

Delivering a better digital experience is getting harder

Almost unanimously, 95% of surveyed decision-makers identified at least one major obstacle to delivering a seamless DEX. While no two companies are the same, the survey results revealed that most companies are struggling with five common issues:

  • Budget constraints—36% 
  • Talent shortages—35% 
  • Inadequate observability tools—29% 
  • Lack of suitable cloud services and SaaS applications—29% 
  • Too much data—28% 

There is an interesting intersection in these issues. We found that 86% of survey respondents believe unified observability tools and automation can help bridge their skills gap. However, many decision-makers identified inadequate observability tools and budget constraints as obstacles. This could indicate that companies need to find a way to optimize their investments and prioritize initiatives that significantly impact DEX, like adopting more comprehensive unified observability solutions.

IT is pushing business innovation forward

Increasingly, organizations are acknowledging the importance of consulting with their IT departments; over 80% of IT decision-makers (ITDMs) surveyed said they have a seat at their organization’s C-suite table. Much of this move to center IT departments can be traced back to the COVID-19 pandemic, when organizations shifting to hybrid work modes quickly realized how critical the IT perspective is in day-to-day business decisions and the ongoing development of strategy.

Companies increasingly relying upon the skills and expertise of their IT departments coincides with a greater investment in technology. We found that 88% of respondents plan to invest in technology over the next 12-18 months to support the hybrid workforce. Ninety-six percent of leaders believe that doing so will help support their organizations’ ability to recruit and retain talent, and remain competitive.

Unified observability is crucial to DEX

Throughout the Global DEX Survey, respondents consistently noted the importance of end-to-end visibility and unified observability in their digital infrastructure. A significant 94% of IT and business leaders acknowledged that unified observability is essential to their company’s ability to stay competitive and deliver DEX. And 86% of leaders agreed that not having unified visibility over their digital employee experience was one of the greatest risks to their organization’s ability to grow and maintain talent and customers.

Investment in emerging technologies is business critical

In addition to the important role of unified observability, the survey found that 45% of leaders believe that artificial intelligence (AI), followed closely by cloud (43%), automation (35%), digital experience management (35%), and application acceleration (33%) will play a critical role in enhancing their business operations in the next 18 months.

Investing in these technologies can provide crucial support to DEX by digitizing processes, streamlining workflows, and improving efficiency — helping organizations “shift left” and save time, money, and effort.

Data tells a story

When it comes to DEX, it’s important to keep track of trends and changes across organizations. After all, when companies fail to invest in providing a hassle-free digital experience, it’s often a death knell for their business. It’s crucial for IT and business leaders to understand the common challenges affecting decision-makers to spark conversations in their organizations (and beyond) and improve the global digital employee experience. One final encouraging point the survey uncovered – 92% of leaders say investing in DEX is among their top priorities over the next five years.

Want to learn more about the Riverbed Global DEX Survey 2023 and what it tells us about the current state and future of digital employee experience? Access the full report and infographic here.

]]>
How Network Analytics Boost Performance and Security https://www.riverbed.com/blogs/network-analytics-performance-and-security/ Thu, 14 Sep 2023 21:39:03 +0000 https://www.riverbed.com/?p=73876

At Riverbed, we often say “you can’t protect what you can’t see.” Having the ability to monitor everything that is happening in your network is the first step in improving the security, performance, and reliability of your environment. But it is how you capture, interpret, and respond to that sea of data from your network that allows you to truly take control of your operational environment. This is where real-time network analytics comes into play.

This holds especially true for complex, overtaxed or high-security networks. Additionally, being able to capture and store network data allows historical network performance reports to be generated–a vital tool in maintaining system health, data security and optimized I/O transfer speeds between connected devices. IT teams can also quickly identify, isolate and quarantine incoming malware, viruses or worms by using real-time packet scanning to identify threats.

Network analytics help IT teams manage and secure data networks, improve security, fine-tune performance, troubleshoot network problems, predict traffic trends, perform forensic investigations for incidents and open new business opportunities in some cases.

Real-world network analytics applications

Though every enterprise network can benefit from analytics, for some industries the benefits can be manifold. For example, telcos can use network analytics to manage high volumes of user traffic in mobile communications and broadband connections. The same technology can assist mining and oil and gas companies to monitor remote IoT devices that regulate pipelines, drilling and reservoir facilities. The automotive and high-tech industries can extensively use real-time data analytics to develop self-driving vehicle networks and implement Artificial Intelligence (AI) and Machine Learning (ML) guidance for autonomous vehicle navigation.

Streaming real-time data analytics opens new innovation opportunities across all industries based on Big Data applications, AI and ML.

How does it work?

Network analytics works by providing insights into various aspects of network performance:

  • Latencies for traffic through its entire path with hop-by-hop analysis.
  • Bit rates through a particular network port, broken down by application.
  • Collision and packet drop rates at a port.
  • Number of packets or flows from any location, device, application, or identity.
  • Number of packets or flows affected by specific security policies.
  • Infrastructure monitoring for SNMP, WMI, and increasingly streaming telemetry.

The visibility and insights presented by network analytics can be used for several tasks, such as spotting bottlenecks, evaluating the health of devices, root-cause analysis, issue remediation, identifying connected endpoints, and probing for potential security lapses.
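As a rough illustration, several of the per-port insights listed above can be derived from flow records. The sketch below assumes a simplified, hypothetical flow-record shape (port, application, bytes, packets, drops); real flow data such as NetFlow or IPFIX carries many more fields:

```python
from collections import defaultdict

# Hypothetical flow records: (port, application, bytes, packets, dropped)
flows = [
    ("eth0", "http", 1_200_000, 900, 3),
    ("eth0", "dns",     40_000, 400, 0),
    ("eth1", "sql",    600_000, 500, 12),
]

def port_stats(flows, interval_sec=60):
    """Aggregate bit rate per port (broken down by application) and drop rate."""
    rates = defaultdict(dict)
    drops = defaultdict(lambda: [0, 0])  # port -> [dropped packets, total packets]
    for port, app, nbytes, pkts, dropped in flows:
        rates[port][app] = nbytes * 8 / interval_sec  # bits per second
        drops[port][0] += dropped
        drops[port][1] += pkts
    drop_pct = {p: 100 * d / t for p, (d, t) in drops.items()}
    return dict(rates), drop_pct

rates, drop_pct = port_stats(flows)
print(rates["eth0"]["http"])  # 160000.0 bps over a 60-second interval
```

The same aggregation pattern extends naturally to the other dimensions mentioned above (location, device, identity, security policy) by changing the grouping key.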

Safeguarding networks and driving business growth

Network analytics offers a wide range of benefits beyond traffic analysis:

  • Enhanced Security: Network analytics improves cloud resource and device security by allowing real-time scanning of data packet transmissions. Administrators can track I/O data packet resource consumption by IP address to detect anomalous changes in activity and quickly identify intruders, malware, and infected devices. It also speeds up the detection of security threats, preventing hacking attacks from spreading deep into the corporate infrastructure. Network analytics can not only track the path of a compromise through the network in real time but also can be used to retrospectively investigate once a new attack vector has been identified and understood.
  • SNMP and WMI Filtering: Data can be used to diagnose network device problems and reduce remediation time.
  • Real-time Analytics: Integration with AI and machine learning provides real-time and historical insights into network data, enabling tailored operations.
  • Streamlined Business Processes: Analytics optimizes enterprise-wide IT operations, security, and efficiency while streamlining business management.
  • Performance Monitoring: Administrators can monitor performance, including historical usage patterns that help predict future data center needs.
  • Track KPIs: Network monitoring tools can analyze KPIs and present them to administrators, simplifying complex cloud network reporting and alert processes. IT teams can track specific KPIs for their specific industry application.

At Riverbed, we have been deploying network analytics solutions for over 15 years. As networks have become more complex and security requirements have increased, we need an automated way to correlate, interpret, analyze, and respond. Or, to put it another way, we need more “IQ” out of network analytics solutions. That’s why we have more recently built out Riverbed IQ to address the needs of today’s complex, high-speed environments.

]]>
Leveraging Observability Data for Downfall and Inception Vulnerability Analysis https://www.riverbed.com/blogs/downfall-and-inception-observability/ Thu, 07 Sep 2023 12:07:20 +0000 https://www.riverbed.com/?p=73628 In early August 2023, both Intel and AMD confirmed vulnerabilities in their CPUs. Specifically, a security expert named Daniel Moghimi at Google discovered a vulnerability dubbed “Downfall” (CVE-2022-40982) in Intel’s chipset. This vulnerability allows attackers to exploit it, potentially gaining access to data from other applications or memory areas. Similarly, researchers Daniel Trujillo, Johannes Wikner, and Kaveh Razavi from ETH Zurich discovered a comparable exploit in AMD’s chipset, which they named “INCEPTION” (CVE-2023-20569).

Fortunately, both exploits have been classified with a severity rating of “Medium” by Intel and AMD. The risk only becomes significant if an attacker manages to execute a piece of code on the vulnerable computer. This can happen, for example, through malware. Once executed, this code can read sensitive information, such as passwords, from the compromised device.

The situation becomes more dangerous when the vulnerable computer is used by multiple individuals, such as in a cloud-based environment. In such cases, a legitimate user can intentionally or unintentionally distribute the code and thus gain access to other users’ data.

How many computers are affected in my organization?

The challenge when that happens for corporate administrators and security officers is to figure out exactly what this means and how many computers in your organization are affected. This becomes especially crucial as both Intel and AMD are already rolling out firmware updates to address the security gap. Prompt installation of these updates is paramount.

Organizations using observability solutions like Riverbed now have a powerful tool to gain insights into the vulnerability landscape. In the example shown below, I leveraged data from the desktops/laptops to automatically create a list of affected devices. To achieve this, I configured Riverbed Aternity to retrieve and evaluate additional information such as CPUID and MCU (for Intel) from the CPUs. In practical terms, you only need to import a Custom Device Attribute Monitor into the configuration of Aternity and access the corresponding dashboard. The advantage here is that the CPU data can be seamlessly analyzed alongside existing observability data.

Observability Data for Downfall and Inception Vulnerability Analysis

At a glance, it’s evident that 60% of the devices are undeniably affected by the security vulnerability, while an additional 17% require manual inspection due to undetermined firmware versions, yet the CPUs are classified as “affected.”
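A minimal sketch of the classification logic behind such a dashboard, assuming hypothetical inventory records and a made-up set of fixed microcode versions (in practice, Aternity collects and evaluates these attributes for you):

```python
# Hypothetical inventory data; in practice these attributes (affected CPU
# model, microcode/firmware version) would come from an endpoint agent.
devices = [
    {"name": "PC-01", "cpu_affected": True,  "firmware": "0xd000375"},
    {"name": "PC-02", "cpu_affected": True,  "firmware": None},  # undetermined
    {"name": "PC-03", "cpu_affected": False, "firmware": "0x2b"},
]

FIXED = {"0xd000390"}  # hypothetical microcode versions that contain the fix

def classify(device, fixed_firmware):
    """Bucket a device as 'affected', 'needs manual check', or 'not affected'."""
    if not device["cpu_affected"]:
        return "not affected"
    if device["firmware"] is None:
        return "needs manual check"  # affected CPU, firmware undetermined
    return "not affected" if device["firmware"] in fixed_firmware else "affected"

summary = {}
for d in devices:
    bucket = classify(d, FIXED)
    summary[bucket] = summary.get(bucket, 0) + 1
print(summary)  # {'affected': 1, 'needs manual check': 1, 'not affected': 1}
```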

Plan and monitor your next steps

Some hardware vendors, like Lenovo, have already released BIOS or firmware updates to mitigate this risk. For instance, Lenovo provided an update (Version 1.54) for the ThinkPad T14s Gen 2i. However, here’s where the challenge arises. IT organizations must plan, execute, and validate the successful deployment of these updates. Many companies rely on automatic updates facilitated by hardware manufacturer tools, but the visibility into their effectiveness or user permissions isn’t always clear. This is where observability data becomes invaluable.

Observability Data for Downfall and Inception Vulnerability Analysis

In our example above, we have 16 Lenovo T14s Gen 2i devices, with only one device having the necessary BIOS version to address the vulnerability, while the others have various versions. With this information, the IT department now knows that 15 devices require prompt updates. To facilitate this, the Riverbed Aternity Remediation Action can be employed.

Observability data can switch on the lights

If you already use Riverbed Aternity, look for the CPU Vulnerability Analysis Dashboard in the Aternity SE Dashboard Library or reach out to your Riverbed technical contact. Installation of the Custom Attribute Monitor is essential, and you’ll need a free Custom Attribute within your environment. Further details can be found in the Description page of the Dashboard. In the examples above, we utilized data from desktops and laptops, but observability data can also be sourced from servers in your data center, enabling similar analyses for your server landscape.

If you would like to learn more about Riverbed, visit our site, and existing users may log in to access the Riverbed Knowledge Base here. Visit these pages for further readings on the Intel Advisory, INTEL-SA-00828, and the AMD Advisory, Return Address Security Bulletin.

]]>
What Are the Four Types of Network Management? https://www.riverbed.com/blogs/four-types-of-network-management/ Thu, 31 Aug 2023 12:45:18 +0000 https://www.riverbed.com/?p=73410 Network management is a complex discipline that requires a comprehensive effort to plan, optimize, maintain, and secure enterprise network operations. This starts with understanding all the elements that establish a comprehensive network management strategy.

Network fault management

Network fault monitoring typically involves the deployment of monitoring tools that collect data from network devices in real-time. These tools often use techniques such as SNMP, WMI, streaming telemetry, ping tests, flow analysis, and log analysis to monitor network health and identify faults.

When a fault or anomaly is detected, the monitoring system generates alerts or notifications to network administrators or operators. These alerts provide information about the nature of the fault, its severity, and its potential impact on the network. Network administrators can then take appropriate actions to diagnose and resolve the issue, ensuring the network operates optimally.

Fault management is a critical aspect of network and systems administration that focuses on detecting, diagnosing, and resolving various types of faults or issues that may arise within a system, network, or application. The key capabilities of fault management include:

  1. Fault Detection and Isolation: The ability to identify deviations from expected behaviors or conditions, then determining the scope and impact of a fault. This involves monitoring various parameters, metrics, and performance indicators to detect anomalies, errors, or failures.
  2. Root Cause Analysis: Identifying the underlying cause of a fault. This involves analyzing metrics and logs to determine the sequence of events that led to the fault and pinpointing the specific component or process responsible.
  3. Alert Generation: Generating alarms, alerts, or notifications when a fault is detected. These alerts can be in the form of emails, text messages, dashboard indicators, or other notifications to inform administrators or users about the presence of a fault.
  4. Reporting and Analytics: Generating reports and insights on the frequency, duration, and types of faults that occur. This information can be used for trend analysis, capacity planning, etc.
Network fault management
Monitoring device and interface health are two key capabilities of network fault management.
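For illustration, the fault detection and alert-generation steps described above can be sketched as a simple statistical baseline check over polled metrics. This is a deliberately crude stand-in; production fault-management tools use far richer detection models:

```python
import statistics

def detect_faults(samples, window=5, z_threshold=3.0):
    """Flag samples that deviate sharply from the recent baseline
    (a minimal stand-in for anomaly-based fault detection)."""
    alerts = []
    for i in range(window, len(samples)):
        baseline = samples[i - window:i]
        mean = statistics.mean(baseline)
        stdev = statistics.stdev(baseline) or 1e-9  # avoid division by zero
        if abs(samples[i] - mean) / stdev > z_threshold:
            alerts.append((i, samples[i]))  # would trigger an alert/notification
    return alerts

# Simulated interface latency readings (ms); the spike should raise an alert.
latency = [10, 11, 9, 10, 12, 10, 11, 95, 10]
print(detect_faults(latency))  # [(7, 95)]
```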

Configuration management

Network configuration management is the process of monitoring, maintaining, and organizing the information pertaining to your organization’s network devices. It is responsible for the setup and maintenance of network devices along with the installed firmware and software.

The primary goal of configuration management is to confirm that the system’s components work together seamlessly, facilitate efficient, reliable deployment and maintenance processes, and ensure compliance with regulatory standards. Configuration management allows you to quickly configure and replace the functionality of a network device after a failure. If you don’t have a recent backup of that device, you’ll be starting over from scratch to configure new devices.

Key characteristics of configuration management include:

  1. Network device discovery and diagramming: Having an accurate account of your network inventory and its status is critical to network configuration management. The first step is to map the network elements, including physical, logical, and virtual components, to create a high-definition network diagram. These automated network diagrams highlight new and modified devices, as well as devices with configuration errors.
  2. Configuration backup: Configuration backup is the process of extracting configuration settings from a device and storing it to disk. The configuration restore process uses backup configuration data files for the system to restore a specific system configuration, whether on that same device or similar devices.
  3. Configuration change management: Obviously, your network change management solution must be designed to keep track of any changes anyone makes to your devices or systems. This is crucial to avoid any errors or unauthorized changes that might bring about unfavorable consequences. It also speeds the troubleshooting process immensely by automatically comparing before and after configurations and highlighting differences.
  4. Policy compliance and reporting: Network configuration management helps ensure compliance with regulatory, organizational, and security policies, like FISMA, SOX, HIPAA, PCI, NIST 800-53, SAFE, or DISA STIG. Out-of-the box templates make sure devices and systems are configured correctly to conform to organizational and regulatory policies. Leverage fully customizable rules to validate against a “gold-standard” configuration.
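The before/after comparison in the change-management step (point 3) can be illustrated with a plain text diff between a backed-up and a running configuration; real tools add automated collection, storage, and policy validation on top of this idea:

```python
import difflib

# Hypothetical backed-up vs. running device configurations.
before = """hostname core-sw-1
interface Gi0/1
 description uplink
 no shutdown
""".splitlines()

after = """hostname core-sw-1
interface Gi0/1
 description uplink
 shutdown
""".splitlines()

# Highlight exactly what changed between the two configurations.
diff = list(difflib.unified_diff(before, after, "backup", "running", lineterm=""))
for line in diff:
    print(line)  # removed lines start with '-', added lines with '+'
```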

In short, configuration management promotes consistency, helps in identifying and resolving issues more efficiently, and ultimately leads to more stable and reliable systems. Additionally, configuration management can play a crucial role in handling complex configurations and managing dependencies between different components.

Network performance management

Network performance management (NPM) tools leverage a combination of data sources to provide a holistic view of how networks are performing. Data sources include network device-generated traffic data (like network flows), raw network packets, and network device health metrics and events.

Network performance management tools provide diagnostic workflows and forensic data to identify the root causes of performance degradations — increasingly through the adoption of advanced technology, such as artificial intelligence (AI) or machine learning algorithms (ML). Based on network-derived performance data, NPM tools provide insight into the quality of the end-user experience.

NPM use cases provide the ability to monitor, diagnose and generate alerts for dynamic end-to-end network service delivery as it relates to digital experience. Key capabilities of network performance management include:

Response time chart
Riverbed AppResponse uses packets to analyze rich network data, like this response time chart.
  1. Monitoring: NPM involves continuous and real-time monitoring of various network parameters such as bandwidth utilization, latency, and packet loss.
  2. Analysis: After gathering data through monitoring, NPM tools analyze the collected information to identify trends, patterns, and potential performance bottlenecks. This analysis, which typically leverages AI and machine learning, helps IT Operations teams understand the current state of the network and identify areas that need improvement.
  3. Troubleshooting: When issues arise, NPM allows IT to quickly diagnose and troubleshoot problems. This includes identifying the root causes of performance degradation, locating faulty devices or configurations, and resolving performance bottlenecks.
  4. Reporting: NPM tools generate comprehensive reports and dashboards that provide insights into network performance over time. These reports help in tracking key performance indicators (KPIs), identifying recurring issues, and measuring the effectiveness of performance improvement measures.
  5. Capacity Planning: NPM involves planning for future network requirements based on historical performance data. By predicting future demands, organizations can allocate resources more efficiently and avoid unexpected performance issues.
  6. Security: Network performance management can also supplement network security since poor network performance can be a sign of security breaches or cyberattacks. NPM tools typically include security monitoring features to detect anomalies and potential threats as they cross the network.
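As a toy example of the monitoring and reporting steps above, the sketch below computes two common KPIs (average latency and packet loss) from hypothetical per-minute samples and checks them against assumed thresholds:

```python
# Hypothetical per-minute measurements for one network path.
samples = [
    {"latency_ms": 42, "sent": 1000, "lost": 2},
    {"latency_ms": 55, "sent": 1000, "lost": 0},
    {"latency_ms": 48, "sent": 1000, "lost": 8},
]

def kpis(samples, latency_slo_ms=50, loss_slo_pct=0.5):
    """Compute KPIs and flag whether assumed service-level targets are met."""
    avg_latency = sum(s["latency_ms"] for s in samples) / len(samples)
    loss_pct = 100 * sum(s["lost"] for s in samples) / sum(s["sent"] for s in samples)
    return {
        "avg_latency_ms": avg_latency,
        "packet_loss_pct": loss_pct,
        "latency_ok": avg_latency <= latency_slo_ms,
        "loss_ok": loss_pct <= loss_slo_pct,
    }

print(kpis(samples))
```

Tracking these values over time is what enables the trend analysis and capacity-planning use cases described above.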

In summary, network performance management is a crucial aspect of maintaining a healthy and responsive network infrastructure, ensuring that organizations can meet the demands of their users and applications while maximizing the efficiency of their network resources.

Network security forensics

Network security forensics centers on the discovery and retrieval of information about cyberthreats within a networked environment. Common forensic activities include the capture, recording and analysis of events that occurred on a network to establish the impact and source of cyberattacks.

Investigators use network forensics to examine network traffic data that is involved, or suspected of being involved, in a cyberattack. Security experts will also look for data that points in the direction of data exfiltration, outbound communication with blacklisted IPs, internal reconnaissance, etc. With the help of network forensics, security experts can track down all communications and establish timelines based on network data captured by the network monitoring solutions.

list of user-defined policies
NetProfiler tracks lateral movement, governance violations and other challenges such as P2P, tunneling, and SPAM activity
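A minimal illustration of the flow-inspection idea, assuming hypothetical flow records and a made-up blacklist (real forensic tools work over full packet captures and threat-intelligence feeds):

```python
from datetime import datetime

BLACKLIST = {"203.0.113.9"}  # hypothetical known-bad address (TEST-NET range)

# Hypothetical captured flow records: (timestamp, src, dst, bytes)
flows = [
    ("2023-09-14T09:00:12", "10.0.0.5", "10.0.0.9",     1200),
    ("2023-09-14T09:02:40", "10.0.0.5", "203.0.113.9", 88000),
    ("2023-09-14T09:05:01", "10.0.0.7", "203.0.113.9",  4100),
]

def suspicious_timeline(flows, blacklist):
    """Return a time-ordered list of flows touching a blacklisted address."""
    hits = [f for f in flows if f[1] in blacklist or f[2] in blacklist]
    return sorted(hits, key=lambda f: datetime.fromisoformat(f[0]))

for ts, src, dst, nbytes in suspicious_timeline(flows, BLACKLIST):
    print(f"{ts}  {src} -> {dst}  {nbytes} bytes")
```

Ordering the matching flows by timestamp is exactly the timeline-building step described above, applied to a toy data set.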

The main objectives of network security are to:

  1. Prevent unauthorized access: Network security measures are designed to prevent unauthorized individuals or entities from gaining access to sensitive data, systems, and resources. This includes protecting against external attackers as well as unauthorized internal users.
  2. Protect data integrity and confidentiality: Network security ensures that data remains unaltered and trustworthy during transmission and storage. It prevents unauthorized users from accessing or modifying data in transit or at rest.
  3. Maintain network availability: Ensuring network availability is essential for maintaining business operations. Network security measures aim to minimize the risk of disruptions and downtime caused by cyberattacks.

Riverbed supports four types of network management

Riverbed offers a complete and integrated portfolio of network management solutions:

  • Riverbed NetIM provides fault and configuration management. It leverages SNMP, WMI, streaming telemetry, CLI, synthetic testing, IP SLA metrics, syslog, and traps to monitor and troubleshoot network infrastructure health, availability, and performance. Use NetIM to detect performance issues, map application network paths, diagram your network, identify configuration changes, plan for capacity needs, and troubleshoot infrastructure problems.
  • Riverbed NetProfiler provides enterprise-wide network flow monitoring. It supports a wide range of flow types and is used for monitoring bandwidth consumption, top talkers, and network utilization. It also supports discovery and dependency mapping, capacity planning, and security forensics.
  • Riverbed AppResponse provides real-time packet capture and analysis. In addition to monitoring round trip time, network errors, and bandwidth, AppResponse can also analyze more than 2500 business applications, including web transactions, SQL databases and VoIP and video application performance. Packet capture is also critical to network security forensics.

To learn more about Riverbed’s network management capabilities, please click here.

]]>
SteelHead Domain Join Integration with Active Directory https://www.riverbed.com/blogs/steelhead-domain-join-integration-with-active-directory/ Mon, 28 Aug 2023 16:18:22 +0000 https://www.riverbed.com/?p=73489 Microsoft started enforcing msds-KrbTgtLink validation starting January 2022 via their Security Update for NTLM authentication. In simple terms, msds-KrbTgtLink is a link that helps verify your identity when you’re trying to access network resources, preventing tampering. Microsoft has explained that these improvements and fixes will be a part of Security Updates going forward.

This produced a hurdle for users of Riverbed SteelHeads who join domains via Riverbed’s Active Directory Integrated Mode (Windows 2008 and later). One of the solutions researched by Riverbed was to modify the SteelHead’s userAccountControl value to represent a small subset of attributes used by a Domain Controller, but without enabling any Domain Controller functions on Riverbed SteelHeads after they are joined to the domain.

For detailed technical insights, please refer to this technical brief.

]]>
Navigating Network Security Challenges with Unified Observability https://www.riverbed.com/blogs/network-security-with-unified-observability/ Thu, 24 Aug 2023 12:49:21 +0000 https://www.riverbed.com/?p=73279 For a long time, security has been top of mind in every company across all industries. But since the Covid-19 pandemic drove more of us to do more things online–from shopping and banking to handling our healthcare at the swipe of a screen–organizations have become increasingly conscious of cyberattacks.

To see for yourself, simply head to your browser and enter the web address of your favorite site, starting the URL with ‘http’ instead of the safer ‘https’. Chances are, it’ll appear ‘untrustworthy’, and you’ll be denied access.

Managing and overseeing network traffic and status are critical aspects of maintaining the integrity, availability, and confidentiality of a company’s computer systems. However, perhaps slightly ironically, it’s difficult to monitor network activity with encryption and other high-level security measures in place.

Essentially, in protecting your network, you’re shutting out not just cybercriminals, but your own well-meaning employees. So, how can you allow the right people to detect threats, breaches, and anomalies without giving access to the wrong ones?

Familiar security challenges organizations face

There are several different security barriers you may be up against. Here are some that might feel familiar:

Your protection is too good

With the widespread use of encryption protocols, monitoring network traffic becomes arduous. Encrypted traffic obscures content and makes it difficult to inspect for potential threats. And while encryption enhances privacy and data protection, it can hinder network security monitoring efforts.

Your network is too overwhelming

Meanwhile, networks have become increasingly complex with the growth of cloud computing, Internet of Things (IoT) devices, and distributed systems. Managing and monitoring them–across multiple platforms, protocols, and endpoints–is tough.

The volume and variety of network traffic you receive won’t help things, either. Handling and analyzing a vast amount of diverse data, including emails, web browsing, file transfers, and multimedia content–all in real-time–can be overwhelming. It requires robust monitoring solutions and capable hardware infrastructure.

Your threats are too sophisticated

When searching for security solutions, it’s important to know what you’re up against. But what about when you don’t? Enter Advanced Persistent Threats (APTs): sophisticated and stealthy attacks designed to infiltrate a network and remain undetected for an extended period. These attacks often employ evasion techniques that bypass traditional network security measures and escape detection by standard monitoring systems.

You’ll also likely have vulnerabilities that are unknown to you and your software providers and, therefore, lack a patch or fix. These are called zero-day exploits, and cybercriminals can abuse them to compromise your network. Monitoring for zero-day exploits is hard, as you won’t be aware they’re there.

Your monitoring system is too unreliable

Network monitoring systems generate alerts and notifications based on predefined rules and patterns. But these systems are prone to false positives (incorrectly flagging benign activities as malicious) and false negatives (failing to identify actual threats). Striking a balance between accurate detection and minimizing false alarms is crucial but problematic.

Monitoring network traffic for insider threats, where authorized users misuse their privileges or intentionally compromise the network, can be troublesome, too. Identifying anomalous behavior and distinguishing between legitimate and malicious activities requires advanced behavioral analysis and user monitoring techniques.

Your regulations are too stringent

As you’ll be painfully aware, it’s vital you comply with various industry-specific regulations and legal requirements, such as the General Data Protection Regulation (GDPR) if you operate in Europe or the Health Insurance Portability and Accountability Act (HIPAA). Monitoring network traffic while ensuring compliance can be tricky, requiring careful handling of sensitive data and maintaining appropriate audit logs.

A better solution for staying on top of network health

To overcome these obstacles, you’ll probably have deployed a combination of network security standards, such as:

  • Firewalls
  • Intrusion detection and prevention systems
  • Secure network protocols
  • Encryption mechanisms
  • Advanced monitoring solutions

Plus, you no doubt enforce your security best practices, conduct regular security audits, and stay up to date with emerging threats and vulnerabilities. If so, you’re doing a great job–these elements are essential for maintaining network security.

But now, there's an even easier way to stay on top of your network's health without the headache. Riverbed Unified Observability tools empower organizations across the globe to maintain visibility into their network traffic and monitor network infrastructure–without compromising network security standards or introducing potential risk to the customer environment.

Real-world impact of Riverbed

International engineers and project management company, Artelia Group, transformed its cyber security using the solution. Franck Martel-Badinga, Head of Infrastructure & Telecoms, explained, “Cybersecurity is a challenge for all organizations globally. We have over 7,000 employees and plan to reach 10,000 by 2025. As a result, we’re experiencing an increased number of attacks that are becoming more targeted and complex.”

To deliver a seamless and secure digital journey for employees and end customers, Artelia wanted to centrally monitor the end-user experience, servers, applications, and the network–even when teams worked remotely. The business had security tools, firewalls, and antivirus and malware software. Still, a talented hacker knows how to easily sidestep the radar of traditional tools to gain access to systems, apps, or networks.

So, Artelia needed to monitor the normal behavior of its network and systems to create a baseline. For example, an endpoint contacting 100 servers in two minutes is clearly abnormal, but antivirus software on a PC wouldn’t flag this kind of behavior.
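
A baseline check like the one described can be sketched as counting distinct destinations per source within a time window. The threshold, window, and data shapes below are illustrative only; products like Riverbed NetProfiler work from full flow records at far larger scale.

```python
from collections import defaultdict

# Sketch: flag endpoints that contact unusually many distinct servers
# within a short window. All values here are illustrative.

def find_anomalies(flows, window_secs=120, threshold=100):
    """flows: iterable of (timestamp, src_ip, dst_ip) tuples.
    Returns src IPs that contacted >= threshold distinct servers
    within any single time window."""
    buckets = defaultdict(set)  # (src, window index) -> set of distinct dsts
    for ts, src, dst in flows:
        buckets[(src, int(ts // window_secs))].add(dst)
    return sorted({src for (src, _), dsts in buckets.items()
                   if len(dsts) >= threshold})

# A scanning host hits 150 servers in under two minutes:
flows = [(i * 0.5, "10.0.0.5", f"10.1.0.{i}") for i in range(150)]
flows += [(10.0, "10.0.0.9", "10.1.0.1")]  # a normal endpoint
print(find_anomalies(flows))  # ['10.0.0.5']
```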

Martel-Badinga concludes: “The Riverbed Aternity Digital Experience Management and Riverbed NetProfiler solutions provide us with visibility into all data across our networks, apps, and end-users, giving us invaluable actionable insights for the business. It enables us to make even better decisions to continuously improve the digital experience for end-users and our overall business performance.”

A global biopharmaceutical innovator deployed the Riverbed suite to improve its security team's agility and forensic recall capabilities, using an automated process to preserve packet-based evidence associated with security events for further investigation. The solution boosts AppResponse ROI for the tools team, allowing staff to satisfy additional stakeholders by extending packet retention time without having to invest in additional storage units.

Ready to learn more?

The Riverbed portfolio is constantly being developed and improved to support the latest network security standards and provide visibility into today's rapidly evolving, complex IT environments. In fact, some might say it's just too valuable to miss. Schedule a demo today, or visit Riverbed's website to learn more.

]]>
Five Ways to Cut IT Costs with Riverbed Aternity https://www.riverbed.com/blogs/cut-it-costs-with-alluvio-aternity/ Wed, 23 Aug 2023 12:16:02 +0000 https://www.riverbed.com/?p=73263 Cost saving is undoubtedly a priority for your IT team, especially as the wider business puts increasing pressure on you to save more and spend less. But improving the user experience is surely high on your agenda, too. While nobody wants to blow their budget, it’s far more frustrating to be limited in your role by lagging software and hardware that’s too old to handle updates.

The good news: today, thanks to Digital Experience, there’s no need to compromise. Riverbed Aternity is a Digital Experience Management solution that:

  • Measures and analyzes what’s going on with your IT suite–at a granular level, according to your unique setup and needs.
  • Facilitates non-invasive remote operation, intelligent automation, and self-healing–so you can focus on what really matters.
  • Provides detailed, data-driven insight and analysis–empowering you to make informed decisions.
  • Shows genuine, meaningful in-year savings–while making life easier for every single user.

Here are five ways you can use Riverbed Aternity to keep everyone working seamlessly and cut costs, and your carbon footprint, while you’re at it.

Supercharge productivity with reliable services

So often, organizations struggle to stay efficient due to poor-performing IT systems, and plug the gap by hiring more staff, buying more machines, or otherwise overspending.

The Princess Alexandra Hospital NHS Trust was wasting 947 hours a year dealing with blue screens across devices, problems with applications, and PCs running slowly. These situations would regularly go unreported, as clinicians were under pressure and focused on critical medical tasks. Without quick and easy access to the data they needed, healthcare workers were spending less time with patients and collaborating with other specialists–like GPs, mental health teams and social care professionals.

Meanwhile, one insurance company we worked with experienced productivity issues in its call center. Crashes, hangs and freezes meant customers were constantly cut off, and then had to wait up to 30 minutes to speak to someone again. So, the business employed more agents in an effort to minimize waiting times (and angry phone calls). This came at tremendous cost.

With Riverbed Aternity, you can pinpoint exactly when, where, and how frequently issues occur, and remedy them without needing to be told or waiting for things to go seriously wrong.

In fact, an international energy solutions company saved 2,000 employee hours using the solution to introduce proactive interventions. By making your people more productive, you can save money and their patience, leading to unexpected benefits like improved employee retention (which cuts recruitment and training costs as well). It’s a win-win.

Cut the volume of support tickets and the time to resolve them

IT support is another area that can drain financial resources, and your operators’ job satisfaction. After all, there’s a cost associated with tickets–generally averaging $100 or more each–and your team is the one that has to pay.

As Riverbed Aternity can proactively detect and address problems, it can drive an impressive reduction in the number of tickets you receive, an increase in the share resolved at Level 1, and a shorter time to close them altogether. Plus, because the tool gives insight remotely, IT staff don't have to disturb users for their IP address or a breakdown of what they were doing on their computer and when, saving everybody precious time and effort.

A multinational consumer goods company saw a 20% reduction in support tickets year-on-year after deploying the solution. Using that same proactive approach, it was also able to detect when employees were away from their computers for long periods with their devices still on–during breaks or overnight–and shut them down remotely. This clawed back an impressive $1.8 million a year while making a strong contribution to the company's carbon footprint reduction objectives.
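
Idle-device detection of this kind boils down to a filter over endpoint telemetry. The field names, cutoff, and data shapes below are hypothetical illustrations, not Aternity's actual data model or API.

```python
from datetime import datetime, timedelta

# Sketch: flag powered-on devices whose last user input is older than a
# cutoff, as candidates for remote shutdown. Purely illustrative.

def shutdown_candidates(devices, now, idle_cutoff=timedelta(hours=2)):
    """devices: list of dicts with 'name', 'powered_on', 'last_input'."""
    return [d["name"] for d in devices
            if d["powered_on"] and now - d["last_input"] > idle_cutoff]

now = datetime(2023, 8, 23, 22, 0)
fleet = [
    {"name": "PC-101", "powered_on": True,  "last_input": now - timedelta(hours=5)},
    {"name": "PC-102", "powered_on": True,  "last_input": now - timedelta(minutes=10)},
    {"name": "PC-103", "powered_on": False, "last_input": now - timedelta(hours=9)},
]
print(shutdown_candidates(fleet, now))  # ['PC-101']
```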

Energy company EDF used Riverbed Aternity in a similar way–checking when users last restarted their machines, sending them emails prompting them to do so if it had been a few days, and rebooting remotely if necessary. This helped keep hardware healthy, improved its stability, and reduced the number of support tickets received.

Break down the barriers to innovation

Innovation is another essential that comes with a price, and a risk. From upgrading hardware and amending application access to introducing virtualization and SaaS, you're likely always looking ahead to the next useful technology. But that tech is only helpful if it works–and sometimes it doesn't, as you discover when real users find issues with the tool and it's too late to backtrack.

With Riverbed Aternity, you can test changes with those real users in a much safer way, creating small pilot groups before deciding whether or not to roll out updates to the masses. You can also see a clear baseline for performance before, during and after deployment, gaining an accurate picture of its effects, both positive and negative.

Take stock of your software licenses

Are all your staff using all your available software, all the time? License costs can add up, even if the applications themselves go unused. By implementing Riverbed Aternity, you can monitor the precise amount of time users spend on different software, and exactly what they use it for.

Perhaps employees only need the online version of a package you’re paying for on desktop devices. Maybe they don’t need certain tools at all. There’s only one way to find out, and once you do, you’ll be surprised how much you can save by downgrading or deleting unwanted solutions.

This was certainly the case for global food and beverage supplier Tate & Lyle. Using Riverbed Aternity, the company was able to allocate the correct number of software licenses to its user community, benefitting from a fantastic return on investment.

Refresh your hardware smarter, not sooner

Typically, organizations buy hardware with three- or four-year warranties. Once this time’s up, the machines are replaced. It’s expected, it’s planned for, and it’s written into your budget. But does it make sense?

Riverbed Aternity allows you to break free from this one-size-fits-all age-based approach. It constantly assesses whether devices still provide the optimal user experience, even after three or four years. When they do, less money is wasted on new hardware, and less tech ends up in landfill. When they don’t, they can be replaced–before the point that problems set in. Kent Community Health NHS Foundation Trust, for example, used our solution to save a remarkable 42% of their devices this way.

Now it’s your turn. To learn more about Riverbed and Aternity, get in touch with our friendly team here. We look forward to supporting you in driving your budget-boosting, experience-enhancing IT transformation.

]]>
Riverbed Hits the Road Again for EMPOWEREDx https://www.riverbed.com/blogs/riverbed-hits-the-road-again-for-empoweredx/ Mon, 21 Aug 2023 12:02:31 +0000 /?p=22333 Like many professionals, I’ve grown accustomed to using various digital channels to connect with people. But there’s just something special about meeting customers in person. I enjoy hearing directly from users about how Riverbed solutions allow them to create better digital experiences, simplify IT, and (particularly in this economically challenging environment) cut costs while still enabling maximum productivity. And I’m not alone: We’ve found that our customers love meeting with other users and peers in the industry as well to see how they’re addressing similar challenges and succeeding.

That's why I'm looking forward to EMPOWEREDx—Riverbed's annual customer community event. Last year's global roadshow served as an opportunity to meet users and discover the challenges IT teams are facing, such as data overload and the need for more automation. This year, we're visiting five cities across three continents to discuss Igniting Exceptional Digital Experiences, and to connect with those seeking to become leaders in the digital experience movement. In addition to Riverbed's executive and technical speakers, I'm excited that we'll have many customers—including EDF Energy, Global Credit Union, Princess Alexandra Hospital Trust NHS, Unilever, and many more—joining us to share their digital experience journeys.

As Global CIO at Riverbed and formerly a CTO at JPMorgan Chase, I understand the allure and excitement of cutting-edge tech solutions. However, it’s important not to lose sight of the ultimate goal: enhancing the lives of the people who use these technologies. At Riverbed, our customers are at the heart of everything we do, and their satisfaction and success are the driving forces behind our innovations.

The EMPOWEREDx event is a reflection of our commitment to nurturing a strong relationship with our valued customers. It offers us the chance to listen directly to your experiences, challenges, and triumphs — gaining invaluable insights that shape the future of our solutions. EMPOWEREDx also provides a platform for us to showcase our latest innovations, giving you hands-on experiences and live demonstrations. Our hope is that this tangible and interactive approach further fosters trust and confidence in our products, as you get to witness firsthand how they can revolutionize your digital experiences and drive meaningful results in your organizations.

Join in for an immersive experience 

What sets EMPOWEREDx apart from other events is that we designed it to be a fully immersive event and not just a place to hear speakers talk. So bring your laptops or tablets and be prepared to take part in breakout sessions. You can expect presentations, demo or lab experiences, and roundtable discussions aligned with key use cases. For example, here’s what we’ll be covering throughout the event during the breakouts.

  • Intelligent incident response & remediation: In this hands-on lab and discussion, you’ll delve into the world of intelligent automation and discover how it can free your IT staff from monotonous tasks, leading to a transformation within your IT organization.
  • IT cost optimization: IT is always dealing with budget constraints, especially with today’s economic climate. The pressure is on to cut costs, and we’ll reveal how to do it while actually improving digital experiences.
  • Redefining digital excellence: Learn how high-fidelity performance data from multiple domains enables IT teams to accurately identify user experience issues and take prescriptive, targeted actions. The result? Increased employee productivity, happier employees, improved service quality, and a positive customer experience.

Join us live!  

We’re looking forward to meeting you and having you take part in what’s sure to be an unforgettable event. If you live near one of these cities, mark your calendars. 

  • August 23: Brisbane, Australia
  • September 12: Dallas, USA
  • September 19: NYC, USA
  • September 19-20: Paris, France
  • October 10: London, UK

EMPOWEREDx is an opportunity to develop deeper relationships with other Riverbed customers and gain insights that will help you become a leader in the digital experience movement. If you can’t make it to one of our stops, schedule a free demo of Riverbed solutions or Riverbed Acceleration solutions to discover how our technology creates transformative digital experiences. We look forward to seeing you at EMPOWEREDx!

]]>
Monitoring the Cloud for End User Experience https://www.riverbed.com/blogs/cloud-monitoring-for-end-user-experience/ Thu, 17 Aug 2023 18:31:22 +0000 https://riverbed-new.lndo.site/?p=73100 Cast your mind back to the last time you lost your house keys. I mean, really lost them. They weren’t in any of the usual places. You’ve checked about ten times. And when you did find them (in a sweaty panic and a flurry of overturned cushions because you were meant to leave the house 20 minutes ago), they were somewhere completely unexpected, like the laundry basket or in the fridge.

This scenario, so familiar to us all, is very much like monitoring application performance in the cloud. When it comes to finding problems and their underlying cause, the place you need to look is often very different from where you think.

Cloud adoption is standard for many businesses and is accelerating across all sectors. According to Gartner, 85% of organizations will embrace a cloud-first principle by 2025. While migrating to the cloud makes businesses more agile, resilient and able to provide a true remote/hybrid experience for employees, it also comes with its fair share of challenges.

When it comes to cloud, end user experience is the ultimate measure of success. But when organizations migrate to a cloud environment, the infrastructure on which business-critical apps run is no longer within your control–and monitoring your apps isn't the cloud vendor's job. Therefore, cloud monitoring tools play a critical role in alerting IT teams when something goes wrong.

As multi-cloud environments grow in complexity and the costs of app downtime rise, teams need more than an alert when there is a problem. They need insights into where the issue is, what has caused it, and how best to solve it. To deliver an optimal end user experience, cloud monitoring works best as part of a more holistic toolkit–which is why a jump to a more sophisticated unified observability platform may be the better option.

Benefits of cloud monitoring

Cloud monitoring plays an important role in making sure that service-level objectives are being met, which is essential for a consistent user experience. It offers an excellent option for growing businesses, as it allows them to scale resources up or down on demand and can track large volumes of data across different cloud locations. Its core value lies in assessing system health, analyzing long-term trends and sending out alerts when things go wrong. It also provides insights into how well apps are performing and how they are being used over time.

Additionally, cloud monitoring tools offer the flexibility to be used across desktop computers, tablets, and phones, making it easy for teams to track application performance from any location. This is especially helpful for distributed teams and remote workers who need to access company data no matter where they choose to work. Monitoring also strengthens the security of applications by identifying potential risks.

As cloud infrastructure and configurations are already in place, installing a monitoring tool is relatively straightforward. It strengthens business resilience because even if local infrastructure fails, cloud-based resources will still function, ensuring continuity of operations.

What cloud monitoring can’t do

While cloud monitoring provides numerous benefits, it does have limitations. Firstly, tools in this space often only track application usage and consumption. They can provide an alert to a poor user experience but may not offer the insights into why it was sub-par. IT teams are obliged to investigate every alert without context, which often results in alert fatigue. War rooms need to be set up to deal with major outages, which are resource intensive because IT teams spend a lot of time chasing bad leads and looking in the wrong places.

To resolve problems impacting the end user experience quickly, IT teams need to ascertain both the location and cause of a problem to ensure that the problem doesn’t keep resurfacing. That is why cloud monitoring shouldn’t be used in isolation, but as part of a suite of tools that include network performance monitoring and diagnostics (NPMD), application performance monitoring (APM), infrastructure monitoring, and digital experience monitoring (DEM). This unified set of solutions tracks all moving parts in end user experience delivery, allowing IT teams to really zero in on the root cause of problems.

How unified observability fills the gap

Where monitoring tracks system performance and identifies known failures, observability goes the extra mile. If all the moving parts of delivering cloud-based applications are thought of as a single system, observability can look at that overall system with all its interdependencies and identify the root cause of a problem by analyzing the data it gathers from many different sources. An observability solution not only assesses the health of that system but provides actionable insights as well. This allows IT teams to proactively address problems and resolve them faster.

The Riverbed Unified Observability platform overcomes silos to capture full-fidelity data from networks, applications, servers, cloud-native environments and end user devices. AI and ML are then used to analyze data streams, automating much of the troubleshooting work that would usually be carried out by IT engineers. This allows employees at any level to help solve user experience issues quickly. Insights are filtered, contextualized and prioritized, ready for action by the IT team.

Therefore, while cloud monitoring is crucial, meeting rising expectations for the end-user experience requires a more comprehensive and sophisticated solution. With a unified observability solution, you can set IT teams up for success by not only alerting them to problems but showing them where to look and automating the bulk of the troubleshooting process. This allows issues to be resolved before they escalate to outages, improving the end-user experience.

]]>
What Is Streaming Telemetry and When Should You Use It? https://www.riverbed.com/blogs/what-is-streaming-telemetry-and-when-should-you-use-it/ Wed, 09 Aug 2023 20:29:42 +0000 https://www.riverbed.com/?p=76107 Traditionally, network monitoring involved polling devices for their status and statistics. Now with streaming telemetry, devices proactively send data in real-time, providing a continuous stream of metrics.

Streaming telemetry enables network administrators to gather a wide range of data, including performance metrics, operational statistics, health information, and other relevant details from network devices such as routers, switches, firewalls, and servers. This data is typically transmitted using network protocols like gRPC (Google Remote Procedure Call), NETCONF (Network Configuration Protocol), or other lightweight protocols.

Overall, streaming telemetry transforms network management by providing continuous, real-time data streams that facilitate proactive troubleshooting, optimization, and decision-making in large, complex network environments.

Streaming telemetry vs SNMP

Streaming telemetry and Simple Network Management Protocol (SNMP) are two different approaches to network monitoring and data collection. Here are the key differences between the two.

  • Data Collection Method: SNMP uses a polling mechanism where the management system periodically queries network devices to retrieve specific data. The devices respond with the requested information. The issue is that a delay between polling intervals can result in a lag in detecting and responding to network issues. Streaming telemetry, on the other hand, uses a push mechanism. The network devices proactively transmit data as a continuous, real-time stream without waiting for requests from the management system. This enables faster detection and response to network anomalies and events.
  • Data Frequency and Granularity: SNMP collects data at regular polling intervals, for example, every five minutes or longer. The data collected is typically limited to predefined metrics specified in the MIB (Management Information Base). Streaming telemetry, by contrast, can collect and transmit data at sub-second intervals, providing real-time network visibility. It also enables IT to collect a wider range of data points, including custom metrics, delivering a more comprehensive view of network performance and behavior.
  • Network Overhead: SNMP polling generates additional network traffic as the management system sends requests and devices respond with data. The frequency of polling can impact network performance, especially in large-scale deployments. Streaming telemetry reduces network overhead since data is sent proactively without the need for queries. Network utilization is also more efficient and can scale better in complex network environments.

In short, both SNMP and streaming telemetry have their strengths and are suitable for different monitoring scenarios. SNMP is a mature protocol supported by a wide variety of network devices, while streaming telemetry provides more real-time, granular, and flexible data collection capabilities. Organizations often use both, based on their monitoring requirements, device support, and need for real-time insights.
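
The pull-versus-push difference above can be sketched in a few lines. This toy simulation is purely illustrative (real deployments use SNMP GET/GETBULK requests and gRPC/gNMI subscriptions, not these functions): polling keeps one sample per interval, while streaming delivers every sample the device produces.

```python
import itertools

def device_samples():
    """A device producing one counter sample per second (simulated)."""
    for second in itertools.count():
        yield {"t": second, "if_octets": second * 1000}

def snmp_poll(samples, interval=5, duration=15):
    """Polling sees only one sample per interval; the rest are invisible."""
    return [s for s in itertools.islice(samples, duration)
            if s["t"] % interval == 0]

def telemetry_stream(samples, duration=15):
    """Streaming pushes every sample, giving sub-interval visibility."""
    return list(itertools.islice(samples, duration))

polled = snmp_poll(device_samples())
streamed = telemetry_stream(device_samples())
print(len(polled), len(streamed))  # 3 15
```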

When should I recommend streaming telemetry vs SNMP?

The decision to use one versus the other depends on several factors, including the specific use case, the network infrastructure, and your clients’ requirements. Here are some considerations to help you decide when to use each.

Use streaming telemetry when your clients need to:

  • Stream real-time data for applications that require immediate and continuous updates.
  • Collect highly granular data, including fine-grained statistics, counters, or operational information.
  • Monitor large-scale deployments and handle high data rates.
  • Define custom data models and collect specific information.

Continue to use SNMP polling if:

  • Your client’s network primarily consists of devices with SNMP capabilities. It might be simpler to stick with SNMP monitoring.
  • Your clients need to perform configuration changes or control devices remotely.

In some cases, using a combination of streaming telemetry and SNMP might be best. For example, you can use streaming for real-time monitoring and granular data collection while still using SNMP for device management and compatibility with legacy systems. Ultimately, the choice between them depends on your clients' specific needs, the capabilities of their network devices, and the ecosystem of tools and systems they are using.

Riverbed NetIM

Fortunately, Riverbed NetIM supports both SNMP and streaming telemetry, as well as WMI, CLI, API, and synthetic testing for a comprehensive picture of how infrastructure performance affects network and application performance and, ultimately, user experience. It provides integrated mapping, monitoring, and troubleshooting of network infrastructure. NetIM can capture infrastructure topology information, detect and troubleshoot performance issues, map application network paths, plan for capacity needs, and diagram the network.


For more information on the benefits of Riverbed NetIM infrastructure monitoring, log into the Partner Portal.

]]>
What Are the Three Major Network Performance Metrics to Focus On? https://www.riverbed.com/blogs/what-are-the-three-major-network-performance-metrics-to-focus-on/ Wed, 02 Aug 2023 20:36:01 +0000 https://www.riverbed.com/?p=76122 In today’s hyper-connected world, where businesses rely heavily on network infrastructure to transmit data and deliver services, helping your clients understand network performance metrics is crucial in starting conversations about how Riverbed solutions can improve performance. Network performance metrics provide insights into the efficiency, reliability, and overall health of a network. In this blog, we will delve into three major network performance metrics: Throughput, Network Latency (Delay), and Jitter.

By understanding these metrics, you’ll be better equipped to help your clients optimize their networks and ensure seamless operations.

What is Throughput?

Throughput refers to the amount of data that can be transmitted through a network within a given time frame. It is commonly measured in bits per second (bps) or its multiples (Kbps, Mbps, Gbps). Throughput represents the network’s capacity to deliver data and is often associated with bandwidth. It measures how fast data can be transferred between devices, servers, or networks. Higher throughput indicates a network’s ability to handle larger data volumes and support bandwidth-intensive applications such as video streaming or large file transfers.

What is Network Latency (Delay)?

Network latency, also known as delay, is the time it takes for a data packet to travel from its source to its destination across a network. It is usually measured in milliseconds (ms). Latency can be affected by various factors such as the distance between network endpoints, network congestion, and the quality of network equipment. Lower latency signifies faster response times and better user experience. Applications that require real-time interaction, such as online gaming or voice/video conferencing, are particularly sensitive to latency. Minimizing latency is crucial to ensuring smooth and seamless communication.

What is Jitter?

Jitter refers to the variation in delay experienced by packets as they traverse a network. It is measured in milliseconds (ms) and represents the inconsistency or unevenness of latency. Jitter is caused by network congestion, routing changes, or varying levels of traffic. High jitter can lead to packet loss, out-of-order packet delivery, and increased latency, negatively impacting the performance of real-time applications. To ensure optimal performance, it is essential to minimize jitter and maintain a stable and predictable network environment.
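
All three metrics can be computed from simple per-packet measurements. The jitter formula below is a plain mean of successive latency differences; real-time protocols such as RTP (RFC 3550) use a smoothed variant of the same idea. A minimal sketch:

```python
# Sketch: computing throughput, mean latency, and jitter from
# per-packet data. Values and formulas are illustrative.

def throughput_bps(total_bits, elapsed_secs):
    """Bits transferred per second over the measurement window."""
    return total_bits / elapsed_secs

def mean_latency_ms(latencies_ms):
    return sum(latencies_ms) / len(latencies_ms)

def jitter_ms(latencies_ms):
    """Mean absolute difference between consecutive packet latencies."""
    diffs = [abs(b - a) for a, b in zip(latencies_ms, latencies_ms[1:])]
    return sum(diffs) / len(diffs)

latencies = [20.0, 22.0, 21.0, 35.0, 20.0]  # one congested outlier
print(round(mean_latency_ms(latencies), 1))  # 23.6
print(round(jitter_ms(latencies), 1))        # 8.0
print(throughput_bps(8_000_000, 2.0))        # 4000000.0 (4 Mbps)
```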

Why are network performance metrics important?

Network performance metrics play a vital role in several aspects. Here’s how Riverbed can help.

Capacity Planning

Understanding throughput helps network administrators determine the network’s capacity and whether it can handle the expected workload. With Riverbed Network Observability solutions, organizations can proactively manage network and application performance. Additionally, NPM allows Network Operations teams to effectively manage costs by investing only in upgrading critical infrastructure, consolidating underutilized resources, and managing assets of multiple business units. Riverbed Network Observability delivers the ability to auto-discover topology and continuously poll metrics, automate analyses, and generate capacity planning reports that are easily customizable to changing business and technology needs.

Performance Optimization

Monitoring latency and jitter allows organizations to identify and troubleshoot network performance issues. By pinpointing the root causes of delays or inconsistencies, network administrators can optimize network configurations and minimize disruptions. For performance optimization, Riverbed Network Observability provides cloud visibility by ensuring optimal use and performance of cloud resources and helps organizations manage the complexity of Hybrid IT with agile networking across data centers, branches and edge devices. Riverbed Network Observability helps overcome latency and congestion by proactively monitoring key metrics and their effect on application performance.

Quality of Service (QoS)

Network performance metrics enable the implementation of effective Quality of Service policies. By prioritizing specific types of traffic based on their requirements, such as voice or video data, organizations can ensure a consistent and reliable user experience. The Riverbed QoS system uses a combination of IP packet header information and advanced Layer-7 application flow classification to accurately allocate bandwidth across applications. The Riverbed QoS system organizes applications into classes based on traffic importance, bandwidth needs, and delay sensitivity.
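The core idea of class-based QoS can be sketched in a few lines. The class names, bandwidth shares, and application mappings below are invented for illustration; this is not Riverbed’s actual classification logic:

```python
# Hypothetical class-based bandwidth allocation: applications map to classes,
# and each class receives a fixed share of the link's bandwidth.
QOS_CLASSES = {
    "realtime":    {"apps": {"voip", "video-conf"}, "share": 0.50},  # delay-sensitive
    "business":    {"apps": {"crm", "erp"},         "share": 0.25},
    "best_effort": {"apps": {"web", "backup"},      "share": 0.25},
}

def allocated_kbps(app: str, link_kbps: int) -> float:
    """Bandwidth available to the class that owns `app`; unknown apps get best effort."""
    for cls in QOS_CLASSES.values():
        if app in cls["apps"]:
            return link_kbps * cls["share"]
    return link_kbps * QOS_CLASSES["best_effort"]["share"]

print(allocated_kbps("voip", 100_000))  # realtime share of a 100 Mbps link → 50000.0
```

Real QoS systems classify far more granularly (e.g., by DSCP marking and Layer-7 inspection), but the principle is the same: delay-sensitive traffic is guaranteed bandwidth ahead of bulk traffic.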

SLA Compliance

Service Level Agreements (SLAs) often include performance metrics that must be met by network service providers. Monitoring and measuring these metrics allow organizations to hold providers accountable and ensure that agreed-upon performance standards are being met. Riverbed Network Observability monitors metrics associated with the service components that make up each SLA. By proactively monitoring the health of the network, issues can be identified and escalated quickly, before end users are impacted.

Help clients gain insights into their networks

Network performance metrics, including Throughput, Network Latency (Delay), and Jitter, provide valuable insights into the efficiency and reliability of a network. Riverbed makes it easy for your clients’ Network teams to monitor, optimize, troubleshoot, and analyze what’s happening across their hybrid network environment. With end-to-end visibility and actionable insights, Network teams can quickly and proactively resolve any network-based performance issues.

Riverbed Network Observability collects all packets, all flows, all device metrics, all the time, across all environments—cloud, virtual, and on-prem—providing enterprise-wide, business-centric monitoring of critical business initiatives.

]]>
Riverbed IQ Named Finalist in CRN 2023 Tech Innovator Awards https://www.riverbed.com/blogs/crn-tech-innovators-award-to-alluvio-iq/ Mon, 17 Jul 2023 15:34:00 +0000 /?p=22231 Riverbed IQ was named a finalist in the IT Infrastructure Monitoring category of CRN’s 2023 Tech Innovator Awards, announced today.

Riverbed IQ is a SaaS-delivered Unified Observability service that surfaces impactful issues with the context to solve problems fast. It accomplishes this by leveraging full-fidelity data–across networks, infrastructure, applications, and end users–then applying AI/ML, correlation, and intelligent automation to surface actionable insights.

About CRN’s Tech Innovator Awards

According to CRN, these awards are meant to help solution providers identify products that are truly innovative and offer value for their customers. The 2023 CRN Tech Innovator Awards showcase IT vendor offerings that provide significant advances in IT–and partner growth opportunities–across a broad range of technology categories, including cloud, networking, security, storage, and software. The winners and finalists were chosen by CRN staff.

Four powerful intelligent automation use cases

The June release of Riverbed IQ, which was submitted for the award, focuses on intelligent automation across the Riverbed platform. Powered by the Riverbed LogiQ Engine, the Riverbed platform leverages AI, correlation, and automation to streamline repeatable processes with minimal human intervention and improved user satisfaction. Riverbed IQ uniquely offers broader automation use cases that extract insights across Riverbed telemetry and existing 3rd party tool silos to enable faster time to resolution.

With its powerful automation, analytical and integration capabilities, Riverbed currently supports four automation use cases:

  1.  Incident response runbooks automate troubleshooting by replicating the best practices of IT experts. With the Riverbed portfolio’s full-fidelity insights, complex troubleshooting workflows become razor sharp, highly automated processes. Riverbed IQ replicates the advanced investigative processes of IT operations teams, providing context-driven insights that empower them to proactively resolve issues without escalating.
  2. Security forensics automation with Riverbed IQ bridges the gap between NetOps and SecOps by leveraging automation to distill forensic data from the Riverbed NPM portfolio for use in traditional security tools, like SIEMs and SOARs. SecOps teams need easy access to all data sources and to easily integrate that data into their existing security tools. Riverbed IQ provides out-of-the-box runbooks for security investigations and threat hunting. These runbooks provide SecOps teams with easy access to Riverbed NPM and DEX data to help SecOps fully investigate threats with more context, reducing risk to the business.
  3. Logic-driven desktop remediations harness the power of the Riverbed LogiQ Engine: logic-driven endpoint remediation workflows can dynamically mimic expert decision-making, resulting in instant fixes for simple to complex issues. Unlike other solutions that demand a multitude of remediation scripts customized to narrow use cases, Riverbed Aternity sets itself apart by offering one-click remediation actions built from reusable, logic-driven workflow steps. This enables the resolution of both simple and complex issues. Combined with Aternity’s extensive catalog of Mac and PC remediation actions for recurring end-user experience issues, such as application hangs, boot and login times, network connectivity, application crashes, OS crashes and more, IT can have more time to focus on innovation.
  4. Intelligent ServiceNow ticketing empowers IT with their ideal scenario – automated ticket generation that is prioritized, remains up to date, and contains all the context IT needs to quickly remediate, directly from ServiceNow. Riverbed IQ’s integration with ServiceNow, combined with its ability to integrate with third party tools, uniquely provides ITOps users with context-driven insights directly in their ServiceNow UI.

The results are better IT agility and efficiency, fewer errors, and reduced risks.

More on Riverbed IQ

Interested in an observability platform that unifies data, insights, and actions across IT? To learn how your teams can harness the power of intelligent automation to gain efficiency, quality, and speed while reducing costs, visit Riverbed today for more information on Riverbed IQ or to request a demo.

Visit CRN to learn more about this year’s Tech Innovator Awards.

]]>
Exploring the Evolution of Digital Workplace Monitoring https://www.riverbed.com/blogs/the-user-experience-monitoring-evolution/ Fri, 14 Jul 2023 12:08:00 +0000 /?p=22084 How is monitoring for the workplace evolving?

The Virtual Workplace Evolution (VWE) stands as one of Germany’s largest desktop events, where customers, partners and vendors convene to exchange experiences and share insights on the digital workplace. The special feature of this event is the networking aspect. In presentations, companies share their strategies for tackling current challenges, while solution providers showcase how their technologies are evolving.

Employee experience is omnipresent

The significance of employee experience permeated throughout the event. Companies such as Lufthansa, one of the largest airlines in Europe, have reported how they have transitioned from outsourcing to insourcing, leveraging cloud technologies and standardization for the workplace. In doing so, it has become increasingly clear how important it is to involve employees and stakeholders in the process. TUI Group, a German leisure, travel and tourism company which has realigned its global infrastructure for a hybrid workplace, also emphasized the importance of close stakeholder alignment.

Instead of relying on Microsoft 365, Kärcher, a leading manufacturer of commercial, industrial and consumer cleaning equipment, started using Google Workspace. This transition was executed swiftly yet cautiously, ensuring the inclusion of employees throughout the process.

User Experience Monitoring has arrived on the market

In addition to many companies, a number of software vendors were also represented at the VWE. Among them, the User Experience Monitoring sector had a strong presence, with representatives from ControlUp, Nexthink and Riverbed. A few years prior, Riverbed Aternity was the sole User Experience Monitoring vendor present at VWE. This surge in representation stems from the fact that considering user experience and employee experience has become indispensable for the successful implementation of IT projects, making this type of monitoring increasingly prevalent on the market.

Monitoring evolves into observability for the desktop

However, traditional user experience monitoring also has its limitations, and Riverbed Aternity in particular is at the forefront of addressing these constraints. Collecting data from end devices and enriching it with survey data, if necessary, is no longer enough. To successfully plan and implement IT projects, the needle must be found in the “haystack” of data. IT must have a clear understanding of what it is looking for. The same is true when companies troubleshoot with desktop monitoring tools. While the data is there, it must be enriched and interpreted using expert knowledge.

My presentation at the VWE highlighted that the future of employee and user experience monitoring lies in observability. Data may need to be enriched from other systems and automatically pre-analyzed and given context so that companies can derive the necessary added value from it. Observability also enables proactive interaction with the data, such as using flow charts to define which actions should be executed when analyzing or resolving specific situations.

Observability and IT data can support ESG and sustainability initiatives

This observability and IT data can also be leveraged to support ESG and sustainability goals. For instance, User Experience Monitoring data can indicate whether an end device remains operational throughout the night without any interaction or active computing work. With this data, various actions can be automatically triggered.

For example, if the employee merely switched off the monitor and left the workstation, this pattern in the data can be used to trigger actions automatically. For instance:

  • The employee can receive a pop-up or survey the next day, reminding them to shut down their computer at night to conserve energy.
  • Additionally, IT could offer an automatic configuration change to put the computer to sleep when not in use for an extended period.

IT can also perform automatic evaluations to determine in advance how much energy or CO₂ can be saved through this feature.
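Such an evaluation is a back-of-envelope calculation. In the sketch below, every constant (idle power draw, sleep draw, overnight hours, emission factor) is an illustrative assumption, not a measured value; real figures differ per device fleet and per country:

```python
# Back-of-envelope estimate of the savings from sleeping idle PCs overnight.
# All constants are illustrative assumptions, not measured values.
IDLE_WATTS = 50            # assumed draw of a desktop left running idle
SLEEP_WATTS = 2            # assumed draw in sleep mode
HOURS_PER_NIGHT = 14       # assumed overnight idle window
WORKDAYS_PER_YEAR = 230
KG_CO2_PER_KWH = 0.4       # grid emission factor; differs by country

def annual_savings(num_pcs: int) -> tuple:
    """Return (kWh saved, kg CO2 avoided) per year if idle PCs sleep overnight."""
    kwh = num_pcs * (IDLE_WATTS - SLEEP_WATTS) / 1000 * HOURS_PER_NIGHT * WORKDAYS_PER_YEAR
    return kwh, kwh * KG_CO2_PER_KWH

kwh, co2 = annual_savings(1000)
print(f"{kwh:,.0f} kWh saved, about {co2 / 1000:.1f} t CO2 avoided per year")
```

Even with conservative assumptions, a fleet of a thousand machines yields savings on the order of tens of tonnes of CO₂ per year, which is exactly the kind of quantified quick win ESG reporting needs.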

Extend PC lifecycle to 5 to 7 years

HP, as a device manufacturer, addressed ESG and sustainability at the VWE, and demonstrated that workplace devices can be utilized for up to 7 years, thereby reducing the CO₂ footprint. Colleagues also emphasized the importance of ensuring that the PC’s performance does not adversely affect the user experience. This is precisely where observability plays a crucial role. The video below, “IT Asset Cost Reduction for Digital Workplace Teams,” gives a glimpse into the possibilities:

Watch Video

Until next year

The next Virtual Workplace Evolution takes place on June 19-21, 2024 in Berlin. We are already looking forward to reconnecting with our colleagues and industry peers for an inspiring exchange of experiences!

]]>
Analyst Insights for Trailblazing the Digital Workplace Landscape https://www.riverbed.com/blogs/analyst-insights-for-dex-and-the-digital-workplace/ Thu, 13 Jul 2023 12:47:00 +0000 /?p=22122 Recently, I had the privilege of attending the 2023 Gartner Digital Workplace Summit in San Diego, where leading Gartner analysts shared their insights and predictions on the latest trends shaping the digital workplace landscape. When it comes to empowering employees, enhancing productivity, and fostering sustainable growth in the digital workplace, these are a few of the important messages that resonated with me during the event.

Digital dexterity for managers and employees

Gartner analyst Lane Stevenson predicted that organizations prioritizing digital dexterity enablement for both managers and employees will experience significant year-over-year revenue growth by 2027. Digital dexterity refers to the ability of employees to use and manipulate digital technologies effectively and efficiently. It involves their proficiency in navigating digital interfaces, operating devices, and interacting with digital tools and applications.

Leveraging employee productivity monitoring tools

According to Gartner analyst Tori Paulman, many employees are open to being tracked by productivity monitoring tools if there is a system in place that helps them improve their skills. Robust Digital Employee Experience (DEX) solutions enable organizations to assess whether applications empower or frustrate employees. Understanding and enhancing the employee experience can have a positive impact on employee engagement, productivity, and overall satisfaction.

Expanding the scope of DEX

Gartner analyst Dan Wilson asserted that DEX tool deployments focused solely on IT use cases will struggle to achieve sustainable ROI. It is crucial to consider non-IT use cases such as equitable experience and sustainability. DEX tools equipped with telemetry and sentiment analysis capabilities can identify digital friction experienced by individual employees or specific employee segments. This is particularly relevant for remote workers facing challenges like slow internet connectivity.

The intersection of sustainability and performance

During the conference, Gartner analysts emphasized the importance of evaluating DEX platforms based on their Green IT use cases. Autumn Stanish and Stuart Downes discussed the benefits of adopting a performance-driven refresh cycle for endpoint devices, rather than relying on calendar-based replacements.

DEX tools that monitor power consumption, optimize power-saving features, and encourage improved habits among employees lay the foundation for digital business leadership. However, the challenge lies in striking the right balance between sustainability and performance trade-offs. While extending the lifecycle of laptops reduces the annual total cost of ownership and carbon footprint, the performance risks for laptops increase. Examples include increased failure rates, compatibility issues with future OSes and apps, insufficient hardware support for new workloads, and more.

Create a strong digital workplace

The Gartner Digital Workplace Conference shed light on crucial aspects of today’s digital workplace strategy.

Prioritizing digital dexterity, leveraging employee productivity monitoring tools, expanding DEX beyond IT use cases, and embracing sustainability through Green IT initiatives are key considerations for organizations aiming to thrive in the digital age. By staying informed and implementing these insights, businesses can create a digital workplace that empowers employees, maximizes performance, and drives sustainable growth in the digital age.

]]>
What Is Streaming Telemetry and When Should You Use It? https://www.riverbed.com/blogs/what-is-streaming-telemetry-and-when-to-use-it/ Thu, 06 Jul 2023 12:39:00 +0000 /?p=21446 Traditionally, network monitoring involved polling devices for their status and statistics. Now with streaming telemetry, devices proactively send data in real-time, providing a continuous stream of metrics.

Streaming telemetry enables network administrators to gather a wide range of data, including performance metrics, operational statistics, health information, and other relevant details from network devices such as routers, switches, firewalls, and servers. This data is typically transmitted using network protocols like gRPC (Google Remote Procedure Call), NETCONF (Network Configuration Protocol), or other lightweight protocols.

Overall, streaming telemetry transforms network management by providing continuous, real-time data streams that facilitate proactive troubleshooting, optimization, and decision-making in large, complex network environments.

Streaming telemetry vs SNMP?

Streaming telemetry and Simple Network Management Protocol (SNMP) are two different approaches to network monitoring and data collection. Here are the key differences between the two.

Streaming telemetry uses a push method, while SNMP relies on the collector polling devices and pulling metrics.
  • Data Collection Method: SNMP uses a polling mechanism where the management system periodically queries network devices to retrieve specific data. The devices respond with the requested information. The issue is that a delay between polling intervals can result in a lag in detecting and responding to network issues. Streaming telemetry, on the other hand, uses a push mechanism. The network devices proactively transmit data as a continuous, real-time stream without waiting for requests from the management system. It enables faster detection and response to network anomalies and events.
  • Data Frequency and Granularity: SNMP collects data at regular polling intervals, for example, every five minutes or longer. The data collected is typically limited to predefined metrics specified in the MIB (Management Information Base). Streaming telemetry, by contrast, can collect and transmit data at sub-second intervals, providing real-time network visibility. It also enables IT to collect a wider range of data points, including custom metrics, delivering a more comprehensive view of network performance and behavior.
  • Network Overhead: SNMP polling generates additional network traffic as the management system sends requests and devices respond with data. The frequency of polling can impact network performance, especially in large-scale deployments. Streaming telemetry reduces network overhead since data is sent proactively without the need for queries. Network utilization is also more efficient and can scale better in complex network environments.

In short, both SNMP and streaming telemetry have their strengths and are suitable for different monitoring scenarios. SNMP is a mature protocol supported by a wide variety of network devices, while streaming telemetry provides more real-time, granular, and flexible data collection capabilities. Organizations often use both, based on their monitoring requirements, device support, and need for real-time insights.
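The practical difference in detection lag comes down to arithmetic on the collection interval. The toy sketch below uses assumed intervals (5-minute SNMP polls versus 1-second streaming updates) to show how long an event can sit unseen:

```python
# Toy illustration of detection lag: an event lands between collection ticks.
# Intervals are assumed values (300 s SNMP polls vs. 1 s streaming updates).
def detection_delay(event_t: float, interval: float) -> float:
    """Seconds from an event until the next collection tick at or after it."""
    remainder = event_t % interval
    return 0.0 if remainder == 0 else interval - remainder

print(detection_delay(61.0, 300.0))  # 5-minute SNMP poll → up to 239.0 s of lag
print(detection_delay(61.0, 1.0))    # 1 s streaming → 0.0 s
```

In the worst case, a polled collector does not see an anomaly until almost a full interval after it occurs, which is precisely the gap push-based telemetry closes.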

When should I use streaming telemetry vs SNMP?

The decision to use one versus the other depends on several factors, including the specific use case, the network infrastructure, and your requirements. Here are some considerations to help you decide when to use each.

Use streaming telemetry when you need to:

  • Stream real-time data for applications that require immediate and continuous updates.
  • Collect highly granular data, including fine-grained statistics, counters, or operational information.
  • Monitor large-scale deployments and handle high data rates.
  • Define custom data models and collect specific information.

Continue to use SNMP polling if:

  • Your network primarily consists of devices with SNMP capabilities, making it simpler to stick with SNMP monitoring.
  • You need to perform configuration changes or control devices remotely.

In some cases, using a combination of streaming telemetry and SNMP might be best. For example, you can use streaming for real-time monitoring and granular data collection while still using SNMP for device management and compatibility with legacy systems. Ultimately, the decision depends on your specific needs, the capabilities of your network devices, and the ecosystem of tools and systems you are using.

Riverbed NetIM

Fortunately, Riverbed NetIM supports both SNMP and streaming telemetry, as well as WMI, CLI, API, and synthetic testing, for a comprehensive picture of how infrastructure performance affects network and application performance and, ultimately, user experience. It provides integrated mapping, monitoring, and troubleshooting of network infrastructure. NetIM can capture infrastructure topology information, detect and troubleshoot performance issues, map application network paths, plan for capacity needs, and diagram the network.

The Riverbed NetIM home page provides an overview of device and interface performance.

For more information on the benefits of Riverbed NetIM infrastructure monitoring, click here.

]]>
Improve Cybersecurity with Easy Integration of Observability Data https://www.riverbed.com/blogs/cybersecurity-with-observability-data-integration/ Wed, 28 Jun 2023 12:58:00 +0000 /?p=21579
Read the EMA white paper, “From Complexity to Clarity: Resolving Challenges in Cybersecurity Observability”

Traditional security tools like Security Information and Event Management (SIEM) and Security Orchestration, Automation, and Response (SOAR) are only as good as the intelligence that they ingest.

In a recent report from Enterprise Management Associates (EMA), Analyst Ken Buckler reflects on why SecOps needs to leverage observability data for faster, more complete incident response.

Cybersecurity facing mounting challenges

According to Buckler, modern cybersecurity faces a range of challenges that IT leaders must overcome to ensure effective threat detection. One example is the complexity of today’s networks, which feature copious devices, endpoints, and applications. This complexity hinders SecOps’ ability to gain consistent monitoring of the environment for threat detection.

With the exponential growth of data generated by network devices and applications, analyzing and processing this data in real time is a formidable task. It demands scalable data collection, storage, and analysis techniques, plus advanced technologies like machine learning, correlation, and automation. As a result, insufficient visibility into certain network constructs, devices, and applications leads to security blind spots. Addressing this challenge involves implementing standardized monitoring practices and utilizing network visibility tools to enhance observability.

Integration is essential

Integrating observability with existing security tools is vital for a comprehensive security pos­ture. However, the complexity and diversity of security technologies pose integration challenges. Overcoming this obstacle requires careful planning, ensuring interoperability, and leveraging auto­mation and orchestration capabilities.

To tackle these challenges, organizations must invest in comprehensive observability solutions, such as Riverbed IQ, that encompass real-time monitoring, advanced analytics, and intelligent automation. By implementing standardized monitoring practices, utilizing efficient data processing technologies, enhancing visibility through full-fidelity telemetry, and integrating observability with existing security tools, organizations can bolster threat detection, incident response, and overall cybersecu­rity resilience.

Riverbed IQ automates cybersecurity incident response

Riverbed IQ can aid in the investigation of cyberthreats using the Riverbed LogiQ Engine’s intelligent automation capabilities. It investigates threats found in traditional security tools, like SIEM or SOAR solutions. The SIEM or SOAR initiates a request for supporting diagnostic data using an API. Riverbed IQ parses this request and then kicks off a low-code security runbook that automates the collection of network forensics data from across the Riverbed portfolio or from third-party data sources. By distilling the forensic data and sending actionable insights back to the requesting solution, SecOps teams gain easy access to the supporting data they need to drive intelligent security investigations and mitigate cyber threats.
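The request-to-runbook flow described above can be sketched in a few lines. Every name, field, and return value below is invented for illustration; this is not Riverbed’s actual API:

```python
# Hypothetical sketch of the SIEM/SOAR request -> runbook -> insight flow.
# All identifiers and fields are invented for illustration, not Riverbed's API.
def run_threat_hunt(target_ip: str) -> dict:
    """Stand-in for a low-code runbook that would gather network forensics."""
    return {"target": target_ip, "summary": "forensic flow data for " + target_ip}

RUNBOOKS = {"threat_hunt": run_threat_hunt}

def handle_forensics_request(request: dict) -> dict:
    """Parse a security tool's request, run the matching runbook, return the insight."""
    runbook = RUNBOOKS.get(request.get("runbook", ""))
    if runbook is None:
        return {"status": "error", "detail": "unknown runbook"}
    return {"status": "ok", "insight": runbook(request["target_ip"])}

print(handle_forensics_request({"runbook": "threat_hunt", "target_ip": "10.0.0.5"})["status"])  # ok
```

The essential pattern is that the security tool sends a small structured request and receives distilled, actionable context back, rather than raw packet or flow data.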

For more information on the need for observability data in cybersecurity, read the EMA white paper, From Complexity to Clarity: Resolving Challenges in Cybersecurity Observability.

]]>
How Observability and IT Data Can Unlock ESG Success https://www.riverbed.com/blogs/how-observability-solutions-can-unlock-esg-success/ Mon, 26 Jun 2023 12:09:00 +0000 /?p=21675 Companies are increasingly placing a greater emphasis on sustainable activities across their business domains. Recently, the term ESG, which stands for environmental, social, and governance, has gained significant traction. ESG serves as a comprehensive framework for assessing a company’s business practices and performance in relation to sustainability and ethical issues. Additionally, it provides a way to measure business risks and identify opportunities.

While sustainability, ethics, and governance are generally considered non-financial performance indicators, the role of an ESG program is to ensure accountability and implementation of systems and processes to manage a company’s impacts, such as its CO₂ footprint and the way it treats employees, suppliers, and other stakeholders.

The challenge of accurate figures

To establish a trustworthy ESG initiative, businesses must acquire accurate and reliable data from different fields. This challenge is particularly evident in the realm of IT, as illustrated by the following example:

Organizations that maintain their own data centers can report the energy usage of these facilities with great accuracy by leveraging their own power circuits and electrical meters. However, determining the power consumption of workstation computers, monitors, and printers becomes more challenging, especially when they are located in home offices rather than the company’s own buildings. As a result, estimating the energy consumption of workstation computers is the common practice. Unfortunately, many companies simply take the maximum electricity consumption or published average values for their calculations, leading to highly inaccurate results.

While this rough data is sufficient to provide an overview of the situation, the goal of ESG programs is to drive improvement while weighing cost against benefit. For example, simply replacing an old desktop PC with a modern mini-PC is not necessarily a more sustainable solution; it’s important to account for the CO₂ footprint associated with manufacturing new computers. The key instead is to understand whether users can achieve the same user experience with their existing PCs as with a new one, or if a small upgrade will help enhance efficiency and sustainability.

Observability brings the necessary insights

And this is where Observability helps. Observability represents the next stage in the evolution of IT monitoring and is being implemented across many areas of IT. The information it yields can be leveraged in ESG initiatives, especially to make more precise assessments and to better evaluate the cost/benefit question.

This application extends beyond the workplace, where it can help determine how user-friendly and efficient older computers are and whether they should be upgraded or replaced. It can also identify which applications generate particularly high levels of network traffic, which in turn carries a corresponding CO₂ impact across data centers and data networks. With Observability data, it becomes possible to assess, plan, and execute data center consolidations or cloud migrations with regard to ESG goals.

Using IT data for the ESG perspective

IT departments, alongside Observability solution providers like Riverbed, can make valuable contributions to various ESG initiatives. The data gathered in IT can be harnessed for sustainability projects to achieve measurable successes that are also cost-beneficial. It is important to tailor the parameters used in the calculations to the specific situation of each company, since variables such as the CO2eq/kWh value and energy costs differ from country to country and company to company.

Before starting large ESG projects, IT departments can achieve success with small “quick wins.” It is easy to analyze whether a server in the data center is still operational but no longer in use. Another starting point is to determine whether users are shutting down their computers at the end of the work day or putting them in sleep mode. If not, a simple change to the settings can ensure that the computer is automatically sent to “sleep” after a few minutes of idle time. However, since sustainability is also about fostering awareness, it can be equally effective to make users aware of specific situations automatically to educate them about the impact on the environment.

The data from IT’s observability tools brings more context to sustainability projects, and thereby leads to the discovery of quick wins and opportunities for improvement.

]]>
What Are the Three Pillars of Observability? https://www.riverbed.com/blogs/what-are-the-three-pillars-of-observability/ Thu, 22 Jun 2023 12:30:00 +0000 /?p=21152 The three pillars of observability are traditionally considered to be logging, metrics, and tracing. These three data types are essential for building a reliable, scalable, and maintainable system: each provides a different view that enables IT engineers to understand how a system is behaving and to diagnose issues when they arise.

This blog looks at these three pillars and analyzes how Riverbed takes them further to unify data, insights and actions for all IT.

What are logs?

Logs are the records of events generated by an application or system as it runs, providing a detailed history of its behavior. Logs are helpful for debugging, troubleshooting, and auditing. When something goes wrong, logs can help engineers identify what happened and when, as well as provide clues as to why it happened.

What are metrics?

Metrics are numerical values that represent the behavior of a system over time. They are typically collected at regular intervals and can be used to track trends and identify anomalies. Metrics are often used to monitor system performance, such as CPU usage, memory utilization, or traffic throughput.

Metrics are important because they provide data that can be used to track system performance over time. By collecting and analyzing metrics, engineers can identify patterns and trends, which allows them to optimize performance, troubleshoot issues, and make informed decisions about system capacity and resource allocation.
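As an illustration of how regularly sampled metrics reveal anomalies, here is a toy first-pass detector (the threshold and sample values are invented for the sketch):

```python
from statistics import mean, pstdev

def detect_anomalies(samples, z=2.0):
    """Flag samples more than z standard deviations from the mean,
    a simple first-pass check on a regularly collected metric series."""
    mu, sigma = mean(samples), pstdev(samples)
    if sigma == 0:
        return []
    return [s for s in samples if abs(s - mu) > z * sigma]

# CPU utilization (%) sampled at regular intervals, with one spike.
cpu = [22, 24, 23, 25, 21, 24, 97]
print(detect_anomalies(cpu))  # the spike stands out from the baseline
```

Real monitoring systems use far more robust statistics, but the principle is the same: collect at regular intervals, establish a baseline, and surface deviations.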

What are traces?

Traces record a request’s journey as it moves through a distributed system. Tracing involves instrumenting the code to collect data at various points in the system, then aggregating and analyzing that data to reconstruct the request’s path. Traces make it easier to diagnose and resolve problems: they help engineers understand the flow of data, identify where performance bottlenecks occur, and pinpoint the root cause of issues.
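A toy illustration of the idea (the service names and span structure are invented here): every span a request touches carries the same trace ID, so the journey can be reassembled and the bottleneck found.

```python
import uuid

class Trace:
    """Toy trace recorder: all spans share one trace ID so a request's
    journey can be reassembled across services."""

    def __init__(self):
        self.trace_id = uuid.uuid4().hex
        self.spans = []

    def span(self, service, operation, duration_ms):
        self.spans.append({"trace_id": self.trace_id, "service": service,
                           "operation": operation, "duration_ms": duration_ms})

    def slowest(self):
        """The likely bottleneck: the span with the longest duration."""
        return max(self.spans, key=lambda s: s["duration_ms"])

t = Trace()
t.span("gateway", "POST /checkout", 12)
t.span("orders", "create_order", 48)
t.span("payments", "charge_card", 310)  # dominates the request time
print(t.slowest()["service"])
```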

Together, the three pillars of observability provide a comprehensive view of a system, enabling engineers to monitor, debug, and optimize it. They form the foundation of observability, allowing engineers to gain insight into complex systems and improve their reliability, scalability, and maintainability.

Riverbed Unified Observability unifies data, insights and actions across IT

Riverbed IQ is a SaaS-delivered unified observability service that captures full-fidelity performance metrics, applies machine learning and correlation to separate false positives from critical events, and then automates the investigative workflows of IT experts to gather the diagnostic data necessary to resolve problems quickly.

Riverbed unifies data, insights and actions across IT.

In short, Riverbed expands on the three pillars of observability to deliver an observability solution that unifies data, insights, and actions for all IT.

Unified data means comprehensive support for full-fidelity telemetry from across diverse sources, including devices, networks, applications, cloud-native environments, users, and third-party solutions. Unlike other solutions that sample data to cope with the scale of today’s distributed environments, Riverbed captures every transaction, packet, and flow, as well as the actual user experience for every type of application. Full-fidelity data gives IT a complete picture of what is happening and what has happened, without missing key events due to sampling. It provides the foundation of unified observability.

Unified insights mean IT solves the right problems fast to keep users productive. With the best data, AI and multifaceted correlations, plus workflow automation, Riverbed IQ delivers context-rich, filtered, and prioritized insights that help IT teams understand the scope and severity of issues and the cause of poor performance.

Riverbed IQ pulls together all the evidence related to an incident in a single report, which can also be used to inform trouble tickets.

Unified actions employ low-code runbooks that replicate and automate the best practices of IT experts to provide the probable root cause of performance or security incidents. By automating the gathering of supporting diagnostic data from disparate solutions, Riverbed IQ helps IT teams accelerate problem-solving, break down silos, and avoid time-consuming war rooms.

Riverbed IQ runbooks automate the process of gathering diagnostic data to speed time to resolution.

Why Riverbed IQ Unified Observability?

Riverbed IQ Unified Observability unifies data, insights, and actions to empower all IT teams to deliver seamless digital experiences and end-to-end performance visibility. It uniquely leverages a combination of enterprise-wide data collection, sophisticated AI techniques, and intelligent automation to speed common and repetitive IT tasks. As a result, IT can achieve the following benefits:

  • Faster problem detection and resolution: With unified observability, it becomes easier to detect problems as they occur, rather than waiting for user complaints or failures. Once a problem is detected, Riverbed IQ can help pinpoint the root cause of the issue. Riverbed Unified Observability uses intelligent automation to gather supporting evidence and context. This reduces the time it takes to resolve the problem and get the system back up and running.
  • Better performance: By monitoring key metrics and indicators, unified observability helps identify performance areas that are not optimal. This can help improve performance of networks, applications, and users and prevent potential issues before they occur.
  • Improved collaboration: Observability tools can provide visibility into the IT environment to multiple teams across an organization. This visibility can improve collaboration between teams and help everyone work towards a common goal of improving performance and reliability.
  • Better customer experiences: By resolving problems faster, Riverbed IQ helps improve digital experiences, which leads to increased customer satisfaction and loyalty.

Extend the three pillars of observability to include unified data, insights and actions.

]]>
Empowering Patient-Centered Healthcare with Visibility Solutions https://www.riverbed.com/blogs/network-visibility-solutions-for-patient-healthcare/ Tue, 20 Jun 2023 12:03:49 +0000 /?p=21530 High-performance healthcare is achievable with a patient-centered approach guided by visibility solutions. In today’s healthcare environment, hospitals face the challenge of managing a complex IT infrastructure that must support a wide range of technology interfaces.

Some of the most significant challenges include:

  • Innovation: Patient-centered care means having the right expertise available wherever the patient may be located. Technologies such as the Da Vinci Surgical System allow surgeons to perform complex operations remotely.
  • Security: Healthcare organizations must protect sensitive patient data, including medical records, personal information, and financial data, from unauthorized access or theft. As healthcare systems are digitally managed, the risk of data breaches and cyberattacks increases, making data security a top concern. For example, healthcare organizations are often top targets of ransomware and data breaches including the 2018 SingHealth data breach.
  • Interoperability: Different healthcare systems and applications may use diverse data formats and protocols, making it difficult to share information and coordinate care across healthcare providers and settings. This lack of interoperability can result in errors, delays, and redundancies in care delivery.
  • System Integration: Integrating disparate technologies, applications, and devices is critical for healthcare organizations. However, many systems are not designed to work together, resulting in inefficiencies, data inconsistencies, and difficulty exchanging data, which can paralyze hospital operations.
  • Regulatory Compliance: Healthcare IT networks must comply with numerous regulatory requirements and data privacy regulations.
  • Cost: Implementing and maintaining healthcare IT networks can be expensive, particularly for smaller organizations. Healthcare providers must also ensure that their networks can support the demands of the clinical setting, such as high-volume data transfer and real-time data processing. Learn more about how you can reduce costs for devices, software, cloud and network with Riverbed.
  • User adoption and training: Healthcare providers and staff may resist new technology or need more skills to use it effectively. Adequate training and support are critical to ensure the optimal use of technology.

 

The role of IT in managing change

Implementing and managing healthcare tech is an IT job, but the planning process calls for collaboration with clinical leaders to ensure optimal care delivery when and where needed. Investing in appropriate resources, including people, processes, and technology, is vital to providing exceptional patient experiences. To keep up with these changes, hospitals need to invest in building futuristic architectures that can support technological advances to enhance patient experience, empower the healthcare workforce, and streamline operations.

End-to-end network visibility to harmonize healthcare IT

New-age healthcare technology, such as heart monitors, biosensors, oximeters, and BP monitors, together with the ability to view health reports online, can promote real-time collaboration and consultation with colleagues and specialists during hospital rounds or practice hours, from clinics in regional areas, or whenever and wherever needed. The benefits are not limited to accessing patient records and improving patient care: such technology can also significantly help manage daily hospital operations such as staff rostering, equipment sterilization, bookings for surgeries, and more. However, these devices increase the load on the hospital’s LAN and WiFi network.

For the successful integration of technologies to enable communication, it’s crucial to have a dependable underlying network that supports them. To achieve this, IT teams need comprehensive end-to-end network visibility to keep all the applications connected.

Addressing challenges associated with new-age healthcare tech

Substantial use of IT heightens the risk of a data breach. Managing various endpoints, including mobile users, medical devices, and applications, is complex—bring your own device (BYOD) could add to the complexity. An increase in the number of devices can also strain the infrastructure, bandwidth and IT resources.

Hospitals can deploy tools such as Aternity Digital Experience Management (DEM) to address these challenges. Aternity DEM is a comprehensive platform that captures and stores technical telemetry from desktop and mobile endpoint devices. It enables IT teams to gain better visibility into the actual user experience and device performance, which can inform decisions on device replacement based on performance and help identify and eliminate redundant or underused software licenses. By curtailing shadow IT, IT teams can manage software usage more effectively, identify and eliminate wasteful solutions and utilize budgets more efficiently.

Riverbed NetProfiler proved incredibly valuable for monitoring the complex hospital network, which requires constant communication between internal and external endpoints. With end-to-end network monitoring and visibility, the hospital can manage information flow, monitor patient health in real time, process insurance claims, maintain medical records, and improve overall operations.

Riverbed AppResponse enables the hospital to monitor and analyze network-based application performance, allowing it to quickly resolve issues and avoid disruptions in daily operations. Riverbed NetIM maps application network paths, providing granular monitoring and troubleshooting of the IT infrastructure. This mapping is particularly crucial in a hospital setting, where staff across various functions tend to use different applications. Lastly, the Riverbed Portal provides integrated network and application insights, enabling the hospital to gain control of its network and ensure that its IT systems are functioning properly.

Invest in operational excellence

In conclusion, end-to-end network visibility unleashes the hidden aspects of the healthcare ecosystem, allowing caregivers to deliver high-quality and personalized patient care. To keep up with these advances, hospitals can invest in adding futuristic tools and applications supported by a high-performance network to enhance patient experience and operational excellence. Learn how to get more out of your IT budget with Aternity DEM before you plan to integrate new technologies to your healthcare setup’s IT stack.

]]>
Is Your MOVEit Service Under Threat? Riverbed Can Help https://www.riverbed.com/blogs/protect-against-moveit-service-vulnerability/ Fri, 16 Jun 2023 22:30:07 +0000 /?p=21874 MOVEit, a managed file transfer software product developed by Progress Software, employs Secure File Transfer Protocol (SFTP) to securely transfer and encrypt data at rest. The software has been popular with the healthcare industry as well as financial services and government sectors, but on May 31st, 2023, Progress Software disclosed a critical vulnerability: CVE-2023-34362.

Upon successful exploitation of this vulnerability, an attacker could gain sufficient access to install a web shell inside the MOVEit application. This would allow the bad actor full access to read, write and delete contents of the various databases it utilizes, such as MySQL, Microsoft SQL Server, and Azure SQL. Multiple vendors have published details about the attack vector, revealing a consistent pattern of attempting to infiltrate the vulnerable system via SQL injection to implant the web shell.

Read on to discover three ways Riverbed can help safeguard your organization from potential breaches.

1. Uncover historical activity

If you have unknowingly been scanned or implanted with this web shell, note that these attackers are known to use a range of IP addresses, published with the CVE’s indicators of compromise. Thanks to Riverbed NetProfiler‘s high-resolution, raw-flow retention, which comfortably goes back multiple years, searching history for any traces of the offending IP addresses is simple.

NetProfiler Flow Log Showcasing Retention Time Range

Simply copy and paste these IPs, set your desired time range, and then see whether there has been any activity from these IP addresses in the past.
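Under the hood, this kind of search is simply a filter over retained flow records. A conceptual sketch follows (the field names, IPs, and records are hypothetical illustrations, not NetProfiler’s actual data model):

```python
# Example suspect IPs standing in for the published IOC list.
SUSPECT_IPS = {"198.51.100.23", "203.0.113.77"}

# Hypothetical retained flow records; ISO 8601 timestamps compare
# correctly as plain strings, so no date parsing is needed here.
flows = [
    {"src": "10.1.4.9", "dst": "198.51.100.23", "port": 443,
     "bytes": 18234, "ts": "2023-05-28T04:12:00Z"},
    {"src": "10.1.4.9", "dst": "10.1.7.2", "port": 1433,
     "bytes": 990, "ts": "2023-05-28T04:13:00Z"},
]

def matches(flow, suspects, start, end):
    """Keep flows touching a suspect IP within the chosen time range."""
    return ((flow["src"] in suspects or flow["dst"] in suspects)
            and start <= flow["ts"] <= end)

hits = [f for f in flows
        if matches(f, SUSPECT_IPS, "2023-05-01T00:00:00Z", "2023-06-01T00:00:00Z")]
print(len(hits))
```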

NetProfiler Provided with IPs Inflicting MOVEit Vulnerability Scans

NetProfiler then provides detailed, highly customizable interactive reports, such as the one shown below, on the various TCP or UDP communications these IPs have engaged in, presenting port numbers and traffic volumes in easy-to-understand tables.

Traffic Report in NetProfiler

2. Visualize relationships between IP addresses

Visualizing the relationships between IP endpoints will usually bring out hidden trends and patterns in the attack vector that may not be as apparent when reading reports and tables. NetProfiler provides dynamically generated, interactive visualizations of the TCP/UDP communication with the attacker’s IPs.

Service Map Details in NetProfiler

3. Track attack signatures from packets

Most attacks exhibit distinct patterns that can be captured through network activity analysis. In this case, the Indicators of Compromise (IOC) are specific HTTP headers present in the attacker’s requests:

  • X-siLock-Comment
  • X-siLock-Step1
  • X-siLock-Step2
  • X-siLock-Step3
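Conceptually, the check is a case-insensitive match of request headers against this indicator list; a minimal sketch (the captured request below is a hypothetical example):

```python
IOC_HEADERS = {"x-silock-comment", "x-silock-step1",
               "x-silock-step2", "x-silock-step3"}

def find_iocs(headers):
    """Return any MOVEit web-shell indicator headers present in a request."""
    return sorted(h for h in headers if h.lower() in IOC_HEADERS)

request_headers = {  # hypothetical captured HTTP request headers
    "Host": "transfer.example.com",
    "User-Agent": "curl/8.0",
    "X-siLock-Comment": "comment",
}
print(find_iocs(request_headers))
```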

Configuring the definition below for “Web Application” within Riverbed AppResponse ensures that even a single matching packet, anywhere in the full bandwidth of data being analyzed by the appliance, will trigger an event. Packets can be reviewed, and metrics for that TCP and HTTP exchange will be logged.

Detecting Attack Signature with AppResponse

Once the definition is in place, you can observe detailed packet-based metrics and access the actual packets through right-click functionality.

Details of Scanners Provided by Riverbed AppResponse

Here are some of the typical alerts that AppResponse offers, with numerous other categories available:

Riverbed AppResponse Alerts

Summary

In this blog, we explored how attacks follow common patterns when scanning for vulnerabilities, and how packet- and flow-based monitoring tools can be used to analyze past incidents and detect ongoing scans and threats. To learn more about how Riverbed Observability tools can help you protect against malicious actors, please reach out to our experts here.

]]>
What Are the Three Major Network Performance Metrics? https://www.riverbed.com/blogs/what-are-the-three-major-network-performance-metrics/ Tue, 13 Jun 2023 12:42:00 +0000 /?p=21527 In today’s hyper-connected world, where businesses rely heavily on network infrastructure to transmit data and deliver services, understanding network performance metrics is crucial. Network performance metrics provide insights into the efficiency, reliability, and overall health of a network. In this blog, we will delve into three major network performance metrics: Throughput, Network Latency (Delay), and Jitter.

By understanding these metrics, you’ll be better equipped to optimize your network and ensure seamless operations.

What is Throughput?

Throughput refers to the amount of data that can be transmitted through a network within a given time frame. It is commonly measured in bits per second (bps) or its multiples (Kbps, Mbps, Gbps). Throughput represents the network’s capacity to deliver data and is often associated with bandwidth. It measures how fast data can be transferred between devices, servers, or networks. Higher throughput indicates a network’s ability to handle larger data volumes and support bandwidth-intensive applications such as video streaming or large file transfers.

What is Network Latency (Delay)?

Network latency, also known as delay, is the time it takes for a data packet to travel from its source to its destination across a network. It is usually measured in milliseconds (ms). Latency can be affected by various factors such as the distance between network endpoints, network congestion, and the quality of network equipment. Lower latency signifies faster response times and better user experience. Applications that require real-time interaction, such as online gaming or voice/video conferencing, are particularly sensitive to latency. Minimizing latency is crucial to ensuring smooth and seamless communication.

What is Jitter?

Jitter refers to the variation in delay experienced by packets as they traverse a network. It is measured in milliseconds (ms) and represents the inconsistency or unevenness of latency. Jitter is caused by network congestion, routing changes, or varying levels of traffic. High jitter can lead to packet loss, out-of-order packet delivery, and increased latency, negatively impacting the performance of real-time applications. To ensure optimal performance, it is essential to minimize jitter and maintain a stable and predictable network environment.
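To tie the three metrics together, here is a sketch of how each can be computed from raw measurements (the sample values are invented; the jitter estimator used is the mean absolute difference between successive latency samples, which is simpler than the smoothed RFC 3550 form but captures the same idea of latency variation):

```python
from statistics import mean

def network_metrics(bytes_transferred, seconds, latencies_ms):
    """Throughput in bits/s, mean latency in ms, and jitter as the
    mean absolute difference between successive latency samples."""
    throughput_bps = bytes_transferred * 8 / seconds
    latency_ms = mean(latencies_ms)
    diffs = [abs(b - a) for a, b in zip(latencies_ms, latencies_ms[1:])]
    jitter_ms = mean(diffs) if diffs else 0.0
    return throughput_bps, latency_ms, jitter_ms

# 1.25 MB transferred in 1 s gives 10 Mbps; one latency spike inflates jitter.
tput, lat, jit = network_metrics(1_250_000, 1.0, [20.0, 22.0, 21.0, 35.0, 21.0])
```

Note how a single 35 ms spike barely moves the mean latency but dominates the jitter figure, which is why real-time applications monitor both.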

Why are network performance metrics important?

Network performance metrics play a vital role in several aspects.

Capacity Planning

Understanding throughput helps network administrators determine the network’s capacity and whether it can handle the expected workload. With Riverbed’s Unified Network Performance Management (NPM) solutions, organizations can proactively manage network and application performance. Additionally, NPM allows Network Operations teams to manage costs effectively by investing only in upgrading critical infrastructure, consolidating underutilized resources, and managing the assets of multiple business units. Riverbed NPM delivers the ability to auto-discover topology, continuously poll metrics, automate analyses, and generate capacity planning reports that are easily customizable to changing business and technology needs.

Performance Optimization

Monitoring latency and jitter allows organizations to identify and troubleshoot network performance issues. By pinpointing the root causes of delays or inconsistencies, network administrators can optimize network configurations and minimize disruptions. For performance optimization, Riverbed NPM provides cloud visibility by ensuring optimal use and performance of cloud resources, and helps organizations manage the complexity of hybrid IT with agile networking across data centers, branches, and edge devices. Riverbed NPM helps overcome latency and congestion by proactively monitoring key metrics and their effect on application performance.

Quality of Service (QoS)

Network performance metrics enable the implementation of effective Quality of Service policies. By prioritizing specific types of traffic based on their requirements, such as voice or video data, organizations can ensure a consistent and reliable user experience. The Riverbed QoS system uses a combination of IP packet header information and advanced Layer-7 application flow classification to accurately allocate bandwidth across applications. The Riverbed QoS system organizes applications into classes based on traffic importance, bandwidth needs, and delay sensitivity.
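The idea of class-based allocation can be sketched as follows (the class names, bandwidth shares, and application mappings are illustrative inventions, not Riverbed’s actual policy):

```python
# Illustrative traffic classes: higher priority and a guaranteed
# minimum bandwidth share for delay-sensitive traffic.
QOS_CLASSES = {
    "voice":       {"priority": 1, "min_bandwidth_pct": 20},
    "video":       {"priority": 2, "min_bandwidth_pct": 30},
    "business":    {"priority": 3, "min_bandwidth_pct": 35},
    "best_effort": {"priority": 4, "min_bandwidth_pct": 15},
}

def classify(app_name):
    """Map an application to its traffic class (toy rules standing in
    for real Layer-7 flow classification)."""
    table = {"sip": "voice", "rtp": "voice", "zoom": "video", "sap": "business"}
    return table.get(app_name, "best_effort")

print(classify("rtp"), classify("bittorrent"))
```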

SLA Compliance

Service Level Agreements (SLAs) often include performance metrics that must be met by network service providers. Monitoring and measuring these metrics allow organizations to hold providers accountable and ensure that agreed-upon performance standards are being met. Riverbed NPM monitors metrics associated with the service components that make up each SLA. By proactively monitoring the health of the network, issues can be identified and escalated quickly, before end users are impacted.
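Checking measurements against agreed targets is conceptually straightforward; a sketch with invented SLA thresholds (not any provider’s actual terms):

```python
# Example SLA targets: latency and jitter must stay at or below their
# thresholds, availability must stay at or above its threshold.
SLA = {"latency_ms": 50.0, "jitter_ms": 5.0, "availability_pct": 99.9}

def sla_breaches(measured, sla=SLA):
    """Return {metric: (measured, target)} for every missed target."""
    breaches = {}
    for metric, target in sla.items():
        value = measured[metric]
        ok = value >= target if metric == "availability_pct" else value <= target
        if not ok:
            breaches[metric] = (value, target)
    return breaches

print(sla_breaches({"latency_ms": 62.0, "jitter_ms": 3.1,
                    "availability_pct": 99.95}))
```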

Gain insights into your network

Network performance metrics, including Throughput, Network Latency (Delay), and Jitter, provide valuable insights into the efficiency and reliability of a network. Riverbed makes it easy for Network teams to monitor, optimize, troubleshoot, and analyze what’s happening across their hybrid network environment. With end-to-end visibility and actionable insights, Network teams can quickly and proactively resolve any network-based performance issues.

Riverbed’s unified NPM collects all packets, all flows, all device metrics, all the time, across all environments—cloud, virtual, and on-prem—providing enterprise-wide, business-centric monitoring of critical business initiatives.

]]>
Exploring New EMA Research on WAN Transformation https://www.riverbed.com/blogs/exploring-new-ema-research-on-wan-transformation/ Mon, 12 Jun 2023 12:37:00 +0000 /?p=21321 A new research report from Enterprise Management Associates (EMA) is now available!

The report, WAN Transformation with SD-WAN: Establishing a Mature Foundation for SASE Success, is based on EMA’s survey of IT professionals across North America and Europe. The report finds that SASE technology is the next step in the evolution of WANs. According to the research, many enterprises are finding that the transition from SD-WAN to SASE is not trivial. The report examines how enterprises are transforming their networks to take the next step toward SASE.

Interesting findings

This WAN transformation research has some more interesting findings on WAN usage. The report states, “71% apply WAN acceleration to their networks, and nearly all of them leverage their SD-WAN vendors for this acceleration.” This may be based on a misconception that SD-WAN can provide comprehensive WAN acceleration.

The EMA research reveals that approximately 71% of the organizations surveyed are using WAN acceleration on their networks.

The EMA paper goes on to state, “SD-WAN enables enterprises to add more bandwidth to their WAN underlay via more affordable broadband internet connections. But more bandwidth does not guarantee application performance. An SD-WAN strategy must also include WAN acceleration.”

WAN acceleration is key

Conventional approaches to enterprise networking are no longer sufficient. Today’s distributed environment supports multiple data centers, remote branch offices, and work-from-anywhere users. And it can be challenging to build networks that are agile, high performing, and that effectively connect users and applications. In addition, IT teams are faced with the challenge of deploying these networks in the most cost effective way.

Many companies have implemented SD-WAN to provide agile and efficient networks that connect their distributed users and applications. The benefits of SD-WAN include increasing WAN capacity with cost-effective broadband internet, streamlining network operations, and accelerating site provisioning. However, while SD-WAN may in some cases improve network efficiency, it doesn’t address serious user experience issues resulting from latency or heavy data application traffic. Even after SD-WAN is deployed, companies may still experience network and application performance issues that cannot be solved with additional bandwidth.

EMA research reveals the application performance problems that organizations struggle with on their WANs.

Riverbed WAN acceleration solutions work alongside and on top of SD-WAN solutions, fostering higher-performing networks, stronger application performance, and improved business productivity. Watch this video to learn how SD-WAN and application acceleration are better together.

Challenges with internet for primary WAN connectivity

The EMA research also details challenges to using the internet for primary WAN connectivity. The report states, “In general, the internet is not an enterprise-class WAN connectivity solution.” The chart below lists these challenges, the biggest of which are security risk and complexity of managing multiple ISP relationships. It makes sense that security risk is the biggest challenge given the internet is a public network that is “inherently insecure.”

This chart lists the biggest challenges to using the internet for primary WAN connectivity.

The research also found application performance, instability across internet service providers, and inconsistent global performance across geographies to be challenges. These challenges illustrate the reality of the internet—inconsistent performance and instability that leads to application performance problems.

We frequently hear from customers that inconsistent global performance is a major problem. Using the internet for primary WAN connectivity may work to some degree for organizations that operate in geographies with good bandwidth and connectivity—for example, in western Europe or the United States. However, for international companies with branches across the globe, the performance and experience at each remote branch varies widely. In addition, with users working from home, where the last-mile service provider depends on each employee’s choice, corporate IT is no longer even in control.

To meet these challenges head on, organizations need a solution that can help greatly improve the user experience. Riverbed Acceleration solutions provide fast, secure acceleration of apps over any network to users, whether mobile, remote, or in office. Built on decades of WAN optimization leadership and innovation, our solutions power cloud, SaaS, and client applications at peak speeds. And they overcome network speed bumps such as latency, congestion, and suboptimal last-mile conditions. To learn more about our Acceleration solutions click here. 

A need for more granular network monitoring

The WAN transformation research also reveals that while 73% of organizations rely on the native monitoring provided by their SD-WAN vendors, many find it inadequate: only 41% are fully satisfied with this native monitoring.

Users are interested in more granular network monitoring telemetry from third parties that can provide this information. IT teams may currently use both, but as network monitoring demands more overlay/underlay visibility, the trend is toward more powerful third-party tools.

Keysight Technologies case study

The EMA report also features an impactful case study that describes how Keysight Technologies, an American S&P 500 electronics manufacturing firm and long-time Riverbed customer, leverages Riverbed Acceleration and Riverbed Network Performance Management solutions to gain network visibility, enable faster troubleshooting, and achieve better performance.

Due to the nature of its business, large files are distributed from one location to another on a daily basis, requiring significant time and resources. In addition, Keysight faced challenges with network visibility to understand dependencies. Keysight uses Riverbed SteelHead WAN optimization to scale its existing bandwidth and achieve better performance and control. “Riverbed’s acceleration solution, SteelHead, has improved our network performance by 25%. It is extremely effective in reducing the time we spend on transferring files from one office to another,” said Ray Schumacher, Network Architect at Keysight Technologies. Additionally, SteelHead improved disaster recovery capabilities by dramatically speeding up device backups across offices around the world.

Keysight wanted to use additional Riverbed visibility tools to seamlessly integrate with SteelHead and digest all the data passing through SteelHead. Keysight began to use Riverbed NetProfiler and Riverbed AppResponse, which provided better visibility for real-time troubleshooting and historical packet data. As a result, the Keysight team is able to fix issues proactively and diagnose problems before their user community raises tickets. “Looking into the network is instrumental to see what’s really going on in the environment, which provides us with insights that we can act on to improve productivity and performance.”

Learn more

Interested in more findings from the report? Click here to access the insightful WAN transformation research from EMA and establish a foundation for SASE success!

]]>
What is Digital Experience Monitoring? https://www.riverbed.com/blogs/what-is-digital-experience-monitoring/ Thu, 08 Jun 2023 12:12:00 +0000 /?p=21429 Digital Experience Monitoring (DEM) is a user-centric approach that focuses on improving the performance of digital platforms to enhance the user experience. As more interactions move online, the need for smooth, intuitive, and responsive digital experiences becomes increasingly important. DEM provides the tools necessary to measure, track, and optimize these experiences in real-time.

The building blocks of Digital Experience Monitoring

At its core, DEM involves monitoring digital services from the end user’s perspective. This means understanding how different elements of a digital platform, such as web pages or mobile apps, perform for the user.

There are two key components in DEM: Real User Monitoring (RUM) and Synthetic Monitoring. RUM involves collecting data from real users in real-time to understand their experiences with a digital platform. This data provides valuable insights into how users interact with a platform, helping to identify any potential performance issues.

On the other hand, Synthetic Monitoring involves simulating user interactions with a platform to identify any potential bottlenecks or performance issues before they affect real users. This proactive approach helps to maintain optimal performance levels and ensure a seamless user experience.
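A synthetic check is essentially a scripted, timed request with a pass/fail verdict. A minimal sketch (the URL and transport here are stand-ins; a real probe would issue an actual HTTP request on a schedule):

```python
import time

def probe(url, fetch):
    """Time one scripted request and report a pass/fail verdict.
    `fetch` is injected so the probe can use any transport."""
    start = time.perf_counter()
    try:
        status = fetch(url)
        ok = 200 <= status < 400
    except Exception:
        status, ok = None, False
    elapsed_ms = (time.perf_counter() - start) * 1000
    return {"url": url, "status": status, "ok": ok, "elapsed_ms": elapsed_ms}

# Stand-in transport that always succeeds, for the sketch.
result = probe("https://shop.example.com/login", fetch=lambda url: 200)
```

Running such probes on a schedule from multiple locations surfaces slowdowns and outages before real users hit them.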

The importance of Digital Experience Monitoring

In today’s digital age, user experience is king. A smooth, seamless, and intuitive digital experience can be the difference between a one-time visitor and a loyal customer. By implementing DEM, businesses can gain a better understanding of how their digital platforms are performing, identify any potential issues, and take proactive steps to optimize the user experience.

A key benefit of DEM is that it provides actionable insights. Instead of just collecting data, DEM allows businesses to understand what the data means and how it can be used to improve performance. This could mean making changes to a website’s design to make it more user-friendly, or optimizing a mobile app to improve load times.

Moreover, DEM is not just about improving the user experience—it also has a direct impact on a business’s bottom line. A better user experience leads to higher customer satisfaction, which in turn leads to increased customer loyalty and higher revenue.

Implementing Digital Experience Monitoring

Implementing DEM involves a combination of technology and strategy. On the technology front, businesses need to invest in the right tools to collect, analyze, and interpret user experience data. These tools need to be able to provide real-time insights and identify any potential performance issues as soon as they occur.

On the strategic front, businesses need to have a clear understanding of what they want to achieve with DEM. This could be improving the load times of a website, increasing the responsiveness of a mobile app, or enhancing the overall user experience across all digital platforms.

Once the goals are defined, businesses can then use the insights gained from DEM to implement changes and track their impact. This iterative process of monitoring, implementing changes, and monitoring again ensures continuous improvement and optimization of the user experience.

Aternity’s role in Digital Experience Monitoring

Aternity Digital Experience Management offers a unique, comprehensive solution for businesses aiming to optimize the digital experience for their end users. Its key strength lies in measuring and analyzing every user interaction across applications, whether cloud-based, on-premise, or mobile. This provides businesses with a clear understanding of how their tech infrastructure affects daily productivity and customer satisfaction.

Aternity DEM captures data directly from the end-user’s device, providing a true “outside-in” perspective for a more accurate reflection of the user experience. Its flexibility allows for monitoring applications regardless of their delivery method, while proactive performance benchmarking and alert thresholds help identify potential issues before they significantly impact the user experience. This comprehensive, user-centered approach empowers businesses to enhance productivity, reduce downtime, and improve overall user satisfaction.

The future of Digital Experience Monitoring

As technology continues to evolve, so too will the field of Digital Experience Monitoring. New technologies such as AI and machine learning are already being used to provide more detailed and accurate insights into user behavior. These technologies will continue to play a vital role in the future of DEM, enabling businesses to provide personalized, intuitive, and seamless digital experiences that exceed user expectations.

Digital Experience Monitoring is a powerful tool for any business operating in the digital space. By providing valuable insights into user behavior and performance issues, DEM enables businesses to proactively optimize the user experience, leading to higher customer satisfaction, increased loyalty, and ultimately, greater success. Learn more about full-spectrum DEM here.

]]>
What Is Intelligent Automation? https://www.riverbed.com/blogs/what-is-intelligent-automation/ Tue, 06 Jun 2023 12:11:00 +0000 /?p=21263 Intelligent automation applies advanced technologies such as machine learning (ML), event correlation, and workflow automation to IT operations tasks. Smart algorithms and runbook workflows identify and automate routine, repetitive, and time-consuming tasks, freeing up IT staff to focus on more strategic and creative work.

Riverbed uses intelligent automation to automate a wide range of IT processes, including incident response, security forensics investigations, desktop remediation, and providing intelligence to trouble ticketing. Automation can help improve operational efficiency, reduce errors, and enhance the overall quality of IT services.

Intelligent IT automation also provides actionable insights to help IT teams identify and address potential problems before they occur. This can help organizations achieve greater agility and flexibility in their IT operations, while also reducing costs and improving the overall quality of service.
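
As a concrete (and deliberately toy) illustration of the runbook idea, the sketch below chains reusable steps that filter a raw alert, correlate it with related events, and assemble ticket context. All names, thresholds, and events here are invented for illustration; they are not Riverbed IQ's actual workflow or API:

```python
def check_impact(ctx):
    # Step 1: filter false positives -- only act on user-impacting alerts.
    ctx["impacting"] = ctx["error_rate"] > 0.05
    return ctx

def correlate(ctx):
    # Step 2: pull related telemetry (hard-coded here; real tools query data).
    if ctx["impacting"]:
        ctx["related_events"] = ["link_flap_eth0", "bgp_session_reset"]
    return ctx

def build_ticket(ctx):
    # Step 3: assemble context so the ticket arrives pre-investigated.
    if ctx.get("related_events"):
        ctx["ticket"] = {"summary": "Elevated error rate",
                         "evidence": ctx["related_events"]}
    return ctx

def run_runbook(steps, ctx):
    for step in steps:                 # each reusable step enriches the context
        ctx = step(ctx)
    return ctx

result = run_runbook([check_impact, correlate, build_ticket],
                     {"error_rate": 0.09})
print("ticket" in result)              # an impacting alert yields a ticket
```

The value is in the composition: the same steps can be reordered and reused across runbooks, and a non-impacting alert (say, an error rate of 0.01) simply falls through without generating noise.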

Why use Intelligent Automation?

The proliferation of new applications is generating an overwhelming volume of data, leading to alert overload. It is simply no longer possible for IT teams to analyze and correlate all this data manually and still meet operational expectations.

In addition, alert overload is compounded by today’s scarcity of skilled IT resources–fewer IT staff are left to do more of the work. And, these already short-staffed IT teams must often chase false positives, events that don’t impact digital experience. The lack of automation of these workflows results in longer resolution times of critical issues and higher error rates, both of which can negatively impact user experience and business performance.

How Riverbed IQ leverages Intelligent Automation

Riverbed IQ, a unified observability service, automates incident response for performance and security events and provides intelligent trouble ticketing to ServiceNow.

Incident response

With the Riverbed portfolio’s full-fidelity insights and rich analytics, complex troubleshooting workflows become razor-sharp, highly automated processes. Riverbed IQ replicates the advanced investigative processes of network operations teams, providing context-driven insights that empower them to proactively resolve issues without escalating.

Security forensics investigations

SecOps teams want easy access to all data sources and to integrate that data into their SOAR and SIEM tools. Riverbed IQ provides out-of-the-box runbooks for security investigations. These runbooks give SecOps teams easy access to Riverbed telemetry data to help fully investigate threats. As a result, security tools gain more context for threat investigations, reducing risk to the business.

Auto-populating trouble tickets

In today’s modern IT market, targeted delivery of fast, context-driven insights to ITSM solutions can mean the difference between business triage and business optimization. Riverbed IQ uniquely delivers deep ServiceNow incident context that streamlines ticket creation and reduces escalation. Riverbed IQ links back to the originating source telemetry to assemble supporting troubleshooting data. Data collected can include network, infrastructure, application, and end user experience.

Automation guides Aternity remediations

Aternity end user experience monitoring also leverages intelligent automation from the Riverbed Observability platform. Unlike solutions that require a multitude of narrow, single-purpose remediation scripts, Aternity offers one-click remediation actions that dynamically mimic expert decision-making by constructing logic-driven remediation workflows from reusable steps, enabling the resolution of both simple and complex issues. Aternity also offers an extensive catalog of Macintosh and PC remediations for recurring desktop issues such as application hangs, slow boot and login times, network connectivity problems, and application and OS crashes, giving IT more time to focus on innovation.

 

Aternity leverages automation to dynamically mimic expert decision-making by constructing logic-driven remediation workflows.

What’s the difference between AI and Intelligent Automation?

AI or Artificial Intelligence is excellent at sorting and classifying both structured and unstructured data. It can provide deep insights into trends, patterns and outliers. For example, a well-trained AI algorithm can execute tasks like recognizing the contents of an image, understanding the contents of a document, correlating related events, and more.

Intelligent automation, on the other hand, refers to the use of advanced technologies such as AI, machine learning, correlation, and automation workflows to automate IT processes. Intelligent automation combines the power of automation with AI to create systems that make decisions with minimal human intervention.

AI is a necessary component of intelligent automation. The main difference between the two is that AI is focused on creating intelligence, while intelligent automation is focused on automating specific tasks or processes to increase efficiency and productivity.

Benefits of Intelligent Automation

McKinsey Global Institute estimates that knowledge work automation tools could take on tasks equal to the output of 110 million to 140 million full-time equivalents (FTEs), and that this incremental productivity could have an economic impact of $5.2 trillion to $6.7 trillion annually by 2025.

Intelligent Automation helps organizations improve their agility, reduce costs, and deliver better quality services to their customers.

Other benefits of IT automation include:

  • Increased efficiency: Automation can significantly improve efficiency by automating repetitive tasks, reducing manual effort, and speeding up processes.
  • Improved accuracy: Automation can decrease the risk of human error, improving accuracy and reliability of IT processes.
  • Better resource utilization: By automating routine tasks, IT staff can focus on more strategic and complex tasks, making better use of their skills and expertise.
  • Faster MTTR: Automation reduces the time required to complete tasks, speeding the recovery of applications and services.
  • Cost savings: Automation decreases the need for manual labor, reducing operational expenses and freeing staff to work on more strategic projects.
  • Scalability: Automation helps organizations scale their operations easily and cost-effectively, by limiting the need for manual analysis.
  • Improved customer satisfaction: Automation delivers consistent and high-quality services, leading to increased customer satisfaction.

Overall, Intelligent Automation helps organizations improve their agility, reduce costs, and deliver better quality services to their customers. For more information on how you can use Riverbed’s Intelligent Automation capabilities to improve your IT environment, visit the new Intelligent Automation page.

]]>
What Are the Four Main Areas of Digital Transformation? https://www.riverbed.com/blogs/what-are-the-four-main-areas-of-digital-transformation/ Thu, 01 Jun 2023 12:27:00 +0000 /?p=21401 In today’s fast-paced and interconnected world, digital transformation has become a critical driver of success for organizations across industries. It encompasses a profound shift in leveraging technology to enhance business processes, revolutionize customer experiences, and drive growth.

The digital transformation journey covers four key areas: domain transformation, process transformation, business model transformation, and organizational digital transformation. Riverbed’s Unified Observability platform, which offers complete visibility and actionable insights based on full-fidelity, full-stack telemetry, provides companies with a launching pad for a successful digital transformation program.

Domain transformation

Domain transformation focuses on redefining an organization’s core functions and offerings in the digital realm. It involves embracing technological advancements to deliver innovative products and services. This may involve leveraging artificial intelligence, the Internet of Things (IoT), cloud computing, or big data analytics to create new digital experiences for customers.

For example, traditional brick-and-mortar retailers are increasingly adopting e-commerce platforms, providing customers with seamless online shopping experiences. This domain transformation enables retailers to reach a global customer base, personalize product recommendations, and streamline logistics, ultimately enhancing customer satisfaction and driving revenue growth.

Riverbed offers several solutions to reduce the risk of change for domain transformation. For example, its comprehensive cloud migration offering helps organizations avoid performance issues, unexpected delays, and unplanned costs. Riverbed delivers cloud visibility with insights into workload performance across hybrid cloud, multi-cloud, and SaaS environments, while ensuring the security of those workloads. It enables IT Ops teams to plan seamless application migrations by mapping application dependencies and predicting post-migration performance. It also helps reduce cloud costs by optimizing bandwidth utilization, cutting cloud egress costs by up to 95%; by understanding traffic patterns and associated costs, organizations can plan more efficiently. Finally, Riverbed’s solution optimizes cloud performance, delivering cloud app performance up to 33 times faster for users regardless of location.

Whether it’s for cloud, Windows 11 or VDI, Riverbed offers a range of solutions to reduce the risk of IT change for digital transformation initiatives.

Process transformation

Process transformation entails reimagining and optimizing existing business processes by leveraging digital technologies. This involves automating manual tasks, improving efficiency, and enhancing collaboration through digital tools and platforms.

By implementing automation, organizations can automate repetitive tasks, thereby freeing up employees to focus on higher-value activities. Additionally, process transformation involves implementing cloud-based collaboration tools, enabling teams to work seamlessly across geographical boundaries and fostering innovation through enhanced communication and knowledge sharing.

Powered by the Riverbed LogiQ Engine, the Riverbed portfolio uses AI, correlation, and automation to streamline repeatable processes with minimal human intervention, lowering costs and improving user satisfaction. Riverbed uniquely offers broader automation use cases that extract insights across Riverbed monitoring data and existing third-party tool silos to enable faster time to resolution. With its powerful automation, analytics, and integration capabilities, Riverbed delivers solutions such as automated incident response, intelligent ServiceNow ticketing, and automated desktop remediation for IT Ops and service desk teams.

Business model transformation

Business model transformation involves reinventing an organization’s fundamental approach to value creation and revenue generation. It requires identifying new opportunities and leveraging digital technologies to deliver unique value propositions to customers.

For instance, the rise of the sharing economy, powered by platforms like Uber and Airbnb, exemplifies business model transformation. These companies disrupted traditional industries by providing on-demand transportation and accommodation services, respectively, using digital platforms that connect customers with providers. By unlocking underutilized resources and delivering convenience and personalized experiences, they created entirely new business models and market opportunities.

Organizational digital transformation

Organizational digital transformation encompasses the cultural and structural changes necessary to support and sustain digital initiatives. It involves fostering a digital mindset across the organization, empowering employees to embrace change, and promoting a culture of innovation.

To successfully navigate organizational digital transformation, organizations must invest in a comprehensive Digital Employee Experience solution. Riverbed’s Aternity provides companies a complete view of the total digital employee experience by tightly correlating both quantitative and qualitative measures of experience. Aternity already offers the deepest quantitative insights, such as application and performance data, into the digital experience and the most powerful insights into the customer experience. With its ability to gauge employee feedback via Aternity Sentiment surveys and the ability to benchmark digital experience against industry peers, Aternity delivers aggregated insights based on application and device performance data along with human reactions, ultimately providing total experience management from an organization’s employees to their customers.

Start your digital transformation journey with Riverbed

By embracing domain transformation, process transformation, business model transformation, and organizational digital transformation, businesses can unlock new opportunities, enhance customer experiences, and stay ahead of the competition.

With its Observability and Acceleration offerings, Riverbed can guide companies in their digital transformation projects from start to finish. Before kicking off a project, Riverbed professionals help organizations ensure that new investments are targeted and prioritized based on the issues with the most impact on user experience. During implementation, Riverbed tracks progress, recommends strategy adjustments, and provides guidance based on full data visibility.

So, let the digital transformation journey begin, and let innovation and growth propel your organization to new heights.

]]>
Five Scary End User Services Metrics and How to Address Them with DEX https://www.riverbed.com/blogs/address-end-user-services-metrics-with-dex/ Fri, 26 May 2023 12:31:00 +0000 /?p=21199 The pressure on end user services teams continues to grow. Hybrid work, the modernization of the digital workplace, and increasing demand for consumer-like experiences all combine to complicate the role of service desk and digital workplace services teams. IT leaders and staff in these teams must balance incident response with innovation, while supporting the transformation agendas of their organizations. And with economic uncertainty and fears of recession, they must do this while controlling costs and reducing unnecessary spending.

Identifying areas to improve service and cut costs requires understanding the targets. What does “good” look like? How does your organization compare to industry benchmarks? Where is the “low-hanging fruit,” the areas within easy reach that can be improved with less effort and investment? Luckily, organizations can rely on data from analysts and industry benchmarks, and Digital Employee Experience (DEX) management tools like Aternity can help them address these challenges.

This blog covers five key end user services metrics and the role that Aternity DEX plays in helping organizations improve them.

Metric 1: Number of contacts per service desk agent

The first data point comes from Gartner’s report, IT Key Metrics Data 2023: End-User Services Measures — IT Service Desk Analysis (subscription required). This research contains high-level service desk cost efficiency and staff productivity benchmarks based on data collected throughout 2022 from a global audience of CIOs and IT leaders. It shows that the average service desk agent handled 4,444 contacts per year in 2022, about the same as in 2021. Note the wide distribution in the data by size of the environment. That’s a lot of contacts! The data points to the intense nature of service desk work, especially with increasingly demanding end users who expect a consumer-like experience from their corporate IT. It also points to the importance of automated remediation in enabling service desk staff to handle large numbers of contacts.

The number of agent-handled contacts per service desk agent FTE has remained nearly constant over the past two years. Source: IT Key Metrics Data 2023: End-User Services Measures — IT Service Desk Analysis, 8 December 2022 – ID G00779738

How Aternity DEX helps improve Metric 1

The first goal of any service desk is to improve the quality of service delivered in order to reduce the volume of inquiries and issues. The second goal is to reduce the time required to resolve inquiries and issues through self-service and automation. Aternity comes with an extensive library of pre-built automation scripts that can be applied to a single user’s device or to a group of devices. Aternity automated remediation enables IT to automate the recovery actions necessary to resolve the most commonly expected end user issues. Watch this short video to see how Aternity automated remediation enables IT to proactively identify and resolve end user issues.

Watch Video

Metric 2: Number of devices per digital workplace services staff

The next data point comes from a different Gartner document based on the same 2022 survey. Like the research above, the report IT Key Metrics Data 2023: End-User Services Measures–Digital Workplace Services Analysis (subscription required) contains cost efficiency and staff productivity benchmarks, but this time focused on digital workplace teams. Survey data show that investment in digital workplace services is 8.8% of total IT spending. Similarly, digital workplace services Full Time Equivalents (FTEs) represent 8.4% of total IT FTEs, including contractors. That’s not a lot of staff. Especially when you consider that a digital workplace services staff member supports 394 end user devices on average. Again, that number varies widely based on the size of the organization. Digital workplace teams must support a wide range of devices–laptops, PCs, virtual devices, and company-provided tablets and smart devices. And with macOS devices now reaching 23% in the enterprise, these teams require tools that enable them to support a broad range of vendors.

On average, a digital workplace services agent supports 394 end user devices, up slightly over the past year. Source: IT Key Metrics Data 2023: End-User Services Measures — Digital Workplace Services Analysis, 8 December 2022 – ID G00779735

How Aternity DEX helps improve Metric 2

Automated remediation certainly helps digital workplace teams manage larger numbers of devices per FTE. Aternity also integrates with ServiceNow Incident Management, so that digital workplace staff never have to leave ServiceNow to analyze Aternity device health and performance data or user information when resolving an issue. Not only can they see Aternity digital experience data right in the ServiceNow Incident Management interface, they can also execute Aternity automated remediation actions from there as well. This helps reduce resolution time, improving service, reducing costs, and enabling digital workplace services staff to support a larger environment. Watch this short video to see how it works:

Watch Video

Metric 3: Task allocation for digital workplace services staff

A second data point from the digital workplace services report shows the breakdown of how these staff members spend their time. In 2022, digital workplace teams spent 67% of their time on incident resolution and service requests, up from 63% in 2021. The four-percentage-point increase came out of engineering time. The data clearly shows that when staff members are overwhelmed with firefighting and routine service requests, they have less time to devote to higher-value innovation or digital transformation projects.

Digital workplace services teams spent 4% more time on incident resolution and service requests in 2022 than they did in 2021, at the expense of higher value engineering work. Source: IT Key Metrics Data 2023: End-User Services Measures — Digital Workplace Services Analysis, 8 December 2022 – ID G00779735

How Aternity DEX helps improve Metric 3

Aternity provides a variety of capabilities that enable digital workplace teams to quickly identify digital experience improvement areas that will have the biggest payoff. Aternity Experience Insights provide digital workplace services with a list of issues, sorted by impact to employee productivity, so they can identify which improvement projects should be tackled first. Here’s another short video to show how this works:

Watch Video

Riverbed Aternity Digital Experience Index (DXI) automatically identifies digital experience hot spots across your enterprise impacting employees and customers, then sets you on a path to action and improvement. You can establish goals for particular areas, based on industry benchmarks, and prioritize the importance of each area affecting digital experience. You can benchmark your own digital experience against your industry peers and compare the digital experience of different parts of your organization. Aternity DXI can serve as the foundation of your continuous improvement program for digital experience. Here’s how it works:

Watch Video

Metrics 4 & 5: Wasted SaaS licenses & prevalence of shadow IT

SaaS usage has skyrocketed in organizations of every type. That’s no surprise. In 2008, Gartner reported that worldwide SaaS revenue hit $6.4 billion. This year, SaaS spending is projected to reach $195 billion, up from $167 billion in 2022. And the Gartner End User Services report referenced above shows that in 2022, SaaS comprised 40% of digital workplace services software spending, up from 34% in 2021.

The Zylo 2023 SaaS Management Index Report provides more detail. According to Zylo, the average organization has 291 applications in its SaaS portfolio and spends $50M annually on SaaS. This equates to about $4,600 per employee. While companies have committed to SaaS to run their business, Zylo reports significant waste. On average, organizations use only 56% of their SaaS licenses; the remaining 44% are wasted. Do the math: that’s an average of $22M in wasted software spending, every year.
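
The $22M figure follows directly from Zylo’s utilization numbers; as a quick arithmetic check:

```python
saas_spend = 50_000_000    # average annual SaaS spend per organization (Zylo 2023)
utilization = 0.56         # share of purchased licenses actually used

wasted = saas_spend * (1 - utilization)
print(f"${wasted / 1e6:.0f}M wasted per year")   # → $22M wasted per year
```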

Shadow IT is a big issue. According to Zylo, individuals are responsible for 6% of SaaS spend, but 37% of the number of applications. Source: Zylo 2023 SaaS Management Index

The Zylo 2023 SaaS Management Index goes further to identify who is buying these SaaS licenses. It’s not IT. IT purchases only 31% of SaaS by spend and 18% by number of applications. Shadow IT is a big issue. While only 6% of application spend is on shadow IT, the category accounts for 37% of total application quantity, according to the report.

How Aternity DEX helps improve Metrics 4 and 5

One of Aternity’s key capabilities is IT asset cost reduction. Aternity automatically discovers every application in use in the enterprise, no matter what type it is. Thick client, web, SaaS, even Shadow IT. Aternity enables you to track actual usage of every type of application in your enterprise, and save money by identifying the software licenses that are unused or under-used.

Aternity identifies the actual usage of every type of application in your enterprise so you can save money by eliminating the licenses that are unused or under-used.
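
At its core, that kind of usage report compares purchased seats to active users per application. A toy sketch (the application names, seat counts, and the 50% threshold below are invented for illustration):

```python
# Hypothetical per-application license data: seats purchased vs. users
# active in the last 90 days.
portfolio = [
    {"app": "CRM Suite",   "seats": 1200, "active_90d": 1150},
    {"app": "Design Tool", "seats": 300,  "active_90d": 95},
    {"app": "Whiteboard",  "seats": 500,  "active_90d": 120},
]

def underused(apps, threshold=0.5):
    """Flag apps where fewer than `threshold` of purchased seats are in use."""
    return [a["app"] for a in apps if a["active_90d"] / a["seats"] < threshold]

print(underused(portfolio))   # → ['Design Tool', 'Whiteboard']
```

Flagged applications become candidates for license reclamation or renegotiation at renewal time.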

Get your end user services teams started with Aternity today

You can explore how Aternity enables you to address these key end user services metrics by requesting a demo of Riverbed Aternity. Download our software to understand how our approach to DEX helps you reduce costs, improve productivity, and deliver better customer satisfaction.

]]>
Ensuring Compliance for Better Business Resilience https://www.riverbed.com/blogs/ensuring-compliance-with-npm-for-business-resilience/ Thu, 25 May 2023 12:38:37 +0000 /?p=21041 In today’s hybrid environments, network performance management (NPM) is critical for any organization’s success. Networks are the backbone of modern businesses, enabling communication, collaboration, and information sharing. However, with the increasing complexity of networks and the rise in cyber threats, ensuring network performance can be a challenge.

Why compliance is a pillar of business resilience

Business resilience is the ability of an organization to withstand, adapt to, and recover from disruptions and challenges. One critical aspect of business resilience is compliance, which refers to adhering to the legal, regulatory, and organizational standards that apply to your business. Compliance plays a crucial role in network performance management and can help organizations fortify their networks.

What compliance looks like for your hybrid network depends on your industry. For example, highly regulated industries like government, medical, and financial services usually have more stringent compliance requirements. Careful adherence to security and operational standards, however, is a necessity to some degree in every hybrid network.

When your network fails to meet internal and external compliance requirements, you risk creating security gaps and incurring fines. A hybrid network actively managed to operational and security standards, however, is able to remain compliant even in instances of network disruption. This level of compliance allows organizations to effectively maintain resilience on older applications and services while introducing new technologies.

Compliance can help improve network performance by:

  1. Enhancing Security: Compliance regulations often require the implementation of security measures that can help protect against cyber threats and minimize the risk of data breaches. By implementing mandated security measures, organizations can improve network performance by reducing downtime caused by security incidents and ensuring the confidentiality and integrity of sensitive data.
  2. Reducing Network Downtime: Compliance regulations also require organizations to establish failsafe procedures to ensure business continuity in the event of a cyberattack or system failure. By implementing organization or governmental compliance measures, businesses can reduce network downtime, ensuring that critical business processes can continue even during an outage. This can help improve network performance by minimizing the impact of network disruptions on business operations.
  3. Streamlining Network Management: Compliance regulations often require the documentation of network configuration and management processes. By implementing standardized processes and procedures, organizations can streamline network management, making it easier to monitor and troubleshoot issues. This can help improve network performance by reducing the time required to identify and resolve network problems.

Compliance is not just a legal requirement but is also a strategic imperative for businesses looking to optimize their network performance. By prioritizing compliance and implementing the required security measures, policies and procedures, organizations can ensure their networks are performing at their best while also meeting regulatory requirements. This ensures a secure and reliable digital experience for employees and customers, safeguarding people, assets and overall brand equity.

Ensure operational governance and compliance

The Riverbed NPM portfolio helps network teams with oversight through orchestration and data management. Compliance, whether directed by organizational or governmental requirements, is a way to safeguard the network in addition to the business. With new operational governance features like automated orchestration, IT teams can stand up, take down and redeploy Riverbed NPM products to a known safe state seamlessly. Riverbed NPM also now accommodates governmental standards like the Federal Information Processing Standard (FIPS) and Section 508 to ensure uniform practice and accessibility.

Failure to comply with such regulations can result in major fines, loss in revenue and negative customer sentiment. So, whether you are addressing requirements for security, fiduciary, accessibility, or other standards, Riverbed NPM ensures business resilience by leading the industry with regulatory compliance requisites for the modern hybrid network.

For more information on business resilience and how the Riverbed Network Performance Management portfolio can help your organization, please visit this page.

]]>
How Do You Reduce Your IT Costs? https://www.riverbed.com/blogs/how-do-you-reduce-your-it-costs/ Tue, 23 May 2023 12:53:42 +0000 /?p=21177 IT is the backbone of every business. Without a strong and robust IT team that can maintain a high level of performance and reliability, business can suffer due to a lack of employee productivity, decreases in customer satisfaction, and overall poor performance.

However, even given its critical nature, the reality is IT is a huge expenditure for many businesses and there are often ways to reduce the IT budget without sacrificing digital experience, employee productivity, or customer satisfaction. The tricky part is determining where exactly those cuts can be made and where to start.

In this blog, we’ll provide three tips on reducing your IT costs: how to identify the right devices to upgrade, the importance of IT budgeting, and an IT cost reduction checklist.

Identify the right devices to upgrade

The first step to optimizing IT costs is to evaluate your existing infrastructure. This evaluation will help you determine which devices need to be upgraded or replaced.

Here are three tips to help you identify the right devices to upgrade:

  1. Extend the life of devices: While many businesses replace devices based on their age, you can save money by focusing on device performance. Older devices that are still performing well don't have to be replaced, which can result in significant device cost savings.
  2. Right-size employee devices: Ensure you are providing your employees with appropriately powered devices. When refreshing employee devices, evaluate their needs: an employee who primarily uses lightweight applications may not need a high-powered device, while an employee who spends the day in resource-intensive applications will need a device that supports that use case.
  3. Identify poorly performing devices: Like extending the life of older devices that are still performing well, some newer devices may not be performing as well as expected. By identifying these devices, you may be able to proactively fix performance issues to save on expenses.
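As a rough illustration of the performance-over-age idea in the tips above, a refresh decision could be sketched as a scoring pass over a device inventory. This is a hypothetical example, not a Riverbed feature; the field names and thresholds are assumptions:

```python
# Hypothetical device refresh triage: keep, fix, or replace based on
# measured performance rather than age alone. All data and thresholds
# are illustrative assumptions.
devices = [
    {"id": "LT-0141", "age_years": 5, "health_score": 88, "boot_seconds": 22},
    {"id": "LT-0312", "age_years": 2, "health_score": 41, "boot_seconds": 95},
    {"id": "LT-0527", "age_years": 4, "health_score": 63, "boot_seconds": 48},
]

def refresh_action(device):
    """Classify a device using performance signals, not just age."""
    if device["health_score"] >= 75 and device["boot_seconds"] <= 30:
        return "keep"     # still performing well, even if old
    if device["health_score"] >= 55:
        return "fix"      # underperforming but likely repairable
    return "replace"      # poor performance regardless of age

plan = {d["id"]: refresh_action(d) for d in devices}
print(plan)  # {'LT-0141': 'keep', 'LT-0312': 'replace', 'LT-0527': 'fix'}
```

Note how the five-year-old laptop is kept while the two-year-old one is flagged for replacement: the decision follows the measured signals, not the purchase date.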

Importance of IT budgeting

Once you have identified the devices that need to be upgraded, the next step is to develop an IT budget. IT budgeting is critical to managing IT costs effectively.

Here are a few key practices for effective IT budgeting:

  • Optimize software licenses: Evaluate the software you use and the licenses you have. It is possible that you are paying for licenses that aren’t being used, employees are using redundant software, or shadow IT applications are increasing your software costs.
  • Assess network infrastructure: Assess the network infrastructure and determine where bottlenecks are occurring. This assessment will help you identify areas where you can either upgrade or streamline network infrastructure to reduce bandwidth costs.
  • Evaluate cloud spend: Cloud costs can rise quickly as you move to cloud-native or hybrid cloud environments. It’s critical you closely examine and understand the bills coming from your cloud provider and take steps to minimize unnecessary cloud traffic.
  • Prioritize spending: Identify the areas of IT investment that will provide the most ROI and focus on those spending areas first. Measure the impact of planned and ongoing changes on things like the digital experience, application performance, device health, and network performance to ensure you are getting the most bang for your buck.
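The license-optimization point above can be sketched as a simple idle-seat audit. This is an illustrative example with fabricated data; the 90-day cutoff and field names are assumptions:

```python
# Hypothetical license audit: flag seats with no recent usage as
# savings candidates. Cutoff and data are illustrative only.
from datetime import date, timedelta

seats = [
    {"user": "alice", "app": "DesignSuite", "last_used": date(2023, 5, 1), "annual_cost": 600},
    {"user": "bob",   "app": "DesignSuite", "last_used": date(2022, 11, 3), "annual_cost": 600},
    {"user": "cara",  "app": "CRM Pro",     "last_used": date(2023, 1, 10), "annual_cost": 900},
]

def unused_seats(seats, today, idle_days=90):
    """Return seats whose last use predates the idle-day cutoff."""
    cutoff = today - timedelta(days=idle_days)
    return [s for s in seats if s["last_used"] < cutoff]

today = date(2023, 5, 15)
idle = unused_seats(seats, today)
savings = sum(s["annual_cost"] for s in idle)
print(f"Reclaimable seats: {len(idle)}, potential annual savings: ${savings}")
```

Running this against real usage telemetry (rather than the fabricated records above) would turn "we might be over-licensed" into a concrete dollar figure to act on.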

IT cost reduction checklist

To help you reduce IT costs, follow this quick checklist to get started:

  • Align IT with business goals: Ensure your IT investments align with business goals, enabling you to optimize IT costs while driving business growth.
  • Determine a device refresh strategy: Identify the devices that need to be replaced, which ones can be fixed, and which ones can continue being used.
  • Identify savings opportunities: Look for ways to save on existing spend in areas like software licenses, cloud usage, and network bandwidth.
  • Automate IT processes: Automate IT processes to reduce manual labor and increase efficiency.

In conclusion, reducing IT costs is critical for every business, and the key is to optimize IT infrastructure while minimizing unnecessary expenses.

 

]]>
Enhanced Network Security for Better Business Resilience https://www.riverbed.com/blogs/enhanced-network-security-for-business-resilience/ Wed, 17 May 2023 12:17:55 +0000 /?p=21043 Imagine you’re the CEO of a business that relies heavily on your company’s network to keep things running smoothly. Hybrid networks combine on-premises data centers and cloud environments, with users accessing applications from various devices and locations. All of these elements, and the data that passes through them, need to be protected. One day, you get a call from your IT department telling you that your network has been hacked. Panic sets in.

What do you do? Were you prepared for this? What is the financial or reputational impact to the business? Does your network have business resilience?

Why security is a pillar of business resilience

Business resilience is a critical factor in today’s fast-paced and dynamic business environment. Network performance management (NPM) plays a significant role in ensuring business resilience by managing network performance, compliance, and security. Security is one of the most crucial areas of focus for business resilience in the context of NPM. Improving your network’s security, making it more adaptable, can help it respond favorably to a rapidly evolving threat landscape. Not only will you weather potential attacks better, recovering faster and with less damage, but you may be able to avoid others altogether.

According to the Enterprise Strategy Group (ESG) 2023 Technology Spending Intentions Survey, 65% of IT professionals anticipate spending more on cybersecurity than any other area. Modern networks struggle to keep pace with an ever-changing threat landscape. As threats and threat actors evolve and grow more sophisticated, you need a resilient hybrid network that leverages data to help your team find and fix issues faster, remediate threats, and avoid risks.

Taking steps to mitigate security risks

As this is often a daunting task for the IT organization to figure out and manage, NPM gives NetOps and SecOps teams the data and functionality to mitigate security risks. When evaluating potential NPM offerings in the context of security, identify solutions that have the following characteristics:

  • NPM products that can be deployed, taken down, and restored to a safe state automatically in the event of a cyberattack, with no impact to the network.
  • Intelligent forensic analysis that automates threat identification and reduces future risk.
  • Full-fidelity data that captures every packet, flow, and device metric in your hybrid network without sampling, enabling proactive threat hunting.
  • Anomaly detection backed by AI/ML that automates data analysis, helping you find and fix security issues faster.

To build resilience into security in network performance management, businesses need to take a proactive and holistic approach. Here are some best practices:

  1. Develop a comprehensive security strategy: This should include clear objectives, metrics, and processes for monitoring and ensuring security.
  2. Invest in the right tools and technologies: Effective security requires the right tools, such as traditional threat prevention tools and methods as well as products that produce forensic telemetry that find threats that traditional security tools might miss. Businesses need to evaluate their needs and choose the tools that best fit their requirements.
  3. Monitor and analyze network traffic: By monitoring network traffic, businesses can identify potential security threats and take action before they cause damage.
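As a toy illustration of the third practice, traffic anomalies can be surfaced by flagging intervals that deviate sharply from a statistical baseline. Real NPM anomaly detection is far more sophisticated; this sketch only shows the underlying idea, using fabricated data:

```python
# Minimal sketch of traffic anomaly detection: flag intervals whose
# volume deviates more than 2 standard deviations from the baseline.
# Data is fabricated for illustration.
import statistics

# Bytes transferred per 5-minute interval for one host (illustrative).
traffic = [1200, 1150, 1300, 1250, 1180, 1220, 9800, 1210]

mean = statistics.mean(traffic)
stdev = statistics.stdev(traffic)

anomalies = [
    (i, v) for i, v in enumerate(traffic)
    if abs(v - mean) > 2 * stdev  # more than 2 standard deviations out
]
print(anomalies)  # the 9800-byte spike is the only interval flagged
```

Production tools replace this single-metric z-score with baselines learned per host, per time of day, and across many correlated metrics, but the goal is the same: surface the outlier before it causes damage.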

Engage intelligent security methods against cyber threats

Riverbed NPM products play a strategic role in the overall security of hybrid networks. NPM products need to be seamlessly integrated into an organization’s automated processes to remove the potential risk from manual administration.

With new features like automated orchestration, IT teams have the ability to restore Riverbed NPM products to a known safe state without manual intervention in the event of cyber-attacks or other potential internal or external network threats. In addition, the Riverbed NPM portfolio provides full fidelity data by capturing every packet, flow and device metric without sampling for forensic purposes. This helps identify potential risk exposures that traditional security tools might miss. Solid security competencies drive business resilience by reducing both the risk of negative business impacting events and the magnitude of when they occur.

For more information on business resilience and how the Riverbed Network Performance Management portfolio can help your organization, please visit this page.

]]>
How to Measure Digital Employee Experience (DEX) https://www.riverbed.com/blogs/how-to-measure-digital-employee-experience-dex/ Tue, 16 May 2023 12:31:00 +0000 /?p=21120 Digital employee experience has become increasingly important as digital technologies continue to play a larger role in the workplace. Employees today rely on digital tools to perform many of their job functions, and a positive digital employee experience is critical for maintaining employee engagement and productivity.

Companies can measure digital employee experience (DEX) through various methods, including:

  • Surveys: Companies can use surveys to gather feedback from employees about their digital experience, such as the usability of software, availability of technical support, and accessibility of training materials.
  • User analytics: User analytics can track how employees interact with digital tools and systems, such as login frequency, time spent on different pages, and click-through rates. This data can provide insights into areas where employees may be experiencing issues or frustrations.
  • Performance metrics: Performance metrics, such as productivity, error rates, and customer satisfaction, can be used to assess the impact of digital tools on employee performance.
  • Net Promoter Score (NPS): NPS measures the likelihood that an employee would recommend a company’s digital tools to a colleague. This metric can provide insights into the overall satisfaction of employees with the digital tools they are using.

By using these methods, companies can gain a better understanding of their employees’ digital experience and identify areas for improvement to enhance their overall digital employee experience.
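The NPS method above follows a standard formula: the percentage of promoters (scores 9-10) minus the percentage of detractors (scores 0-6). A minimal sketch, with fabricated survey responses:

```python
# Net Promoter Score from 0-10 survey responses, using the standard
# definition: % promoters (9-10) minus % detractors (0-6).
# The responses below are fabricated for illustration.
def nps(scores):
    promoters = sum(1 for s in scores if s >= 9)
    detractors = sum(1 for s in scores if s <= 6)
    return round(100 * (promoters - detractors) / len(scores))

responses = [10, 9, 8, 7, 9, 4, 10, 6, 9, 8]
print(nps(responses))  # 5 promoters, 2 detractors out of 10 -> NPS 30
```

Note that passive responses (7-8) count toward neither group, which is why a workforce of mostly-satisfied employees can still produce a modest NPS.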

What are the pros and cons of the various ways to measure DEX?

Of course, each of these methods of measuring digital employee experience has advantages and disadvantages. The following table summarizes them.

Surveys
  Advantages:
  • Provide in-depth feedback from employees and highlight specific pain points.
  • Easy to administer and can be distributed to a large number of employees.
  • Identify trends over time and track progress in improvement efforts.
  Disadvantages:
  • May not capture the full scope of the employee experience, as employees may not feel comfortable providing honest feedback.
  • Can be time-consuming to analyze, and responses may be subject to interpretation bias.
User Analytics
  Advantages:
  • Provide objective data, allowing companies to pinpoint specific areas for improvement.
  • Track changes over time and measure the impact of improvement initiatives.
  • Identify user patterns and trends that may not be apparent through other methods.
  Disadvantages:
  • Can be limited in scope and may not provide a complete picture of the employee experience.
  • May not capture subjective aspects, such as frustration or confusion.
Performance Metrics
  Advantages:
  • Provide a direct link between the digital employee experience and business outcomes.
  • Help companies prioritize improvements that will have the highest impact on overall performance.
  • Objective and easy to measure.
  Disadvantages:
  • May not capture the full range of factors that contribute to employee performance, such as training or workload.
  • May not provide insight into the specific aspects of the digital employee experience that are causing issues.
Net Promoter Score (NPS)
  Advantages:
  • A simple, easy-to-understand metric that can be used to track overall employee satisfaction with digital tools.
  • Identify areas where improvements are needed and measure progress over time.
  • Can be used as a benchmark to compare against industry standards.
  Disadvantages:
  • May not provide detailed insights into specific areas that need improvement.
  • May not capture the nuances of the employee experience or provide context for why employees may be dissatisfied.

Given the pros and cons of various methods of DEX measurement, organizations should look for a DEX solution that provides a combination of these approaches. The Aternity Digital Experience Management platform combines objective measures of actual user experience with subjective measures of employee sentiment to provide IT and HR teams with the best of both worlds. Watch this short video to see how Aternity combines qualitative and quantitative measures of DEX to provide the best overall view of employee experience:

Watch Video

Six business reasons why you should pay attention to DEX

Paying attention to digital employee experience can lead to a more engaged and productive workforce, reduced turnover, and improved business outcomes, making it a critical factor for organizational success in today’s digital age. The business drivers for addressing digital employee experience include the following.

  1. Increased Productivity: An employee’s experience with digital tools can have a significant impact on their productivity. When digital tools are easy to use and efficient, employees can work more effectively and efficiently, leading to increased productivity.
  2. Improved Employee Engagement: Digital tools that are designed with the employee experience in mind can help foster a sense of engagement and satisfaction among employees. When employees feel that their tools are working for them, they are more likely to feel invested in their work and committed to their organization.
  3. Enhanced Collaboration: Digital tools that facilitate collaboration can help employees work together more effectively, regardless of their physical location. This can lead to increased knowledge sharing, better decision-making, and more effective teamwork.
  4. Reduced Turnover: A positive digital employee experience can contribute to employee retention. When employees feel that they have the tools and support they need to do their jobs effectively, they are more likely to stay with their organization.
  5. Cost Savings: A positive digital employee experience can also result in cost savings for organizations. When digital tools are easy to use and efficient, employees require less support and training, reducing the time and resources required to maintain them.
  6. Competitive Advantage: In today’s digital economy, organizations that prioritize digital employee experience are better positioned to attract and retain top talent. By providing employees with the tools and support they need to work effectively, organizations can differentiate themselves from competitors and create a more attractive work environment.

How does digital employee experience work?

Digital employee experience is a subset of employee experience, which includes all aspects of an employee’s experience within an organization. Employee experience includes factors such as work culture, management style, physical workspace, training, opportunities for advancement, and benefits. Digital employee experience focuses specifically on the experience an employee has with digital tools and technologies used in the workplace. This includes ease of use, performance, accessibility, integration with other tools, and training and support for digital tools.

Overall, these factors can have a significant impact on the employee’s digital experience. Organizations that pay attention to these factors and prioritize the employee experience are better positioned to create a more engaged and productive workforce.

What is the difference between DEX and DEM?

Digital employee experience (DEX) and digital experience management (DEM) are related but distinct concepts.

Digital employee experience (DEX) specifically focuses on the experience an employee has with digital tools and technologies used within the workplace. This includes factors such as ease of use, performance, accessibility, integration with other tools, and training and support for digital tools.

Digital experience management (DEM), on the other hand, is a broader concept that encompasses all aspects of the customer or employee’s digital experience with a company. This includes not only digital tools and technologies but also digital marketing, e-commerce, and customer service.

In other words, while DEX focuses specifically on the employee’s experience with digital tools and technologies in the workplace, digital experience management encompasses a wider range of digital experiences that customers or employees may have with a company.

You can access the complimentary Gartner Research document “How to Successfully Deploy a DEX Tool” for further information on implementation and measuring ROI. According to Gartner, many organizations are adopting new DEX tools or evolving existing deployments beyond tactical use cases but miss key implementation steps that ensure ongoing ROI. IT leaders can use this research to successfully deploy a new DEX tool or expand the use of existing ones.


Digital Experience Management vs Digital Experience Monitoring

To make things extra confusing, there’s more than one DEM. In addition to digital experience management, there’s digital experience monitoring. The latter category of software involves monitoring the performance and availability of digital systems and applications in order to ensure a high-quality digital experience, for employees and customers. And, in the case of Internet of Things (IoT), even non-human digital agents, like instrumented bridges or gas turbines. Digital experience monitoring focuses specifically on identifying issues and incidents that may affect the digital experience and providing real-time insights and alerts to help resolve them quickly.

What are important digital employee experience tools?

With low unemployment rates, companies are focused on attracting and retaining top talent. So multiple vendors claim capabilities for DEX and DEM. Every company that provides solutions for some type of IT monitoring discusses their products in the context of improving digital experience. When evaluating digital employee experience monitoring products, consider the following capabilities to ensure that the platform meets the needs of your organization.

  • Real-time monitoring: Look for a DEX product that provides real-time monitoring of application performance, infrastructure, and user experience. This will allow you to quickly identify and address issues as they occur.
  • User-centric analytics: When it comes to digital experience, it’s not about device or application performance metrics. It’s about what humans experience when they use those devices or applications. Look for a DEX product that provides user-centric analytics, such as user journey mapping. This will allow you to understand the end-to-end digital experience of your users and identify opportunities for improvement.
  • Multi-channel monitoring: Employees use all types of applications throughout their day. Not just web and mobile, but thick-client applications and those that run on virtual environments too. Look for a product that supports monitoring across multiple channels, including web, mobile, and other digital channels. This will allow you to gain a comprehensive view of the digital experience across all touchpoints.
  • Root cause analysis: Monitoring digital experience is only the first step. IT teams must focus on identifying and resolving issues. Look for a DEX product that provides insights into root cause analysis, so you can quickly identify the underlying causes of performance issues and take corrective action.
  • Automated remediation: Most leading DEX products provide automated remediation to address the most common user experience issues. This enables IT to resolve issues quickly and automatically, improving service and reducing costs.
  • Integration with other tools: IT shops rely on tools from multiple vendors in their operations centers. Look for a DEX product that integrates with other tools in your IT ecosystem, like ServiceNow. This will allow you to gain a comprehensive view of IT performance and ensure that issues are quickly resolved within the workflows you’ve implemented for your teams.
  • Proactive identification: Look for a DEM product that provides proactive analytics capabilities, such as machine learning and artificial intelligence. This will allow you to proactively identify potential performance issues and rapidly resolve the issue before users complain. Riverbed IQ is designed to do just that.

Get started with digital employee experience today

You can explore digital employee experience management now by requesting a demo of Aternity. Download our software to understand how our approach to DEX helps you reduce costs, improve productivity, and deliver better customer satisfaction.

]]>
Optimize the Digital Employee Experience with Aternity Sentiment https://www.riverbed.com/blogs/digital-employee-experience-with-aternity-sentiment/ Mon, 15 May 2023 12:55:00 +0000 /?p=21180 In today’s fast-paced business world, delivering a superior digital experience is essential for driving employee productivity, satisfaction, and customer experience. IT departments are constantly seeking ways to improve digital experiences, but the challenge lies in understanding users’ perceptions of device and application performance.

A recent Forrester Report states, “while many organizations focus on tools to measure and enhance DEX, the path to success starts long before the tools discussions. Your strategy must embrace a flexible philosophy for happier employees. Then you can explore a variety of technologies to fulfill that vision.” To truly understand the complete digital experience, IT teams need to correlate qualitative employee feedback with full-fidelity quantitative performance metrics.

This is where Aternity Sentiment comes in.

Introducing Aternity Sentiment

Aternity Sentiment empowers IT teams to identify user experience issues, take targeted prescriptive actions, and enhance employee productivity, satisfaction, service quality, and overall business performance. By tightly correlating quantitative and qualitative measures, Aternity Sentiment offers the most comprehensive view of the digital employee experience, setting a new standard for DEX.

Watch the video to learn how Aternity Sentiment empowers total experience management from employees to customers:

Explore benefits of Aternity Sentiment

Empower employees and drive productivity to improve business performance

Aternity Sentiment significantly enhances employee engagement and productivity, leading to improved business performance. By capturing real-time feedback through tailored surveys, Sentiment complements existing Aternity application and device performance data, offering a comprehensive understanding of employee satisfaction. This approach allows IT teams to pinpoint areas that require improvement and implement targeted measures to optimize the digital experience. The use of flexible survey components ensures an accurate assessment of user satisfaction across various devices and locations. Ultimately, this empowers employees and drives productivity, resulting in better overall business performance.

Accelerate digital transformation adoption with targeted employee engagement

Digital transformation is a complex process that requires broad adoption of new technologies and processes across organizational boundaries. Employee acceptance is crucial for successful technology and process changes. Aternity Sentiment facilitates this acceptance by providing workflow integration of qualitative telemetry and analysis in the context of actual user data. Customized branding and precise timing of survey deployment to targeted user groups foster user trust and raise response rates. By engaging employees and addressing their concerns, Aternity Sentiment accelerates the adoption of digital transformation initiatives, ensuring your organization remains competitive and agile.

Deliver total experience management for a comprehensive view of employee and customer experience

 Aternity Sentiment enhances Aternity’s total experience management capabilities, providing a comprehensive view of both employee and customer experiences. Aternity’s unique click-to-render insights, end-user experience data, and user journey analytics offer valuable customer insights. By integrating Sentiment’s qualitative feedback with these capabilities, Aternity enables IT teams to rapidly isolate the cause of delays, uncover hidden issues, and optimize the overall digital experience. This holistic approach ensures a seamless and enjoyable experience for employees and customers alike, leading to higher satisfaction and loyalty.

Manage IT more proactively and efficiently with real-time feedback collection

 Aternity Sentiment extends Aternity’s proactive incident management by offering an early warning system through periodic, real-time feedback collection. As a result, IT Operation teams can quickly identify problems before they become systemic, widespread issues. This proactive approach reduces downtime, prevents loss of productivity, and helps maintain a positive user experience. In addition, Sentiment’s trending analysis of qualitative feedback helps identify patterns in user behavior, uncover recurring or common issues, and track service quality improvement efforts. This empowers IT teams to make data-driven decisions and manage resources more efficiently.

Improve IT service quality by implementing experience-level agreements (XLAs)

Aternity Sentiment supports organizations implementing XLA metrics, which focus on employee experience and understanding how IT influences productivity. Unlike traditional SLAs that measure transactional metrics by department, XLAs emphasize the importance of a positive employee experience. With Sentiment’s out-of-the-box and customizable surveys, organizations can analyze survey responses by various attributes and correlate employee satisfaction with device and application performance. This enables IT and LOB leaders to measure the productivity impact of technology changes, determine why a user (or group) may prefer one application over another, and analyze trends in the context of business processes. As a result, leaders can make informed decisions to improve policies, prioritize investments, and identify skills gaps, ultimately enhancing IT service quality and driving better business outcomes.

Sentiment is a game-changer for DEX, providing organizations with the ability to correlate qualitative employee feedback with quantitative performance metrics. This innovative approach empowers IT teams to deliver better digital experiences, drive productivity, and improve overall business performance.

Don’t let your organization fall behind—embrace the future of DEX with Aternity DEM. Learn more by visiting riverbed.com/DEX.

]]>
Riverbed Changes the Incident Response Paradigm with Intelligent Ticketing https://www.riverbed.com/blogs/intelligent-ticketing-with-alluvio-unified-observability/ Mon, 08 May 2023 12:24:00 +0000 /?p=21104 What if your solution could quickly identify and isolate the root cause of problems and provide intelligent recommendations for remediation before a ServiceNow ticket has been generated? What if the ServiceNow ticket was automatically assigned the right severity and routed to the right level based on the relevant context and insights? With Riverbed’s full fidelity insights, complex ticketing workflows become razor sharp, highly automated processes.

In today’s market, IT Operations teams need fast, context-driven insights to optimize business performance. The increasing complexity of managing incidents in multi-cloud environments, Service Desk and Network Operations teams has resulted in overwhelming volumes of data, alerts and tickets. However, siloed domain-specific monitoring tools fail to provide context or actionable insights. Additionally, limitations in automation scope, diagnostic information gathering and time-consuming steps for ticket documentation negatively affect first level resolution rates and costs, making it hard to effectively prioritize the flood of tickets.

Riverbed’s Unified Observability portfolio solves these problems by delivering full-fidelity telemetry and actionable insights for an organization’s entire technology stack, from applications and infrastructure to end-user experience. With its integration with ServiceNow, the market leader for ITSM platforms, Riverbed provides deep ServiceNow incident context to Service Desk agents and Network Operations teams. Riverbed’s triage, diagnostic and remediation automations streamline ServiceNow ticket creation and escalation.

Riverbed and ServiceNow cross-portfolio integration

Riverbed’s integration with ServiceNow aligns with ServiceNow’s vision of a single, unifying platform for companies advancing digital transformation programs. The combined solution delivers targeted incident response context and automation across Digital Experience Management (DEM), Application Performance Management (APM), and Infrastructure Management (IM) operational data domains. Riverbed offers direct integration within individual Riverbed portfolio products when event-driven ticketing makes sense, and delivers proactive ticketing with built-in intelligence across complex incidents.

Provide targeted, smarter incident response for IT operations teams

With its ability to easily integrate with third party observability tools, Riverbed IQ reduces the noise and eliminates the source of duplicated and false-positive ticketing, significantly reducing the volume of tickets created in ServiceNow. It replicates advanced investigative processes by correlating operational data across public cloud, private cloud, and data center infrastructure layers, looking for anomalous behaviors indicative of an emerging incident. When Riverbed IQ detects an anomaly, it automatically performs an investigation. If the behavior is identified as important based on anomaly thresholds, it creates a ServiceNow ticket with the right severity and assigns it to the right team, providing supporting incident context, cutting through the noise caused by event-based ticketing.
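As a hypothetical sketch of what severity assignment and routing might look like in such a workflow (this is not Riverbed IQ's or ServiceNow's actual schema; all field names, scores, and thresholds are assumptions):

```python
# Hypothetical intelligent-ticketing sketch: map an anomaly's score and
# affected domain to a severity and assignment team before a ticket is
# filed. Illustrative only; not an actual Riverbed or ServiceNow schema.
ROUTING = {
    "network": "Network Operations",
    "application": "App Support",
    "endpoint": "Service Desk L1",
}

def build_ticket(anomaly):
    """Derive ticket fields from an anomaly finding instead of a raw event."""
    if anomaly["score"] >= 0.9:
        severity = 1  # critical
    elif anomaly["score"] >= 0.7:
        severity = 2  # high
    else:
        severity = 3  # moderate
    return {
        "short_description": anomaly["summary"],
        "severity": severity,
        "assignment_group": ROUTING.get(anomaly["domain"], "Service Desk L1"),
        "context": anomaly["evidence"],  # supporting insight attached up front
    }

ticket = build_ticket({
    "score": 0.92,
    "domain": "network",
    "summary": "Latency spike on WAN link",
    "evidence": ["packet loss 4%", "retransmits up 6x"],
})
print(ticket["severity"], ticket["assignment_group"])  # 1 Network Operations
```

The point of the sketch is the ordering: severity, routing, and supporting context are decided from correlated evidence before the ticket exists, rather than by a human triaging a raw event afterward.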

Empower L1 service desk agents to resolve issues faster

Riverbed Aternity provides Service Desk teams with extensive insights and tools to troubleshoot issues faster, make accurate decisions and resolve incidents without escalation. Aternity monitors end user devices, correlates device and application performance with user behavior, and identifies potential issues with the end user digital experience. When a degraded end-user experience issue is detected, Aternity automatically creates a ServiceNow incident and embeds employee-specific insights within the ServiceNow ITSM UI. With just one click, Service Desk agents can remotely perform investigative actions on any device to accelerate their troubleshooting.

Deliver higher-order incident response for network operations

Together with Riverbed IQ, the Riverbed Network Performance Management (NPM) suite transforms the reactive stance of NOCs, which traditionally rely on manual correlation of event data, by automatically correlating full-fidelity operational data, not just events, and surfacing actionable situations as ServiceNow tickets. Riverbed IQ zeroes in on the root cause of a problem and provides the specific context immediately upon ServiceNow ticket creation. Riverbed offers end-to-end visibility and embeds actionable insights directly into the ServiceNow ticket, reducing escalations.

Riverbed’s Unified Observability portfolio, integrated with ServiceNow, empowers IT Operations teams to proactively resolve issues and optimize business performance. By delivering targeted incident response context and automation across operational data domains, Riverbed reduces noise and provides deep insights, enabling IT teams to resolve issues faster and reduce costs.

]]>
What Are Key Components of Digital Employee Experience? https://www.riverbed.com/blogs/key-components-of-digital-employee-experience-dex/ Fri, 28 Apr 2023 12:13:00 +0000 /?p=21075 The Digital Employee Experience (DEX) has become increasingly vital in today’s fast-evolving work landscape, particularly as organizations embrace remote and hybrid work environments. DEX encompasses every aspect of an employee’s interactions with digital tools, technologies, and resources that enable them to accomplish their tasks. Understanding and optimizing that employee experience and interaction with technology is essential for driving employee productivity, engagement, and satisfaction, ultimately leading to business success.

Key components of DEX

A comprehensive approach to Digital Employee Experience involves five critical components:

  1. Application performance and usability: Ensuring applications are fast, reliable, and user-friendly to support employees in their day-to-day tasks.
  2. Device performance and reliability: Providing employees with devices that are high-performing, dependable, and tailored to their specific needs.
  3. Connectivity and network performance: Facilitating fast and stable network connections that allow employees to work efficiently and collaborate seamlessly.
  4. Workspace environment and collaboration tools: Creating a digital environment that promotes effective communication and collaboration among team members.
  5. Security and data protection: Implementing robust security measures to safeguard sensitive company and employee information.

The importance of DEM solutions

To effectively manage and enhance DEX, IT leaders need a Digital Experience Management (DEM) platform in their technology toolbox. This is why organizations looking to improve their employee experience partner with Riverbed to implement Riverbed Aternity DEM across their enterprise. Aternity provides visibility into the actual user experience of applications, devices, and networks, enabling IT teams to proactively identify and address issues impacting employee experience.

Overall, DEM solutions like Riverbed Aternity play a crucial role in improving DEX by:

  • Gaining real user insights into employee experiences with applications and devices
  • Proactively addressing performance issues to maintain a seamless, productive work environment
  • Optimizing IT infrastructure and resources to support employee productivity
  • Evaluating the impact of IT initiatives on employee experience and business results

Organizations looking to improve their DEX should adopt these four best practices:

  1. Establish a baseline: Measure the current state of application performance, device health, and network connectivity to create a foundation for understanding and improving employee experience.
  2. Identify and address bottlenecks: Use data from DEM solutions like Aternity and direct employee feedback to proactively resolve performance issues and maintain a seamless, productive work environment for employees.
  3. Prioritize user-centric initiatives: Focus on improving employee experiences with digital tools and resources, ensuring your organization’s technology investments yield maximum returns in terms of employee satisfaction and productivity.
  4. Measure and monitor: Regularly measure and monitor DEX metrics to track progress and ensure continuous improvement. Encourage employee feedback and promote a culture of open communication to identify areas for improvement and drive positive change within the organization.

Leveraging Riverbed Aternity for enhanced DEX

Being in Product Marketing at Riverbed, I’ve seen firsthand how our solutions have helped organizations measure and optimize employee experiences. Riverbed Aternity DEM offers valuable insights into employee interactions with applications and devices. It measures the employee’s actual experience, enabling IT teams to proactively identify and resolve performance issues, optimize IT infrastructure and resources, and measure the impact of IT initiatives on employee experience and business outcomes.

By leveraging Aternity’s DEM capabilities, organizations can:

  • Better understand the end-user perspective and identify opportunities for improvement
  • Foster a culture of continuous improvement focused on enhancing employee experiences
  • Streamline IT decision-making based on accurate and actionable insights
  • Enhance collaboration and communication across teams and departments

Watch Video

As remote and hybrid work environments continue to become the norm, organizations must prioritize digital employee experience. By implementing a robust DEM solution like Riverbed Aternity and following best practices, organizations can unlock the full potential of their workforce, driving overall business success.

Emphasizing the importance of DEX in decision-making processes and technology investments will lead to a more engaged, productive, and satisfied workforce, which in turn positively impacts customer experiences and business outcomes. By focusing on continuous improvement and fostering a culture of open communication, organizations can stay ahead of the curve and thrive in today’s rapidly changing digital landscape.

Forrester recently published a best practices report, Make Digital Employee Experience the Centerpiece of Your Digital Workplace Strategy, where they emphasize how optimizing the digital employee experience (DEX) has become a critical factor for today’s diverse, hybrid workforce, and how improving employee experiences translates to better business outcomes. Forrester states that “while many organizations focus on tools to measure and enhance DEX, the path to success starts long before the tools discussions. Your strategy must embrace a flexible philosophy for happier employees. Then you can explore a variety of technologies to fulfill that vision.”

A comprehensive DEX solution like Riverbed Aternity is crucial for improving employee experiences and driving success in hybrid and remote work settings. You can download a complimentary copy of the Forrester Report here.

]]>
What is End User Experience Management (EUEM)? https://www.riverbed.com/blogs/what-is-end-user-experience-management-euem/ Wed, 26 Apr 2023 05:31:00 +0000 /?p=21054 Organizations use End User Experience Management (EUEM) to ensure that their technology systems are functioning effectively when it comes to delivering excellent digital experiences to end users. The goal is to ensure users are able to access the resources they need to perform their jobs (in the case of employees) or to interact with a company (in the case of consumers).

Get your complimentary copy of the Gartner Market Guide for Digital Experience Management by clicking on the image above.

End User Experience Management is an important consideration for businesses of all sizes, as it can have a significant impact on the productivity of the workforce, the satisfaction of customers, and ultimately the success of the organization. At its core, EUEM is focused on providing a positive experience for end users, whether they are employees, customers, or partners. This can involve a range of capabilities, including monitoring, analytics, and implementing strategies for improving the performance of systems, applications, and devices.

As clear as this sounds, vendors and thought leaders in the market make it confusing by using a variety of related terms to describe this goal. Digital Experience Monitoring, Digital Experience Management, and Digital Employee Experience Management are all different names for similar categories of software. Review the Gartner Market Guide for Digital Experience Monitoring for a good overview of representative vendors.

The role of monitoring in EUEM

Monitoring is a key foundational element of End User Experience Management. This involves tracking the performance of various components of the technology environment that affect user experience, including devices, servers, applications, and network resources. By collecting data on these elements, and correlating them together, businesses can gain insight into the root causes of performance issues negatively affecting end user experience and identify opportunities for improvement.

For example, if users are experiencing slow load times when accessing a particular application, IT can use end user experience monitoring tools to track the performance of the application and identify any bottlenecks or other issues that may be causing the problem. Products like our Riverbed Aternity Digital Experience Management Platform enable IT to isolate the source of slowness to the employee device, the network, or the back-end application service. IT can then further investigate the root cause and take appropriate action to improve it. It’s important to note that monitoring metrics that indicate the performance of devices, systems, and applications is necessary, but not sufficient for effective end user experience management. Device performance management is not the same as end user experience management. It’s just one of the factors.

Riverbed Aternity monitors actual employee experience in the context of a business process, and it breaks down overall response time into its component parts. In this case, the “search filing” activity in Thomson Reuters takes almost 12 seconds, and back-end time is the major contributor to delay.
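The decomposition shown in the dashboard above can be illustrated with a tiny sketch. This is not Aternity’s API: the function and the numbers are hypothetical, showing only how a total activity time splits into client, network, and back-end shares.

```python
# Minimal illustration of the idea behind the breakdown above: total
# activity response time decomposed into client, network, and back-end
# components. Field names and figures are hypothetical, not Aternity's.

def breakdown(client_ms: float, network_ms: float, backend_ms: float) -> dict:
    total = client_ms + network_ms + backend_ms
    return {
        "total_ms": total,
        # share of delay attributable to each tier
        "client_pct": round(100 * client_ms / total, 1),
        "network_pct": round(100 * network_ms / total, 1),
        "backend_pct": round(100 * backend_ms / total, 1),
    }

# An activity taking ~12 s, dominated by back-end time, as in the
# "search filing" example above:
print(breakdown(client_ms=800, network_ms=1200, backend_ms=10000))
```

Attributing each tier’s share is what lets IT route the problem correctly: a backend-heavy split points at the application service, not the employee’s device.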

How analytics drives End User Experience Management

Another important element of EUEM is analytics. By collecting and analyzing data on user activity and system performance, businesses can gain insight into how their technology systems are being used and identify opportunities for improvement. This can involve analyzing data on user behavior, such as how often they access particular applications or the application response time they experience when performing certain tasks within a business-critical application.

Most EUEM products enable IT to proactively identify and address issues before they become major problems. By monitoring system performance and analyzing user behavior, businesses can identify potential issues early on and take steps to prevent them from causing significant disruptions. Products like Riverbed Aternity contain automated remediation capabilities to address the most commonly expected end user issues. With automated remediation, IT can often remedy an end user experience issue even before employees notice. Watch this short video to see automated remediation in action:

What is an example of end user experience?

The performance of an application or website is a common example of end user experience that everyone is familiar with. End users expect applications and websites to load quickly and be responsive. If an application takes a long time to load or is slow to respond to user inputs, user experience suffers. This can have a major impact on the business. For example, data from Hobo shows the following:

  • The ideal website load time for mobile sites is 1-2 seconds.
  • 53% of mobile site visits are abandoned if pages take longer than 3 seconds to load.
  • A 2-second delay in load time resulted in abandonment rates of up to 87%.

Other technical factors that can impact the end user experience include network connectivity, server availability, and the quality of the user interface. For example, if a user is accessing an application over a slow or unreliable network connection, this can lead to poor performance and frustration. Similarly, if a server is experiencing high levels of traffic, this can lead to slow load times and other performance issues.

The challenge for IT is that with so many employees working from home, factors such as Wi-Fi signal strength and ISP bandwidth and performance also affect end-user experience. But those factors are outside the direct control of IT. IT therefore requires a monitoring system that ingests telemetry from all across the IT environment, then analyzes it to identify issues.

Why is improving end user experience important?

Using End User Experience Management to provide a seamless and responsive user experience enables businesses to improve their workforce productivity and customer satisfaction. Benefits include the following:

For employees:

  • Increased productivity: If employees have access to technology systems that are fast, reliable, and easy to use, they can perform their tasks more efficiently, which improves productivity.
  • Reduced frustration and stress: If employees are able to use technology systems without experiencing performance issues, they are likely to feel less frustrated and stressed, which can improve morale and job satisfaction.
  • Improved job performance and retention: Employees who are satisfied with their technology systems are more likely to stay with their current employer.

For consumers:

  • Enhanced satisfaction and loyalty: If consumers have a positive experience when using a company’s technology systems, they are more likely to be satisfied with the company’s products or services and become loyal customers.
  • Increased sales and revenue: Customers who have a positive experience are more likely to make repeat purchases and recommend the company to others, which can lead to increased sales and revenue.
  • Improved brand reputation: Companies that prioritize end user experience and provide a positive experience for their customers are likely to develop a strong reputation for quality and customer service, which can help attract new customers and retain existing ones.

How has hybrid work changed the importance of end user experience management?

The prevalence of hybrid or remote work has increased the importance of end user experience management for several reasons:

Increased reliance on technology

With more employees working remotely, businesses are relying on technology systems to facilitate communication, collaboration, and productivity. With this, end user experience has become even more critical, as employees need technology systems that are fast, reliable, and easy to use in order to perform their tasks effectively.

Greater complexity

Hybrid or remote work environments can be more complex than traditional office environments, with employees accessing systems and applications from multiple locations and devices. This added complexity can make it more difficult to manage and optimize the end user experience.

Heightened security concerns

Remote work also brings with it heightened security concerns, as employees may be accessing sensitive company data from unsecured networks or devices. Ensuring a positive end user experience while maintaining robust security measures requires businesses to find a balance between security and ease of use.

Increased competition for talent

With the rise of remote work, businesses are no longer limited to hiring employees from their local area. This means that businesses are competing with a wider range of companies for top talent, and end user experience can be a key factor in attracting and retaining employees in a low unemployment market.

Here’s an overview of how Riverbed helps address these hybrid work challenges.

Five steps to improve end user experience

Improving end user experience can be challenging when IT budgets are tight. However, there are several practical ways that businesses can improve end user experience while watching expenses.

  1. Conduct an end user experience assessment: Assessing current systems and processes can help identify areas of inefficiency or frustration for end users and can provide insights into how to improve the overall experience. Companies use periodic surveys to gather employee experience data, but they can also do the same with their EUEM tools.
  2. Prioritize user feedback: User feedback is an invaluable tool for improving end user experience. By prioritizing user feedback and making changes based on that feedback, businesses can demonstrate that they value their employees’ and customers’ feedback.
  3. Optimize existing systems: Often, businesses have existing systems and processes that can be optimized to improve end user experience. This might include removing unnecessary steps from a process, streamlining workflows, or optimizing the performance of existing technology systems.
  4. Implement self-service tools: Self-service tools, such as knowledge bases or chatbots, can help reduce frustration for end users by providing them with quick and easy access to information or assistance. These tools can be relatively inexpensive to implement and can help improve end user experience by reducing wait times and increasing accessibility.
  5. Provide training and support: Providing training and support to end users can also help improve the overall experience. This might include offering training sessions on new systems or processes or providing dedicated support personnel to help troubleshoot technical issues.

Overall, improving end user experience doesn’t necessarily require a large investment of money. By prioritizing user feedback, optimizing existing systems, implementing self-service tools, and providing training and support, businesses can make meaningful improvements to the end user experience even in restricted budget environments.

Take the first step to better end user experience now

You can explore end user experience management now by requesting a demo of Riverbed Aternity. Download our software to understand how our approach to end user experience management helps you reduce costs, improve productivity, and deliver better customer satisfaction.

]]>
Effective Network Performance for Better Business Resilience https://www.riverbed.com/blogs/effective-network-performance-for-better-business-resilience/ Mon, 24 Apr 2023 12:10:33 +0000 /?p=21033 Whether you are a small business or a major enterprise, network performance can make or break your organization. Now that networks are stretched far beyond the data center, maintaining a consistent level of performance in branches, campuses, or even the cloud is massively challenging. To add to that pile, with users connecting from various locations, their expectation is that the network is always on and available.

With network performance and user experience as mission critical initiatives, IT teams are constantly under the microscope. When NetOps teams lack visibility into their applications, servers, and cloud-native environments, they’re unable to correctly troubleshoot network issues like unchecked security threats, application slowdowns and other performance issues. For hybrid networks, a lack of visibility often stems from insight latency. The speed and clarity with which insights are delivered can be the difference between prompt action and a large outage.

Why performance is a pillar of business resilience

Little do people know what it takes to keep these modern hybrid networks going! Consistent performance across the network is critical, and that makes performance a key pillar of business resilience: the ability of a company to adapt and recover quickly from unexpected disruptions.

In today’s digital world, network performance management (NPM) plays a crucial role in ensuring business resilience. By effectively managing network performance, companies can build a more resilient network infrastructure that can withstand unexpected disruptions and provide a consistent user experience.

Effective monitoring, testing, and optimization of the network can help identify and resolve performance issues, such as bottlenecks, latency, or packet loss. Ensuring that the network is performing optimally can help avoid disruptions and provide a consistent user experience.

Elevate your network’s visibility and performance

The Riverbed NPM portfolio delivers increased business resilience, enabling and accelerating operational transformation from legacy to hybrid and multi-cloud networks. Our solution helps IT teams adapt to disruptions while maintaining continuous operations and safeguarding people, assets, and overall brand equity.

Unlike other NPM solutions, Riverbed NPM delivers granular visibility across network domains with full-fidelity data extracted from packets, flows, and device metrics, giving insight across hybrid environments. With new performance enhancements like increased data capabilities, faster processing rates, and third-party vendor support, Riverbed NPM sees more telemetry than ever before, allowing real-time visibility across networks, servers, applications, and the cloud.

New Riverbed NPM performance enhancements help address growing network demands and mitigate compromising network events by delivering full fidelity insights at lightning-fast speed to NetOps and SecOps teams. Corporate mandates require an IT environment that is nimble to accommodate new business requirements, particularly now that networks are evolving beyond the data center. The shift to complex, multi-cloud networks is driving the need for greater scalability, accelerated insights, integration enhancements and increased performance.

For more information on business resilience and how the Riverbed Network Performance Management portfolio can help your organization, please visit this page.

]]>
Spinning Plates, Readiness and Business Resilience https://www.riverbed.com/blogs/business-resilience-network-performance-management/ Tue, 18 Apr 2023 12:13:49 +0000 /?p=20936 Keeping up with your hybrid network can be overwhelming. Nowadays, a mixture of on and off premise technology is the new normal. Users are accessing the network and applications from various locations. The network is stretched well beyond the data center and users are accessing applications from the cloud.

IT teams are constantly spinning plates. It’s only a matter of time before something breaks.

Keep the plates spinning with NPM

So, what can you do to keep the network spinning?  Strengthen your network to be more adaptable and responsive, delivering a better digital experience to your organization’s employees and users. This is business resilience.

A solid prevention or contingency plan for a possible damaging event truly tests the mettle of IT teams. Readiness for all possible negative scenarios seems an impossible task. Business resilience is crucial for companies to adapt and recover quickly from unexpected disruptions, such as natural disasters, cyberattacks, or economic downturns.

In today’s digital world, network performance management (NPM) plays a critical role in ensuring that objective. By effectively managing network performance, compliance, and security, companies can build a more resilient network infrastructure.

Three focus areas for business resilience

Network performance management is the process of monitoring and optimizing the performance of a company’s network infrastructure. Here are the key areas of focus for business resilience in the context of network performance management:

Performance

Performance is the cornerstone of network performance management. Effective monitoring, testing, and optimization of the network can help identify and resolve performance issues, such as bottlenecks, latency, or packet loss. Ensuring that the network is performing optimally can help avoid disruptions and provide a consistent user experience.

Compliance

In today’s regulatory environment, compliance is a critical concern for businesses. Compliance requirements vary depending on the industry and the region, but they all aim to protect the privacy and security of sensitive data. NPM can help ensure compliance with organizational or governmental regulations by providing visibility into network traffic, monitoring access controls, and delivering oversight and data management.

Security

With the increasing sophistication of cyberattacks, security is a top priority for businesses. A security breach can lead to data theft, financial losses, and reputational damage. NPM can help secure the network by monitoring for unusual traffic patterns, providing forensic analysis, and delivering granular network data for quick response and troubleshooting.

How to build resilience into NPM

To build resilience into network performance management, businesses need to take a proactive and holistic approach. Here are five best practices:

  1. Develop a comprehensive network performance management strategy: This should include clear objectives, metrics, and processes for monitoring and optimizing network performance, compliance, and security.
  2. Invest in the right tools and technologies: Effective network performance management requires the right tools, such as network monitoring hardware/software that focuses on packet capture, flow monitoring, and device metrics. Businesses need to evaluate their needs and choose the tools that best fit their requirements.
  3. Automate routine tasks: Automation can help reduce manual effort (and mistakes from human intervention) and improve efficiency. This includes automating network configuration and patch management, as well as implementing machine learning and artificial intelligence to detect and resolve issues.
  4. Build a culture of security: Security is everyone’s responsibility. Businesses need to educate employees on security best practices, establish clear security policies and procedures, and regularly test and audit their security measures.
  5. Continuously monitor and adapt: The network environment is constantly changing. Businesses need to continuously monitor network performance, compliance, and security, and adapt their strategies and tools to keep up with the evolving threat landscape.

So, to keep the plates spinning, NPM has to be a critical component of business resilience. By focusing on performance, compliance, and security, businesses can build a more resilient network infrastructure that can withstand unexpected disruptions and provide a secure and consistent user experience.

For more information on business resilience and how the Riverbed Network Performance Management portfolio can help your organization, please visit this page.

]]>
Real Time Customer Experience Visibility for Modern ATM Fleets https://www.riverbed.com/blogs/digital-experience-management-for-atm-fleets/ Fri, 14 Apr 2023 12:44:05 +0000 /?p=20317 Modern automated teller machines (ATMs) are packed with sophisticated hardware and software to deliver an up-to-date user experience for customers. All this sophistication naturally brings with it some challenges for the banks and IT teams. In this blog, let’s discuss what those pitfalls are and how the Riverbed Aternity Digital Experience Management platform can help with clearer customer experience observability.

Get a 30k-foot view of customer experience

If the telemetry coming back from the ATM fleet in the form of logs, metrics, or traces is still being analyzed in an ad-hoc manner, that is like taking a step back in time by at least a decade. Without holistic observability, understanding the root cause of a degraded user experience after an incident rests on assumptions and incoherent analytics or trends.

Compare that, for instance, with a screen like the one below, which summarizes customer experience across the entire ATM fleet out-of-the-box. To maximize the value of your screen real estate while providing dashboards with a high-level view of system-wide health, Riverbed Aternity diligently distills the analytics and metrics from the entire ATM fleet into a handful of carefully curated numbers, like the UXI Score, which incorporates health indicators such as Crashes, Hanging, and Wait Time.

Riverbed Aternity provides a 30k foot view of customer experience.

Riverbed Aternity also captures and provides the following categories of raw ATM observability metrics, which are further distilled heuristically into composite metrics like the UXI (User Experience Index):

  • ATM Performance and Stability
  • Application Performance and Stability
  • Customer Activity Performance
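To illustrate the shape of such a composite score, here is a toy calculation. The weights, scale, and formula below are assumptions invented purely for this sketch; Aternity’s actual UXI scoring is its own.

```python
# Hypothetical composite score in the spirit of a UXI: start from a
# perfect 100 and subtract weighted health-indicator penalties.
# Weights and scale are assumptions, not the product's formula.

def uxi_score(crashes: int, hangs: int, avg_wait_s: float) -> float:
    score = 100.0
    score -= 10.0 * crashes      # crashes hurt the most
    score -= 5.0 * hangs         # hangs are disruptive but recoverable
    score -= 2.0 * avg_wait_s    # every second of waiting chips away
    return max(score, 0.0)       # clamp at zero

# A healthy ATM vs. one with a crash, two hangs, and 6 s average waits:
print(uxi_score(crashes=0, hangs=0, avg_wait_s=1.5))  # 97.0
print(uxi_score(crashes=1, hangs=2, avg_wait_s=6.0))  # 68.0
```

Collapsing many indicators into one number is what makes a fleet-wide dashboard scannable: one score per ATM, with the raw metrics held in reserve for drill-down.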

Ultimately, even the most well-provisioned ATM hardware and connectivity can leave gaps in customer experience. That is because modern applications running in production are themselves at the mercy of layers of unpredictability. From bugs in the operating system to a manual configuration error in the networking components, the root causes of degraded user performance have many areas from which to sprout. It is therefore very important that measurement of the user experience be a first-class citizen in your observability tools and practices.

Below is an example of another real-time dashboard with a list of each activity performed by users overall. With each activity, we also showcase the breakdown of delays split by network component, server component and user component.

Riverbed Aternity Customer Experience Activities

To get you the highest fidelity customer experience data points, Riverbed Aternity breaks down each customer’s transactions by activities. The following diagram clearly shows the relation between a customer’s transactions and what Aternity classifies as activities:

Riverbed Aternity Activities From a User’s Transaction Session

Diagnosing problems in the customer journey

Observability with Riverbed Aternity includes monitoring, so diagnosing problems with a poorly performing ATM and looking through each and every transaction performed on it is straightforward. As you hover over each activity, a tool-tip pops up (outlined in green below) in which each transaction shows a split of response time by client time, network time, and backend time.

Riverbed Aternity User Transactions

Inspect all transactions for an ATM

We can deep-dive even further into an individual transaction for a timeline-based view of its performance. In the image below, we focus on a single activity, Insert Card. It is clear that this transaction typically completes in under 20 seconds, as indicated by the highlighted band close to the x-axis. However, there are various outliers, some of them showing exacerbated network time. One such data point is highlighted, and a detailed tool-tip shows its full details:

Riverbed Aternity User Transaction Analysis

Identifying the frequency and trend of these outliers can help narrow down the conditions that cause such degraded experience: a more data-driven approach to solving performance problems in your ATM fleet.

Analyze transaction performance

Which machines in the fleet are taking the longest to process this transaction? To answer such questions, we can build further customized dashboards, like the one below, which lists each activity that users perform, sorted by the average time taken on each ATM. This identifies consistently poorly performing ATMs where troubleshooting should be prioritized.
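A simplified version of this ranking logic, using hypothetical ATM IDs and response times, might look like:

```python
from collections import defaultdict

# Hypothetical (atm_id, response_seconds) samples for one activity
samples = [("ATM-01", 12), ("ATM-02", 31), ("ATM-01", 14),
           ("ATM-03", 9), ("ATM-02", 35)]

sums = defaultdict(lambda: [0.0, 0])  # running total and sample count per ATM
for atm, secs in samples:
    sums[atm][0] += secs
    sums[atm][1] += 1

# Rank ATMs from slowest to fastest by average response time
ranking = sorted(((total / n, atm) for atm, (total, n) in sums.items()),
                 reverse=True)
for avg, atm in ranking:
    print(f"{atm}: {avg:.1f}s")
```

The slowest machines float to the top of the list, which is exactly where a troubleshooting queue should start.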

Customized dashboard of Analyze Transaction Performance across ATM Fleet
Analyze Transaction Performance across ATM Fleet

Analyze SLA compliance

In Riverbed Aternity, we can also set SLA compliance thresholds for ATM transactions and identify the cities and geographies that are consistently non-compliant with SLAs. The violating transactions are clearly listed along with their average, minimum, and maximum values. We can then troubleshoot the poorly performing transactions in those geographies to understand how the customer experience can be improved, whether by improving network time, device performance, or back-end performance.
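Underneath such a dashboard, the per-geography SLA math is straightforward. Below is a hedged sketch with invented cities, response times, and a 20-second SLA threshold:

```python
# Hypothetical transactions: (city, response_seconds), checked against a 20s SLA
transactions = [("London", 12), ("London", 25), ("Paris", 9),
                ("Paris", 14), ("London", 30)]
SLA_SECONDS = 20

by_city = {}
for city, secs in transactions:
    by_city.setdefault(city, []).append(secs)

# Compute compliance rate plus the average/min/max shown on the dashboard
report = {}
for city, vals in by_city.items():
    report[city] = {
        "compliance": sum(v <= SLA_SECONDS for v in vals) / len(vals),
        "avg": sum(vals) / len(vals),
        "min": min(vals),
        "max": max(vals),
    }
    print(f"{city}: {report[city]['compliance']:.0%} compliant, "
          f"avg={report[city]['avg']:.1f}s "
          f"min={report[city]['min']}s max={report[city]['max']}s")
```

Cities whose compliance rate falls below a target (say, 95%) are the ones to triage first.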

Customer Experience SLA Compliance dashboard
Customer Experience SLA Compliance

Visit this page to learn why Riverbed Aternity is the right solution for your customer experience monitoring, whether at ATMs, on the web, or on the desktop.

]]>
Eliminate Application Performance Bottlenecks to Improve User Experience https://www.riverbed.com/blogs/application-performance-monitoring-improves-user-experiences/ Mon, 10 Apr 2023 21:34:00 +0000 /?p=20929 Last week our house flooded. It wasn’t a major flood but we did get some damage in a couple of rooms. A storm came out of nowhere and deluged the house with water for about 30 minutes. Turns out that our storm water drainage had bottlenecks that we weren’t aware of and just wasn’t up to the task.

The same thing can happen with application performance. How do you know that your network and applications are going to give you the reliability and performance your business needs? Just like a plumber could have helped us find the bottlenecks before disaster struck, Application Performance Monitoring (APM) can help you identify where your applications are going to be slowed down.

Get a complete view of Application Performance

APM helps organizations improve user experiences by tracking key software application performance metrics using monitoring software and telemetry data. Without Application Performance Monitoring, teams struggle to identify and resolve the numerous problems that can arise, causing customers to become frustrated and abandon the app altogether, impacting revenue and brand image.

Application monitoring is a great way to gain a full view into the user experience, application performance, and database availability. Businesses of all sizes use various applications daily for different processes and need to deploy tools throughout the application environment and supporting infrastructure to monitor real-time performance. To get a complete view of application performance, you need to monitor the following:

  • Digital/user experience monitoring encompasses both real-user and simulated experience, assessing performance in production and non-production environments. This type of monitoring collects performance metrics, including load time, response time, uptime, and downtime, by analyzing the user interface on the end-user device.
  • Application performance monitoring involves overseeing the complete application and infrastructure. This comprises the application framework, database, operating system, middleware, web application server and user interface, CPU usage, and disk capacity. Monitoring applications at this level can help identify code segments that could be causing performance issues and check the availability of software and hardware components.
  • Database availability monitoring helps assess the performance of SQL queries or procedures and the availability of the database.
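As a toy example of the synthetic side of digital experience monitoring described above, the sketch below times a simulated probe and checks it against a response-time target. The 50 ms workload and the 500 ms threshold are stand-ins for a real page load and its SLA:

```python
import time

def check(probe, threshold_ms):
    """Time a synthetic probe and report whether it met the response-time target."""
    start = time.perf_counter()
    probe()  # in a real monitor this would be an HTTP request or a login script
    elapsed_ms = (time.perf_counter() - start) * 1000
    return {"response_ms": elapsed_ms, "ok": elapsed_ms <= threshold_ms}

# Simulated 50ms workload standing in for a real page load
result = check(lambda: time.sleep(0.05), threshold_ms=500)
print(result["ok"])
```

Running such probes on a schedule, from many locations, is what turns a one-off timing into uptime and response-time trends.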

But is it any different for cloud-native applications? The rise of cloud-native applications poses several challenges despite their well-established benefits. Complex applications, composed of numerous microservices, generate huge amounts of data, which needs to be centrally managed and analyzed to proactively identify performance issues. The speed at which data is generated is also a challenge. These factors have made application performance management more challenging in cloud-native environments.

The many benefits of APM

A good Application Performance Monitoring solution offers many capabilities, such as:

  • Dynamically maintain real-time awareness of application and infrastructure components through automatic discovery and mapping.
  • Gain end-to-end visibility into the application’s transactional experience to comprehend its impact on business outcomes and user experience.
  • Monitor mobile and desktop applications on browsers to track user experience across different platforms.
  • View root-cause and impact analysis to identify performance issues and their impact on business outcomes for faster, more reliable incident resolution.
  • Integrate and automate service management tools and third-party sources to scale up or down with the infrastructure.
  • Analyze the impact on user experience and the resulting effect on business KPIs.
  • Monitor endpoint devices and application performance issues.
  • Monitor virtual desktop infrastructure to maximize employee productivity.

Clearly, businesses can benefit in many ways by gaining visibility and intelligence into application performance and its dependencies. Real-time monitoring helps detect performance issues before they affect real users, expanding the technical and business benefits list, which includes:

  • Increased application stability and uptime
  • A reduced number of performance incidents
  • Faster resolution of performance problems
  • Improved infrastructure utilization

Investing in a good Application Performance Monitoring solution ensures reliable intelligence and insights that enable teams to align quickly and to identify and isolate issues for faster problem resolution. Performance monitoring has rapidly expanded to encompass a broad range of technologies and use cases. Modern applications, built from microservices, are highly complex and run in containerized environments hosted across multiple cloud services, making end-to-end visibility even more essential.

And just like good storm water drainage, you’ll get the performance when you really need it.

]]>
What is Observability vs Monitoring? https://www.riverbed.com/blogs/what-is-observability-vs-monitoring/ Fri, 07 Apr 2023 12:17:00 +0000 /?p=20757 Observability and monitoring are related concepts in the field of IT operations, but they are not the same thing.

Monitoring refers to the practice of collecting and analyzing network, application, infrastructure, and user experience data to detect issues or anomalies. Monitoring typically involves setting up threshold alerts to notify operators or developers when something goes wrong. The goal of monitoring is to provide insight into availability, performance, and usage.
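A threshold alert, at its simplest, is just a comparison between a metric sample and a limit. The sketch below uses invented metric names, values, and thresholds to show the idea:

```python
# Minimal threshold-alert sketch; metric names and limits are illustrative
def evaluate(metric, value, threshold):
    """Return an alert string if the sample breaches its threshold, else None."""
    if value > threshold:
        return f"ALERT: {metric}={value} exceeds threshold {threshold}"
    return None

samples = [("cpu_percent", 93, 90), ("error_rate", 0.2, 1.0),
           ("latency_ms", 480, 500)]
alerts = [a for a in (evaluate(*s) for s in samples) if a]
print(alerts)
```

Real monitoring systems add deduplication, severity levels, and notification routing on top of this basic comparison.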

Observability takes monitoring a step further by emphasizing the importance of understanding the internal workings of a system, rather than just monitoring its inputs and outputs. Observability involves collecting and analyzing data at a deeper level and requires full-fidelity cross-domain data to gain a holistic view of system behavior. The aim of observability is to enable proactive detection and resolution of issues, rather than just reactive problem-solving.

Monitoring is a subset of observability
Monitoring is a subset of observability

In short, observability and monitoring are like two sides of the same coin. Monitoring provides a basic level of visibility into a system, while observability provides a more comprehensive view of performance behavior, emphasizing the need to understand the internal workings of a system to improve its overall performance and reliability.

What is observability?

Observability is a concept used in various fields, including engineering, computer science, and systems analysis, among others. It refers to the ability to understand and analyze the internal workings of a system or process based on the data and information that it produces. Essentially, it is the degree to which we can observe and measure what is happening within a system.

In computer science, observability is often associated with software and application development. It involves the ability to monitor and debug complex software systems by collecting and analyzing data from various sources, such as application logs, metrics, and traces. By doing so, developers can identify and resolve issues within the software and improve its overall quality and performance.

Unified observability expands this concept to all IT systems, including the network, infrastructure, applications, and user experience. It leverages full-fidelity data, analytics and correlation, and intelligent automation to gather contextual data that supports fast identification and resolution of performance and security issues.

Overall, observability is a crucial concept that enables us to gain insight into the internal workings of complex systems and processes, which can help us improve their performance, reliability, and overall effectiveness.

What is monitoring?

Performance monitoring is the process of tracking and analyzing the performance metrics of a system or process, such as a computer system, network, or application, to ensure that it meets the required performance levels or SLAs (service level agreements). It involves monitoring various metrics, such as response time, throughput, and error rates, and comparing them against predetermined benchmarks or thresholds.

The goal of performance monitoring is to identify and diagnose performance issues, such as slow response times, high resource utilization, or system crashes, and take appropriate action to resolve them. This can involve adjusting system configurations, upgrading hardware or software components, or optimizing code or algorithms.

Performance monitoring is critical for ensuring the efficient and effective functioning of systems and processes, as well as for ensuring customer satisfaction and maintaining business continuity. It is commonly used in industries such as IT, telecommunications, finance, healthcare, and manufacturing to monitor and optimize the performance of critical systems and applications.

Observability and monitoring: what’s the difference?

Observability and monitoring are both important concepts in IT operations, but they have slightly different meanings.

Monitoring generally refers to the process of collecting data about a system, such as its performance, availability, and usage, and using that data to identify and diagnose problems or to optimize performance. Monitoring is typically done using specialized telemetry that collects and analyzes data from various sources, such as from the network or applications.

Observability, on the other hand, is a more holistic concept that refers to the ability to understand and reason about a system’s behavior and performance from its outputs. An observable system is one that provides enough information to allow IT to understand how it is behaving and to diagnose problems more easily. It typically has a well-defined interface that allows IT to collect and analyze data about its behavior.

In summary, monitoring is a subset of observability, where monitoring is a way to gather data about a system, while observability is the ability to reason about that system from its data outputs.

What are the benefits of observability?

There are several benefits of observability, including:

  1. Faster problem detection: With observability, it becomes easier to detect problems as they occur, rather than waiting for user complaints or failures. This can help reduce downtime and improve overall reliability.
  2. Faster problem resolution: Once a problem is detected, observability tools can help pinpoint the root cause of the issue. Riverbed Unified Observability uses intelligent automation to gather supporting evidence and context. This reduces the time it takes to resolve the problem and get the system back up and running.
  3. Better performance: By monitoring key metrics and indicators, observability can help identify performance areas that are not optimal. This can help improve performance of networks, applications, and user experience and prevent potential issues before they occur.
  4. Improved collaboration: Observability tools can provide visibility into the internal state of a system to multiple teams across an organization. This can improve collaboration between teams and help everyone work towards a common goal of improving performance and reliability.
  5. Better customer experiences: By detecting and resolving issues faster, observability can help improve users’ digital experiences, which leads to increased customer satisfaction and loyalty.

What is Riverbed Unified Observability?

Riverbed IQ, a SaaS-delivered Unified Observability service, surfaces impactful issues with context to solve problems fast. It leverages key metrics across a full range of monitoring telemetry, from the network, infrastructure, and applications to end users, to provide the foundation of unified observability. It applies a diversity of analytics and correlates across five dimensions to group related indicators into a single incident for more accurate alerting and faster problem identification. It then employs intelligent automation that replicates the best practices of IT experts to gather evidence, build context, and set priorities. As a result, IT can fix problems faster and more efficiently.

For more information on Unified Observability and monitoring, click here.

]]>
Prioritizing Employee Experience in Digital Transformation https://www.riverbed.com/blogs/prioritizing-employee-experience-in-digital-transformation/ Thu, 06 Apr 2023 21:07:00 +0000 /?p=20836 Digital transformation is not just about implementing new technology. True digital transformation involves a complete overhaul of an organization, including enhancing operations, creating collaboration opportunities, expanding service offerings and revolutionizing the approach to user experience. By embracing true digital transformation, businesses can improve their efficiency and competitiveness in the marketplace.

In today’s digital age, digital transformation (DX) is essential for businesses to remain competitive and sustainable. However, many organizations forget that the success of their DX initiatives depends on the experience of their employees. If employees struggle with user experience (UX) issues, the organization’s transformation efforts are bound to fail.

In this blog, we will explore three reasons why a solid UX is crucial for any DX initiative to succeed.

The path to digital transformation

To start developing your digital transformation strategy, assess your company’s current state by identifying the processes, technologies, and business models currently in use. From there, you can identify areas that need improvement and define your goals while identifying your target audience. Mapping out user journeys and all touchpoints is also crucial. Evaluating the technology landscape and noting any gaps or inefficiencies is key.

Next, it is important to create a digital transformation team responsible for driving change and transformation across the company. This team should be cross-functional and develop a roadmap for implementing changes. It is essential they continuously monitor and improve the digital transformation process as it unfolds, paying special attention to these three areas of the digital transformation strategy:

  1. Productivity: Productivity is significantly impacted when employees struggle with poor UX. Today’s workforce uses more applications, systems, and tools than ever before, and adding layers of new tools and workflows can further complicate matters. It is important to determine what will work within legacy systems and to keep out the things that won’t.
  2. Employee Experience: Operational excellence is directly affected by poor UX. As employee turnover and recruitment goals increase, an intuitive UX is key for existing and new employees to handle the IT overhaul. In today’s remote and hybrid work environments, a streamlined UX can help ease the transition and eliminate organizational silos. This allows employees to collaborate and fill institutional gaps necessary for operational excellence.
  3. Security: Security is significantly impacted by poor UX. A strong UX makes it easier for people to follow security measures, as it is not practical to expect employees to follow best practices if they get in the way of their daily responsibilities. Security should be baked into all systems from the start, and automation can help ensure that access is granted only for the duration of the employee’s employment. Automation can also help organizations identify abnormal behavior and spot potential problems before they escalate.

Managing change made easy

Managing change is a critical component of successful digital transformation, and a vital part of this is ensuring a positive employee experience. Here are some additional strategies for improving change management and employee experience:

  • Develop a change management plan: Create a detailed plan for managing the changes that come with digital transformation. This should include a timeline, a list of key stakeholders, and a communication strategy to keep employees informed and engaged throughout the process.
  • Provide training and support: Make sure your employees have the resources they need to succeed in their new roles. This might include training on new technologies and processes and ongoing support to answer questions and troubleshoot issues.
  • Foster a culture of collaboration: Encourage collaboration and open communication among teams to build a sense of community and ensure everyone feels included in the process. This can reduce resistance to change and increase overall employee engagement.
  • Collect feedback and make adjustments: Monitor the progress of your digital transformation and collect feedback from employees to identify areas where improvements can be made. Use this feedback to make adjustments and refine your change management plan.
  • Recognize and reward success: Celebrate successes and recognize employees who have gone above and beyond to support the digital transformation initiative. This can help build momentum and foster a positive employee experience.

Ultimately, a successful digital transformation requires a holistic approach that considers both the technological and human aspects of the process. By focusing on improving change management and employee experience, organizations can create a culture that supports ongoing innovation and growth.

Quantify cost savings with Riverbed Aternity DXI

Riverbed Aternity’s Digital Experience Index (DXI) is a metric used to measure the overall digital experience of end users, such as employees, customers, and partners, who interact with an organization’s digital services and applications. The DXI provides a single score that represents the overall satisfaction of end users with their digital experience and breaks it down across five main categories:

  • Critical Business Applications
  • Collaboration Tools
  • Devices
  • Productivity Tools
  • Other Applications
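Conceptually, a composite index like DXI rolls category scores up into one number. The sketch below is not Aternity’s actual formula; it simply combines invented category scores with equal weights to show the idea:

```python
# Hypothetical category scores (0-100); the real DXI formula is Aternity's own
scores = {
    "Critical Business Applications": 72,
    "Collaboration Tools": 85,
    "Devices": 64,
    "Productivity Tools": 90,
    "Other Applications": 78,
}
weights = {k: 0.2 for k in scores}  # equal weighting as a simplifying assumption

# Weighted sum collapses the five categories into a single composite score
dxi = sum(scores[k] * weights[k] for k in scores)
print(f"Composite experience score: {dxi:.1f}")
```

In practice, the weights would reflect how business-critical each category is, so a drop in critical business applications moves the score more than a drop in other applications.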

Alluvio Aternity Digital Experience Index (DXI) score

By leveraging DXI, organizations can quickly identify areas where end users are experiencing issues or frustrations with digital services and applications, and take steps to improve their digital experience. This can ultimately lead to increased productivity, improved customer satisfaction, and better business outcomes, all of which have a direct, positive impact on cost savings. Check out this video to see how DXI helps companies along their digital transformation journey.

Besides overall DXI, another key metric that Riverbed Aternity measures is User Experience Index (UXI). UXI provides real-time insights into the digital experience of the company’s end users. It tracks and measures various metrics, such as application page load time, hang and wait time, as well as crashes and errors, to provide a holistic view of the user experience. Based on UXI, companies can quickly visualize and identify areas of improvement in their digital offerings and take proactive measures to optimize the user experience. This can result in quantifiable cost savings, as improvement in user experience can reduce the amount of time and resources required to address the issues and improve productivity.

Alluvio Aternity User Experience Index (UXI) shows areas of improvement

Additionally, by measuring and monitoring the impact of user experience changes, companies can identify the most effective strategies to reduce IT spending. These strategies include looking beyond device age to identify devices that no longer deliver an adequate user experience and selectively refreshing them, identifying and eliminating under-used software licenses, and streamlining cloud network bandwidth costs while optimizing performance.

Realize productivity gains from effective change management

By utilizing Riverbed Aternity’s comprehensive monitoring and analytics capabilities, companies can identify and measure the impact of changes on employee productivity in real-time, as well as before and after the change. This enables companies to make informed decisions about the timing, scope and impact of changes, minimizing disruption and ensuring the successful adoption of new processes. Aternity can also help companies proactively identify and mitigate potential issues before they impact productivity, reducing the risk of unexpected downtime and delays. By leveraging Aternity’s advanced reporting and analytics features, companies can easily track and measure the results of their change management initiatives, identifying areas for improvement and optimizing their processes for even more significant productivity gains in the future.

In the example below, let’s look at a real-life use case from an Aternity customer, where productivity improved as user experience improved following a positive backend application change.

Alluvio Aternity customer example

Prior to the change, users complained about latency and hang time while loading a critical business application used mainly to create sales orders and retrieve business reports. As we can see from the Response Time column, many pages had average response times above four seconds, mostly spent on backend processing, as highlighted in dark blue. Some of the recorded activities had response times close to 10 seconds, which resulted in low activity scores and negatively impacted user experience with the business application.

Alluvio Aternity customer example

The backend IT team has since identified the root cause and applied the necessary backend change. This change helped reduce response time for all business web application pages to less than two seconds on average. In particular, the “SALES & BOOKING” page’s average response time was only 1.19 seconds, compared to 5.14 seconds before the change, a more than fourfold reduction in page load time, so user experience while working with the business application has improved drastically. From this example, Aternity has proven to provide invaluable insights for organizations to quantify productivity gains from effective change management practices.

Achieve end-to-end visibility and foster cross-collaboration with Riverbed Portal

Riverbed Portal provides holistic performance visibility from a broad spectrum of data sources, including end-user experience, application performance and network and infrastructure monitoring. This is achieved via the ability to create a dashboard that visualizes the flow of data from the end user, via the network, to backend infrastructure or even cloud infrastructure. Each stakeholder and interest group may have different applications and key performance indicators that need to be monitored and tracked. This is often the basis of creating IT silos in most companies and organizations.

The critical benefit of a dashboard is providing the right information to the right people at the right time while serving as a single source of truth that facilitates collaboration across different stakeholders, such as the device team, network team, system team, and application team, eliminating IT silos. Together with dashboards, application maps are used to model and define the systems and software that support a specific business application and the users accessing that application.

The benefits of application maps are:

  • Quickly identify areas of operational issues
  • Reduce Mean-Time-To-Identify (MTTI) and, ultimately, Mean-Time-To-Resolution (MTTR)
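MTTR itself is easy to compute once incident timestamps are captured. The sketch below averages resolution times over two hypothetical incidents:

```python
from datetime import datetime, timedelta

# Hypothetical incidents: (detected, resolved) timestamps
incidents = [
    (datetime(2023, 4, 1, 9, 0), datetime(2023, 4, 1, 9, 45)),
    (datetime(2023, 4, 2, 14, 0), datetime(2023, 4, 2, 15, 30)),
]

# Mean-Time-To-Resolution: average of (resolved - detected) across incidents
mttr = sum(((resolved - detected) for detected, resolved in incidents),
           timedelta()) / len(incidents)
print(f"MTTR: {mttr}")
```

MTTI works the same way, substituting the identification timestamp for the resolution timestamp; the value of end-to-end visibility is in shrinking both intervals.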

Riverbed Alluvio Portal dashboards provide holistic performance visibility from a broad spectrum of data sources

Read more about Riverbed’s end-to-end visibility workflow in this blog, Demonstrating End-to-End Visibility from the Client to the Cloud.

Riverbed Aternity can provide real-time insights into the user experience of applications running on endpoints such as desktops, laptops and mobile devices, and it can automatically resolve performance issues in real-time, without human intervention. Automated remediation in Riverbed Aternity is possible through monitoring, analysis, event triggering and scripting.

]]>
Transforming Global Financial Services with Riverbed Aternity DEM https://www.riverbed.com/blogs/transforming-global-financial-services-with-alluvio-aternity-dem/ Thu, 06 Apr 2023 20:40:10 +0000 https://www.riverbed.com/?p=76123 Does your organization support financial services companies? If so, you know that the global financial services industry has undergone a rapid digital transformation in recent years, driven by evolving customer expectations, remote and hybrid work, infrastructure modernization, and an increasingly competitive landscape. Financial institutions now face the challenge of ensuring exceptional digital experiences for their customers and employees while adhering to strict regulatory standards and maintaining optimal performance across a diverse range of digital services. The challenge is amplified as both customers and employees demand more when they interact with technology, whether that is a customer interacting with the financial institution’s site and mobile app or the employees who work for these organizations.

This is why the leading global financial organizations leverage the Riverbed Aternity Digital Experience platform—a comprehensive solution designed to address the unique challenges they face while maintaining their competitive advantages. With Riverbed Aternity, companies can gain insights into customer journeys, both converting and non-converting, and track user experience at every step of their journey, identifying and optimizing the highest-converting paths and eliminating any roadblocks. In addition, the platform can monitor the employee experience for critical business applications used to support customers, reducing friction during customer journeys due to broken links and other issues.

Enhancing customer experience

At the heart of every financial institution’s success lies exceptional customer experience.  Riverbed Aternity DEM helps global financial services companies monitor and optimize the performance of their digital services, ensuring seamless and satisfying experiences for customers. With Riverbed Aternity’s advanced analytics and insights, financial institutions can proactively identify and resolve performance issues, thereby minimizing customer frustration and maximizing satisfaction.

One of these ways is with Aternity’s User Journey Intelligence (UJI) functionality. With UJI, Riverbed Aternity leverages advanced Real User Monitoring (RUM) technology to track user journeys and analyze web page load time, providing insights into top-line business metrics such as revenue, customer engagement, and customer abandonment. The platform also offers Synthetic Transaction Monitoring (STM) for proactive issue identification. This is unique to Riverbed Aternity, where financial services companies can gain a competitive edge over pure-play DEM vendors lacking RUM, STM, or both.

Boosting employee productivity & accelerating digital transformation

In addition to improving customer experience, Riverbed Aternity DEM also enhances employee productivity by providing insights into the performance of internal applications and systems. By identifying bottlenecks and performance issues, Riverbed Aternity allows financial institutions to optimize their internal processes and workflows, resulting in increased efficiency and reduced operational costs.

As the global financial services industry continues to evolve, organizations must be agile and adaptable to stay ahead of the curve.  Riverbed Aternity DEM has supported financial institutions in their digital transformation journey by providing the tools and insights needed to optimize and innovate their digital services. With Riverbed Aternity, financial institutions across the globe have confidently embraced new technologies and delivered exceptional digital experiences to both customers and employees.

When Swiss Re, a leading provider of insurance, reinsurance, and other forms of insurance-based risk solutions, embarked on its digital transformation strategy, it aimed to simplify collaboration among global teams and between Swiss Re and its external partners and customers. However, its existing device performance monitoring tool couldn’t provide a comprehensive understanding of the workforce’s experience, making it hard to interpret and scale. With Riverbed Aternity, Swiss Re was able to remotely, proactively, and non-invasively measure actual end-user experience, improving visibility and realizing efficiency gains.

“What we particularly liked with Riverbed Aternity was the ease in which we could analyze and correlate data. Riverbed Aternity makes this insight easily available to a broader audience in a format that is scalable and sharable with our internal stakeholders,” said Joost Smit, Digital Workplace Solution Architect and Engineer at Swiss Re.

Looking ahead

These are just a couple of examples of how the Riverbed Aternity Digital Experience Management platform is the ideal solution for global financial services companies looking to navigate the complexities of today’s digital landscape. By providing real-time insights into application performance, end-user experience, and overall system health, Riverbed Aternity DEM empowers financial institutions to deliver exceptional digital experiences, maintain regulatory compliance, and drive business growth.

To learn more about how Riverbed Aternity is helping financial services companies stay competitive while addressing today’s challenges, watch our webinar, “How IT in Financial Services Can Stay Competitive in Economic Uncertainty,” featuring Emma Beckers, Senior Lead Engineer at Wells Fargo.

]]>
How to Reduce Costs with Riverbed Acceleration https://www.riverbed.com/blogs/how-to-reduce-cloud-costs-with-riverbed-acceleration/ Mon, 03 Apr 2023 12:43:05 +0000 /?p=20664 Cloud costs in general, and bandwidth costs in particular, take a big bite out of the bottom line over the long run. Awareness of this phenomenon is growing across the industry, especially with the technology sector experiencing a slowdown with no end in sight.

Andreessen Horowitz released an eye-opening report in 2022 on what this can mean for a company’s economics and how high committed cloud spend can run as a percentage of revenue. According to the report, controlling public cloud spend is a heavy lift spanning many areas, such as system redesign, moving workloads to more efficient hardware, or even third-party efficiency solutions. Andreessen Horowitz went further, sharing an example of how Dropbox used various optimization techniques and, despite a small dip in revenue, was able to improve its margins just by controlling its cloud spend.

Dropbox cloud savings
a16z Dropbox Financials

How Riverbed Acceleration can help

Riverbed’s Acceleration solutions come in three purpose-built forms to match different needs—Cloud Accelerator, Client Accelerator, and SaaS Accelerator. The optimization engine uses three techniques to reduce bandwidth use: block or file caching, file compression, and byte-stream look-ahead. More details can be seen in this helpful video:

Watch Video
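The caching technique above can be sketched in miniature. The Python fragment below illustrates the general idea behind byte-level deduplication with a shared segment cache—a simplified illustration of the concept, not Riverbed’s actual algorithm (the chunk size, hash choice, and token format are invented for the example):

```python
import hashlib

def dedupe_stream(chunks, cache):
    """Replace previously seen chunks with short hash references.

    chunks: iterable of byte strings (segments of the byte stream)
    cache:  dict mapping segment hash -> segment, conceptually shared
            by both ends of the WAN link
    Returns the list of tokens that would actually be sent.
    """
    sent = []
    for chunk in chunks:
        digest = hashlib.sha256(chunk).digest()
        if digest in cache:
            # Both ends already hold this segment: send the 32-byte
            # reference instead of the full payload.
            sent.append(("ref", digest))
        else:
            cache[digest] = chunk
            sent.append(("data", chunk))
    return sent

cache = {}
first = dedupe_stream([b"A" * 4096, b"B" * 4096], cache)
repeat = dedupe_stream([b"A" * 4096, b"C" * 4096], cache)
# On the second transfer, the repeated 4 KB segment shrinks to a
# 32-byte reference while the new segment still travels in full.
assert repeat[0][0] == "ref" and repeat[1][0] == "data"
```

The same caching principle is why savings compound over time: the more traffic the cache has seen, the more of each new transfer can be replaced by references.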

To visually demonstrate how much traffic can be saved in a branch-to-cloud scenario, the image below shows data from one of Riverbed’s own offices where cost was a pain point. The deployment reduced data usage by close to 30% without any manual tweaking on the end servers or services.

Riverbed Cloud Accelerator Report Showing 30% Reduction in cloud costs
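To put a number like that in context, a quick back-of-the-envelope calculation shows what a ~30% traffic reduction can mean for a monthly egress bill. All figures below (traffic volume and per-GB rate) are hypothetical examples, not Riverbed pricing or measured customer data:

```python
# Hypothetical illustration: a ~30% drop in branch-to-cloud traffic
# applied to a made-up monthly egress bill.
monthly_egress_gb = 50_000   # illustrative traffic volume before optimization
egress_rate_per_gb = 0.09    # illustrative per-GB egress rate in USD
reduction = 0.30             # the ~30% reduction described above

before = monthly_egress_gb * egress_rate_per_gb
after = monthly_egress_gb * (1 - reduction) * egress_rate_per_gb
savings = before - after

print(f"before=${before:,.0f}/mo  after=${after:,.0f}/mo  saved=${savings:,.0f}/mo")
# prints: before=$4,500/mo  after=$3,150/mo  saved=$1,350/mo
```

The percentages matter more than the absolute numbers here: because egress is billed per byte, any reduction in transferred data flows straight through to the bill at the same ratio.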

Visibility and cost transparency

The first step in reducing cloud cost is identifying the potential causes of high spend, particularly bandwidth usage. Determining the root cause of high bandwidth costs can be difficult without the right tools in place. Riverbed observability tools are a faithful companion on this journey.

Wouldn’t it be great to have interactive charts like the ones below—showcasing the big traffic groups or the individual bandwidth hogs in a visual report—rather than combing through logs or cumbersome data-mining exercises?

Riverbed NetProfiler Report

Trust Riverbed Acceleration to control cloud costs. To learn how Riverbed solutions can help reduce your organization’s IT asset costs, visit this page.

]]>
What is an Example of Hybrid Working? https://www.riverbed.com/blogs/what-is-an-example-of-hybrid-working/ Fri, 31 Mar 2023 12:34:00 +0000 /?p=20637 My previous blog discussed the differences between remote work and hybrid work. In this blog, we’ll dig a little deeper into some common questions about hybrid work. First, let’s set the stage with an example of hybrid working.

One example of hybrid working could be a team that consists of both in-office and remote workers who collaborate through digital tools and communication channels. For instance, the team may use the office for important meetings, team building activities or project kick-off, and then switch to remote work.

In this scenario, the remote workers have the flexibility to choose when and where they work, based on their job responsibilities and personal preferences. They can use digital tools and collaboration platforms to stay connected with their in-office colleagues, share files and information, and collaborate on projects. Meanwhile, the in-office workers have the opportunity to work face-to-face and build relationships with their colleagues, while also benefiting from the flexibility of remote work.

Overall, the hybrid working model enables teams to have the best of both worlds by allowing for flexibility, collaboration, and an optimal work-life balance. This example of hybrid working can apply across multiple industries—technology, creative services, professional services, etc.

What does a hybrid workplace look like?

A hybrid workplace can take on different forms depending on the organization’s needs and the preferences of its employees. However, most hybrid workplaces have some common features:

  1. A combination of in-person and remote work: By definition, a hybrid workplace allows employees to work both from the office and from remote locations, such as their homes or co-working spaces. When, and how often they do so, depends on the organization’s hybrid workplace policies.
  2. Flexibility: Employees have the flexibility to work in different environments, depending on their job responsibilities, personal preferences, and the model of hybrid work addressed below. The choice of where and when to work may not be entirely up to the employee. They may have the option to work from the office for important meetings or team activities and work remotely for the rest of their workweek.
  3. Digital tools and communication channels: To enable remote work and collaboration, a hybrid workplace relies on digital tools such as video conferencing software, messaging platforms, and cloud-based collaboration tools.
  4. Adequate infrastructure: A hybrid workplace requires adequate infrastructure, such as high-speed internet, secure VPN connections, and laptops or other mobile devices, to support remote work. The technical infrastructure enables employees to work productively and securely while remote.
  5. Work-life balance: Hybrid workplaces aim to provide employees with a better work-life balance by offering more flexibility in their work schedules, reducing commute times and costs, and allowing employees to work from home when needed.

Different types of hybrid working

Multiple hybrid working models exist. The best model for each organization will vary depending on the organization’s industry, customer base, and requirements for how the work gets done. Businesses can customize their hybrid working arrangements to meet the unique needs of their employees and organization.

  1. Rotation Model: In this model, teams rotate between working in the office and working remotely. For example, a team may work from the office for two days a week and remotely for the remaining three days. Many organizations are adopting this model to ensure that a critical mass of employees is in the office on particular days. This can help foster collaboration and a strong company culture.
  2. Flexibility Model: Here, employees have the flexibility to choose when and where they work, depending on their job responsibilities and personal preferences. For example, an employee may work from the office for important meetings and work from home for tasks that require more concentration. This model works best when employees are highly self-directed and when a high degree of trust exists within the organization. Managers have to be comfortable with empowering their employees to know when they need to come into the office.
  3. Task-Based Model: In this model, employees work remotely or in the office based on the nature of their job responsibilities. For example, an employee may work from the office for tasks that require collaboration and work remotely for tasks that require more independent work. Or, employees with certain roles may be required to be in the office most days.

Do employees prefer hybrid work?

Even as we put the COVID-19 pandemic in the rear-view mirror, many employees prefer a hybrid work model. A survey by McKinsey & Company found that 75% of remote workers prefer a hybrid work model. PwC’s US Pulse Survey found that 72% of employees would like a mix of remote and in-person work.

The reasons for this preference vary, but some potential advantages of hybrid working for employees include a better work-life balance, lower commute times and expenses, increased flexibility, and the ability to avoid office distractions while still maintaining social connections with coworkers. The pandemic has shown that remote work can be effective, and many employees now value the flexibility that it offers.

Individual employee preferences vary, and some may still prefer full-time remote work or full-time in-person work depending on their job responsibilities, personal preferences, and individual circumstances. Management should consider the needs and preferences of their employees when designing their work arrangements.

When do employees prefer in-office work?

While many employees prefer hybrid or remote work models, some still prefer to work in an office. Aspects of in-office work that may appeal to employees, or that are fundamental to the nature of the job, include:

  1. Face-to-face interaction: Some employees prefer to work in an office because they enjoy face-to-face interaction with colleagues and the social atmosphere of a workplace. They may feel more energized and engaged when they are surrounded by people. On-the-job learning is far more effective in person, and the in-office environment is especially useful for younger people just entering the workforce, or those who are new to their jobs.
  2. Structure and routine: Employees who prefer to work in an office may also value structure and routine in their workday. They may prefer the clear separation between work and home life that an office provides. There’s a certain feeling of satisfaction and relief in being able to leave the job behind when you go home for the day.
  3. Collaborative environment: Some employees may work better when they collaborate with colleagues in person. They may prefer the ability to bounce ideas off of others and work together on projects. Teams involved in creative problem solving are more likely to do better in an office environment where the ideas can flow freely, and non-verbal communication can thrive.
  4. Specific resources or equipment: Some employees may require access to specific resources or equipment that are only available in an office environment. For example, they may need access to a laboratory, specialized software or hardware, or specific tools for their job. There’s just no getting around the workplace when you work on an assembly line, a blast furnace, or in a biopharmaceutical lab.
  5. Difficulty focusing at home: Some employees may find it challenging to focus on their work when they are at home. They may find it difficult to avoid distractions, such as household chores, family members, or to be sure, napping on the couch!

The IT challenges of hybrid work and how Riverbed addresses them

The shift to hybrid work presents several challenges for IT organizations as they strive to support their fellow employees. Some of the main challenges include:

  1. Security: With employees working from remote locations, there is an increased risk of security breaches and cyber-attacks. IT organizations must ensure that their remote workers have secure connections to company networks, and that company data is protected from unauthorized access. When security operations teams conduct forensic analysis of security threats, they need full-fidelity insight into every packet and flow, such as that provided by Riverbed Network Performance Management.
  2. Technology infrastructure: Hybrid work requires reliable and robust technology infrastructure to support remote work, such as high-speed internet, VPN access, cloud-based applications, and video conferencing tools. IT organizations must ensure that their systems can handle increased traffic and that remote workers have access to the necessary technology. Riverbed Acceleration enables organizations to accelerate any application over any network, to employees wherever they work.
  3. Device management: With employees using a variety of devices and operating systems to access company data, IT organizations must manage and secure these devices to ensure data protection and compliance with company policies. Only by measuring actual employee experience, like with Riverbed Aternity, can digital workplace teams ensure that they’re providing their employees with the devices and applications that enable them to be productive, wherever they work.
  4. Collaboration tools: Hybrid work requires effective collaboration tools that enable remote workers to stay connected with their in-office colleagues and collaborate on projects. IT organizations must provide reliable and easy-to-use collaboration tools that are accessible from any location. IT teams must manage the entire portfolio of collaboration tools on which their employees rely. Riverbed Aternity enables them to do that.
  5. Support: With remote workers, IT organizations must provide effective and timely technical support to address any issues that arise. This requires a different support model that can respond quickly to remote workers’ needs. The highly distributed nature of complex remote work environments makes this a challenge. IT teams can leverage Riverbed IQ to proactively identify and resolve complex issues, while minimizing the need for expensive “war room” troubleshooting processes.
  6. Training and education: With the adoption of new technologies and tools, IT organizations must provide ongoing training and education to their employees to ensure they are using these tools effectively and securely. Gartner refers to this as the “Digital Dexterity Gap.”

You can learn more about Riverbed’s solutions for managing the complexity of hybrid work by visiting our hybrid work solution page. Even better, you can request a demo of our software to determine if it’s right for your organization.

]]>
Transforming Global Financial Services with Riverbed Aternity DEM https://www.riverbed.com/blogs/digital-experience-management-for-global-financial-services/ Tue, 28 Mar 2023 13:22:00 +0000 /?p=20720 The global financial services industry has undergone a rapid digital transformation in recent years, driven by evolving customer expectations, remote and hybrid work, infrastructure modernization, and an increasingly competitive landscape. Financial institutions now face the challenge of ensuring exceptional digital experiences for their customers and employees while adhering to strict regulatory standards and maintaining optimal performance across a diverse range of digital services. This challenge is amplified as both customers and employees demand far more from the technology they interact with—whether that is a customer using the financial institution’s site and mobile app, or the employees who work for these organizations.

This is why the leading global financial organizations leverage the Riverbed Aternity Digital Experience Management (DEM) platform—a comprehensive solution designed to address the unique challenges they face while maintaining their competitive advantages. With Riverbed Aternity, companies can gain insights into customer journeys, both converting and non-converting, and track user experience at every step of their journey, identifying and optimizing the highest-converting paths and eliminating any roadblocks. In addition, the platform can monitor the employee experience for critical business applications used to support customers, reducing friction during customer journeys due to broken links and other issues.

Enhancing customer experience

At the heart of every financial institution’s success lies exceptional customer experience.  Riverbed Aternity DEM helps global financial services companies monitor and optimize the performance of their digital services, ensuring seamless and satisfying experiences for customers. With Riverbed Aternity’s advanced analytics and insights, financial institutions can proactively identify and resolve performance issues, thereby minimizing customer frustration and maximizing satisfaction.

One way it does this is with Aternity’s User Journey Intelligence (UJI) functionality. With UJI, Riverbed Aternity leverages advanced Real User Monitoring (RUM) technology to track user journeys and analyze web page load time, providing insights into top-line business metrics such as revenue, customer engagement, and customer abandonment. The platform also offers Synthetic Transaction Monitoring (STM) for proactive issue identification. This combination is unique to Riverbed Aternity, giving financial services companies a competitive edge over pure-play DEM vendors that lack RUM, STM, or both.

Watch Video
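At its core, synthetic transaction monitoring means replaying a scripted user action on a schedule and recording how long it takes, so problems surface before real users hit them. The sketch below shows the bare idea in Python using only the standard library—a simplified illustration of the concept, not Aternity’s STM implementation; the URL and any SLA thresholds are placeholders you would supply:

```python
import time
import urllib.request

def synthetic_check(url, timeout=10.0):
    """Fetch a page the way a scripted 'user' would and time it."""
    start = time.perf_counter()
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            body = resp.read()
        elapsed = time.perf_counter() - start
        return {"ok": resp.status == 200, "seconds": elapsed, "bytes": len(body)}
    except Exception as exc:
        # Failures are results too: a synthetic monitor records the error
        # and the time spent before giving up.
        return {"ok": False, "seconds": time.perf_counter() - start,
                "error": str(exc)}

# A real monitor would run this on a schedule, from several locations,
# and alert when 'seconds' crosses a threshold or 'ok' turns False.
```

The value of the synthetic approach is exactly that it does not depend on real traffic: it can probe a login page at 3 a.m. on a Sunday and flag a regression before the first customer of the day arrives.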

Boosting employee productivity & accelerating digital transformation

In addition to improving customer experience, Riverbed Aternity DEM also enhances employee productivity by providing insights into the performance of internal applications and systems. By identifying bottlenecks and performance issues, Riverbed Aternity allows financial institutions to optimize their internal processes and workflows, resulting in increased efficiency and reduced operational costs.

As the global financial services industry continues to evolve, organizations must be agile and adaptable to stay ahead of the curve.  Riverbed Aternity DEM has supported financial institutions in their digital transformation journey by providing the tools and insights needed to optimize and innovate their digital services. With Riverbed Aternity, financial institutions across the globe have confidently embraced new technologies and delivered exceptional digital experiences to both customers and employees.

When Swiss Re, a leading provider of insurance, reinsurance, and other forms of insurance-based risk solutions, embarked on its digital transformation strategy, it aimed to simplify collaboration among global teams and between Swiss Re and its external partners and customers. However, its existing device performance monitoring tool couldn’t provide a comprehensive understanding of the workforce’s experience, making the data hard to interpret and scale. With Riverbed Aternity, Swiss Re was able to remotely, proactively, and non-invasively measure actual end-user experience, improving visibility and realizing efficiency gains.

“What we particularly liked with Aternity was the ease in which we could analyze and correlate data. Riverbed Aternity makes this insight easily available to a broader audience in a format that is scalable and sharable with our internal stakeholders,” said Joost Smit, Digital Workplace Solution Architect and Engineer at Swiss Re.

Looking ahead

These are just a couple of examples of how the Riverbed Aternity Digital Experience Management platform is the ideal solution for global financial services companies looking to navigate the complexities of today’s digital landscape. By providing real-time insights into application performance, end-user experience, and overall system health, Riverbed Aternity DEM empowers financial institutions to deliver exceptional digital experiences, maintain regulatory compliance, and drive business growth.

]]>
Riverbed NetProfiler STIG Approved by DISA for Secure Network Performance Monitoring https://www.riverbed.com/blogs/alluvio-netprofiler-earns-disa-stig-approval-for-secure-network-peformance-monitoring/ Tue, 28 Mar 2023 12:27:00 +0000 /?p=20722 It’s safe to say that no sector of our federal government places a higher priority on defending against cyber attacks than the Department of Defense and our national security and intelligence agencies. And for good reason, given the level of cyber threats and activity by adversaries and our federal government’s commitment to rigorously pursue a Zero Trust security strategy.

That’s why it is a significant accomplishment to earn approval from the Defense Information Systems Agency (DISA) for the release of a Security Technical Implementation Guide (STIG) for any technology solution. STIG validation means that DISA has conducted a review and approved the solution’s defined configuration and security standards. Federal agencies and users that implement that IT solution and follow the STIG guidelines are complying with DoD security policies. They can feel confident they’re using best practices to protect their networks, data and users from cyber threats.

DISA approves Riverbed NetProfiler STIG

DISA recently announced that it has reviewed and approved the STIG for Riverbed’s NetProfiler, which provides network performance monitoring, network flow analytics and centralized reporting tools. NetProfiler affords agencies continuous visibility into the performance of endpoints, applications and users on the network, including providing alerts on any significant changes in network behavior. Those capabilities are key for maintaining strong IT security controls and enhancing user performance and user experience.
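Alerting on “significant changes in network behavior” generally comes down to comparing current measurements against a rolling baseline. The fragment below shows one classic, simplified form of that test (a sigma-based deviation check); it illustrates the general concept only and is not NetProfiler’s actual analytics:

```python
from statistics import mean, stdev

def flags_anomaly(history, current, sigma=3.0):
    """Flag `current` if it deviates more than `sigma` standard
    deviations from the baseline window -- a textbook behavioral-change
    test, simplified for illustration."""
    if len(history) < 2:
        return False  # not enough data to form a baseline
    mu, sd = mean(history), stdev(history)
    if sd == 0:
        return current != mu  # perfectly flat baseline: any change stands out
    return abs(current - mu) > sigma * sd

# Bytes-per-minute samples for one host; the last probe spikes hard.
baseline = [1200, 1150, 1300, 1250, 1180, 1220]
assert flags_anomaly(baseline, 1260) is False  # normal variation
assert flags_anomaly(baseline, 9800) is True   # flagged for investigation
```

Production systems layer far more on top of this (seasonality, per-application baselines, multi-metric correlation), but the underlying question is the same: is this measurement consistent with the network’s own history?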

DoD and other federal users with CAC cards with appropriate security certificates can access the Riverbed NetProfiler STIG on the DoD Cyber Exchange website.

A critical role in compliance and security

In today’s environment, the roles of DISA and the DoD Cyber Exchange are more critical than ever. Part of DISA’s mission is to make the DoD network secure and resilient against cybersecurity threats, focusing on infrastructure and network security, and strengthening cybersecurity measures, including boundary defense and endpoint security. Its ultimate goal is achieving information dominance by providing an enterprise infrastructure so effective that its users have a significant advantage in combat.

The DoD Cyber Exchange provides one-stop access to cyber information, policy, guidance and training for cyber professionals throughout the DoD. Some portions of the Cyber Exchange website are also available to other Federal Government employees and to the general public. By making some of these resources available to the public, the Cyber Exchange helps users comply with rules, regulations, best practices and federal laws.

Since 2009, NetProfiler has maintained a consistent track record of successful performance across the DoD and other federal agencies. Now this latest milestone—STIG approval—provides additional assurance that NetProfiler can be implemented easily and securely and used with confidence to solve network visibility challenges.

For more information on NetProfiler including use cases, data sheets and other customer success stories, please visit our website.

]]>
What Is Hybrid vs Remote Work? https://www.riverbed.com/blogs/what-is-hybrid-vs-remote-work/ Mon, 27 Mar 2023 12:31:00 +0000 /?p=20628 As the COVID-19 pandemic continues to reshape the way we work, discussions of hybrid work vs remote work have become increasingly prevalent. While both models involve working outside of a traditional office setting, there are distinct differences between the two.

Hybrid work involves a combination of in-person and remote work, while remote work is entirely location-independent. Understanding the differences between these two models is essential for businesses and employees as they consider the most effective and sustainable ways to work in a post-pandemic world.

This blog covers the differences between hybrid work and remote work and the advantages and challenges associated with each model. It also provides insights into employee preferences on hybrid vs remote work, how businesses can choose the right model for their workforce, and the challenges for IT organizations in supporting hybrid work.

What is remote work?

Remote work is a work model that enables employees to work from a location other than a traditional office setting. In remote work, employees can work from home, a coworking space, a coffee shop, or any other location with an internet connection. Remote work can be full-time, part-time or occasional and is enabled by technology such as video conferencing, messaging platforms, and cloud-based collaboration tools that allow employees to communicate and collaborate with their colleagues.

Remote work has become increasingly popular in recent years due to the flexibility it provides to employees and the benefits it offers to employers, such as reduced overhead costs and access to a broader pool of talent. With remote work, employers are no longer constrained to the workforce located geographically close to their offices. The COVID-19 pandemic accelerated the adoption of remote work, as many companies had to transition their employees to remote work to maintain business continuity while ensuring the safety of their workforce. Now that we’re emerging from the pandemic, companies are evaluating and evolving their remote work policies.

What is hybrid work?

Hybrid work combines remote work and in-person work in a flexible way. It allows employees to work either from a physical office or from a remote location, such as their home, co-working spaces or other remote locations.

In a hybrid work model, employees have more flexibility to choose where and how they work, depending on the nature of their work and personal preferences. This can be achieved through the use of technology such as video conferencing, messaging platforms, and cloud-based collaboration tools that enable remote work.

The hybrid work model has become increasingly popular in recent years, particularly since the COVID-19 pandemic, which forced many companies to adopt remote work practices. Hybrid work provides companies with the flexibility to adapt to changing circumstances while maintaining a collaborative work culture and achieving business objectives. Post-pandemic, many employees have continued to express a preference for hybrid work, as evidenced by office occupancy rates remaining below pre-pandemic levels.

Do employees prefer hybrid work vs remote work?

The preference for hybrid vs remote work varies among employees and can depend on various factors, including job responsibilities, personal preferences, and individual circumstances. Some employees may prefer remote work because it allows them to have a more flexible work schedule, reduces commuting time and expenses, and provides a better work-life balance. Other employees may prefer hybrid work because it allows them to have a balance of in-person collaboration and the flexibility of remote work.

Several surveys conducted during the COVID-19 pandemic suggest that many employees prefer a hybrid work model, which allows them to have the best of both worlds. For example, a survey by McKinsey & Company found that 75% of remote workers would prefer a hybrid work model in the future. PwC’s US Pulse Survey found that 72% of employees would like a mix of remote and in-person work. However, it’s important to note that individual preferences may vary, and some employees may still prefer full-time remote work or full-time in-person work.

The prevalence of hybrid work varies across industries

The prevalence of hybrid work varies across industries based on several factors such as the nature of the work, the level of customer interaction required, and the extent to which technology can facilitate remote work. Some industries are more likely to embrace hybrid work arrangements, while others may be more hesitant.

Here are some examples:

  • Technology: The technology industry has been at the forefront of remote work and hybrid work arrangements, given the nature of the work and the tools available. Many tech companies have been able to seamlessly transition to hybrid work during the pandemic, and some have announced permanent hybrid work arrangements.
  • Finance: The finance industry has traditionally been more resistant to remote work due to the need for face-to-face client interactions and the security concerns associated with handling sensitive financial information. For example, Goldman Sachs’ CEO David Solomon is a strong proponent of in-office work, and 65% of the Goldman Sachs employees now report to the office 5 days a week. However, the pandemic has forced many finance companies to adopt remote work and hybrid work arrangements, and some have found that it can be done successfully.
  • Healthcare: The healthcare industry has also been slow to adopt remote work due to the nature of the work and the need for in-person patient care. However, some healthcare companies have found ways to incorporate hybrid work arrangements for administrative staff and other non-clinical roles. The Centers for Disease Control and Prevention reports a significant increase in the use of telemedicine.
  • Education: The education industry has also had to adapt to hybrid work arrangements during the pandemic, with many teachers and professors teaching remotely or using a hybrid approach. However, some aspects of education, such as lab work and hands-on training, are more challenging to do remotely.
  • Manufacturing: The manufacturing industry has traditionally been more resistant to remote work due to the need for physical presence on the production line. However, some manufacturing companies have found ways to incorporate remote work for administrative and support staff.

The technology challenges of hybrid work

Hybrid work can make the job of the IT team more challenging in several ways:

  1. Supporting remote and in-office workers: With hybrid work, the IT team needs to support both remote and in-office workers, which can require additional tools and infrastructure to ensure that everyone has access to the same resources and can work seamlessly together. Digital Experience Management software, such as Riverbed Aternity, provides visibility to digital workplace teams to ensure that employees get a great experience no matter where they’re working.
  2. Ensuring network security: Hybrid work can increase the risk of cyber-attacks, as remote workers may be using personal devices or working from unsecured networks. The IT team needs to ensure that the network and data are secure, even when employees are working remotely. When security operations teams conduct forensic analysis of security threats, they need full-fidelity insight into every packet and flow, such as that provided by Riverbed Network Performance Management.
  3. Managing multiple collaboration tools: With hybrid work, employees may use multiple collaboration tools, such as video conferencing, messaging apps, and project management tools, which can make it challenging for the IT team to manage and secure all of these tools and ensure that they work well together. Aternity supports the leading collaboration tools, such as Teams, Zoom, and Webex.
  4. Balancing flexibility and control: Hybrid work requires a balance between flexibility and control, as the IT team needs to ensure that employees have the tools they need to work effectively while also maintaining control over the network and data. This can be a delicate balancing act that requires careful management.
  5. Addressing technical issues: With remote work, employees may experience technical issues that are more difficult to resolve remotely. In addition, with remote work, technical issues can be caused by factors outside of IT’s control, such as Wi-Fi signal strength, ISP bandwidth, or SaaS performance. The IT team needs to be able to address these issues quickly and efficiently to minimize disruptions to work. Learn more about how Riverbed enables teams to “shift left” with automated remediation, to resolve issues at the lowest level possible and as fast as possible.

Overall, hybrid work can make the job of the IT team more challenging, as they need to support a more diverse and distributed workforce while maintaining network security, managing multiple collaboration tools, and addressing technical issues. Learn more about Riverbed’s solutions for hybrid work that make it possible to manage these challenges and enable a successful hybrid work environment. You can request a demo of our solutions to see if they make sense for you.

]]>
Custom Data Collection & Enhanced Monitoring for IT Service Teams https://www.riverbed.com/blogs/digital-experience-management-for-it-service-teams/ Fri, 17 Mar 2023 12:29:00 +0000 /?p=20548 The principle of IT Service Management is to enable better service delivery by focusing on how IT teams can manage the end-to-end delivery of IT services, not only to their users but to their customers as well. While Riverbed Aternity can support IT teams in meeting their goals by providing out-of-the-box, real-time visibility and actionable insights that help troubleshoot and improve user experience and service levels, some customers require a digital experience management solution that is easy to use, configure, and customize.

Customization becomes relevant when IT teams need specific performance analysis based on data that Aternity does not monitor out-of-the-box. This is also true when the Service Desk team needs to avoid connecting remotely to a device to verify data such as current configurations, installed software, services in use, and system details. Riverbed Aternity addresses this need by providing a mechanism to customize and automate data collection and dashboards based on stakeholder needs.

One example of the need to customize these metrics came from a large German industrial company. The organization needed to create dozens of custom PowerShell, WMI and event log monitors to collect custom data based on their IT teams’ requirements. They used the custom monitoring to provide the IT daily business with enhanced and extended visibility, allowing deeper analysis and more informed decision making aligned with their organization’s goals.

Support the daily business of the Service Desk

To support the daily needs of the Service Desk team, Riverbed Professional Services initially used the Out-Of-the-Box (OOB) Riverbed Aternity dashboards, which were found to be an excellent starting point. This enabled the team to investigate problems with malfunctioning devices that employees in the organization were complaining about. The dashboard summaries, where the main and most important device metrics were visible, helped the support engineers with initial troubleshooting without having to connect to devices remotely. This saved time and accelerated their troubleshooting. Without needing to ask a number of questions for a support issue or browse to the remote device, the Service Desk teams could quickly see device CPU speed and type, memory size, the model, disk free space, corporate browser usage, and much more. However, there were times when the Service Desk team required additional knowledge about the properties of a device, location or user, which Aternity did not detect OOB.

Creating custom monitoring to collect custom data enables deeper troubleshooting and enterprise-wide analysis. This is because custom data can be used to more easily group together items that share a distinct property and monitor performance. Moreover, custom data can then be exposed in the Riverbed Aternity REST APIs or in the OOB Analyze dashboards.

The German industrial sector company mentioned in the example earlier configured a custom monitor to report the device RAM channel mode, and then created a custom dashboard to compare the performance of Single vs Dual RAM channel mode. In their other use cases, the customer’s Service Desk goal was to proactively improve the digital end user experience by looking for specific configurations impacting users. As an example, the Service Desk team created a monitor that collected information about several services to decide if they were being used and if they could disable them via Group Policy.
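As an illustration of the kind of grouped comparison the customer performed, the sketch below averages a performance metric for each value of a custom attribute. The device records, attribute name, and metric are hypothetical stand-ins, not Aternity’s actual schema:

```python
from statistics import mean

# Hypothetical device records with a custom attribute ("ram_channel_mode")
# and a performance metric ("app_launch_s"); values are illustrative only.
devices = [
    {"name": "LT-001", "ram_channel_mode": "Single", "app_launch_s": 9.8},
    {"name": "LT-002", "ram_channel_mode": "Dual",   "app_launch_s": 6.1},
    {"name": "LT-003", "ram_channel_mode": "Single", "app_launch_s": 11.2},
    {"name": "LT-004", "ram_channel_mode": "Dual",   "app_launch_s": 5.9},
]

def compare_by(attribute, metric, records):
    """Group records by a custom attribute and average a performance metric."""
    groups = {}
    for rec in records:
        groups.setdefault(rec[attribute], []).append(rec[metric])
    return {value: round(mean(samples), 2) for value, samples in groups.items()}

print(compare_by("ram_channel_mode", "app_launch_s", devices))
# {'Single': 10.5, 'Dual': 6.0}
```

With the custom attribute collected once per device, the same comparison can be run for any metric Aternity already monitors, which is what made the Single vs Dual RAM channel dashboard possible.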

The following list provides further examples of the custom monitors created by this company for the daily work of their Service Desk team:

  • Get RAM Channel Mode
  • Get WiFi drivers
  • Check Autologon enabled
  • Check usage of services: disable non used services by GPO policies
  • Get Battery Wear check when more than 2 batteries in place
  • Has Dbutil Installed
  • License Recovery Dialog Shown
  • MS Defender Version
  • Network Access Users
  • Network Discovery Monitor LDAP
  • DMA Kernel Protection Status
  • Get Aternity Browser Extension Version
  • Get Batch Logon Users
  • Get HVCI Status
  • Get Open Firewall Rule
  • Get Secure Boot Status
  • Get Service Logon Users
  • Get Reliability Grade /WMI

This enhanced visibility meant that the Service Desk team was more efficient without involving users in remote sessions, allowing them to compare different configurations based on performance/cost criteria. It also allowed them to implement proactive optimizations to improve the digital user experience.

Enhanced analysis and troubleshooting for better decision making

The Riverbed Aternity UI allowed the IT Service Desk teams to see the most valuable information for their daily business. This enabled them to create custom-data based dashboards to expose and analyze data as required for informed decision making.

Using this methodology, the customer created several custom monitors in Aternity for the security team. Using these monitors, the team was able to automatically and proactively detect potential security issues related to wrong proxy client configurations, unsigned drivers and available security features. In the Aternity Dashboard shown below you can see the Netskope configurations derived from custom data collection:

Tunnel Status Error Messages
Netskope Proxy Client Configuration

As shown above, Riverbed Aternity provides a flexible and easy-to-use mechanism to collect custom data based on the IT Service teams’ actual needs and to enhance and extend Riverbed Aternity’s out-of-the-box real-time visibility and actionable insights. This allows enhanced monitoring, which in turn increases team efficiency, enables collaboration, saves time, reduces costs, maximizes uptime, and provides informed decision making aligned with organizational goals.

]]>
Help Your Customers Get More Out of Their IT Budgets with Riverbed Aternity DEM https://www.riverbed.com/blogs/help-your-customers-get-more-out-of-their-it-budgets-with-alluvio-aternity-dem/ Mon, 13 Mar 2023 20:46:01 +0000 https://www.riverbed.com/?p=76124 As we continue to hear and read about rising inflation, ongoing supply chain challenges, and a potential recession, enterprises around the world are tightening their budgets. IT teams are clearly feeling the pressure with CIOs and IT buyers predicting their tech spend will only increase by 5.5% this year—a meaningful deceleration from previous expectations and below last year’s annual inflation rate of 8.3%. In other words, despite rising costs, IT teams will spend less this year when adjusted for inflation, reflecting stagnant IT budgets that aren’t keeping pace with economic realities.

Having to make do with less purchasing power is challenging, but there are opportunities to help your customers generate efficiencies within IT and get more out of every penny. In this blog, we explore how Digital Experience Management can help your customers’ teams reduce costs while maintaining flawless digital experiences.

What is Aternity DEM?

Riverbed Aternity DEM is a full-spectrum digital experience management platform that provides insight into the business impact of customer and employee digital experiences. It achieves this by capturing and storing technical telemetry at scale from employee devices, business applications, and cloud-native application services.

Equipped with this comprehensive visibility into the actual user experience and device performance, IT teams can create better experiences for users and leaders can make informed business decisions on IT spend. Here’s how:

Smart Device Refresh

Typically, IT teams will refresh devices based on their age, say, every three or four years. But age alone doesn’t speak to the actual health or performance of a device. Some perfectly good devices may be retired too soon, while other faulty devices may need to be replaced sooner so employees can stay productive. Riverbed Aternity DEM offers insight into actual user experience and device performance, informing teams on when to replace devices based on performance.

What it means for your clients: intelligent device replacement helps save them money by refreshing devices exactly when they need to be replaced, and not a moment sooner.

Eliminated Software Bloat

We all keep subscriptions longer than necessary, and the same is true for enterprises. A SaaS trends report found the average company wastes more than $135,000 annually on unused, underused, or duplicate SaaS tools and this cost increases dramatically for large enterprises. Riverbed Aternity DEM gives IT the power to automatically identify software licenses that are going unused or aren’t used often.
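The underlying check is a usage threshold. The sketch below flags licenses whose recent usage falls below a cutoff; the application names, usage counts, and threshold are hypothetical, not drawn from Aternity’s actual reports:

```python
# Illustrative license-usage data; app names and launch counts are made up.
installs = [
    {"app": "DiagramPro", "user": "amy",  "launches_90d": 0},
    {"app": "DiagramPro", "user": "ben",  "launches_90d": 42},
    {"app": "DiagramPro", "user": "cara", "launches_90d": 1},
    {"app": "VideoSuite", "user": "amy",  "launches_90d": 15},
]

def reclaimable_licenses(records, min_launches=2):
    """Return (app, user) pairs whose 90-day usage falls below the threshold."""
    return [(r["app"], r["user"]) for r in records
            if r["launches_90d"] < min_launches]

print(reclaimable_licenses(installs))
# [('DiagramPro', 'amy'), ('DiagramPro', 'cara')]
```

Each flagged pair represents a seat that could be reclaimed or reassigned rather than renewed.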

What it means for your clients: Instantly reduce software bloat by cutting licenses that are going mostly unused and redeploy those savings in ways that can better help the business.

Curtailed Shadow IT

All too frequently, teams across an enterprise will purchase SaaS tools without going through the proper IT channels. This inevitably leads to redundancies, increased risk, and headaches for IT. But Riverbed Aternity DEM can identify shadow IT software, and either direct usage to an approved application to eliminate the additional expense, or leverage approved purchasing channels to better handle the spend.

What it means for your clients: By curtailing shadow IT, IT teams can better understand and manage the software being used by employees. At the same time, it helps IT identify and eliminate duplicate and wasteful solutions so budgets are more effectively and efficiently utilized.

Cut costs and improve performance

Many IT departments have room to gain operational efficiencies by eliminating waste, thus maximizing every dollar. These efficiencies don’t have to come at the expense of the user experience. On the contrary, reducing wasteful spending can add money back into budgets that can then be used to hire talent and fill labor gaps, reducing the burden on IT departments so they’re more productive. Riverbed Aternity DEM helps organizations save on their IT costs while at the same time enabling even better digital experiences. It’s a win-win.

To see how you can put this into action for your team, register for our upcoming webinar “Budget Getting Tight? How IT Leaders Reduce Costs Without Sacrificing User Experience.”

]]>
Get More Out of Your IT Budget with Riverbed Aternity DEM https://www.riverbed.com/blogs/get-more-out-of-your-it-budget-with-alluvio-aternity-digital-experience-management/ Mon, 13 Mar 2023 12:24:58 +0000 /?p=20448 As we continue to hear and read about rising inflation, ongoing supply chain challenges, and a potential recession, enterprises around the world are tightening their budgets. IT teams are clearly feeling the pressure with CIOs and IT buyers predicting their tech spend will only increase by 5.5% this year—a meaningful deceleration from previous expectations and below last year’s annual inflation rate of 8.3%. In other words, despite rising costs, IT teams will spend less this year when adjusted for inflation, reflecting stagnant IT budgets that aren’t keeping pace with economic realities.

Having to make do with less purchasing power is challenging, but there are opportunities to generate efficiencies within IT and get more out of every penny. In this blog, we explore how Digital Experience Management can help IT teams reduce costs while maintaining flawless digital experiences.

What is Riverbed Aternity DEM?

Riverbed Aternity DEM (Digital Experience Management) is a full-spectrum digital experience management platform that provides insight into the business impact of customer and employee digital experiences. It achieves this by capturing and storing technical telemetry at scale from employee devices, business applications, and cloud-native application services.

Equipped with this comprehensive visibility into the actual user experience and device performance, IT teams can create better experiences for users and leaders can make informed business decisions on IT spend. Here’s how:

Smart Device Refresh

Typically, IT teams will refresh devices based on their age, say, every three or four years. But age alone doesn’t speak to the actual health or performance of a device. Some perfectly good devices may be retired too soon, while other faulty devices may need to be replaced sooner so employees can stay productive. Riverbed Aternity DEM offers insight into actual user experience and device performance, informing teams on when to replace devices based on performance.
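A performance-based refresh decision can be sketched as a simple health check: flag a device on what it is actually doing, not how old it is. The metrics, thresholds, and device records below are hypothetical, illustrating the idea rather than Aternity’s actual scoring:

```python
# Hypothetical fleet data: an old-but-healthy device, a young-but-failing
# device, and an old device still performing fine.
fleet = [
    {"device": "LT-010", "age_years": 4.2, "crashes_30d": 0, "boot_s": 28},
    {"device": "LT-011", "age_years": 1.5, "crashes_30d": 9, "boot_s": 95},
    {"device": "LT-012", "age_years": 3.8, "crashes_30d": 1, "boot_s": 33},
]

def needs_refresh(d, max_crashes=5, max_boot_s=60):
    """Flag a device on measured performance; age is deliberately ignored."""
    return d["crashes_30d"] > max_crashes or d["boot_s"] > max_boot_s

print([d["device"] for d in fleet if needs_refresh(d)])
# ['LT-011']
```

Note that the 1.5-year-old device is the one flagged, while the four-year-old devices stay in service, which is exactly the inversion of an age-only policy.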

What it means for you: intelligent device replacement helps save you money by refreshing devices exactly when they need to be replaced, and not a moment sooner.

Eliminated Software Bloat

We all keep subscriptions longer than necessary, and the same is true for enterprises. A SaaS trends report found the average company wastes more than $135,000 annually on unused, underused, or duplicate SaaS tools and this cost increases dramatically for large enterprises. Riverbed Aternity DEM gives IT the power to automatically identify software licenses that are going unused or aren’t used often.

What it means for you: Instantly reduce software bloat by cutting licenses that are going mostly unused and redeploy those savings in ways that can better help the business.

Curtailed Shadow IT

All too frequently, teams across an enterprise will purchase SaaS tools without going through the proper IT channels. This inevitably leads to redundancies, increased risk, and headaches for IT. But Riverbed Aternity DEM can identify shadow IT software, and either direct usage to an approved application to eliminate the additional expense, or leverage approved purchasing channels to better handle the spend.

What it means for you: By curtailing shadow IT, IT teams can better understand and manage the software being used by employees. At the same time, it helps IT identify and eliminate duplicate and wasteful solutions so budgets are more effectively and efficiently utilized.

Cut costs and improve performance

Many IT departments have room to gain operational efficiencies by eliminating waste, thus maximizing every dollar. These efficiencies don’t have to come at the expense of the user experience. On the contrary, reducing wasteful spending can add money back into budgets that can then be used to hire talent and fill labor gaps, reducing the burden on IT departments so they’re more productive. Riverbed Aternity DEM helps organizations save on their IT costs while at the same time enabling even better digital experiences. It’s a win-win.

To see how you can put this into action for your team, register for our upcoming webinar “Budget Getting Tight? How IT Leaders Reduce Costs Without Sacrificing User Experience.”

]]>
Optimize App and Network Performance with Riverbed Acceleration https://www.riverbed.com/blogs/optimize-network-and-application-performance-with-riverbed-acceleration/ Thu, 09 Mar 2023 13:29:00 +0000 /?p=20135 The rapid pace of digitization plus transformational shifts to hybrid work, modern application architectures, and hybrid cloud networks are all making it difficult for IT teams to keep digital services accessible, high-performing, and secure for customers and employees. According to Riverbed’s Hybrid Work Global Survey, 88% of business leaders are concerned about the digital disparity between in-office and remote workers.

If you’re dealing with these types of challenges, Riverbed Acceleration is your solution!

Help on the way!

Riverbed’s Acceleration solutions deliver peak performance to every user, everywhere, providing fast, agile, secure acceleration of any app over any network to users, whether mobile, remote, or in the office. Thousands of organizations across the globe use our Acceleration solutions plus WAN acceleration technology. In a recent EMA survey on WAN transformation, 71% of respondents reported that they use WAN acceleration technology to improve application performance on their network.

Read on to learn how one of Riverbed’s customers leverages our Acceleration solutions to boost document transfer speed and reduce costs.

Hear from a customer

Since 2005, Quarles & Brady, a law firm on the American Lawyer’s AM Law 200 list of the largest U.S. based legal service providers, has relied on Riverbed Acceleration solutions to speed up processes when dealing with clients’ documents and data.

With over 1,100 users, including 500 attorneys, Quarles’ IT infrastructure handles significant amounts of information. The firm’s attorneys share, upload, and download large volumes of client data on a day-to-day basis.

Before working with Riverbed, attorneys at Quarles used a Microsoft file server and Microsoft Exchange to transfer legal documents to and from clients. The speed of this process is essential to keep costs low while the firm focuses on providing excellent service to clients. However, they faced immense challenges with this approach, including:

  • Long document upload, download, and transfer times, which is challenging as attorneys work through cases
  • High cost of attorney fees due to long transfer times
  • Hybrid, distributed workforce that needs accessible, secure, and high-performing apps to perform job duties

Riverbed Acceleration solutions

Quarles uses both Riverbed SteelHead and Client Accelerator to address these challenges:

  • Riverbed SteelHead dramatically speeds up the performance of applications anywhere in an organization while delivering the best end-user experience, even under suboptimal network conditions. Highly usable and compatible with customer environments, the solution is engineered for fast, seamless network integration into remote sites and branch offices, or data centers with scalable performance designed to support a growing number of users, devices, data, and application types.
  • Riverbed Client Accelerator delivers leading-edge application acceleration to dispersed workforces. The solution provides mobile employees with fast, reliable access to leading enterprise SaaS and on-prem applications. Compatible with your infrastructure (e.g., it supports leading operating systems), the solution improves network and application performance, security, and productivity for users anywhere.

Lower costs and better infrastructure

For Quarles, after implementing Riverbed Solutions, the speed boost in document management, such as downloads and uploads, went from taking minutes to just seconds. “What Riverbed allows us to do, no matter what application we roll out, is to almost guarantee that document transfer will be more efficient with the Acceleration solutions in place,” said James Oryszczyn, Director of Security and Network Services at Quarles.

Increasing speed via the Riverbed Acceleration solutions has also helped the firm cut costs, since it does not need to increase bandwidth and buy larger circuits to allow remote users with a below-par Internet connection to share documents. For more details on this case study, click here!

You can also learn more about Riverbed Acceleration solutions on our web site, by reading this blog post, or by requesting a demo.

]]>
Deliver Total User Experience with Aternity Sentiment https://www.riverbed.com/blogs/enable-total-digital-experience-management-with-aternity-sentiment/ Tue, 07 Mar 2023 13:44:00 +0000 /?p=20243 As companies shift towards hybrid IT models, measuring device and application performance metrics alone is not enough to provide a comprehensive understanding of the employee experience. Directly engaging with employees can provide visibility beyond app and device performance data, providing a pathway to improve the digital experience.

Objective data can provide insights into digital experience. However, it doesn’t capture how users actually feel, which is necessary to truly understand how they interact with technology and to surface frustrations that go beyond device and app performance metrics. Service desks can send out email surveys, yet these tend to have poor response rates, as users either ignore or overlook them. Having a blind spot on how employees feel day to day can negatively impact business outcomes, impeding digital transformation initiatives and the adoption of new services, or increasing turnover.

The solution needs to be frictionless and fast for users. To bridge this gap, Riverbed Aternity has released Aternity Sentiment in public beta, a holistic solution for digital experience management that captures both quantitative and qualitative data. By integrating Sentiment with digital experience management (DEM) workflows, organizations can assess the total user experience and discover hidden issues tied to how users feel about the technologies they interact with. IT teams can then analyze this data and prioritize where to make investments to meet XLAs (experience level agreements).

How does it work?

By capturing both objective and qualitative data, Aternity Sentiment gives IT leaders a comprehensive understanding of the digital employee experience by adding the human element to the data they collect. IT teams create customizable surveys to capture accurate feedback from users and address issues. For example, IT can get details on how a certain application may be running on a user’s Windows 10 desktop after an update rolls out, or assess their experience with battery performance on a laptop that other users have reported.

IT leaders can even analyze users’ feelings about a digital transformation initiative by having multiple checkpoints during rollouts of new services, enabling direct, two-way communication of real-time information. They can also engage directly with employees on potential issues related to their systems and provide details to address those issues without the need to log a ticket, saving time for both end users and IT teams.

By also viewing Aternity Digital Experience Index (DXI) data, IT can identify hot spots that require employee engagement by gathering their actual experience where unaddressed issues could result in poor Aternity DXI scores. With built-in filtering capabilities that show performance by business unit, device manufacturer, and more, Aternity’s DXI capabilities show immediate, targeted performance insights and set IT on the right path to diagnose root cause and solve issues that go beyond the device and app metrics.

Aternity Sentiment survey
IT can create customizable surveys to capture accurate feedback from users.

For end users, Aternity Sentiment gives them a voice and empowers them to share insights, offering a channel to provide feedback on new technology rollouts, application and device experiences, and overall company initiatives. When they log into their machines, they get a notification with a survey question the IT team has targeted to them.

Sentiment survey response data
IT teams can analyze Aternity Sentiment survey response data.

From there, IT teams can analyze the results (in out-of-the-box or create-your-own dashboards) or export them to their own tool via our REST API and augment them with the data that Aternity DEM already collects. They can then focus on which areas to improve. For example, when Aternity detects that Excel is performing poorly, do users notice or is it a background process that went undetected? Based on the responses, IT teams can prioritize accordingly. This ensures they’re leveraging all the tools at their disposal to meet experience goals.
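The Excel example above amounts to cross-referencing exported survey responses with performance telemetry. The sketch below shows that join; the field names and thresholds are hypothetical and do not reflect the Aternity REST API’s actual schema:

```python
# Hypothetical survey export and telemetry snapshot; values are illustrative.
responses = [
    {"user": "amy", "question": "excel_slow", "answer": "yes"},
    {"user": "ben", "question": "excel_slow", "answer": "no"},
]
telemetry = {
    "amy": {"excel_load_s": 14.0},
    "ben": {"excel_load_s": 13.5},
}

def confirmed_issues(surveys, metrics, threshold_s=10.0):
    """Keep users where a slow measured metric AND a complaint line up."""
    return [s["user"] for s in surveys
            if s["answer"] == "yes"
            and metrics[s["user"]]["excel_load_s"] > threshold_s]

print(confirmed_issues(responses, telemetry))
# ['amy']
```

Both users show slow load times, but only one notices; that intersection is where remediation effort pays off first.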

Aternity Sentiment is a game-changer in digital experience analytics. By capturing qualitative feedback along with objective performance data, Aternity Sentiment provides a complete understanding of the digital employee experience, enabling organizations to drive increased customer satisfaction and employee productivity. To learn more, check out the Aternity Sentiment Beta user guide.

]]>
Rubber Bands, Bad Apples and Automated Orchestration https://www.riverbed.com/blogs/rubber-bands-bad-apples-and-automated-orchestration/ Fri, 03 Mar 2023 13:24:13 +0000 /?p=20233 As modern networks stretch well beyond the data center, vulnerabilities are being exploited more and more by threat actors. Much like a rubber band, as you stretch it out and pull it tighter, there is always the risk of it breaking.

When networks were confined to just the data center, they were easier to monitor. But now that networks stretch significantly outside the data center—all the way to the campus, remote office or cloud—the threats to the network become more prevalent because your rubber band is stretched to its limits. As a result, keeping your distributed network compliant and secure is a challenge.

Beware bad apples

Bad actors are always trying to exploit those vulnerabilities in the far stretched network. It only takes the baddies one time to find their way into a network by running a corrupt file, non-compliant application or old operating system. If an application or OS is out of compliance, no longer supported, or is riddled with security issues, this can translate to serious loss of productivity, customer sentiment and revenue. In addition, a company can end up paying millions in fines and can have additional financial impact just in the recovery process alone.

Many highly regulated industries like finance, government or medical are trying to police themselves when it comes to such issues of compliance and security. It’s better to monitor themselves and hold their businesses up to a higher standard than to have the government pass major regulations that cause sweeping change, often at great expense to the business. Government regulation is often a last resort, so it’s better to look after yourselves. If the government gets involved, it’s often due to a catastrophic event propagated by a single vendor affecting a client and millions of their customers. And those regulations ultimately could impact every network vendor in that particular industry.

One bad apple can financially ruin the apple cart for all parties involved.

Secure networks with Riverbed NPM

Riverbed’s Network Performance Management (NPM) portfolio recently implemented a feature enhancement across its products as a result of regulated institutions requiring this change of all their vendors. In response to the prevalence of Ransomware attacks happening across various sectors, industry leaders mandated their vendors who operated products within their networks to implement what Riverbed calls Automated Orchestration.

This recent feature, integrated across Riverbed AppResponse, NetProfiler, NetIM and Portal, allows any of these products to be stood up and, in the event of an internal or external threat, taken down and redeployed automatically to a known safe state. This in turn saves time and money and mitigates the risks associated with manual intervention. Automated Orchestration across your NPM portfolio will ensure compliance as well as security, so that your network keeps running and you avoid the risk of potential fines or negative financial impact.

For more information on the Riverbed NPM portfolio of products, please visit this page.

]]>
Rubber Bands, Bad Apples and Automated Orchestration https://www.riverbed.com/blogs/rubber-bands-bad-apples-and-how-automated-orchestration-solves/ Thu, 02 Mar 2023 03:59:07 +0000 https://www.riverbed.com/?p=76125 As modern networks stretch well beyond the data center, vulnerabilities are being exploited more and more by threat actors. Much like a rubber band, as you stretch it out and pull it tighter, there is always the risk of it breaking.

When networks were confined to just the data center, they were easier to monitor. But now that networks stretch significantly outside the data center—all the way to the campus, remote office or cloud—the threats to the network become more prevalent because the customer’s rubber band is stretched to its limits. As a result, keeping their distributed network compliant and secure is a challenge.

Beware bad apples

Bad actors are always trying to exploit those vulnerabilities in the far stretched network. It only takes the baddies one time to find their way into a network by running a corrupt file, non-compliant application or old operating system. If an application or OS is out of compliance, no longer supported, or is riddled with security issues, this can translate to serious loss of productivity, customer sentiment and revenue. In addition, a company can end up paying millions in fines and can have additional financial impact just in the recovery process alone.

Many highly regulated industries like finance, government or medical are trying to police themselves when it comes to such issues of compliance and security. It’s better to monitor themselves and hold their businesses up to a higher standard than to have the government pass major regulations that cause sweeping change, often at great expense to the business. Government regulation is often a last resort, so it’s better for your customers to look after themselves—with your help. If the government gets involved, it’s often due to a catastrophic event propagated by a single vendor affecting a client and millions of their customers. And those regulations ultimately could impact every network vendor in that particular industry.

One bad apple can financially ruin the apple cart for all parties involved.

Secure networks with Riverbed Network Observability

Riverbed’s Network Observability portfolio recently implemented a feature enhancement across its products as a result of regulated institutions requiring this change of all their vendors. In response to the prevalence of Ransomware attacks happening across various sectors, industry leaders mandated their vendors who operated products within their networks to implement what Riverbed calls Automated Orchestration.

This recent feature, integrated across Riverbed AppResponse, NetProfiler, NetIM and Portal, allows any of these products to be stood up and, in the event of an internal or external threat, taken down and redeployed automatically to a known safe state. This in turn saves time and money and mitigates the risks associated with manual intervention. Automated Orchestration across your client’s Network Observability portfolio will ensure compliance as well as security, so that their network keeps running and they avoid the risk of potential fines or negative financial impact.

For more information on the Riverbed Network Observability portfolio of products, please visit the Riverbed Network Observability solution section of the Partner Portal.

]]>
Riverbed IQ Leverages Third-Party Data https://www.riverbed.com/blogs/alluvio-iq-unified-observability-leverages-third-party-data/ Mon, 27 Feb 2023 13:44:00 +0000 /?p=20054 According to Enterprise Management Associates, 64% of organizations use 4-10 monitoring tools, while another 17% use 11 or more. This tools sprawl exacerbates the challenge of correlating disparate data sources to determine the root cause of complex incidents. Problems such as alert fatigue, death by dashboards, and a lack of technical expertise often coincide with tools sprawl.

However, many of these monitoring tools are necessary to provide different perspectives of network, application, and end user performance. Yet some tools can be so entrenched that any change or attempt to consolidate is a significant endeavor. Moving away from these ingrained tools often means incurring significant cost and time.

Riverbed IQ Unified Observability

Riverbed IQ, Riverbed’s SaaS-delivered Unified Observability service, empowers IT to identify and fix problems fast. It leverages Riverbed’s full-fidelity end user experience, network and application data, then applies machine learning (ML) to contextually correlate the disparate data streams and identify business-impacting events. This intelligence informs IQ’s automated runbooks that gather supporting context, filter out noise, and set priorities. As a result, Riverbed IQ reduces alert overload and accelerates root cause analysis of the most impactful alerts.

Riverbed IQ now includes third-party data

The Riverbed team recognizes that when a company uses an abundance of monitoring tools, it wants to integrate all of that data in Riverbed IQ to truly simplify the troubleshooting process. So, we added the capability to import data from other solutions (think third-party monitoring tools or business intelligence). Plus, IQ can export intelligent insights to third-party solutions like Slack, ServiceNow, custom scripts, Ansible runbooks, etc.

Third-party data is added through the automated runbooks, where it can then be used as if it were native Riverbed data: use it for decision making (if X happens, get relevant data from solution Y) and for visualizations. Because IT can now use all the data in its environment, IQ better tailors the automated investigations to the organization’s troubleshooting process.

Riverbed IQ runbook
Riverbed IQ lets you import or export data from/to third-party solutions. This runbook sends an alert with supporting data to ServiceNow when the impact of the event is deemed critical.

How it works

The integration process is simple yet flexible enough to support non-Riverbed solutions in just a few easy steps. First, you authenticate with the third-party solution. Then you build an “HTTP Request,” which enables IQ to leverage data from any solution with a public REST API. Finally, “Transform” translates the third-party data into terms Riverbed IQ understands.
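As a rough sketch of that request-then-transform pattern (the endpoint URL, token, and field names are hypothetical; the real steps are configured in the IQ UI, not hand-coded):

```python
import json
import urllib.request

API_URL = "https://thirdparty.example.com/api/v1/alerts"  # hypothetical endpoint
API_TOKEN = "REPLACE_ME"                                  # from the authentication step

def http_request(url: str, token: str) -> list:
    """Step 2: 'HTTP Request' -- pull raw records from any public REST API."""
    req = urllib.request.Request(url, headers={"Authorization": f"Bearer {token}"})
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

def transform(record: dict) -> dict:
    """Step 3: 'Transform' -- map third-party fields into terms IQ understands."""
    return {
        "application": record.get("app_name"),
        "location": record.get("site"),
        "severity": record.get("priority", "unknown"),
    }

# The transformed record can then feed runbooks as if it were native data:
print(transform({"app_name": "Salesforce", "site": "London", "priority": "critical"}))
```

The separation matters: the HTTP step stays generic (any REST API), while the transform step is where each tool's vocabulary gets mapped into a common shape.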

Watch this video to see how to build and use third-party integrations:

Leverage any data

With this announcement, Riverbed IQ can leverage any data in the IT or business environment that could inform troubleshooting. In fact, one Riverbed IQ customer in the petroleum industry pulls in oil viscosity data when troubleshooting certain issues.

For more information about Riverbed IQ or how to leverage third-party data within runbooks, visit the Riverbed IQ web page.

]]>
Illuminating Virtual Environment Blind Spots with Riverbed https://www.riverbed.com/blogs/digital-experience-illuminates-virtual-environment-blind-spots/ Thu, 23 Feb 2023 22:10:00 +0000 /?p=20094

To protect critical customer, business, and employee data, more and more businesses are opting to migrate workspaces to virtual environments. Migrating to a virtual environment has many upsides, but it can also bring unforeseen challenges that impact performance and undermine the initiative altogether.

The biggest challenge in a virtual environment is consistency in system performance. Resource allocation and oversubscription, network latency to the user’s endpoint, and application-related backend processing time all culminate in a poor end-user experience, resulting in a hit to NPS scores and workforce productivity.

virtual environment

Four primary blind spots in virtual environments

These are the primary blind spots associated with a dip in the performance of applications or services delivered in a virtual environment. Teams need to proactively identify and fix these blind spots to ensure seamless performance:

Latency

Remote display latency is the delay in communication between endpoints in both directions: users to the virtual server or desktop and back. We identify problems in the network path by trending remote display latency protocols, analyzing latency by the end users’ locations and device metrics, and correlating application performance to the virtual system that users are accessing. This easily identifies the network segments, branches, or users that need attention.

virtual environment latency

Application Performance

A “Business Activity” is defined as a discrete action that users perform in an application, from when they click a button to when the response loads. Aternity has the unique ability to validate the actual user experience for all applications by monitoring these user activity response times, from the user’s perspective, at the very point of consumption.

In addition, Aternity shows the breakdown of that application response time, be it device processing, network delay, or backend-related delay. It can also detect crashes, errors, and the resource consumption of each application, providing a 360-degree view of both application health and user experience.

application performance, user experience

Host Utilization

Host Resource Consumption (HRC) is a critical metric for identifying poorly performing devices. Aternity finds devices with the highest CPU or memory usage, then identifies the users connected and the applications they are running in the virtual environment, with their respective demand for resources. This provides a way to right-size the virtualization environment to accommodate demand and, where needed, remove superfluous applications.
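The ranking itself is conceptually simple. Here is a sketch of the idea with invented sample data (not Aternity’s actual data model): find the hosts under the most CPU or memory pressure first, since those are the right-sizing candidates.

```python
# Rank virtual hosts by resource consumption to find right-sizing candidates.
# The sample records below are invented for illustration.
hosts = [
    {"host": "vdi-03", "cpu_pct": 91, "mem_pct": 88, "users": 24},
    {"host": "vdi-01", "cpu_pct": 35, "mem_pct": 42, "users": 9},
    {"host": "vdi-02", "cpu_pct": 78, "mem_pct": 95, "users": 18},
]

# Sort by the worse of CPU or memory pressure, highest first.
worst_first = sorted(hosts, key=lambda h: max(h["cpu_pct"], h["mem_pct"]), reverse=True)
for h in worst_first:
    print(h["host"], max(h["cpu_pct"], h["mem_pct"]))
```

Using the worse of the two percentages (rather than an average) matters: a host that is memory-bound but CPU-idle is still a problem host.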

Host Utilization

Device Health

It is essential to track hardware and system errors on both the front-end and virtual devices, as well as the components impacted. Aternity analyzes performance metrics to identify patterns with specific health events and finds correlations with attributes like device model, RAM, location, disk type, etc. This easily surfaces those needles in a stack of needles, identifying the patterns that quickly feed remediation.

Device Health

Test drive Riverbed Aternity

Riverbed Aternity is installed via a lightweight agent at the point of service consumption for virtual environments, and it automatically monitors any type of device, be it a physical laptop, desktop, or virtual machine. Once the agent is installed, it begins monitoring any type of application, be it a thick client, web app, published virtual application, or SaaS application. Through ML and AI models, the Aternity agent assesses what is normal and begins surfacing areas that are impacting user performance. It does this over any application and any device, no matter where users are located.

Overall status and performance reporting of critical devices helps identify the reason for latency, such as:

  • Poor Wi-Fi connection
  • Poor device hardware health
  • Poor system health
  • Resource utilization of laptops
  • Other apps running at the same time as virtual desktop connection

All of these metrics can be provided through Riverbed Aternity in any type of virtual environment, including Citrix XenApp, XenDesktop, Microsoft WVD, HVD, and VMware VDI.

Currently, Riverbed Aternity is monitoring more than 4 million endpoints across our clients globally. Our proven technology supports organizations in transitioning to a virtualized environment and keeps things running optimally post-migration, mitigating risk and ensuring workforce productivity. Try Riverbed Aternity for free today.

]]>
Riverbed IQ Solves Zero Trust Blind Spots https://www.riverbed.com/blogs/alluvio-iq-unified-observability-solves-zero-trust-blind-spots/ Wed, 22 Feb 2023 13:24:00 +0000 /?p=20038 As companies have shifted their employee workspace environments from the office to a “work from anywhere” model, the security perimeter has extended to cover remote users, data centers, SaaS applications, IaaS applications and more. In modern distributed environments, it’s common to have at least three different WAN routing options for traffic: direct to internet, corporate VPN, and Cloud Access Security Brokers (CASB). There are often routing rules in place where business applications use one route, such as the CASB,  while other applications go direct to internet. The route used can have a significant impact on application performance and user experience.

With adoption growing quickly, Zero Trust Network Access (ZTNA) architectures like SASE and Security Service Edge (SSE) enable users to securely access their applications, devices, data, etc. wherever they are located. For companies, this means a threat can be easily contained and isolated in the event of a breach. The problem is that the tunnels that secure the data also reduce visibility and add monitoring and troubleshooting complexity.

ITOps teams can no longer look into traffic directly within these environments. As the traffic enters a Zscaler or Netskope tunnel, for example, it gets combined and homogenized. As a result, IT loses the detail it needs to identify where slowdowns are occurring and what is causing them.

Riverbed IQ observes Zero Trust environments

Riverbed IQ Unified Observability leverages end user experience metrics plus advanced logic and correlation to deliver much needed visibility into Zero Trust environments. By viewing the application traffic before it enters the VPN or CASB gateways, IT can now monitor and troubleshoot access and performance issues.

When problems occur, Riverbed IQ surfaces performance indicators, including valuable context about the scope, severity, and probable root cause. Key questions that Riverbed IQ answers include:

  • Which applications are having performance issues?
  • Which users are impacted? Are there users who are not impacted?
  • Which locations are impacted?
  • How severe is the impact?
  • How are the impacted users accessing the application? (CASB, VPN, etc.)
  • Is the issue caused by a specific ISP?
  • Is the VPN or gateway causing the problem?

Watch this video to learn more about how Riverbed IQ works in Zero Trust environments.

Riverbed IQ for the win

Providing IT teams with the means to troubleshoot problems within today’s modern architecture is not always easy. Many organizations resort to synthetic testing, but this only lets you know there’s a problem. It does not provide root cause details.

Riverbed IQ delivers the unified observability that IT teams need to diagnose and resolve new blind spots by integrating and correlating user experience data from Riverbed Aternity. IT teams can now determine where problems reside, who is impacted, and problem severity. Armed with this information, they can now troubleshoot previously difficult-to-diagnose issues in hybrid work and Zero Trust environments.

]]>
ChatGPT and Digital Experience Management – How’s the Weather Where You Are? https://www.riverbed.com/blogs/chatgpt-and-digital-experience-management/ Wed, 15 Feb 2023 13:15:00 +0000 /?p=20010 Like many, I enjoyed seeing the responses that people have received by asking various questions of OpenAI’s ChatGPT. In the last week, I’ve read poetry, essays, and even definitions of market segments like Digital Experience Management. Experimenting with ChatGPT seems like a lot of fun, but large language model (LLM) driven chatbots like ChatGPT can provide serious business benefits too. As chatbots become capable of more human-like interactions, the potential for leveraging them for customer interactions increases. Forbes and VentureBeat recently wrote about the opportunity for ChatGPT to improve customer experience and customer service. First-generation chatbots rely on pre-determined scripts to respond to expected customer queries. But LLM-based chatbots like ChatGPT can answer questions by understanding the intent of the person asking them. Juniper Research projects that AI-powered chatbots will handle up to 70% of customer conversations by the end of this year. Even so, companies will still need to combine ChatGPT and Digital Experience Management.

Here’s why.

How’s the weather where you are?

We’ve all experienced it. You’re on the phone with a contact center agent who works for your favorite retailer, insurance company, or bank. You provide your account number and verify your identity. There’s an awkward pause. Then the inevitable question. It could be about the local weather: “How did you do in the big freeze (or the rain storm caused by the latest atmospheric river) last weekend?” It could be about how your local sports team did in their last game: “What did you think of the Superbowl?” Or how you feel about this year’s Grammys: “Isn’t it amazing that Beyoncé passed Georg Solti and won her 32nd award? But don’t you think she must be disappointed after not winning album of the year for the fourth time?”

We all know why the contact center agent engages us in small talk. They’re waiting for their applications to respond so they can continue helping us with the transaction or query we’re trying to make. It’s easy to be frustrated with them, but it’s not their fault. Poorly performing applications—whether CRM, EHR, or inventory—have a negative impact on both customer experience and employee experience. I wonder whether AI-powered chatbots like ChatGPT will be programmed to ask us these types of innocuous questions to fill the time taken by slow-performing business applications. Even if they can, IT organizations will still need to augment ChatGPT with Digital Experience Management.

Employee experience and customer experience are tightly connected

The interaction between customers and employees in the contact center scenario shows the tight connection between employee experience and customer experience. Both types of digital experience depend on the same technology, the applications and infrastructure that connect the user journeys of employees and customers. A poor experience in one area causes a poor experience in the other, especially for industries that rely on omni-channel interactions with customers, patients, or citizens. Digital Experience Management is required to deliver excellent customer experience and employee experience, even with the advent of ChatGPT.

ChatGPT and Digital Experience Management for customer experience

Here’s the first example. LLM-powered chatbots like ChatGPT will certainly be of value in guiding customers along their intended journeys, and ChatGPT will be able to answer many of the expected (and even unexpected) questions that customers have when they interact with a company. However, it won’t be able to answer all the questions that businesses have around ensuring an excellent customer experience. Organizations will continue to rely on Digital Experience Management to answer questions like:

  • What are the most prevalent user journey paths taken across my website?
  • What steps in the customer journey are taking too long?
  • What is the business impact of slow-loading web pages, in terms of revenue or abandonment rate?
  • What is the projected business benefit if we invest to improve page performance by 2 seconds?

Riverbed’s Digital Experience Management platform, Riverbed Aternity, addresses these questions and more through Aternity User Journey Intelligence. Watch this short video to see it in action:

ChatGPT and Digital Experience Management for employee experience

While ChatGPT will prove to be a useful tool for contact center staff to leverage in their interactions with customers, it won’t help eliminate the cause of poor application performance. Small talk about the weather, sports, or culture will still be required. I suppose ChatGPT can certainly help there too! But IT teams will still require Digital Experience Management products like Riverbed Aternity to proactively identify and resolve issues associated with the full range of business-critical applications on which employees rely. With Aternity, companies can establish Experience Level Agreements based on targets for response time for the most common business processes employees do. Things like looking up a customer record, processing a claim, or checking inventory.

Here’s another short video:

Get started today

As you think about incorporating ChatGPT into your customer service strategy, also think about how Digital Experience Management can help you achieve your goals. Unlike other vendors’ products, Riverbed Aternity delivers full-spectrum digital experience management, contextualizing data across every enterprise endpoint, app, and transaction to inform remediation, drive down costs, and improve productivity. To see how Aternity can help you on your journey to improve digital experience, register for a Riverbed demo today.

]]>
Solving Remote Work Visibility Challenges for NetOps https://www.riverbed.com/blogs/solving-remote-work-visibility-challenges-for-netops/ Mon, 13 Feb 2023 13:44:03 +0000 /?p=18791 When employees work from an office, the network team is responsible for application access and delivery. That means identifying issues where employees cannot access applications, or where application performance is degraded due to network performance.

In a remote work or work-from-anywhere environment, the responsibility of identifying and troubleshooting access and performance issues still falls on the network team. When it comes to remote workers, Level 1-2 techs need to be able to identify network access and performance issues for end users accessing business applications.

They need to be able to:

  • Understand the scope and severity of the issue so that they can prioritize appropriately and understand if they need to escalate to level 3.
  • Understand the impact on end users so that they can document and communicate the incident to the affected end users.
  • Understand the cause of the issue so they can know which resources to call (ISP, CASB supplier, application owner, Security team, device issue, etc.) and understand when the issue might be resolved.

However, the problem space has changed. There are several environmental challenges that limit NetOps visibility into application performance.

Remote work visibility challenges for NetOps teams

Split Tunnels

In modern remote work environments, it’s common to have three different routing options for traffic: direct to internet (no tunnel), corporate VPN, and a Cloud Access Security Broker (CASB). There are often routing rules in place where specific applications use one route (such as the CASB) and other applications go direct to internet. The routing or tunnel being used can have a significant impact on application performance and end user experience.

CASBs

CASBs are widely adopted and, while optimizing for security, create a performance bottleneck. CASBs are often implemented by the security team. They make it more difficult for the network team to troubleshoot, as the tunnels add complexity and reduce visibility by encrypting traffic. In a few ad hoc tests, CASB bandwidth was as low as 3Mbps, with added security-scanning time causing further slowdown.

Multiple Gateways

There are typically multiple gateways in use for each type of tunnel. For example, users in the northeastern United States may have CASB traffic tunneled to gateway X, while users in the central US connect to gateway Y. If only one gateway is causing problems, that is difficult to determine. This gateway issue also applies to corporate VPNs.

SaaS vs Corporate Applications

The percentage of companies using SaaS to meet their software needs is steadily increasing, with 80% of companies relying on SaaS apps in 2022. The remaining corporate applications are usually hosted in a data center, and remote user traffic traverses a physical network, which can cause additional slowdown. This is still the responsibility of the network team to diagnose.

ISP Variables

Remote workers typically use their own ISP. This variability is an additional challenge when trying to identify root cause.

Home Network Variables

Remote workers are also responsible for their home network. Variables such as poor Wi-Fi or congestion on the home network are an additional challenge when trying to identify root cause.

Many Locations

Finally, in remote work environments, location is less specific than with on-premises users. There may be users in a general geographic area that are having issues due to an ISP or gateway, but it is not as easy to use a specific site or location to identify problems.

Riverbed IQ provides NetOps with rich visibility into remote work issues.

Riverbed IQ brings visibility to remote work

By adding Aternity end user experience metrics to Riverbed IQ, Riverbed’s SaaS-based unified observability solution, NetOps teams gain visibility into traffic that leaves the home computer and goes to a data center or SaaS application.

IT teams can now answer questions such as:

  • Which applications are having network performance issues?
  • How many users are impacted? And how severe is the impact?
  • How are the impacted users accessing the application? (CASB, VPN, Direct to internet)
  • Which locations are affected?
  • What’s causing the problem? Is the CASB / VPN causing the problem? Or a specific gateway? Is it an ISP problem, a VPN problem, or a problem with the user’s device itself?

Visit this page to learn more about how Riverbed IQ helps organizations shift left.

]]>
Solving Hybrid Work Challenges for NetOps https://www.riverbed.com/blogs/solving-hybrid-work-challenges-for-netops/ Fri, 10 Feb 2023 13:33:00 +0000 /?p=18829 According to Gartner, hybrid work is here to stay, with 75% of hybrid or remote knowledge workers saying their expectations for working flexibly have increased. If an organization were to go back to a fully on-site arrangement, it would risk losing up to 39% of its workforce. However, hybrid work architectures often leverage tunneling technologies to establish “work from anywhere” environments, and these tunnels create blind spots that complicate troubleshooting and problem resolution.

When employees work from an office, the network team is responsible for application access and network transport issues, and has access to a mature toolset to help identify and resolve issues. As work from anywhere proliferates, the responsibility for identifying and troubleshooting remote issues in these new direct-to-cloud environments still falls within the network teams’ domain. Yet, because of the new blind spots, they lack the visibility to be effective.

When it comes to hybrid work, Level 1-2 techs need to be able to identify network access and performance issues for end users accessing business applications. They need to be able to understand:

  • The scope and severity of the issue so that they can prioritize appropriately and understand if they need to escalate to level 3.
  • The impact on end users so that they can document and communicate the incident to the affected end users.
  • The cause of the issue so they can know which resources to call (ISP, CASB supplier, application owner, security team, device issue, etc.) and understand when the issue might be resolved.

However, the problem space has changed. There are several environmental challenges that limit NetOps visibility into application performance.

Hybrid work visibility challenges for NetOps teams

Split Tunnels

Hybrid work is the new norm but there are significant barriers to effective troubleshooting.

In modern hybrid work environments, it’s common to have three different routing options for traffic: direct to internet, VPN, or through a security broker such as a CASB or ZTNA. There are often routing rules established where specific applications use one route (such as the CASB) and other applications go direct to the internet. The routing or tunnel being used can have a significant impact on application performance and end user experience.

CASB

CASBs are widely adopted and, while optimizing for security, create a performance bottleneck. CASBs are often implemented by the security team. They make it more difficult for the network team to troubleshoot, as the tunnels add complexity and reduce visibility by encrypting traffic. In a few ad hoc tests, CASB bandwidth was as low as 3Mbps, with added security-scanning time causing further slowdown.

Multiple gateways

There are typically multiple gateways being used by each type of tunnel. For example, users in the northeast United States may have CASB traffic tunneled to gateway X, while users in central United States are connecting to gateway Y. If only one gateway is causing problems, it is difficult to determine that. This gateway issue is also applicable to corporate VPNs.

SaaS vs corporate applications

The percentage of companies using SaaS to meet their software needs is steadily increasing, with 80% of companies relying on SaaS apps in 2022. The remaining corporate applications are usually hosted in a data center. Remote user traffic traverses a physical network which can cause additional slowdown. This is still the responsibility of the network team to diagnose.

ISP variables

Remote workers typically use their own ISP. This variability is an additional challenge when trying to identify root cause.

Home network variables

Remote workers are typically responsible for their home network. Variables such as poor Wi-Fi or congestion on the home network are an additional challenge when trying to identify root cause.

Many locations

Finally, in hybrid work environments, location is less specific than with on-premises users. There may be users in a general geographic area that are having issues due to an ISP or gateway, but it is not as easy to use a specific site or location to identify problems.

Riverbed IQ brings visibility to hybrid work

By adding Riverbed Aternity end user experience metrics to Riverbed IQ, Riverbed’s SaaS-based unified observability solution, NetOps teams can gain visibility into traffic that leaves the home computer and goes to a data center or SaaS application.

IT teams can now answer questions like:

  • Which applications are having network performance issues?
  • How many users are impacted, and how severe is the impact?
  • How are the impacted users accessing the application? (VPN, Direct to internet)
  • Which locations are affected?

To learn more about how Riverbed IQ helps organizations shift left, visit this page.

]]>
Riverbed IQ Accelerates Troubleshooting with New Integrations https://www.riverbed.com/blogs/alluvio-iq-accelerates-troubleshooting-with-new-integrations/ Tue, 07 Feb 2023 13:05:00 +0000 /?p=19650 According to Enterprise Strategy Group’s 2023 Technology Spending Intentions Survey, more than half (53%) of respondents said their organization’s IT environment is more or significantly more complex than it was two years ago. The most common reason for this added complexity is the increase in remote and hybrid work driven by the COVID-19 pandemic. While some organizations have returned to pre-pandemic levels of in-office work, supporting remote and hybrid work strategies continues to be an issue for many IT teams.

Often related to the remote work issue is SSE (Security Service Edge), which security teams use to secure cloud computing, edge computing and remote work. SSE is the security portion of SASE (Secure Access Service Edge). Its capabilities include access control, threat protection, data security, security monitoring, and acceptable use control.

Gartner predicts that by 2025, 80% of enterprises will have adopted a strategy to unify web, cloud services and private application access from a single vendor’s SSE platform.

The problem with both remote work and security service edge is that monitoring network performance is extremely difficult by traditional means. Riverbed customers have voiced that the security team uses SSE to build tunnels upon tunnels, making it next to impossible to figure out where slowdowns are occurring. This is why we announced the next release of Riverbed IQ unified observability service.

Riverbed IQ supports Riverbed Aternity

Customers need to know when connectivity or transport issues are affecting performance, regardless of where the user is located or which type of application they are using (data center, cloud, or SaaS app). With the integration of Riverbed Aternity, Riverbed IQ unified observability surfaces impactful incidents from physical networks, remote work environments, cloud apps, and SSE-protected apps.

This release adds the first of the Aternity metrics, specifically Application-Location and Application-Activity data. Application-level metrics are native and derived high-level metrics associated with application performance from the user perspective. Activity-level metrics provide detailed metrics associated with the “activities” that comprise an application, such as downloading a Salesforce report. Activity-level metrics offer excellent visibility into issues affecting a specific activity or part of an application.

Riverbed IQ leverages these metrics to identify performance problems where hybrid work or SSE tunneling is involved. It includes details regarding where the problems are occurring, the user impact, and problem severity, including:

  • Which applications are having network performance issues?
  • How severe is the impact?
  • How many users are impacted?
  • Which locations are impacted?
  • How are the impacted users accessing the application? CASB, VPN, Direct to internet?
  • Is the issue caused by a specific ISP or VPN?

Read the solution brief on Monitoring Remote Work and SSE Environments.

Third-party integrations

Importing and exporting third-party data is the number one question asked about Riverbed IQ. Riverbed IQ now allows customers to use data from any third-party solution with a public REST API. It can pull third-party data into its runbooks for use in decision branches, or add visualizations to impact summaries, tailoring the automated investigations to an organization’s specific troubleshooting processes.

Alternatively, Riverbed IQ runbooks can push incident data to third-party solutions. For example, Riverbed IQ can send context-rich, actionable alerts to solutions like Slack or ServiceNow for consolidated alerting.
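The push direction follows the same shape as any webhook integration. Here is a hedged sketch of the idea in Python; the payload fields are illustrative and do not reflect ServiceNow’s actual schema or IQ’s internal format.

```python
import json
import urllib.request

def build_incident_payload(event: dict) -> dict:
    """Shape an IQ-style incident into a generic ticket body.
    Field names here are illustrative, not ServiceNow's actual schema."""
    return {
        "short_description": f"{event['application']} degraded at {event['location']}",
        "urgency": "1" if event.get("severity") == "critical" else "3",
        "description": json.dumps(event),
    }

def post_json(url: str, payload: dict) -> None:
    """POST the payload to a webhook/REST endpoint (e.g. a ticketing API)."""
    req = urllib.request.Request(
        url,
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    urllib.request.urlopen(req)  # fire-and-forget for this sketch

payload = build_incident_payload(
    {"application": "SAP", "location": "Chicago", "severity": "critical"}
)
print(payload["short_description"])
```

The value of sending a structured payload rather than a bare alert string is exactly the “context-rich, actionable” part: the downstream tool receives scope, severity, and supporting data in machine-readable form.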

Integrating ServiceNow data into Riverbed IQ
The new third-party integrations feature enables users to import or export actionable insights to solutions like ServiceNow.

Check out the solution brief on how Riverbed IQ’s third-party integrations work.

Other important new features

Riverbed IQ has achieved SOC2 Type II and ISO 27001 security certifications. These certifications give organizations confidence that Riverbed IQ has the policies, procedures, and technology to keep their data secure and private. Lastly, Riverbed IQ is now also hosted in Frankfurt, Germany to support our European customers.

More info

Powered by full-fidelity telemetry and leveraging a combination of AI/ML and workflow automation, Riverbed IQ unified observability service detects business-impacting events while facilitating fast, efficient problem diagnosis. Visit this page for more information on Riverbed IQ or, to start a free trial, click here.

]]>
Ensure Peak SD-WAN Performance with Riverbed Acceleration https://www.riverbed.com/blogs/ensure-peak-sd-wan-performance-with-riverbed-acceleration/ Fri, 03 Feb 2023 13:40:35 +0000 /?p=18036 The ESG Technical Validation report is out now and confirms our data on the benefits of deploying Riverbed Application Acceleration with SD-WAN performance. A link to the detailed report is available below, but first let’s cover the basics of SD-WAN and why its performance needs optimization.

SD-WAN’s impact

SD-WAN has become a ubiquitous technology. Today, SD-WAN has largely replaced traditional wide area network infrastructure. It improves WAN fault tolerance, makes cloud connectivity easier, and addresses the difficulty of managing geographically spread-out network devices. However, though SD-WAN excels in these areas, it falls short in alleviating performance issues, and this is where application acceleration shines.

The benefits and shortfalls of SD-WAN

SD-WAN is very effective at monitoring link performance and determining the best paths for a specific application. This capability was a great step forward in wide area networking a few years ago, and today it goes without saying that a branch would have two or possibly three WAN connections for fault tolerance and path diversity. Typically, these would be lower cost connections such as broadband or locally available fiber.

An SD-WAN controller uses various methods to monitor link performance and make automatic decisions, based on policies and thresholds, about which path a specific application should use. For example, if an application requires no more than 150ms of jitter, and a particular link reports more than 150ms of jitter, the SD-WAN can dynamically swing traffic to another path. However, paths selected by SD-WAN can adversely affect latency and therefore application performance.
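That policy-driven path selection can be sketched as follows. The link names, statistics, and the tie-break rule (lowest latency among policy-compliant links) are invented for illustration and are not any particular controller’s logic.

```python
def pick_path(links, max_jitter_ms):
    """Return the lowest-latency link that meets the app's jitter policy,
    or None if no link qualifies."""
    eligible = [l for l in links if l["jitter_ms"] <= max_jitter_ms]
    return min(eligible, key=lambda l: l["latency_ms"]) if eligible else None

# Hypothetical link measurements reported to the controller:
links = [
    {"name": "broadband", "latency_ms": 40, "jitter_ms": 180},  # violates policy
    {"name": "fiber", "latency_ms": 25, "jitter_ms": 12},
    {"name": "lte", "latency_ms": 70, "jitter_ms": 30},
]
print(pick_path(links, max_jitter_ms=150)["name"])  # fiber
```

Note the caveat the post raises: a link can satisfy the jitter policy yet still carry high latency, which is why path selection alone does not solve application performance.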

Application acceleration addresses the underlying WAN-related TCP behavior that typically adversely affects application performance. Working together, these technologies provide the most robust and performant application delivery possible.

Latency and server turns

Beyond a hard-down or significant packet loss, the biggest threat to an application’s performance, and therefore the end-user experience, is latency. In basic terms, latency is the amount of time it takes for a packet to go from a source to its destination. The time it takes depends on a variety of factors, driven by the path the packets travel. So improving latency, or in other words decreasing the amount of time it takes to go from source to destination, is not something that can be done by adding bandwidth.

Packets traverse a network close to the speed of light, but they often get held up by security devices inspecting that traffic for threats or router queues as a result of traffic shaping. This applies to any endpoint, whether it’s an end-user at home, a server in a private data center, a virtual server instance in public cloud, or even an application delivered by a SaaS provider.

What is worse, for applications that use TCP poorly, even a 1 millisecond increase in one-way latency has been known to debilitate performance. Therefore, alongside the excellent WAN resiliency that SD-WAN provides, we must still solve for latency.
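As a rough illustration of why latency dominates: a "chatty" application that makes many sequential request/response exchanges (server turns) spends at least turns × RTT just waiting on the network, no matter how much bandwidth is available. The turn count below is illustrative:

```python
# Back-of-the-envelope: why latency, not bandwidth, dominates chatty apps.
# The turn count is illustrative, not from a real application trace.

def chatty_app_time_s(server_turns, rtt_ms):
    """Minimum wall-clock time spent purely waiting on round trips."""
    return server_turns * rtt_ms / 1000.0

# An application making 2,000 sequential request/response turns:
for rtt in (1, 50, 160):
    print(f"RTT {rtt:3d} ms -> {chatty_app_time_s(2000, rtt):6.1f} s of pure wait")
```

At 160ms of RTT, 2,000 turns cost over five minutes of waiting before a single byte of payload bandwidth matters, which is exactly the behavior application acceleration targets by reducing server turns.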

Application Acceleration and bandwidth

If you had unlimited bandwidth, would that provide a better experience for your users? The answer, as in most cases, depends on the situation. In demanding scenarios such as heavy, long file transfer sessions, however, application acceleration can make your WAN link behave almost like a LAN. That is why, when it is needed most, our Riverbed customers have been utilizing our acceleration solutions in various form factors: branch to branch, branch to data center, data center to data center, and SaaS to end users.

Better together, quantitatively

With a setup like the one below, one end in AWS utilizing our Flex VNF SD-WAN solution and the other on-prem using our SteelConnect EX appliance, we have realized performance benefits particularly for bulky file transfers.

SD-WAN and Riverbed Acceleration deployed together

In our performance testing of various large file sizes, we have achieved up to six times improvement in file transfer times. The typical benefits we usually achieve with WAN Optimization independent of SD-WAN deployments are also available when SD-WAN and WAN-OPT are service chained together.

Looking closely at the graph below: the leftmost bar, "SteelConnect-EX only," shows the time taken to transfer a large non-compressible binary file over a 100Mbps link with 160ms of RTT, without any optimization. Next, when the test machines' TCP buffers were adjusted to better match the bandwidth-delay product, we see some improvement. In the third result, WAN optimization is enabled using SteelHead, but transfer times have not yet improved much. However, a compressible file of the same size (say, plain text) would immediately see a 2x speedup even on a cold (cache-miss) transfer, as shown in the "Cold 2x compressible File" result. Finally, for "Hot Random File," the rightmost bar on the chart, a second pass of the same non-compressible large file produced close to a 10x speedup, a 92% reduction in transfer time compared to the first pass, regardless of which large file was used.
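The TCP-buffer adjustment mentioned above relates to the bandwidth-delay product (BDP). For the 100Mbps, 160ms test link, a quick calculation shows why un-tuned TCP windows throttle throughput:

```python
# Bandwidth-delay product for the test link described above:
# 100 Mbps with 160 ms RTT. A TCP window smaller than the BDP
# caps achievable throughput regardless of link capacity.

def bdp_bytes(bandwidth_mbps, rtt_ms):
    """Bytes that must be 'in flight' to keep the pipe full."""
    return bandwidth_mbps * 1_000_000 / 8 * (rtt_ms / 1000)

def max_throughput_mbps(window_bytes, rtt_ms):
    """Throughput ceiling imposed by a fixed TCP window."""
    return window_bytes * 8 / (rtt_ms / 1000) / 1_000_000

print(bdp_bytes(100, 160))              # 2000000.0 bytes (2 MB in flight needed)
print(max_throughput_mbps(65535, 160))  # ~3.28 Mbps with a classic 64 KB window
```

With a classic 64 KB window, TCP can use barely 3% of the 100Mbps link at 160ms RTT, which is why buffer tuning (or letting an acceleration appliance handle the WAN-side TCP) makes such a visible difference in the chart.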

SD-WAN and WAN-OPT Together File Transfer Performance

These gains have been confirmed by ESG's testing.

ESG Technical Validation Speedup Observed

As a further detail, it is important to note that the results of testing with increasing network RTT correlate with intuition: without WAN optimization, rising latency relates almost parabolically to longer file transfer times. The bottom two lines are the hot and cold transfer times for the 2GB file across various latency values; the top two lines are without WAN optimization. The difference WAN optimization made in time saved is quite staggering.

Increasing latency relates almost parabolically to longer file transfer times

Conclusion

Application acceleration specifically solves the problem of latency using a variety of methods aimed at reducing round-trip time and eliminating the effects of server turns, thereby improving an application’s performance. It also reduces congestion on a link by eliminating unnecessary traffic, caching certain information locally so it doesn’t need to traverse the network multiple times or at all, and optimizing TCP Windows and buffers so end users don’t have to perform advanced TCP tuning. These advantages combined with SD-WAN resiliency provide the best of both worlds for our customers.

Check out the full Technical Validation Report by ESG here.

]]>
Don’t Wait for Zero Day – Proactively Detect Threats with Riverbed https://www.riverbed.com/blogs/dont-wait-for-zero-day-proactively-detect-threats-with-alluvio/ Wed, 25 Jan 2023 23:00:00 +0000 /?p=19778

Your personal information being leaked or sold online is something that strikes fear into the hearts of most people. Identity theft takes this one step further: it can destroy your credit rating and land you on blacklists for services such as utilities, rental housing or mobile phone plans.

In September 2022, Optus announced that an unknown actor had compromised their systems and accessed current and former customers' personal information (passport, driver's license and Medicare numbers). The actor then posted about 10,000 of the 2.1 million exposed records as proof, in a bid to sell the remainder.

While the impact of this leak cannot be overstated and is devastating for the people involved, there is some small comfort in that various government agencies and Optus are offering assistance to replace exposed identity documents.

The reputational and financial damage to Optus (or any organization that has their customer data compromised) is massive. Some customers will want to discontinue services, and potential customers may reconsider their options. Even if an organization increases their security posture, the memory of this incident will last for decades to come.

Attacks steal the headlines, but threats lie in wait

What we know about the Optus cyberattack is that it wasn’t a sophisticated one, and they could have avoided it by securing all their ports and APIs. This is a very common slip-up—which occurs most often due to rushed development or integration—and one that shouldn’t happen, but when it does, it can become a major issue.

Alternatively, when an actor decides to attack a well-secured target, they become an APT (Advanced Persistent Threat). APTs do not make much noise, as their goal is to stay under the radar so they can learn as much about the target as possible. The reconnaissance period can last as long as a year; they take their time to learn the environment and find things such as:

  • Where is the sensitive information saved?
  • Where is the data backed up (in the case of a Cryptolocker ransomware attack)?
  • What cyber defenses are in play?
  • What are the skills of the DFIR (Digital Forensics and Incident Response) team?
  • What does a regular usage pattern look like?

With the average APT able to remain in an environment for over 200 days without being discovered, APTs can hide in plain sight, using normal protocols and authentication standards to avoid detection by signature-based and machine-learning defenses. This is where proactive threat hunting becomes a crucial part of your arsenal. Threat hunting is the process of examining traffic patterns, log files and other telemetry to identify unusual activities that could be IOCs (Indicators of Compromise).
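At its core, hunting for known IOCs in historical telemetry is a matching problem over stored flow records. The sketch below is purely illustrative: the record fields and addresses (drawn from documentation IP ranges) are hypothetical, and a real deployment would query NetProfiler's stored flows rather than an in-memory list:

```python
# Hedged sketch: matching historical flow records against known IOCs.
# Field names and IP addresses are hypothetical (documentation ranges).

KNOWN_BAD_IPS = {"203.0.113.66", "198.51.100.7"}  # example IOC list

flows = [
    {"src": "10.1.1.5", "dst": "203.0.113.66", "bytes": 120_000, "ts": "2022-11-03T02:14"},
    {"src": "10.1.2.9", "dst": "172.16.0.4",   "bytes": 900,     "ts": "2022-11-03T09:02"},
]

# Flag any flow whose source or destination matches a known IOC.
hits = [f for f in flows if f["dst"] in KNOWN_BAD_IPS or f["src"] in KNOWN_BAD_IPS]
for f in hits:
    print(f"possible IOC match: {f['src']} -> {f['dst']} at {f['ts']}")
```

The value of full-fidelity storage is precisely that this kind of retroactive match remains possible months after the traffic occurred, once an IOC becomes public.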

Games make the process a bit more interesting

I like to talk about the gamification of threat hunting which can make the process more enjoyable. We use games that offer high value and leverage the power of Riverbed NetProfiler and AppResponse full fidelity data. If you have not already played cybersecurity games I highly recommend using them. These games are a testament to how real-life simulations can advance cybersecurity skills. While playing these games you learn to see failure as a learning opportunity and prepare for real-life incidents.

APTs often use zero-day threats, since signature-based tools do not detect them because the IOCs don’t exist until after the threat has been identified. It’s not enough to only detect these threats after they are known; we need to go back in time as well and see if they have happened in the past. NetProfiler is able to run historical reports on threats based on some types of known IOCs because of its full fidelity flow storage.


Another benefit of these games is that they cover threats you are likely to be asked about anyway, because they are in the news.

Let’s look at how Riverbed is helping its customers proactively find such vulnerabilities so that they can safeguard the valuable data and privacy of their end customers.

With APTs using normal traffic to blend into the environment, it's a smart idea to monitor for administrative traffic in places and at times you may not expect. Anything that doesn't make sense, such as unusually large data transfers, open APIs or weak passwords, is a signal that needs to be picked up. Riverbed can help catch these red flags and send alerts notifying you about unusual activity, so you can take action before getting locked out of your network.

Riverbed to the rescue

In the following example we have used NetProfiler to detect SSH traffic between midnight and 6AM. While we might detect the occasional developer performing a late-night change, we might also find some things we weren’t expecting as well. Other examples might be database traffic directed to places it shouldn’t in an attempt to exfiltrate records.
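Conceptually, the NetProfiler report described above boils down to a time-and-port filter over flow records. A minimal sketch of the idea, with made-up field names (a real report is built in NetProfiler's UI, not in code like this):

```python
# Illustrative filter mirroring the off-hours SSH report described above:
# flag SSH (TCP/22) flows observed between midnight and 6 AM.
# Flow field names are made up for the sketch.

from datetime import datetime

def off_hours_ssh(flows, start_hour=0, end_hour=6):
    return [f for f in flows
            if f["dst_port"] == 22
            and start_hour <= datetime.fromisoformat(f["ts"]).hour < end_hour]

flows = [
    {"src": "10.0.0.8",  "dst_port": 22,  "ts": "2023-01-10T03:12:00"},  # suspicious
    {"src": "10.0.0.9",  "dst_port": 443, "ts": "2023-01-10T03:30:00"},  # not SSH
    {"src": "10.0.0.10", "dst_port": 22,  "ts": "2023-01-10T14:05:00"},  # business hours
]
print(off_hours_ssh(flows))  # only the 03:12 SSH flow is flagged
```

Every hit still needs human triage, since the occasional late-night change by a developer is legitimate; the point is to surface the candidates at all.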

NetProfiler full fidelity data

Security audit or threat hunting can easily become a full-time job, but with Riverbed you can invest some time and take care of a host of activities to keep adversities at bay:

Detect unencrypted data transfers


Analyze DNS traffic


Analyze certificates


Dedicating a bit of time to these activities will help you understand your environment better and know what normal looks like.
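As a rough illustration of the first audit task above, detecting unencrypted data transfers can be approximated by flagging sizeable flows on well-known cleartext ports. The port list, threshold and flow fields are illustrative assumptions, not NetProfiler's actual detection logic:

```python
# Sketch of a cleartext-transfer audit: flag large flows on well-known
# unencrypted ports. Port list, size threshold and fields are illustrative.

CLEARTEXT_PORTS = {21: "ftp", 23: "telnet", 80: "http"}

def flag_cleartext(flows, min_bytes=10_000):
    return [(f["src"], CLEARTEXT_PORTS[f["dst_port"]])
            for f in flows
            if f["dst_port"] in CLEARTEXT_PORTS and f["bytes"] >= min_bytes]

flows = [
    {"src": "10.2.0.4", "dst_port": 21,  "bytes": 5_000_000},  # bulk FTP: flag it
    {"src": "10.2.0.5", "dst_port": 443, "bytes": 9_000_000},  # TLS: fine
]
print(flag_cleartext(flows))  # [('10.2.0.4', 'ftp')]
```

The same pattern (a filter over full-fidelity flow data, reviewed regularly) extends to the DNS and certificate checks listed above.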

Full fidelity observation speeds up recovery and saves millions in downtime when under attack. You can go back in time and look at everything to find the extent of damage—when it all started and what services/data have been compromised.

You don’t know today what you will need tomorrow. Make Riverbed monitoring a crucial part of your overall cyber strategy.

]]>
Diagnosing and Treating Network Challenges in Public Hospitals https://www.riverbed.com/blogs/network-performance-monitoring-public-hospitals/ Sun, 15 Jan 2023 23:41:00 +0000 /?p=19654 Technology has changed how the healthcare industry functions in almost every aspect, from providing care to managing patient records to processing insurance claims and other tedious admin tasks. Unfortunately, most applications managing these different functions don’t communicate with each other efficiently, and this can paralyse the hospital operations and disrupt critical care services.

In my experience, I have come across many end users who accept these network failures as an unavoidable fact or attribute them to their own inability to adopt new technologies. The irony is that they do not even realise these issues can be rectified. Let’s take a closer look at how network health can change this perception and make administering care and managing tedious admin tasks easier for hospital staff.

The technology struggles in public hospitals

While we were working with the government to help them gain visibility into their network across agencies, we found that the public hospital’s staff was struggling due to network issues. Lack of end-to-end network visibility and disparate applications working in silos was hindering the public hospitals and creating friction for their teams.

The public hospitals were struggling to manage critical day-to-day functions like uploading patient records, online prescriptions, clinical records, insurance claims processing, Medicare claims, automated sterilisation of tools and video consultation. But what was the common factor? All these functions relied on the performance of their underlying network, and it was network latency that was often causing bottlenecks.

Why network latency is a big threat to healthcare 

Over the last few decades, the healthcare sector has made some great leaps in terms of technology. Digital technologies like analytics and wearable technology that can transform patient diagnosis and treatment are becoming more and more common. Bio-tech companies have invested billions to bring about this transformation and devised technologies for preventive treatments. Technology is enabling better patient outcomes for terminal diseases and common illnesses through early diagnosis by comparing historical data and new treatment methods. And while new technologies and devices are being introduced to healthcare facilities every day, their network is not even equipped to support everyday hospital operations, let alone the latest innovations and breakthroughs in the industry.

Network challenges are compounded by the fact that hospitals have special requirements when it comes to wireless connectivity. Certain sections in hospitals are designed to block or limit radiation or some specific frequencies. Hospitals rarely take into account future IT needs during the construction stage, which means they can be challenging environments for network optimisation. Any request to make changes to hospital interiors can take months or even years to get approval. Because of these factors, the way devices are configured to use the hospital network can be very different compared to other industries. That means even if you have a good network, your applications are still likely to underperform.

From where I see it, we need to lay the groundwork urgently, and we need a good network partner to help do that. While helping you build a robust underlying network, your network partner also needs in-depth knowledge of the healthcare industry and its constraints. You can only support day-to-day hospital operations, enable seamless communication with external agencies and partners, and improve patient experience when the foundation of your IT network is robust. Anything below 100% network efficiency is unacceptable in healthcare.

When evaluating the network performance at this federal territory’s public hospitals, we discovered a concerning diagnosis. One of the major recurring issues was the delay in medical procedures due to operational issues, made worse by network limitations. Lack of sterilised equipment, delayed test results and poor staff coordination were all symptoms of a poorly performing network. The applications supporting the scheduling of medical procedures, resourcing and housekeeping functions were not talking to each other, resulting in a massive backlog of medical procedures, poor patient experience and employee frustration.

The treatment: anywhere-to-anywhere visibility with Riverbed Unified NPM

After doing our thorough due diligence and examining the extent of the problem, we recommended an immediate course of action for the public hospitals’ network. They clearly needed a network performance monitoring solution to help monitor and remove bottlenecks. They had all the applications in place to improve patient care and employee experience, but these applications were not working in tandem with each other, and they hadn’t realised that the network was the underlying issue.

We deployed Riverbed’s full-stack unified network performance monitoring solution while ensuring alignment with government-mandated compliance requirements.

The unified solution we implemented comprised:

  • Riverbed NetProfiler for hybrid network traffic monitoring. This was of immense help for the complex hospital network, as hospitals constantly communicate with internal and external endpoints. They were now able to manage information flow with end-to-end network monitoring and visibility, which is critical for monitoring patient health in real time, raising insurance claims, maintaining medical records and improving overall hospital operations.
  • Riverbed AppResponse, which helped monitor and analyse network-based application performance. They could now fix issues in the network as soon as they identified a problem to avoid disruptions in day-to-day operations.
  • Riverbed NetIM, which maps application network paths to improve IT infrastructure monitoring and troubleshooting at a granular level. This mapping is crucial, as hospital staff across different functions tend to use different applications.
  • Riverbed Portal, which helps gain control of the network and provides integrated network and application insights. They are now equipped to understand if their IT systems are working as they should.

The hospital’s IT team now enjoys end-to-end, bi-directional visibility across network paths connecting them to:

  • In-house datacentres
  • GP clinics and other service providers
  • Medical device manufacturers
  • Pharmaceutical companies
  • Government agencies
  • Third-party services, entities and systems

The outcomes for hospitals

The hospitals have experienced a complete turnaround in their operations. They can now quickly find and fix network performance problems before they impact patient care, operations or data confidentiality.

They can also:

  • Minimise problems associated with unpredictable network connectivity, which previously impacted the efficiency of overworked healthcare and admin staff
  • Support wireless devices from tablets to smartphones to wearable medical devices for quick diagnosis and real-time patient monitoring
  • Roll out new network-intensive services, such as live video conferencing with surgeons from across the world when performing complex or rare surgeries

Even after investing millions in technology, I often see organisations struggling with the chaos caused by disparate IT applications and unreliable networks. This chaos increases manifold when we are dealing with critical healthcare services.

Having an experienced network partner can be invaluable to making sure your network is both efficient and stable. Get in touch with a Riverbed consultant for a no-obligation consultation to explore how your organisation can transform its network to run key operations seamlessly and reliably.

]]>
The Employee Experience and Customer Experience Connection https://www.riverbed.com/blogs/the-employee-experience-and-customer-experience-connection/ Thu, 12 Jan 2023 13:30:00 +0000 /?p=19475

 

Most organizations have treated employee experience separate from customer experience, but they’re more connected than you think. Let’s start with the definitions.

Employee experience encompasses all of a worker’s perceptions throughout their journey as part of their organization. It includes everything from their recruitment as a job candidate to initial onboarding, to career development, learning, mentorship, and career advancement. It extends to when they leave their organization. Three factors affect employee experience: culture, physical work location, and technology. The rise of remote work and the influence of Generation Z in the workplace has increased the effect of technology as a driver of employee experience. HR and IT jointly own responsibility for employee experience.

Similarly, customer experience summarizes a customer’s feelings resulting from interactions with a brand’s products and services throughout the buyer’s journey and across the multiple channels throughout which customers interact. Omni-channel strategies require a unified approach to measuring and analyzing customer experience across social, web, mobile, contact center, and in-person interactions. Like employee experience, technology plays a key role in customer experience. Because of that, customer experience is often owned by a combination of marketing, line of business and IT.

Employee experience and customer experience are connected in an omni-channel world

Intuitively, we understand the tight connection between employee experience and customer experience. We experience this all the time through our direct interactions with customer-facing employees in retail, insurance, healthcare, or even government. Retailers like Nordstrom, L.L. Bean, or Zappos have earned strong reputations for empowering their employees to go out of their way to deliver excellent customer service. And their brands are deeply associated with it. In fact, despite its acquisition by Amazon, Zappos maintains its own brand to preserve the equity it has built up over the years.

Technology plays a key role in connecting employee experience and customer experience

A closer look at the three levels of these interactions reveals the role that technology plays.

  1. User journeys: The paths that customer-facing employees take throughout their day to serve customers, or the paths that a customer takes in navigating a company’s website, interacting with a chatbot, or browsing a store. These journeys intersect in an omni-channel world. When a customer is unable to complete their journey on the website on their own, they reach out to an employee in a contact center to help them do so.
  2. Business processes: The individual steps that make up the journey, such as “look up account” or “check inventory” or “add to cart.” Individual steps in the business processes may be shared by both customers and employees, depending on who is taking the action.
  3. Business applications: The applications used by employees executing the business processes while serving customers, or the applications used directly by customers themselves. In many cases, the applications used on the website are the same as the applications used by employees in the contact center or the retail store or branch office.

In an omni-channel world, the journeys, business processes and applications intersect. Business applications can be the same, used by customers via the website or on-line portal, or by the employees serving them.

Happy employees mean happy customers

We understand the linkage between employee experience and customer experience, but what does the data say? Harvard Business Review published an analysis done by Glassdoor researchers who quantified the relationship between employee satisfaction and customer satisfaction. To do so, they associated Glassdoor employee reviews of hundreds of large U.S. companies with ratings from the American Customer Satisfaction Index (ACSI). The ACSI records the opinions of 300,000 U.S. customers on products and services.

They found a direct statistical link between employee satisfaction and customer satisfaction. A one-star improvement in a Glassdoor score correlates to a 1.3-point (out of 100) improvement in customer satisfaction. This effect was more than twice as large in industries like retail, tourism, restaurants, health care, and financial services where there is close interaction between employees and customers. Of course, correlation does not mean causality.

The causal link between employee experience and customer experience

Employee experience significantly drives revenue and profit. (Source: Research: How Employee Experience Impacts Your Bottom Line, HBR, March 22, 2022)

 

Business leaders tend to underestimate the role that employees play in customer experience because it has been hard to quantify. A more recent study published in Harvard Business Review took the research to the next level by proving causality.

Rather than the broad analysis of hundreds of companies done by Glassdoor, these researchers looked at a single, large retailer with more than one thousand stores. Looking at just one company enabled the researchers to isolate important factors on customer experience.

The effects of company brand, quality of products and website effectiveness were constant across all the company’s stores. The researchers correlated HR metrics like job tenure, cross-training, and skill level with the normalized revenue and profit of each store. They controlled for factors like seasonality and the income levels of surrounding populations.

They found that measures of employee experience were significant drivers of revenue and profit. The data showed a potential 50% increase in revenue and 45% increase in profit per employee-hour by improving their HR metrics from the bottom quartile to the top.

So, investment in employee experience clearly pays off.


Forbes article references Forrester Consulting study commissioned by Kyndryl

Improving employee experience and customer experience are top priorities for over half of surveyed executives. (Source: “New Insights Into Employee And Customer Experiences,” Forrester Research, Inc., October, 2022)

Research is one thing. Market recognition is another. And the market does recognize this connection.

A recent Forbes article, Three Ways to Improve Employee Experience for Better Business Outcomes, discussed the key role that employees play in delivering excellent customer experience and how more business executives are recognizing the need to invest in employee experience as a result.

The article features Dennis Perpetua, CTO of Digital Workplace Services at Kyndryl, a 2021 spin-off of IBM and the world’s largest provider of IT infrastructure services. Kyndryl, a strategic technology partner of Riverbed, relies on Riverbed Aternity as part of its digital workplace service for global customers like Dow.

The Forbes article refers to research conducted by Forrester Consulting, part of research and advisory firm Forrester Research, on behalf of Kyndryl. As shown in the graph, improving customer experience remains the top priority of business executives. But they also recognize the importance of improving employee experience as a means to that end.

The Forrester study shows a tight connection between employee experience and customer experience exists even below the executive level. It dives into the view of employees who are directly involved in employee experience at their organizations. 45% of these respondents stated that improving customer experience is one of the most important benefits of improving employee experience.

 

Digital experience management (DEM) ties both employee and customer experiences together
Improving customer experience is one of the top benefits of improving employee experience. (Source: “New Insights Into Employee And Customer Experiences,” Forrester Research, Inc., October, 2022)

Recognition in the market: 451 Research

Riverbed recently conducted a webinar with 451 Research on the connection between employee experience and customer experience in an omni-channel world. I was joined in this webinar by Sheryl Kingstone who leads 451 Research’s coverage for Customer Experience & Commerce. Sheryl shared some compelling data on how customer experience is a catalyst for digital transformation. The chart below shows the main drivers of digital transformation according to 451’s Voice of the Enterprise: Customer Experience & Commerce, Digital Maturity 2021. The areas of biggest difference between “digitally driven” organizations and “digitally delayed” organizations all relate to top line business drivers—customer experience, competitive differentiation, new business models. Improving customer experience tops the list.

The areas of biggest difference between “digitally driven” organizations and “digitally delayed” organizations all relate to top line business drivers. Q. In your opinion, what are the main drivers for digital transformation? Select all that apply. Base: All respondents (n=500). (Source: 451 Research’s Voice of the Enterprise: Customer Experience & Commerce, Digital Maturity 2021)

The next set of data really shows the connection between employee experience and customer experience. Employee satisfaction is the top metric tracked by organizations focused on improving customer experience. In other words, top organizations understand the tight connection between employee experience and customer experience shown in the HBR research. Metrics like average handle time and average time to respond to a customer inquiry/resolution both relate to the performance of technology used by employees to serve customers.

Employee satisfaction is the top metric used to measure the effectiveness of customer experience initiatives.
Q. Is your organization tracking any of the following metrics to measure the effectiveness of its customer experience initiatives? Select all that apply. (n=351) Q. Which metric has shown the most improvement since your organization began tracking it? (n=334). Base: All respondents. (Source: 451 Research’s Voice of the Enterprise: Customer Experience & Commerce, Digital Maturity 2021)

Don’t separate employee experience and customer experience monitoring 

Despite all the data connecting employee and customer experience, and the role that technology plays in both, the market treats their monitoring as separate categories. As discussed in my Digital Experience Alphabet Soup blog, analyst firms consider employee experience monitoring separate from customer experience monitoring.

But the data points to the need for a comprehensive approach to digital experience management. If employee experience and customer experience are so tightly connected, then the monitoring technologies used to ensure a quality experience should be too.

This is where Riverbed’s Aternity digital experience management solution plays a role. Aternity is the only DEM solution in the market to provide full-spectrum Digital Experience Management. With insights into the digital experience of both customers AND employees, Aternity arms IT with the ability to ensure a positive experience for both. To see how Aternity can help you on your journey to improve the digital experience for ALL your users, register to begin a Riverbed demo today.

]]>
Improving Digitized Work Processes with DEX https://www.riverbed.com/blogs/improve-digitized-work-processes-dex/ Thu, 22 Dec 2022 13:15:00 +0000 /?p=19451 In my last blog article, I drew a comparison between the engine warning light in a car and IT monitoring. But what does that mean in detail? An excellent example is Digital Employee Experience (DEX): this is the warning light that indicates whether IT users are happy with IT or whether there are problems. Most of the time, the biggest dissatisfaction does not come from the device being too slow. Most users are not IT experts; they expect their work environment to function the way their computer at home does. Digitized work processes must be usable without a great deal of learning effort, and more importantly, they must work.

What is Digital Employee Experience (DEX)?

Companies use email surveys to learn how users feel about their IT services. As the digitalization of many work processes makes IT increasingly important, it is mandatory for IT to perform well in these surveys. However, surveys are only ever a snapshot, which makes monitoring the Digital Employee Experience (DEX) increasingly important.

With DEX monitoring, the user’s experience is measured directly on the device itself (for example, a laptop or desktop). The focus is not on checking whether the end device is technically busy or functioning properly; the question is whether users can work fluidly with their applications. It should also be analyzed whether digitized work processes in the applications are perhaps too cumbersome or simply take too long.

Make workflows and digitized work processes visible


At my bank this year, I experienced firsthand what it means when workflows in applications are broken. In my case, it took far too long: I simply wanted to open a securities account (a "Depot") for my daughter so that I could save regularly into an ETF. Our bank has digitized the entire process so that all compliance aspects are met and the bank advisor doesn’t forget a single point.

One would think that opening the account should therefore be quick. However, the system had several problems:

Since my daughter is still a minor, consent from both my wife and me had to be recorded beforehand. This meant that our bank advisor had to fill out additional forms digitally. This sounds easier than it was, as something didn’t work in almost every form or took an extremely long time to save. In the end, I was at the bank for over an hour to open a simple account.

Why is a broken workflow not only an IT issue?

Now, looking at this experience in more detail, some problems are obvious:

  • Very few customers are willing to wait that long and might consider switching banks next time.
  • Faulty workflows or processes don’t just put a strain on IT. Our bank advisor was more than uncomfortable that this didn’t work and she had to apologize to the customer.
  • Digitization should simplify processes and accelerate them, especially in times of a shortage of skilled workers. The goal should be to enable existing employees to do their jobs better and faster. If this isn’t taken into account, productivity will decrease as a result of digitization instead of continuing to rise.
  • Since internal support processes sometimes take a very long time, the relevant IT employees or application managers are notified of these problems far too late. Often, support tickets are never opened at all because the affected employee does not have the time.

Making workflows technically visible

In our Riverbed portfolio, there are two products that can make workflows visible:

Riverbed Aternity is suitable for any type of application (e.g. a classic application or web application) used by employees:

Watch Video

Riverbed UJI (User Journey Intelligence) is a solution for web applications that helps you understand how people (both internal and external) use them:

Watch Video

In principle, Riverbed can therefore make workflows from any application visible. In the simplest case, for example, it monitors how long it takes to open the calendar in Microsoft Outlook. The example with my bank is a complex case where several pieces of information from a workflow need to be monitored. For example, it can record which error message was displayed, that “Form A” was used, and how the user got into that form. At the same time, it is also possible to record how long it takes to save the form.
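As a toy sketch only, the kind of measurement described above can be illustrated as timing named workflow steps on the device; the class and step names here are invented for the example, not Aternity’s actual instrumentation:

```python
import time

# Illustrative only: record how long named workflow steps take, in the
# spirit of "click to render" timings. Names are hypothetical, not an API.
class WorkflowTimer:
    def __init__(self):
        self.measurements = []  # list of (step_name, seconds)

    def measure(self, step_name, action):
        start = time.perf_counter()
        result = action()  # e.g. "open calendar" or "save Form A"
        elapsed = time.perf_counter() - start
        self.measurements.append((step_name, elapsed))
        return result

timer = WorkflowTimer()
timer.measure("save_form_a", lambda: time.sleep(0.01))  # stand-in for the slow save
slowest = max(timer.measurements, key=lambda m: m[1])
print(slowest[0])  # -> save_form_a
```

A real agent would, of course, attach such timings to the user, device, and application context rather than just a name.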

Aternity Workflow Monitoring

An important aspect is that the measurements provide context to technical elements in order to determine where a fault lies, if any. Not every problem is due to a problem in the application. IT has become so complex that the end device itself, the network or components in the data center/cloud can also be the cause of a poor experience. That’s why Riverbed combines experience and workflow data with technical data from different perspectives.

EMA Value Leader 2022

In light of this, EMA has ranked us as a Value Leader in the EMA Radar for Digital Employee Experience this year (2022).

If you like what you see and want to try it out for yourself, you can download a free trial here to experience it firsthand. Talk to us if you have any questions!

]]>
When an Engine Warning Light Goes On https://www.riverbed.com/blogs/engine-warning-light/ Fri, 25 Nov 2022 13:42:00 +0000 /?p=19335

While on our summer vacation this year with our Volkswagen Van (“Bulli”), the engine warning light came on. Unfortunately, it was still over 700 km to our destination in the south of France. Everyone has experienced something like this, and everyone has the same questions: What does this mean now? Do we have to stop immediately? Where can we find a Volkswagen garage in France, or might we even make it to our destination?

And furthermore, what does an engine warning light have to do with IT and monitoring?

How an engine warning light relates to IT and monitoring

Volkswagen Diagnosis System
(c) Volkswagen

If you think about it, we encounter situations just like this in IT as well. Usually, when a “warning light” goes on somewhere in an organization, this information is immediately sent to the IT staff responsible for resolving the issue, and very similar questions arise to the ones we asked on our vacation: What does this mean? Has something completely failed? What effect does this error message have on the company, our employees, and customers?

As with a car, troubleshooting now begins. In modern vehicles, fault diagnosis can usually only be done by the manufacturer or an authorized partner. What is always the same, however, is that a specialist must now evaluate this error message, request further information and data, and then diagnose it either using their expertise or a knowledge base. If a diagnosis is not possible, further specialists or the manufacturer must analyze the problem. Once the diagnosis is confirmed, work can begin on resolving the problem.

How can diagnosis time be reduced?

When an IT failure has an impact on the productivity of employees, or even on customers, a diagnosis must be made as quickly as possible so that a solution can be implemented right away. This is called reducing MTTR (Mean-Time-To-Repair). But how can this be achieved?

Usually, the most difficult challenge in the entire process is making a quick and correct diagnosis. This requires expertise, knowledge of one’s own environment, a structured approach and a lot of experience. Only experienced and specialized employees can use targeted questions to gather all the information needed for a rapid search for the cause—and this is where Riverbed IQ comes into play.

Riverbed IQ, Riverbed’s cloud-native, SaaS-delivered unified observability solution, automatically analyzes problems before any IT staff receive a report. In this process, the “expertise” of the staff is transferred into Riverbed IQ via a low-code UI. The result is a dashboard/report that contains all the necessary information about a problem, so IT engineers can diagnose it faster, without having to do extensive research.

Get results faster with fewer clicks

Riverbed IQ enables IT departments to work very much in the same way as authorized car repair garages. Car manufacturers have transferred their expertise to the diagnostic systems so that the engineers receive a pre-analyzed evaluation of a fault or issue in order to quickly start repairing the vehicle. In some cases, car manufacturers also refer directly to a “knowledge base article” on how to resolve the problem.

While Riverbed IQ isn’t quite there yet, Riverbed is already working on the next stage to have problems resolved automatically. Until then, Riverbed IQ has one goal: to keep IT staff from “digging” through a large number of dashboards and data for a solution. To save time and resolve issues faster, problems are pre-analyzed for IT, much like garages pre-analyze them for their mechanics.

Watch the video below to see how Riverbed IQ reduces alert fatigue and Mean-Time-To-Repair:

If you like what you see and want to try it out for yourself, you can download a free trial here to experience it firsthand. Talk to us if you have any questions!

]]>
Excelling at Employee and Customer Experience in an Omni-Channel World https://www.riverbed.com/blogs/excelling-employee-customer-experience/ Wed, 23 Nov 2022 13:33:00 +0000 /?p=19391 Organizations are focused on omni-channel strategies to improve customer experience. And it’s not just about retail, with the kick-off to the holiday shopping period approaching. In the US, it is also the open enrollment period for employees to make selections for their health insurance and other benefits for the upcoming year. And the wild gyrations of stock markets around the world have caused trading volumes to increase for financial services companies. As a result, IT teams at companies in these industries, and in governments too, must ensure excellent digital experience at every step in the customer journey.

 

Visibility into customer experience all along the customer journey

Customer expectations around excellent service have risen over the past twelve months. In fact, a 451 Research survey shows that 86% of companies surveyed report an increase in customer expectations. (Source: 451 Research’s Voice of the Enterprise: Customer Experience & Commerce, Merchant Study 2022). Despite fears of recession, or perhaps because of them, companies must maintain their focus on customer experience in order to grow their business.

Organizations need actionable insights into the impact of IT performance on revenue, order fulfillment, and customer abandonment to improve customer experience. They also need visibility into the digital experience across the entire journey–from navigating unique paths across digital services on websites to interacting with employees in the contact center, branch, store, or back office.

Contact centers are on the front line

Today’s contact centers are metrics-driven departments focused on continuously improving the customer experience. From customer satisfaction (CSAT) scores to first call resolution (FCR) times, to average handle times (AHT)–the alphabet soup of customer service metrics hinges on how productively people use technology. After all, if a customer can’t complete their transaction on the portal or website, they get in touch with the contact center. There’s a direct, inverse relationship between website performance and call volumes in the contact center. And that relationship affects the key metrics. When website performance suffers, call volumes rise, CSAT drops, and call queues and AHT increase.
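For illustration, the metrics named above can be computed from simple call records; the sample data here is invented:

```python
# Toy calculation of common contact-center metrics from per-call records.
calls = [
    {"handle_seconds": 240, "resolved_first_call": True,  "csat": 5},
    {"handle_seconds": 600, "resolved_first_call": False, "csat": 2},
    {"handle_seconds": 300, "resolved_first_call": True,  "csat": 4},
]

aht = sum(c["handle_seconds"] for c in calls) / len(calls)        # average handle time
fcr = sum(c["resolved_first_call"] for c in calls) / len(calls)   # first call resolution rate
csat = sum(c["csat"] for c in calls) / len(calls)                 # average satisfaction

print(round(aht), round(fcr, 2), round(csat, 2))  # -> 380 0.67 3.67
```

In the scenario described above, a spike in website latency would show up here as more calls, a longer AHT, and a falling CSAT.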

IT teams need more from their monitoring tools

Omni-channel strategies create a seamless experience for customers, but underlying that seamless experience is a complex infrastructure that must be managed. IT teams struggle to support the complicated combination of legacy and newly emerging technologies, ranging from voice communications to websites, to cloud-native applications, and conversational AI.

Traditional monitoring tools have limitations that prevent organizations from achieving these goals.

  1. Customer experience disconnected from employee experience. Separate domains prevent digital experience insights across the whole customer journey.
  2. Floods of technical telemetry. Performance metrics disconnected from business outcomes don’t help IT prioritize where to focus.
  3. Reliance on sampling to deal with enterprise scale. Failure to capture every application transaction means that IT will miss performance problems.
  4. Limited coverage of enterprise business applications. Provides an incomplete picture of digital experience for the hundreds of applications used to run the business.

Total experience–improving both employee AND customer experience

Riverbed Aternity is the only solution on the market to provide full-spectrum Digital Experience Management. What do we mean by this term? In short, it provides total experience management:

  • Insights into the digital experience of both customers AND employees.
  • The impact of digital experience on business outcomes AND technical telemetry.
  • Unified performance visibility of both employee devices AND the application service, including cloud-native environments.
  • A big data approach that captures and stores ALL transactions without sampling.
  • The ability to measure actual employee experience for ALL types of applications.

Watch this short video to see how Aternity addresses key challenges for high-performing contact centers:

Ensure a world-class customer experience all along the digital journey

Riverbed Aternity User Journey Intelligence provides contextualized visibility and actionable insights into user journeys across complex web environments, enabling organizations to improve satisfaction and drive revenue.

With Aternity User Journey Intelligence, you can:

  • Follow every path your customers take on your website, converting and non-converting.
  • Track the digital experience of every user at each step of the journey across your site.
  • Guide users along the highest-converting paths and optimize the ones that cause drop-offs.
Aternity enables you to associate user journeys and performance to revenue, conversion rate and abandonment rate to increase engagement and optimize business results.

A Value Leader in the EMA Radar for DEX

Industry analyst firm EMA has named Aternity a Value Leader in the EMA Radar for Digital Employee Experience Management. You can register to obtain a complimentary copy of this report rating the solutions of Aternity against eight other vendors. EMA cited Aternity’s application experience visibility as a key differentiator. As discussed above, visibility into the customer journey, website performance, and the performance of the key business applications used in the contact center can help ensure a world-class digital experience.

Best Application Experience Visibility Award
Aternity received the unique award for Best Application Experience Visibility

Watch our webinar on Total Experience with 451 Group

If you’re interested in hearing more about trends and investment in customer experience and how Aternity has helped high-performing contact centers ensure an excellent digital experience for employees and customers, please attend our upcoming webinar with the 451 Group. If you’re busy on December 8, don’t worry. You can catch it later on demand.

Or, if you’d like to request a demo of Aternity, you can get started today.

]]>
EMA Releases New Report on Network Observability https://www.riverbed.com/blogs/ema-network-observability-report/ Mon, 07 Nov 2022 13:32:00 +0000 /?p=19284 The recently released report by Enterprise Management Associates (EMA), Network Observability: Delivering Actionable Insights to Network Operations, is sponsored by Riverbed and helps IT buyers understand what traditional network performance management vendors mean when they use the term network observability.

New EMA report on Network Observability
This new EMA report defines network observability for IT buyers

In fact, the purpose of this report is to define network observability for IT buyers, so they can effectively communicate about emerging NetOps requirements and the innovations that vendors, like Riverbed, offer to address those requirements.

Network teams besieged

Network operations teams are struggling to maintain visibility in today’s rapidly changing digital environment. In fact, fewer NetOps teams are successful in their mission than ever before, with the number declining from 47% in 2018 to 27% in 2022.

Some of the significant challenges the survey uncovers include:

  • Data conflicts between individual tools, limiting IT’s ability to correlate insights across data types  
  • Lack of actionable alerts generated by network tools
  • New drivers of new NetOps visibility requirements, like remote work and real-time applications 
  • Organizations prioritizing the optimization of their network tools so lower-skilled admins can do more problem-solving

Network Observability

The report also includes interesting data about what network teams are looking for in a network operations solution. According to respondents, the most essential observability features include:

  • Data visualization              
  • Traffic analysis
  • Change detection and validation
  • Automated escalations

Respondents also want to automate troubleshooting with their network observability tools; they are most interested in automating root-cause analysis and problem isolation. Additionally, nearly half of respondents believe anomaly detection is essential for efficient troubleshooting.

Riverbed IQ Unified Observability

Riverbed IQ is Riverbed’s Unified Observability service, which empowers network teams to solve problems fast by simplifying and accelerating troubleshooting. It leverages these key features:

  • Full-fidelity Riverbed telemetry for network, infrastructure, application, and end user experience data
  • Anomaly detection (AI, ML, and correlation) to identify only the most business-impacting events
  • Automated workflows to gather relevant data for one-stop troubleshooting

Watch this short video to see Riverbed IQ in action:

For more information about Riverbed IQ, click here, or if you’d like to read the EMA Network Observability report, visit this link.

]]>
EMA Names Aternity a Value Leader In Digital Employee Experience https://www.riverbed.com/blogs/aternity-digital-experience-leader/ Mon, 31 Oct 2022 12:56:42 +0000 /?p=19201 The first EMA Radar™ for Digital Employee Experience Management (DEX) is hot off the press from Enterprise Management Associates®. This report focuses on solutions that “collect comprehensive contextual information on user interactions with digital technologies, analyze the data to quantify user experiences, and provide support for remediating any deficiencies.”

2022 EMA Radar for Digital Employee Experience Management
Riverbed Aternity is a Value Leader on the 2022 EMA Radar for Digital Employee Experience Management.

The EMA Radar™ Report identifies and ranks the 12 leading Digital Employee Experience Management (DEX) providers. The Riverbed Aternity DEM platform received the highest ranking as a “Value Leader,” along with a unique award for “Best Application Experience Visibility 2023.”


The Value of Digital Employee Experience Management

So what does it mean to be a Value Leader on the EMA Radar? In their own words: “EMA defines value in any solution as a comparison of the strength of the platform against its total cost of ownership.”  And as far as the actual ratings/rankings in this report, “EMA defines ‘value’ as the ratio derived from the strength of a product set against its cost-efficiency. Put simply, the more users pay for a solution, the greater the advantages they should receive in terms of breadth of functionality and supportability.” 

We couldn’t agree more. “Pricing” and “value” are commonly misused as synonyms. When committing to a large-scale solution like DEX, it’s most important to evaluate the ROI of solving the most important use cases. The Riverbed Aternity platform is developed with real-world solutions in mind, including employee experience and IT asset cost reduction. That’s also why we have unique capabilities like Aternity Experience Insights, a proactive way to solve user-facing issues before they become widespread.

Watch Video

Best Application Experience Visibility

The Riverbed Aternity DEM platform was also awarded “Best Application Experience Visibility.” This was based on the fact that “the granular information collected […] on application performance is more extensive than any other comparable solutions on the market today.” EMA also cited our collection of “click to render” timing of actual business activities as a unique capability. Aternity supports this type of measurement in order to go beyond traditional telemetry collection and gather information that is most impactful to the individual employee’s digital experience and the ROI for the business. You’re going to get the most ROI from a DEX tool when you can answer questions such as “how long does it take a contact center rep to update account information for a customer?”

Best Application Experience Visibility Award
Aternity received the unique award for Best Application Experience Visibility

Our “Application Experience Visibility” comes from our support of what we call Full Spectrum DEM. This means covering digital experience use cases for both employees AND customers, as well as giving Service Desk and Ops teams the ability to trace issues from the end-user device all the way to the backend applications and systems that power them. 

 

Aternity Digital Experience Index (DXI)
Aternity Digital Experience Index (DXI) automatically identifies digital experience hot spots across your enterprise impacting employees and customers, then sets you on a path to action and improvement.

Other callouts included our breadth of information collected, the easy-to-use web console, Aternity Digital Experience Index (DXI), and positive customer feedback on our intuitive reports and dashboard visualizations. Our support of these visualizations helps not only the employees at the service desk, but also application and line-of-business owners. These users may not be hands on with the tactical problem solving, but need insights into what issues are having the most business impact. From there, they can prioritize which projects and fixes should be the highest priority. 

]]>
Analytics Control Riverbed IQ https://www.riverbed.com/blogs/analytics-control-alluvio-iq/ Mon, 31 Oct 2022 12:27:00 +0000 /?p=19144 According to Gartner, analytics and AI continue to be the top IT and business investment priorities for organizations’ digital transformation initiatives. Emerging technologies, such as AI, improve process efficiency, enable faster decision-making with access to data, and enhance customer experiences across business domains.

Forrester Research has found that without the comprehensive insights they need to succeed, technology leaders are struggling to keep up with business demand and enable future growth. The modernization of IT operations is coming at these leaders from multiple areas. It centers, however, on the need for operational insights to drive value-based and AI-driven actions.

Forrester also notes that various capabilities must work together for observability insights to deliver value and, therefore, defines these four functionality categories of observability:

  • Telemetry data is the bedrock of observability. This is the origination of all data and telemetry that an observability solution might leverage.
  • Exploration leads to a deeper understanding of entities. The aggregation, standardization, and time series collection of telemetry data prepare it for analysis and processing.
  • Insights surface important opportunities to act on. The application of AI/ML and other data science approaches identify patterns, trends, correlations, and anomalies.
  • Utilization of insights delivers high value. The insights surface so the organization can take proper actions to remedy or prevent various scenarios. The goal is to progress from predominantly manual consumption and dissemination toward analytics-based automated remediation and issue avoidance as maturity grows.

Riverbed IQ leverages analytics

Riverbed IQ follows these four functionality capabilities to provide actionable insights for our customers. It extensively leverages analytics, including machine learning (ML) and artificial intelligence (AI), to identify business-impacting events and reduce the noise from low-level or related incidents.

A quick overview of Riverbed IQ’s capabilities to set the background for our analytics discussion and to show how it supports Forrester’s observability functionality categories: Key metrics from Riverbed full-fidelity data are gathered, distributed, and accessed through the Data Ocean. A subset of the metrics stream through the Analytics Pipeline to monitor the health and performance of the IT environment and alert on anomalies. The anomaly data is then accessible to the Runbooks for no-code investigations, which gather contextual information about the incident to expedite impact assessments, troubleshooting, and resolution times.

The Analytics Pipeline receives all key metrics to aid in the detection and correlation of anomalies. It processes them through multiple stages to reduce the noise associated with too many alerts:

1. Anomaly Detection

As metrics flow through the Analytics Pipeline, they are monitored for anomalies that could be leading indicators of issues. These indicators are then associated with a monitored object (i.e., Application, Device, or Interface) to provide metric-relevant context, including associated metadata.

Riverbed IQ applies machine learning and AI algorithms, like baselining and variance analysis, to detect anomalies and surface potential problem indicators. It also leverages thresholds to set high-watermark indicators.

  • Thresholds are simple “trip-wires” applied to metrics that will quickly create an indicator when the associated threshold is violated. For example, thresholds are used to detect issues like device down or interface utilization above 90%. Thresholds work well in situations where there is a known range, such as interface utilization. Thresholds are also paired with a baseline to handle cases where high values are normal.
  • Baselines are a method of assessing performance or behavior by comparing it to a historically derived baseline. Baselining is useful for handling performance metrics that do not have a fixed range, and where it is difficult to know when a performance indicator has entered a bad state. For example, organizations today use hundreds of applications, and performance across those applications varies widely. A static threshold for latency or response time across all applications does not work, so baselines are used to learn the normal behavior of each application, and anomalies are created when an application’s metrics fall outside of its normal range.
  • Variance analysis is the comparison of predicted and actual outcomes.
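To make the three indicator types concrete, here is a minimal sketch in Python; the function names, tolerances, and sample data are illustrative assumptions, not Riverbed IQ’s actual algorithms:

```python
import statistics

# Illustrative sketches of the three indicator types described above.
def threshold_indicator(value, limit=90.0):
    """Simple trip-wire, e.g. interface utilization above 90%."""
    return value > limit

def baseline_indicator(history, value, tolerance=3.0):
    """Flag a value that deviates from its historically learned normal range."""
    mean = statistics.mean(history)
    stdev = statistics.stdev(history)
    return abs(value - mean) > tolerance * stdev

def variance_indicator(predicted, actual, tolerance=0.2):
    """Compare predicted and actual outcomes."""
    return abs(actual - predicted) / predicted > tolerance

# Hypothetical per-application response times in milliseconds.
history = [100, 105, 98, 102, 101, 99, 103]
print(baseline_indicator(history, 180))  # -> True (well outside normal)
print(threshold_indicator(72.5))         # -> False (utilization under 90%)
```

Note how the baseline check needs no fixed limit: the same function works for a 100 ms application and a 2 s application, which is exactly why baselines scale across hundreds of applications where one static threshold cannot.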

The Riverbed IQ engineering and data science teams are continuously updating Riverbed IQ with more machine learning tools (i.e., algorithms) to grow and improve its AI capabilities.

The Riverbed IQ Impact Dashboard reflects the results of the analytics analysis and displays results according to impact on the business.

2. Correlation Engine

The correlation engine determines if there is any commonality or relationship between the detected anomalies. This is done to reduce noise. It organizes indicators into associated groupings to correlate related indicators through use of time, location, connection, and relationship maps.
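A toy illustration of this kind of grouping, assuming each indicator carries a timestamp and a location; the field names and window size are invented for the example, not the product’s data model:

```python
from collections import defaultdict

# Toy sketch: group related indicators by time window and shared location
# to reduce noise. Field names ("timestamp", "site") are hypothetical.
def correlate(indicators, window_seconds=300):
    groups = defaultdict(list)
    for ind in indicators:
        bucket = ind["timestamp"] // window_seconds  # coarse time bucket
        groups[(bucket, ind["site"])].append(ind)
    return list(groups.values())

indicators = [
    {"timestamp": 1000, "site": "nyc", "metric": "latency"},
    {"timestamp": 1100, "site": "nyc", "metric": "retransmits"},
    {"timestamp": 1050, "site": "lon", "metric": "latency"},
]
groups = correlate(indicators)
print(len(groups))  # -> 2 (the two NYC indicators fall into one group)
```

A real correlation engine would also consult connection and relationship maps, as described above, rather than location alone.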

3. Incident Manager

The incident manager assesses the newly reported detections to determine if they constitute a new incident or if they are associated with an existing incident. A trigger is generated for new incidents so that the proper Runbook can be executed automatically.
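The new-versus-existing decision can be sketched as follows; the class and keys are hypothetical, not Riverbed IQ’s implementation:

```python
# Toy sketch of the incident manager's new-vs-existing decision.
class IncidentManager:
    def __init__(self):
        self.incidents = {}  # group_key -> list of detections

    def report(self, group_key, detection):
        is_new = group_key not in self.incidents
        self.incidents.setdefault(group_key, []).append(detection)
        if is_new:
            return f"trigger runbook for {group_key}"  # new incident
        return "attached to existing incident"         # known incident

mgr = IncidentManager()
print(mgr.report("nyc-latency", "anomaly-1"))  # -> trigger runbook for nyc-latency
print(mgr.report("nyc-latency", "anomaly-2"))  # -> attached to existing incident
```

The key point is that only genuinely new incidents generate a trigger, which is what keeps runbook executions (and alerts) proportional to real problems rather than raw detections.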

For more information on Riverbed IQ and how it leverages analytics and runbooks to provide actionable insights that aid customers in faster, more efficient troubleshooting, click here.

]]>
The Flexibility of Riverbed IQ Runbooks for Automating Troubleshooting https://www.riverbed.com/blogs/runbooks-for-automating-troubleshooting/ Fri, 28 Oct 2022 12:37:00 +0000 /?p=18992 Riverbed IQ, Riverbed’s SaaS-based Unified Observability service, uses automated investigative workflows, called runbooks, to enable faster, easier root cause analysis. The no-code runbooks play a significant role in automating the troubleshooting process. In fact, they mimic an organization’s troubleshooting workflows to automate the collection of incident details.

These incident details are then stored in the Impact Summary, which shows the results of the runbook investigations, so all data about an incident is in one spot. The insights are immediately actionable as they deliver context-rich, filtered results that are ready for IT. Using a broad range of network, infrastructure, application, and end user experience data to develop the insights means cross-domain IT teams can effectively collaborate on root cause analysis. The benefits are faster mean time to know and mean time to resolution for the most business-impacting alerts.

Out-of-the-box runbooks

Riverbed IQ ships with a library of runbooks to ensure you get immediate value with minimal effort. Out-of-the-box, Riverbed IQ provides three runbooks–Interface Analysis, Device Analysis, and Application Analysis. These runbooks automate the process of gathering evidence, building context, and setting priorities for everyday IT problems.
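As a rough sketch of what such an automated investigation does (gather evidence, build context, set a priority), here is a hypothetical "device analysis" branch in Python; none of these names are Riverbed IQ’s actual no-code building blocks, and the metric fetcher is an invented stand-in:

```python
# Toy sketch of a runbook as a branching investigation whose results land
# in an Impact-Summary-like dict. All names here are illustrative.
def device_analysis_runbook(incident, fetch_metric):
    summary = {"incident": incident["id"], "evidence": {}}
    cpu = fetch_metric(incident["device"], "cpu")
    summary["evidence"]["cpu"] = cpu
    if cpu > 90:  # branch: device is busy, gather more evidence
        summary["evidence"]["top_talkers"] = fetch_metric(
            incident["device"], "top_talkers")
        summary["priority"] = "high"
    else:         # branch: device looks healthy, look elsewhere
        summary["priority"] = "low"
    return summary

# Invented sample data standing in for live telemetry.
fake_metrics = {"cpu": 95, "top_talkers": ["10.0.0.5"]}
result = device_analysis_runbook(
    {"id": "inc-1", "device": "core-sw-1"},
    lambda device, metric: fake_metrics[metric],
)
print(result["priority"])  # -> high
```

The value of encoding the branches this way is that a first-level responder receives the already-gathered evidence and priority instead of having to know which metrics to pull.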

You can use runbooks just as they come out-of-the-box, as many customers are doing. The Riverbed engineering team has spoken to hundreds of customers about how they troubleshoot, so the out-of-the-box runbooks reflect that experience.

Administrators also have full flexibility to edit runbooks. This means administrators can customize or create any runbook, so it is tailored exactly to the organization’s particular needs. Admins can also create new runbooks and export, import, duplicate or delete them.

View Riverbed IQ runbooks in action

No-code runbooks in Riverbed IQ are easy to edit or create to ensure they are tailored to your organization’s requirements. Watch the video below to see how simple it is to edit a runbook. In this short video, we walk through how to:

  • Add another branch to an existing runbook
  • Ensure the runbook executes specific behaviors based on data unique to each incident
  • Verify it displays properly in the Impact Summary

Watch Video

In summary, Riverbed IQ unified observability provides easy control over your runbooks and troubleshooting data to simplify root cause analysis and reduce alert overload. The intelligence built into the Riverbed IQ runbooks replicates the troubleshooting workflows of IT experts to gather context, set priorities, and highlight events that impact the most users, devices, and/or applications. As a result, Riverbed IQ reduces the volume of alerts to the most business-impacting ones and empowers staff at all skill levels to identify and solve problems fast.

]]>
Shift Left for NetOps https://www.riverbed.com/blogs/shift-left-for-netops/ Mon, 24 Oct 2022 12:30:00 +0000 /?p=18435 Shift left is not new. In DevOps, for example, shift left means involving testing teams earlier in the development process and testing at all stages to find bugs when they are easier and less costly to fix. In NetOps, it means enabling more staff to take on first-level troubleshooting responsibilities without having to escalate to the experts.

Shift left for NetOps teams using Riverbed IQ offers significant benefits:

  • Reduces alert fatigue by identifying only business-impacting events
  • Enhances IT satisfaction by enabling junior staff while taking the burden off the IT experts
  • Improves digital experience by reducing mean time to know/resolution (MTTK/MTTR)
  • Improves IT efficiency by solving problems sooner
  • Increases productivity by enabling IT experts to focus on revenue-generating projects

Let’s explore each of these benefits in more detail…

Reduce alert fatigue

Today’s IT environments are profoundly more complex than in the past, with immensely more data and alerts to contend with. Most monitoring alerts provide little context to guide the troubleshooting process. For some companies, it has become impossible to manually investigate every alert; others turn alerting off altogether and wait for the phone to ring. In short, it’s becoming more difficult for IT to separate critical events from the noise, to identify business-impacting events, or to resolve incidents quickly.

Riverbed IQ can separate the noise from impactful events and get more IT staff troubleshooting at all levels, not just the experts. By leveraging AI/ML-based correlations to identify business impacting issues, and low-code investigations (runbooks) to automate the process of gathering evidence, building context, and setting priorities, the Riverbed IQ service provides the right details to enable incidents to be resolved by first-level responders.

Riverbed IQ uses AI and ML-based correlations to identify impactful events.

Enhance IT satisfaction

Enterprises often rely on a small number of highly skilled IT staff to troubleshoot complex issues. These skilled team members typically have wide technical and institutional knowledge, which puts them in high demand. Frequently, when the experts aren’t available, it takes an organization longer to reach resolution, or the problem may not get resolved until the expert returns. And for IT team members who get pulled into troubleshooting when it’s not their primary job, it means being pulled away from strategic projects, which can lead to project delays and cost overruns.

Riverbed IQ codifies the knowledge that resides in your experts into automated runbooks that can easily be tweaked to your organization’s requirements. These customizable troubleshooting workflows enable more IT staff to troubleshoot effectively. By spreading the burden across more people, precious expert resources won’t burn out, and junior staff are empowered to learn faster and take on more responsibility.

Automated investigations or runbooks automate and replicate IT’s process of gathering evidence, building context, and setting priorities so the context required to troubleshoot is always available.
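
Conceptually, an automated investigation of this kind is a pipeline: gather evidence, build context, set a priority. The sketch below is purely illustrative; the field names, scoring rules, and function names are hypothetical, not Riverbed IQ’s actual runbook API.

```python
# Illustrative runbook pipeline: gather evidence, build context,
# then set a priority. All data fields and thresholds are made up.

def gather_evidence(event):
    # A real runbook would query telemetry sources here; this sketch
    # just reads fields already attached to the event.
    return {
        "affected_users": event.get("affected_users", 0),
        "site_is_critical": event.get("site_is_critical", False),
    }

def build_context(evidence):
    # Collapse the raw evidence into a single impact score.
    score = evidence["affected_users"]
    if evidence["site_is_critical"]:
        score *= 2
    return score

def set_priority(score):
    if score >= 100:
        return "P1"
    if score >= 20:
        return "P2"
    return "P3"

def run_investigation(event):
    return set_priority(build_context(gather_evidence(event)))

print(run_investigation({"affected_users": 60, "site_is_critical": True}))  # P1
```

Because each step is an ordinary function, a first-level responder can run the whole pipeline without knowing how an expert would have gathered the evidence by hand.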

Improve digital experience

Companies that enable level 1-2 IT staff to proactively identify and resolve problems early in the troubleshooting process achieve more first-time resolutions. Earlier resolution leads to less downtime, better service quality and increased user satisfaction.

Improve efficiency

By avoiding the need to escalate incidents to IT experts, the organization improves MTTR. Shift left also enables IT experts to focus on revenue-generating projects: with less time dedicated to incident and fault resolution, they can pursue forward-looking initiatives that advance the organization’s digital transformation.

Enable shift left with Riverbed IQ

Riverbed IQ is a cloud-native, SaaS-delivered, open, and programmable solution for Unified Observability that empowers all IT staff to identify and fix problems efficiently. It uses full-fidelity end user experience and network performance data and then applies AI and machine learning (ML) to correlate disparate data streams and identify business-impacting events. This intelligence also informs low-code investigative runbooks that replicate the troubleshooting workflows of IT experts to gather additional context, filter out noise, and set priorities. The result reduces the volume of alerts to the most business-impacting ones and empowers staff at all skill levels to identify and solve problems faster.

To learn more about how Riverbed IQ helps organizations shift left, visit https://www.riverbed.com/products/riverbed-iq.

]]>
The Power of Full-Fidelity Telemetry in Unified Observability https://www.riverbed.com/blogs/power-of-full-fidelity-telemetry/ Fri, 21 Oct 2022 12:40:00 +0000 /?p=18866 Riverbed IQ’s approach to unified observability begins with the full-fidelity telemetry our market-leading NPM and DEM products provide. It applies artificial intelligence and machine learning (AI/ML) on this cross-domain data and correlates incidents across the data to identify business-impacting performance problems. Riverbed IQ then leverages automated workflow intelligence to gather additional evidence, build context, and set incident priorities. By reaching back into the Riverbed full-fidelity telemetry, IQ can fill in the supporting details—like affected clients, impacted devices, network round trip time, and more—to provide relevant perspectives to the Impact Summary.

This blog will dig into the importance of using full-fidelity telemetry with the Riverbed IQ unified observability service. But first, let’s define what Riverbed means by “full-fidelity.”

What is full-fidelity telemetry?

Full-fidelity data means you see and preserve every session in detail. It’s the capture and retention of every flow, every packet, every application transaction, and all user experience metrics so you see every incident. Having all data at your fingertips means you can rapidly search, pivot, and filter on any and all traffic of interest. Full-fidelity data enables quick answers to difficult questions—even if it happened weeks or months ago.
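
The “search, pivot, and filter on any traffic” claim reduces, in practice, to querying a complete store of retained records. The sketch below is a toy illustration under assumed field names (`ts`, `src`, `dst`, `bytes`), not any Riverbed product’s data model; the point is that with nothing sampled away, historical questions have exact answers.

```python
# Toy full-fidelity flow store: every record is retained, so a
# historical question is just a filter. Field names are hypothetical.
from datetime import datetime

flows = [
    {"ts": datetime(2022, 9, 1, 10, 0), "src": "10.0.0.5", "dst": "8.8.8.8", "bytes": 1200},
    {"ts": datetime(2022, 9, 1, 10, 5), "src": "10.0.0.7", "dst": "10.0.0.9", "bytes": 900},
]

def flows_from_host(flows, host, since):
    # Because nothing was sampled away, the result is exact,
    # even for traffic from weeks or months ago.
    return [f for f in flows if f["src"] == host and f["ts"] >= since]

result = flows_from_host(flows, "10.0.0.5", datetime(2022, 9, 1))
print(len(result))  # 1
```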

Riverbed full-fidelity telemetry

Riverbed offers a broad set of telemetry across multiple IT domains. Riverbed IQ currently supports network, infrastructure, and end user experience metrics from the following products:

  • Riverbed NetProfiler leverages full-fidelity network flow monitoring to proactively identify and quickly troubleshoot performance and security issues.
  • Riverbed AppResponse captures and stores all packets. It delivers all-in-one packet capture, application analysis, transactional details, and flow export on the same box.
  • Riverbed NetIM is a holistic solution for discovering, modeling, monitoring, and troubleshooting your IT infrastructure. It supports SNMP, streaming telemetry, WMI, CLI, and syslog.
  • Riverbed Aternity provides rich visibility into employee experience for your organization’s cloud, SaaS, thick client, and enterprise mobile apps.
The Riverbed Unified Observability portfolio consists of a broad range of full-fidelity telemetry, from DEM to NPM.

The problem with sampled data

Sampling is the opposite of full fidelity. Metadata generated from sampled metrics can leave significant gaps in visibility and lead to blind spots that make it difficult to detect performance and security issues. For example, some vendors only collect packet metrics based on KPIs. This may be adequate for many incidents, but because the actual packets are not stored, the details aren’t available when you do need them.

Another example is using sampled flow data. Sampling is typically employed to reduce the volume of flow records exported from each network device. While this practice allows you to deploy cheaper, lower spec’d telemetry solutions, it also effectively cuts corners on providing the complete view that IT needs for fully effective visibility and forensics. As such, Riverbed does not recommend sampling if you are using flow, and instead, encourages using raw flows whenever possible.

There are trade-offs to using sampled flow, especially for security or forensics analysis. Metadata generated from sampled flow leaves a big gap in visibility. Consider a 10G link where sampled flow data is generated by sampling a typical 1 in 2,000 packets: 99.95% of traffic is never viewed or stored for future use, and we get visibility into only 0.05% of traffic flows. That might be fine for capacity planning, but it’s not nearly sufficient for good visibility or observability.
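
The arithmetic behind those percentages is a one-liner, shown here as a quick back-of-the-envelope check:

```python
# Back-of-the-envelope check of the 1-in-2000 sampling figures:
# what fraction of packets is actually observed?

sample_rate = 1 / 2000
seen_pct = sample_rate * 100     # 0.05% of packets observed
unseen_pct = 100 - seen_pct      # 99.95% never viewed or stored

print(f"{seen_pct:.2f}% seen, {unseen_pct:.2f}% unseen")
```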

Riverbed IQ leverages full-fidelity visibility

Riverbed IQ works best with full-fidelity telemetry. In fact, it can analyze more than 10 million data points per minute from supporting Riverbed telemetry. Because Riverbed telemetry captures everything and doesn’t sample, you’ll never miss a performance problem. And because Riverbed solutions provide deep and broad visibility, they are perfectly suited to provide baseline metrics for Riverbed’s new Riverbed IQ unified observability service.

]]>
Riverbed and Riverbed IQ on the Road https://www.riverbed.com/blogs/riverbed-on-the-road/ Wed, 19 Oct 2022 12:15:47 +0000 /?p=19131 Wow, what a month it’s been! Just over four weeks ago, Riverbed announced General Availability of our cloud-native, SaaS-delivered Riverbed IQ unified observability service that empowers IT with actionable insights and intelligent automation to solve problems more quickly and improve the digital experience for users everywhere.

At the same time, we kicked off our Riverbed EMPOWEREDx user community road show across nine cities globally and launched a new ‘Get Shift Done’ campaign that initially appeared on the NASDAQ digital board in New York City and is now running on digital media platforms globally. The campaign is focused on the concept of ‘shift left,’ in which all IT staff are able to tackle jobs that once only a few experienced IT experts could handle. That’s the AI power behind our Riverbed portfolio and Riverbed IQ. And this all follows our brand launch in April.

Riverbed Nasdaq Digital Billboard in NYC
Riverbed’s ‘Get Shift Done’ campaign launched on the NASDAQ digital board in New York City.

As a CMO and marketer, this has been a BIG moment for our company. We’ve been on a journey the past 18 months, driving innovation to deliver a differentiated unified observability solution to the market—one that contextualizes full-stack, full-fidelity telemetry across networks, applications and users, enabling customers to transform massive amounts of data into actionable insights. We believed we had something special—but to finally take the wraps off this solution, and bring it to customers live is what’s most rewarding.

With events starting to take place in person again, our CEO, leadership team, and technology evangelists have had the opportunity to engage face-to-face with our customers and partners to demonstrate the value of our Riverbed and Acceleration solutions. I was fortunate to travel to Paris to meet with customers at our EMPOWEREDx event, and to Dubai, where last week I attended the GITEX event, which is in full swing again! Riverbed also hosted EMPOWEREDx events in London, San Francisco, Washington DC, Dubai, and Dallas, as well as in New York City yesterday, with Melbourne on October 26 and Singapore to follow in November.

CMO Jonaki Egenolf spoke with customers at Riverbed's EMPOWEREDx event
With Riverbed’s EMPOWEREDx events occurring in cities globally, we’ve had the opportunity to engage directly with our partner and customer community.

Here are some of the things we’ve heard the past few weeks from our customer community:

  • IT is now synonymous with business, and is top of mind for the C-Suite.
  • One of the biggest challenges organizations face is data overload, including receiving too many alerts without enough context; IT leaders say they need greater context around the data and various monitoring tools they have in place.
  • IT resources are tight and often scarce, and there’s a need for more automation and for enabling broader IT teams to fix issues faster and ensure digital service quality.
  • Acceleration of apps and networks, regardless of user location, still matters.
  • Before the pandemic, digital transformation was starting to take shape, but today it’s in full motion and delivering on the digital experience is central to organizations.

What we heard from our enterprise and government customers really validates our technology direction. At Riverbed, we’re fully focused on meeting critical customer needs, including delivering a unified approach to observability that unifies data, insights and actions across all IT. Ultimately, this empowers IT teams to empower digital experiences.

Riverbed team at GITEX
Many of our customers joined us at GITEX for open labs and live demos of Riverbed IQ.

Many of our customers joined us for open labs and live demos of Riverbed IQ at EMPOWEREDx, GITEX, or other events. The feedback on this solution has been overwhelmingly positive. If you are in Melbourne or Singapore, please join us in person over the next few weeks to experience Riverbed for yourself. Otherwise, request a demo now of Riverbed IQ or other Riverbed or Acceleration portfolio solutions. We’re ready to help you on your journey—to scale IT, turn data into actionable insights, and Empower the Experience. Let’s do this!

]]>
AIOps and Observability: What’s the Difference? https://www.riverbed.com/blogs/aiops-and-observability-difference/ Mon, 10 Oct 2022 05:31:00 +0000 /?p=19035 Interest in AIOps and observability tools sky-rocketed over the past couple of years as IT teams face the challenge of managing today’s IT infrastructures. The data explosion from modern architectures floods IT teams with massive volumes of data and alerts without context. Organizations are transforming and expanding service offerings into cloud-native, geographically distributed, container and micro-service-based architectures. Most are continuing to enable their employees to work remotely, requiring IT to support employees suffering from performance issues on home networks, on devices being run in sub-optimal conditions, and using SaaS and Shadow IT applications obtained outside of corporate IT.

The increase in complexity and volume of alerts is exacerbated by a shortage of highly skilled expert IT resources. Troubleshooting alerts requires expert IT staff to devote an excessive amount of time, taking them away from more strategic responsibilities.

Legacy monitoring can’t keep up with today’s IT environments

The situation is made worse by monitoring tools that alert on single metrics without broader context or correlation, conveying nothing about the scope or severity of the impact. IT teams have used legacy monitoring tools to collect data and generate alerts on violations of fixed or rate-of-change thresholds. Setting these alerts requires understanding the dependencies of the underlying infrastructure and what constitutes unacceptable performance. Monitoring is predicated on knowing in advance which signals you want to monitor (“known unknowns”).
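
As a toy illustration of the fixed-threshold pattern (not any particular product’s logic), such a monitor reduces to a single comparison per sample, firing one context-free alert per violation:

```python
# Minimal static-threshold monitor: fires on a single metric crossing
# a fixed line chosen in advance, with no scope, severity, or
# business context attached. Threshold and samples are illustrative.

THRESHOLD_MS = 200  # fixed latency threshold

def check(samples_ms):
    # One context-free alert per violation -- the behavior that
    # produces alert fatigue at scale.
    return [f"ALERT: latency {s} ms > {THRESHOLD_MS} ms"
            for s in samples_ms if s > THRESHOLD_MS]

alerts = check([120, 250, 180, 400])
print(len(alerts))  # 2
```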

In today’s modern, highly distributed architectures, it’s impossible for humans to understand these dependencies. Micro-service-based architectures spin up and down in cloud-native environments, so tracing topology after an incident doesn’t help. Too many factors outside of IT’s control affect the digital experience of an employee working from home for IT to resolve the issue. With proper credit to Donald Rumsfeld, there are too many “unknown unknowns.” These challenges have paved the way for observability and AIOps (Artificial Intelligence for IT Operations). Observability and AIOps are closely related, but there’s a difference between the two.

What is observability?

From systems control theory, observability is defined as the ability to measure the internal states of a system by examining its outputs. A system is considered “observable” if the current state can be estimated by using information only from outputs, namely sensor data.

In other words, building observability into a system eliminates the need to directly understand the dependencies of the underlying infrastructure, which can be treated as a “black box.” This is especially important in distributed systems like cloud-native environments, hybrid cloud networks, and even highly distributed remote work environments.
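
The control-theory definition can be made concrete. For a linear system x′ = Ax with output y = Cx, the system is observable exactly when the stacked observability matrix [C; CA; …; CAⁿ⁻¹] has full rank. A minimal check, with illustrative matrices (nothing here is a Riverbed construct):

```python
# Observability test from linear systems theory: stack C, CA, CA^2,
# ..., CA^(n-1) and check for full rank.
import numpy as np

def is_observable(A, C):
    n = A.shape[0]
    blocks = [C @ np.linalg.matrix_power(A, k) for k in range(n)]
    O = np.vstack(blocks)
    return np.linalg.matrix_rank(O) == n

A = np.array([[1.0, 1.0],
              [0.0, 1.0]])
C = np.array([[1.0, 0.0]])   # we can only measure the first state
print(is_observable(A, C))   # True: outputs alone reveal both states
```

The "black box" framing above is exactly this property: you never need to open the system, only watch its outputs.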

But observability is only as effective as the quantity and quality of the telemetry being provided. For IT to troubleshoot effectively, observability must include data across the full stack, including network, infrastructure, applications, digital experience, business key performance indicators (KPIs) and user sentiment. Unified Observability not only covers all these IT domains, but also captures metrics without sampling, so that full-fidelity data can be leveraged when resolving issues.

AIOps Market definitions

According to the Gartner Glossary, “AIOps combines big data and machine learning to automate IT operations processes, including event correlation, anomaly detection and causality determination.”

In the Forrester Now Tech: Artificial Intelligence for IT Operations, Q2 2022 (registration required), Forrester defines AIOps as “a practice that combines human and technological application of AI/ML, advanced analytics, and operational practices to business and operations data.”

AIOps platforms provide insights to IT staff by using AI and Machine Learning (ML) techniques to analyze telemetry and events from across the IT infrastructure and identify meaningful patterns that support proactive responses. In this way, AIOps platforms make the IT infrastructure observable to the IT teams involved in identifying and resolving issues.
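
A drastically simplified sketch of that pattern: learn a baseline from historical telemetry, then flag new points that deviate anomalously. Real AIOps models are far richer; the z-score rule and sample data below are illustrative only.

```python
# Toy anomaly detector: score a new telemetry value against a
# baseline learned from history using a simple z-score rule.
from statistics import mean, stdev

def is_anomalous(history, value, z_threshold=3.0):
    mu, sigma = mean(history), stdev(history)
    return abs(value - mu) > z_threshold * sigma

history = [100, 102, 98, 101, 99, 100, 103, 97]  # latency in ms
print(is_anomalous(history, 500))  # True: far outside the baseline
print(is_anomalous(history, 101))  # False: ordinary variation
```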

The similarities between AIOps and observability

There’s a tight connection between AIOps and observability, but they are not the same. AIOps and observability have many common aspects and many vendors refer to their products as both AIOps platforms and observability platforms. So, market confusion is understandable.

  1. Similar business drivers. Business transformation is increasing the complexity of the underlying IT infrastructure of new applications and services that organizations are rolling out to better serve their customers. The integration of on-premises infrastructure and cloud services creates complex, ephemeral architectures that make it nearly impossible for humans to analyze and resolve issues.
  2. Shared customer requirements. IT teams need to move their operations from reactive to proactive. This goal is not new. IT has always striven to identify and resolve problems before end users are affected. But the increased dependence on digital performance has raised the stakes for greater availability and faster resolution times.
  3. Both evolve from traditional monitoring. Because of the limitations of traditional monitoring products addressed above, organizations are evolving to tools that incorporate AIOps and observability capabilities, especially in response to the challenges of managing cloud-native environments. In response, vendors are evolving their Application Performance Monitoring (APM) products into both areas.
  4. Common use cases. Many products in the AIOps and observability segments address use cases for DevOps and site reliability engineering (SRE) teams. Again, traditional APM vendors have focused on use cases for these teams.
  5. Both are subject to over-hype. Both categories are listed in the “Peak of Inflated Expectations” according to the Gartner Hype Cycle for Monitoring, Observability and Cloud Operations, 2022 (20 July 2022, ID G00770623, registration required).
Gartner Hype Cycle, AIOps, Observability
Both AIOps and Observability are shown in the “Peak of Inflated Expectations” in the Gartner Hype Cycle for Monitoring, Observability and Cloud Operations, July 2022, ID G00770623.

Riverbed’s focus: customer pain points

Confusion will continue to exist in the market about the difference between AIOps and observability. At Riverbed, we’re focused less on the specific names of market categories and more on addressing the needs of our customers. Unlike other observability solutions that limit or sample data, the Riverbed Unified Observability portfolio captures full-fidelity user experience, application, and network performance data on every transaction across the digital ecosystem. It then applies AI and ML to contextually correlate data streams based on indicators of problems to provide actionable insights.

Our newest unified observability service, Riverbed IQ, automates the investigative workflows of IT experts, empowering staff at all skill levels to solve problems fast. With Riverbed IQ, IT can eliminate data silos, resource-intensive war rooms, and alert fatigue. They can enable cross-domain decision-making, apply expert knowledge more broadly, and continuously improve digital service quality. You can register for a complimentary evaluation today.

]]>
Network Visibility Proves Its Worth In Defending Against Cyber Attacks https://www.riverbed.com/blogs/network-visibility-proves-worth/ Fri, 07 Oct 2022 12:25:13 +0000 /?p=19098 The development of increasingly powerful and sophisticated IT security tools to defend against cyberattacks can be described as an arms race—with public and private sector organizations of all types acquiring the latest tools and technology, each being promoted as the most effective new weapon against cyber threats.

Ironically, more and more IT experts and security leaders are recognizing that one of the most mature IT management technologies around—one that was not designed primarily for IT security work—is an essential tool for defending against cyber threats.

I’m referring to network infrastructure management tools, also known as network visibility tools.

Today’s advanced IT security technologies are very good and have come a long way in the last 15 years. They can handle huge amounts of data, apply AI and ML analytics, automate formerly manual processes to make IT security teams more effective and efficient, and they can claim to protect against a very high percentage of attempted attacks hitting networks. So, let’s acknowledge they may be stopping 99.5% or even 99.9% of attacks. However, it’s the remaining 0.5% or even 0.1% of threats that can cause the most damage and cost the most money.

When every one of an organization’s IT security layers and tools has failed, and its leaders are dealing with a ransomware or malware attack or have a threat actor inside the network, what do they do then?

For organizations of all sizes, but especially state and local government agencies, to prevent these select, advanced, sophisticated attacks, defenders must think differently, even creatively. Because when facing sophisticated threat actors or a knowledgeable insider that knows an organization’s blind spots, all the automated technology in the world can’t stop every one of those threats.

The best defense against these types of adversaries is a curious, determined human being armed with complete visibility into your network environment. Those are the people who you want on your threat hunting team—people who are suspicious of everything, who don’t accept anything on trust, who want to see for themselves and who keep pushing to get to the most granular, detailed level of every aspect of your IT environment.

That’s when the value of network visibility and network management tools becomes crystal clear, especially the tools that use full fidelity monitoring (not sampling). When your threat hunters can see every single component on your network—every server, laptop, desktop, router, firewall, switch, port, every packet of data, all the details of traffic flow, and more—it allows your team to determine where, when, and how the threat actors got in. Armed with these network visibility tools, your team can look at every point on the network and confirm whether it should or shouldn’t be there, identifying new and unknown devices, ultimately closing and eliminating network blind spots that threat actors thrive on.
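
The “confirm whether it should or shouldn’t be there” step can be sketched as a diff between what full-fidelity visibility observes on the wire and an authorized inventory. The addresses and data model below are made up for illustration; a real deployment would pull both sides from live telemetry and a CMDB.

```python
# Toy unknown-device check: anything seen on the network but absent
# from the authorized inventory is a candidate for investigation,
# like the rogue host described below. All addresses are fictitious.

authorized = {"10.1.0.10", "10.1.0.11", "10.1.0.12"}

def unknown_devices(observed):
    # Set difference: observed-but-not-authorized hosts.
    return sorted(set(observed) - authorized)

seen_on_network = ["10.1.0.10", "10.1.0.11", "10.1.0.99"]
print(unknown_devices(seen_on_network))  # ['10.1.0.99']
```

The hard part in practice is not the diff but the completeness of the observed side, which is exactly what full-fidelity monitoring supplies.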

The 2020 SUNBURST cyberattack, which compromised countless government agencies and private organizations by embedding malware in a legitimate software update, is a case in point showing how a sophisticated cyber threat can evade traditional IT security tools. The malware was designed in an extremely sophisticated way, with many features built to evade detection, including tactics that made its communications and traffic appear benign. For example, by using in-country command and control servers, the SUNBURST malware left victim organizations unable to determine whether traffic was leaving their secure environments and connecting to a known haven country for cyber threat actors.

This approach eliminated a condition that would have been a clear red flag. The attack was only discovered when another security company found one of its security tools stolen and posted on the dark web. That company began investigating and not only discovered the SUNBURST attack, but an even longer attack called SUPERNOVA that had been evading detection for over two years!

Another example that illustrates the need for active threat hunting with network visibility comes from one of our own government customers. In this case, the customer was leveraging our Riverbed NetProfiler and Riverbed NetIM network visibility tools, which uncovered a significant vulnerability hiding in plain sight.

Using our tools, the IT team monitoring the network environment noticed a new, unknown device and, upon further investigation, discovered they were not able to log in to it. This and other characteristics raised suspicions that it might be malicious. They were able to quickly zero in on the device’s physical location: a cubicle in their own offices. The unauthorized computer had been put in place by several corrupt employees who were exfiltrating data for financial gain.

The insiders in this case had enough knowledge of our customer’s network environment that they were able to place the rogue computer on the network in a place where it would have access to data in an East-West fashion, ensuring that its communications never had to cross a security boundary. There was a good chance this threat could have remained in place for months or even years. But in this case, by having excellent network visibility, the threat was discovered quickly and shut down within hours.

More security layers are a good thing. By all means, I’m a fan of continuing to add more sophistication to IT security tools. But it is valuable to remember that you can’t successfully defend your network if you can’t see every part of it. Yes, network visibility tools have been around for years, and they have many benefits that aren’t strictly related to IT security (like performance and bandwidth optimization, operational troubleshooting, and end-user experience improvements), but network visibility tools are absolutely invaluable as part of a comprehensive IT security approach.

That’s probably why the federal Cybersecurity and Infrastructure Security Agency (CISA) has included using network visibility in their Ransomware Guide Best Practices, which states:

“Develop and regularly update a comprehensive network diagram that describes systems and data flows within your organization’s network. This is useful in steady state and can help incident responders understand where to focus their efforts. The diagram should include depictions of covered major networks, any specific IP addressing schemes, and the general network topology (including network connections, interdependencies, and access granted to third parties or MSPs).”

That’s great advice, but no easy task if you’re not using Riverbed’s network visibility tools. Riverbed’s solutions get the job done and are the ones you want to place your bets on to truly protect your network environment.

]]>
Riverbed Aternity Named a Strong Performer in The Forrester EUEM Wave https://www.riverbed.com/blogs/aternity-forrester-euem-wave/ Thu, 06 Oct 2022 12:35:00 +0000 /?p=18907 It’s been a busy year for analyst reports in Digital Experience Monitoring (DEM) and End User Experience Management (EUEM). Riverbed’s Aternity DEM platform covers the full spectrum of digital experience: from monitoring the performance of real business activities on end-user devices all the way to the backend systems that power them. This means going toe-to-toe with a wide range of other vendors. Today we’re going to look at what Forrester Research has to say about Riverbed Aternity DEM.

What is End User Experience Management?

In general, Forrester defines End User Experience Management as “A set of client-side capabilities that helps operations pros manage the daily technology experience of employees by collecting and analyzing telemetry data from employee devices, apps, networks, identity, and user feedback.” (Forrester Now Tech: End-User Experience Management, Andrew Hewitt, Q2 2022). The key here is that Forrester focuses exclusively on monitoring and managing the employee digital experience.

The Forrester Wave™: End-User Experience Management, Q3 2022 evaluates and scores the top nine EUEM vendors based on 30 key criteria. We are excited to be named as a Strong Performer this year, and to see a major analyst firm like Forrester understand the Riverbed vision and articulate the differentiators and strengths we continue to work on for our customers.

Most notably, analyst Andrew Hewitt writes: “Aternity targets the full spectrum of digital experience monitoring for both employees and customers, which is unique in this market. Its User Journey Intelligence capability best represents this.” Our teams have worked hard to deliver capabilities that cover the full spectrum of Digital Experience Management for busy IT teams, and it’s encouraging to see Forrester Research call this out.

Aternity User Journey Intelligence for End User Experience Management
Measure the end user experience of ALL of the applications involved in the customer journey

But what do we mean by “Full Spectrum DEM?”

Think of it as a way to connect telemetry to business outcomes from employees to customers, and from front-end applications to the back-end infrastructure that powers them. This includes:

  • Insights into the digital experience of both customers AND employees.
  • The measurable impact of digital experience on business outcomes AND technical telemetry.
  • Unified performance visibility of both employee devices AND the application service, including cloud-native environments.
  • A big data approach that captures and stores ALL transactions without sampling.
  • The ability to measure actual employee experience for ALL types of applications.

Another observation by Forrester clarifies that this strategy is a true differentiation point for the Aternity DEM platform: “Aternity’s core differentiator is full-stack application transaction monitoring. Uniquely, Aternity can measure click-to-render for both web and client apps, with full visibility into client, network, and back-end infrastructure dependencies.”

Aternity Digital Experience Index
Aternity automatically identifies hot spots across your entire enterprise impacting employees and customers.

These differentiators—such as click-to-render measurements—continue to be a major focus for the Aternity DEM platform as we continue to find new ways to solve real-world problems for our customers. As busy Help Desk and End-User Compute teams are pushed to do more with the same (or even fewer!) resources, we continue to develop these end-to-end capabilities into our single, unified platform.

If you want to learn more about the EUEM space, download a complimentary copy of The Forrester Wave™: End-User Experience Management, Q3 2022, or get started right now with a Request Demo.

]]>
Riverbed IQ Overcomes Common IT Challenges https://www.riverbed.com/blogs/alluvio-iq-overcomes-challenges/ Mon, 03 Oct 2022 12:30:00 +0000 /?p=18501 Today’s IT environments are more complex than ever before. Technologies like hybrid work, distributed hybrid cloud, and advanced network environments such as SASE, CASB, and SD-WAN are creating new blind spots and reducing IT’s visibility. Below are some other common IT challenges and how Riverbed IQ unified observability can help your organization surmount them.

Alerting overload

Today’s IT environments are profoundly more complex than in the past, with immensely more data and alerts to contend with. Most of these alerts provide little context to help prioritize issues or expedite the troubleshooting process. Some companies get so many alerts that it’s impossible for them to manually investigate every incident. We’ve run into other organizations that turn alerting off altogether and wait for the phone to ring. This overabundance of alerts and lack of actionable insights consumes IT’s bandwidth and makes it more difficult for them to separate critical events from the noise.

Riverbed IQ employs Machine Learning (ML) to continuously analyze key metrics that characterize the IT environment and “fits” the most appropriate algorithm to each, so it extracts the most information from the data. Riverbed IQ continuously assesses the run-time environment and performance to learn behaviors and automatically adapts as the system evolves.

The Artificial Intelligence (AI) inherent in Riverbed IQ algorithms does the heavy lifting: it sifts through many datasets to quickly identify and correlate anomalous behaviors, which are then run through automated investigations (aka runbooks). Riverbed IQ’s out-of-the-box runbooks gather critical context to provide insight into impacts (specifically affected users, locations, and applications) so IT can prioritize, and collect supporting data to expedite resolution. In this way, Riverbed IQ surfaces the most critical issues so IT can tackle the most pressing incidents, rather than “clear-cutting the forest” of alerts or chasing false positives.

Skilled resources​ scarcity

In addition, enterprises often rely on a small number of high-impact, in-demand, and highly skilled IT personnel to troubleshoot complex issues. Often IT management can even name the individuals responsible. When these skilled team members are unavailable, it takes longer to get to resolution, or the problem may not get resolved until they return. ​For IT experts who get pulled into troubleshooting when it’s not their primary job, there could be unplanned impacts on work-life balance (employee satisfaction) and potential delays to strategic projects. ​

Closing the IT skills gap
Riverbed IQ helps close the IT skills gap by enabling more IT staff to troubleshoot issues.

Additionally, these skilled team members have vast institutional knowledge. It is important to retain and share this tribal knowledge as these employees are in high demand and are frequently poached.​

Riverbed IQ helps improve the quality-of-life for these skilled team members, while also enabling first-level personnel to contribute at a higher level. Riverbed IQ provides low-code automated runbooks that skilled team members can easily use to codify their knowledge. Once tribal knowledge is captured in runbooks, the skilled team members are free to pursue planned high-value tasks, while at the same time first-level personnel have immediate access to the context and supporting details needed to quickly assess/resolve issues.
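To make the runbook concept concrete, a troubleshooting workflow can be codified as an ordered series of steps that each gather context automatically. The sketch below is purely illustrative: the step functions, field names, and data are hypothetical and do not represent Riverbed IQ’s actual API.

```python
# Illustrative sketch of codifying expert troubleshooting steps as a runbook.
# All step names and fields are hypothetical, not Riverbed IQ's implementation.

def check_device_reachability(ctx):
    # Expert knowledge: a missing ping time means the device is unreachable.
    ctx["reachable"] = ctx.get("ping_ms") is not None
    return ctx

def gather_interface_errors(ctx):
    # Expert knowledge: sustained CRC errors above a threshold signal trouble.
    ctx["high_errors"] = ctx.get("crc_errors", 0) > 100
    return ctx

def assess_user_impact(ctx):
    # Quantify impact so first-level staff can prioritize without escalating.
    ctx["impacted_users"] = len(ctx.get("affected_sessions", []))
    return ctx

RUNBOOK = [check_device_reachability, gather_interface_errors, assess_user_impact]

def run_runbook(alert):
    """Run each step in order, accumulating context for first-level staff."""
    ctx = dict(alert)
    for step in RUNBOOK:
        ctx = step(ctx)
    return ctx

result = run_runbook({"ping_ms": 12, "crc_errors": 450,
                      "affected_sessions": ["u1", "u2", "u3"]})
print(result["reachable"], result["high_errors"], result["impacted_users"])
```

Once the expert’s checks live in steps like these, any first-level engineer can run the whole sequence and read off the accumulated context.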

Data granularity

Some companies deal with the volume, variety, and velocity of data and alerts by limiting or sampling data. For example, they may collect only every 10th or 100th data point, or collect only metrics. Essentially, they are making decisions based on incomplete snapshots of data. Without the full picture, this sampling can have disastrous consequences when monitoring security issues and can make troubleshooting more complex than it needs to be.
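A toy illustration of the risk, using synthetic data: keeping only every 10th data point can hide a short-lived spike entirely.

```python
# Synthetic example: a brief 3-sample latency spike vanishes under 1-in-10 sampling.
latency_ms = [20] * 60
latency_ms[34:37] = [900, 950, 900]   # short spike at t=34..36

sampled = latency_ms[::10]            # keep only every 10th point

print(max(latency_ms))  # full-fidelity view catches the spike: 950
print(max(sampled))     # sampled view misses it entirely: 20
```

Because the sampled series never touches indices 34–36, the spike leaves no trace in the downsampled data, and any alerting built on it stays silent.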

Riverbed IQ leverages the full-fidelity telemetry that our market-leading network, infrastructure, and end user experience products rely on. Because we capture everything without sampling, and because Riverbed IQ analyzes 10+ million data points per minute, you’ll never miss a critical performance problem.

Hybrid work blind spots

Lastly, “hybrid work” architectures are becoming the norm (i.e., there is no longer a difference between in-office and remote users). Hybrid architectures leverage tunneling technologies to establish “work from anywhere” environments—but tunnels create blind spots that complicate troubleshooting and problem resolution.

When employees work from an office, the network team is responsible for application access and network transport issues—and it has access to a mature toolset to help identify/resolve issues.

As work from anywhere proliferates, the responsibility for identifying and troubleshooting remote issues in these new direct-to-cloud environments still falls within the network teams’ domain. Yet, because of the new blind spots, they lack the visibility needed to be effective.

Riverbed IQ provides NetOps with rich visibility into hybrid work issues.
Riverbed IQ provides NetOps with rich visibility into hybrid work issues.

Riverbed IQ leverages Riverbed Aternity end user experience data to triangulate from the edge and provide the visibility NetOps teams need to identify and prioritize network access and performance issues, the impact it has on end users, and who to call to resolve the issue (ISP, CASB supplier, application owner, security team, etc).

Unified Observability solves common IT challenges

Riverbed IQ is a cloud-native, SaaS-delivered, open, and programmable Unified Observability service that empowers all IT staff with actionable insights that help identify the critical issues and provide important context so they can fix problems fast. It leverages full-fidelity end user experience management (EUEM), network performance monitoring (NPM), and infrastructure data across the digital enterprise and then applies AI and machine learning (ML) to correlate data streams and identify business-impacting events.

This intelligence also informs the investigative runbooks that replicate the troubleshooting workflows of IT experts to gather context, filter out noise, and set priorities. It effectively changes the NetOps model from a reactive and woefully inadequate alert-driven approach to a more intelligent solution that proactively surfaces the most business-impacting issues. The result is that IT staff of all skill levels—not just IT experts—have the context they need to identify and solve problems fast.

Learn more

To learn more about how Riverbed IQ can solve today’s common IT challenges, please check out this Riverbed checklist, “9 Ways to Achieve Actionable Insights with Unified Observability.”

]]>
Digital Experience Alphabet Soup https://www.riverbed.com/blogs/digital-experience-alphabet-soup/ Wed, 28 Sep 2022 05:31:00 +0000 /?p=18917 Measuring and improving the digital experience of employees and customers has risen in priority over the past couple of years. Several factors explain this trend. Despite efforts to return to the office, hybrid work remains a significant option for organizations around the world. Generation Z, the first “true digital natives,” expect digital services at work to be as user-friendly, intuitive, and high-performing as they are outside of the office. Both Human Resources and IT recognize the effect that technology performance has on the employee experience. More and more, company culture and physical office layouts take a back seat to employees’ digital experiences. And with record low unemployment rates, organizations are laser-focused on attracting and retaining top talent.

Digital experience market confusion

The primary capabilities of Digital Employee Experience (DEX) tools, according to Gartner’s Market Guide for DEX Tools (registration required, G00764030, 31 August 2022), are as follows:

  • Data Collection and Aggregation: Products should collect quantitative data on device and application inventory, performance and usage. They should also collect qualitative data on employee sentiment. Business context, such as employee location, department, and labor cost, adds value to the data above.
  • Analysis and Insights: Collecting the data is just the start. Products must provide IT and the business with insights on the impact of the telemetry on digital experience. Trends in usage, performance, and sentiment, aligned to personas and business context, help IT identify and resolve digital experience issues. Benchmarking a company’s metrics against others in its industry or geography helps in identifying and prioritizing digital experience improvement initiatives.
  • Action: Actions enable IT to automate the recovery actions to issues affecting digital experience, while engagement supports interacting with employees to inform them or recommend a next step. Automated remediation is a primary means of building action into digital experience management products to reduce Mean Time to Repair and improve service for commonly expected user experience issues.

The alphabet soup of different digital experience categories

Along with the demand-side factors discussed above, digital experience has become an area of focus for a variety of vendors. As illustrated in the Gartner Market Guide for DEX Tools, several market segments overlap in digital experience management capabilities.

Aternity, end user experience monitoring, EUEM, digital experience management, DEM
According to Gartner’s Market Guide to DEX Tools, several markets overlap in key product capabilities when it comes to managing digital employee experience.

All the acronyms are confusing. And to make matters worse, different analyst firms and vendors define the market segments differently. To make sense of the digital experience alphabet soup, here’s an overview of various categories.

DEM: Digital Experience Monitoring

End User Experience Management, digital experience management, DEM
Visit this link to read the full guide: Gartner Market Guide for Digital Experience Monitoring

This is one of the more confusing categories. According to the Gartner Market Guide for Digital Experience Monitoring, “digital experience monitoring technologies monitor the availability, performance and quality of an end user or digital agent experiences when using a device or application.” So, the Gartner definition includes all types of end users – both employees and customers. It also includes non-human end users, in the context of Internet of Things, such as gas turbines or utility meters which may process digital services. You can obtain a complimentary copy of the Gartner Market Guide for DEM at the link above.

Analyst firm Forrester, however, defines DEM more narrowly. For Forrester, digital experience monitoring refers to the digital experiences of an organization’s customers, not its employees. Forrester uses a different term, End User Experience Monitoring (EUEM), when the end users are employees. See below for more on that term.

CEM: Customer Experience Management

Customer Experience Management (CEM, or sometimes abbreviated as CXM) is a broader term. Gartner defines CXM as “the discipline of understanding customers and deploying strategic plans that enable cross functional efforts and customer-centric culture to improve satisfaction, loyalty and advocacy.” So, it’s a more strategic practice. CXM extends beyond IT initiatives for measuring and managing the digital experience of customers. For both Gartner and Forrester, DEM is a practice that supports CEM.

DEX: Digital Employee Experience

From the Gartner Market Guide for DEX, “DEX tools help IT leaders measure and continuously improve the technology experience that companies offer to their employees. Near-real-time processing of data aggregated from endpoints, applications, employee sentiment, along with information on organizational context, helps surface actionable insights that drive self-healing automations and engage employees, moving them toward optimal behaviors.”

So, what distinguishes DEM from DEX? Good question. I think there are two main distinctions. On the one hand, DEX goes beyond DEM by adding in the actions – the automated remediation or employee engagement – that enable IT and the business to actually improve employee experiences. Rather than simply monitoring it, DEX enables improvement through a cycle of telemetry, insights, and actions. On the other hand, DEM extends beyond DEX because it includes the digital experience of customers and non-human digital agents (think IoT), whereas DEX is solely focused on improving employee experience.

EUEM: End-User Experience Management

The Forrester Wave End-User Experience Management, Q3 2022 profiles several vendors (including Riverbed) whose capabilities enable organizations to deliver a great digital employee experience (DEX). So, for Forrester, EUEM capabilities support the goals of DEX. Just like with Gartner. Unlike Gartner’s definition of DEM, however, Forrester’s definition of EUEM means that the “end users” in this case are employees only. Not customers. That’s CEM in Forrester-speak. You can obtain a complimentary copy of the Forrester Wave for End-User Experience Management at the link above.

Forrester Wave, EUEM, End User Experience Management, digital experience management, DEM
Visit this link to read the full guide: Forrester Wave End-User Experience Management

Forrester also includes capabilities like automated remediation in their evaluation criteria for EUEM. As discussed above, Gartner would include this function as part of DEX, not DEM.

UEM: Unified Endpoint Management

Are all these letters swimming around in your soup bowl making you dizzy? Well, what happens when you drop off the first “E” from EUEM to get Unified Endpoint Management? Vendors in this category collect performance telemetry that enables IT to understand the state of an employee device—laptop, PC, virtual desktop, or company-managed tablet. They also include automated remediation scripts that enable IT to speed the recovery from commonly expected performance, security, and configuration issues.

So, products in this category know lots about the devices used by employees. But they generally don’t collect metrics on actual user experience—what employees actually see when they use applications in the course of their daily jobs.

HCM: Human Capital Management

This category serves as the counterpoint to UEM. UEM focuses on device performance, but less directly on the experience of the person using the device. Human Capital Management focuses on a broad set of practices related to people resource management, such as workforce acquisition, management, and optimization. HCM practices encompass all aspects affecting the employee experience and overlap with the categories above only where the performance of employees’ technology affects that experience.

Riverbed’s focus: addressing customer challenges

No matter what letters are used to describe the market, at Riverbed, we’re focused on the challenges of delivering an excellent experience, for both employees AND customers.

Unlike other vendors’ products, Riverbed Aternity delivers full-spectrum digital experience management by contextualizing data across every enterprise endpoint, app, and transaction to inform remediation, drive down costs, and improve productivity. To see how Aternity can help you on your journey to improve employee and customer experience, register for a Riverbed demo today. And let me know how you’re managing the alphabet soup of digital experience.

]]>
Ensure Fast, Agile and Secure App Delivery with Riverbed Acceleration https://www.riverbed.com/blogs/fast-agile-secure-app-delivery/ Mon, 26 Sep 2022 12:35:00 +0000 /?p=18690 Today’s modern hybrid workplace and work styles present unique challenges for companies. If these challenges are not met, they can threaten business continuity, customer satisfaction, employee productivity and engagement, as well as the successful execution of business-critical initiatives. For example, networks are increasingly complex, having migrated from on-prem, MPLS solutions to supporting mobility and internet-based applications. And now, they leverage remote, mobile, on-prem, and edge capabilities with Cloud/SaaS, Internet, MPLS, and SASE. Plus, employees, partners, and customers expect “always on” 24/7 digital support. Sound familiar? Read on to see how Riverbed’s Acceleration solutions can help!

Get results with Riverbed Acceleration

To meet all of these challenges, companies need accessible, secure, encrypted, and high-performing applications.

Riverbed Acceleration solutions provide fast, agile, secure acceleration of any app over any network to users, whether mobile, remote, or in the office. Built on decades of WAN optimization leadership and innovation, our industry-leading Acceleration portfolio powers cloud, SaaS, client, and eCDN (video streaming) applications at peak speeds. The solutions help overcome network speed bumps such as latency, congestion, and suboptimal last-mile conditions to empower the hybrid workforce.

The Riverbed Acceleration portfolio combines the agility, performance, and security of carrier-grade SD-WAN with innovative acceleration for Cloud, SaaS, and prem-based applications and services. With client acceleration straight to the desktops of mobile and remote workers, WANOP for traditional apps and eCDN acceleration for deploying video at enterprise scale, Riverbed ensures peak performance for every digital experience. And our solutions get results!

 

Riverbed Acceleration boosts performance, productivity, and digital experience.
Riverbed Acceleration boosts performance, productivity, and digital experience.

Our customers, like Elizabeth Harp, CIO, GHD, find our solutions essential: “Our 8,500 people collaborate on 3D apps, moving massive digital files across continents as if they’re working side-by-side. Riverbed Acceleration makes it possible.”

Watch this YouTube video now to learn more about Riverbed’s Acceleration portfolio!

End-to-end Riverbed Acceleration portfolio

WAN Optimization

Riverbed’s SteelHead is the #1 hybrid network optimization and application performance solution chosen by over 30,000 organizations worldwide!

Riverbed draws from two decades of experience in driving network and application performance over Wide Area Networks with its flagship offering, SteelHead. SteelHead delivers the underlying technology that helps enterprises of all sizes maximize the efficiency and performance of the networks and applications used to run modern business. Virtual SteelHead, which is a virtual version of the SteelHead appliance, includes pre-configured software for customers to use on their own machine if they prefer.

Cloud Accelerator

Fast, agile, secure delivery of any cloud workload to anyone, anywhere.

Riverbed Cloud Accelerator is an Infrastructure-as-a-Service (IaaS) environment that accelerates migration and enhances access and reliability for any workload. Cloud Accelerator runs on leading IaaS platforms such as Microsoft Azure, AWS and Oracle Cloud Infrastructure to accelerate migration and access to workloads through proven data, transport and application streamlining. Ultimately, this shortens time to market, enhances employee productivity, and delivers maximum cloud value to the business.

SaaS Accelerator

Fast, agile, secure delivery of mission-critical SaaS applications.

Riverbed delivers the only cloud-based Software-as-a-Service (SaaS) acceleration service, purpose-built for today’s dynamic workforce. The solution ensures consistent performance of leading SaaS applications (Microsoft 365, Salesforce, ServiceNow, and more) for anyone, anywhere, regardless of distance. It accelerates SaaS application performance by overcoming network inhibitors such as latency, congestion, and suboptimal last-mile conditions.

Client Accelerator

Fast, secure apps and data for today’s hybrid workforce.

Client Accelerator extends best-in-class WAN optimization and app acceleration technology to remote users working from anywhere. It provides fast and secure access to on-prem, IaaS, and SaaS-based apps across any network. The solution also interacts directly with any SteelHead, Cloud Accelerator, or SaaS Accelerator solutions to optimize and accelerate on-premises or SaaS applications.

Client Accelerator significantly boosts app performance
Client Accelerator delivers leading-edge app performance to today’s work-from-anywhere workforce

eCDN Accelerator

A cloud-based content delivery platform for today’s dynamic workforce.

Riverbed eCDN Accelerator empowers your hybrid workforce by delivering high-quality live and on-demand video to all employees, regardless of their location, device, or bandwidth capacity. In addition, our browser-based, self-service solution does not require installing software, purchasing hardware, or investing in additional network infrastructure.

Software-defined WAN

Achieve ultimate agility, performance and security as you expand your network to the internet, cloud and edge.

Riverbed’s secure enterprise SD-WAN unifies connectivity across branch, data center, and multi-cloud infrastructure. The solution enables your organization to:

  • Expand connectivity options: Increase WAN capacity with cost-effective Internet Broadband and LTE to augment or replace MPLS
  • Boost operational agility: Streamline network operations and drive strategic value with centralized policy-based management
  • Streamline branch infrastructure: Replace conventional branch routers with a platform that combines routing, SD-WAN, WAN optimization and security
  • Accelerate site provisioning: Slash costs of onboarding a branch with zero-touch provisioning

To learn even more on these topics, read this blog too!

Case study

Childers, a U.S. architecture firm, has been creating meaningful and memorable spaces for Native American communities for over 25 years. The company was greatly impacted when the global pandemic hit. The shift to remote work degraded the firm’s VPN performance, limiting people’s access to business-critical applications and complicating collaboration. Childers also found it difficult to support end user systems remotely. The company’s IT support effort was limited by available bandwidth and network latency. In addition, low bandwidth as people worked from home posed more network challenges.

The company worked with Riverbed to deploy Riverbed Client Accelerator, SaaS Accelerator, and Virtual SteelHead at the datacenter. This provided more reliable and faster remote access. By using Riverbed Acceleration solutions and achieving faster file access, Childers was able to significantly increase productivity. The solutions saved each employee 1.5 days per week, a huge time savings company-wide. Phillip, MIS Manager at Childers, said, “Had it not been for Riverbed, we wouldn’t have been able to work from home so efficiently when the global pandemic started. Riverbed enabled us to share huge files with remote users that work with varying amounts of bandwidth.” For more details on this case study, click here!

Riverbed solutions drive results for our clients

Learn more about Riverbed Acceleration solutions or request a demo now!

]]>
See What’s New in Riverbed SaaS Accelerator Release 1.5.1 https://www.riverbed.com/blogs/whats-new-saas-accelerator/ Thu, 22 Sep 2022 12:30:36 +0000 /?p=18649 Starting with SaaS Accelerator release 1.5.1, Riverbed has introduced support for Microsoft’s CASB solution, Microsoft Defender for Cloud Apps, as well as Azure Information Protection (AIP).

Quick introduction to CASBs and Microsoft Defender for Cloud Apps

Before getting started with the details of Microsoft Defender support in Riverbed SaaS Accelerator, here is a brief overview of CASBs and their place in the modern enterprise.

The quickest way to understand the technology is that a CASB acts as an HTTP proxy: traffic to and from the laptop/machine is intercepted (via an agent on the device), inspected, and governed by policies set by the user’s organization via a cloud-based manager. This architecture provides numerous benefits that were either difficult to achieve or not possible in a traditional “enterprise firewall” setup.

Microsoft Defender for Cloud Apps works the same way, except it also involves a DNS redirection, such as the one shown below. This behavior is what sets it apart from some of the other CASBs and why special updates were made to ensure Riverbed SaaS Accelerator supports Microsoft Defender for Cloud Apps.

Microsoft Defender DNS redirection example
Microsoft Defender DNS redirection example
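Conceptually, this kind of session-proxy redirection rewrites the SaaS hostname so that traffic flows through the CASB instead of going directly to the service. The rough sketch below illustrates the idea only; the rewriting rule and the `proxy.example.net` suffix are made-up placeholders, not Microsoft’s actual redirection scheme.

```python
# Illustrative sketch of CASB-style hostname rewriting for session proxying.
# The "proxy.example.net" suffix is a hypothetical placeholder, not the real
# Microsoft Defender for Cloud Apps redirection domain.

def rewrite_for_casb(hostname: str, proxy_suffix: str = "proxy.example.net") -> str:
    """Map an original SaaS hostname to a proxied equivalent under the CASB domain."""
    return f"{hostname.replace('.', '-')}.{proxy_suffix}"

print(rewrite_for_casb("contoso.sharepoint.com"))
# -> contoso-sharepoint-com.proxy.example.net
```

Because the client ends up talking to a different hostname than the SaaS service itself, an acceleration service sitting in the path has to recognize and handle the rewritten names, which is why SaaS Accelerator needed explicit support for this redirection behavior.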

Microsoft Defender for Cloud Apps now supported as part of 1.5.1

Microsoft Defender for Cloud Apps has seen significant growth among the Riverbed customer base that depends on SaaS acceleration. This release of SaaS Accelerator has been specifically targeted to support Microsoft Defender for Cloud Apps for customers who use it as their CASB and need SaaS acceleration for its various benefits. Riverbed’s SaaS Accelerator has been tested and validated to integrate with Microsoft Defender for Cloud Apps starting with release 1.5.1. Below is a simplified diagram of how SaaS Accelerator release 1.5.1 integrates via proxy chaining with Microsoft Defender for Cloud Apps.

High-level SaaS Accelerator diagram with Microsoft Defender
High-level SaaS Accelerator diagram with Microsoft Defender

Below is the configuration page on the Riverbed SaaS Accelerator Manager web UI for setting it up to work with Microsoft Defender.

Riverbed’s SaaS Accelerator Microsoft Defender configuration page
Riverbed’s SaaS Accelerator Microsoft Defender configuration page

Along with Microsoft Defender for Cloud Apps, Riverbed SaaS Accelerator continues to also support Zscaler and Netskope CASB interoperability via Proxy Chaining.

Azure Information Protection support

Azure Information Protection (AIP) is part of Microsoft Purview Information Protection (formerly Microsoft Information Protection or MIP). Microsoft Purview Information Protection helps you discover, classify, protect, and govern sensitive information wherever it lives or travels. 

Various categories of protection within AIP as well as encryption of documents and files have been validated—once the cold pass of the encrypted file goes through Riverbed SaaS Accelerator the subsequent transfers of the file will benefit from acceleration. Also, all protection mechanisms will continue to be honored while the document is accelerated by Riverbed SaaS Accelerator without requiring any additional configuration.

Riverbed SaaS Accelerator will continue to provide WAN acceleration for the various use cases for AIP and harmoniously works with the AIP agent installed on users’ machines. 

What Next?

To learn even more about this release and how to get started with Riverbed SaaS Accelerator, please visit Riverbed SaaS Accelerator Product Page and the Riverbed SaaS Accelerator Release Notes from the Riverbed SaaS Accelerator Support Page. Also, check out this blog post and the included video on implementing the SaaS Accelerator solution for your organization from scratch.

]]>
Seven Differentiators of Riverbed IQ Unified Observability Service https://www.riverbed.com/blogs/differentiators-of-alluvio-iq/ Mon, 19 Sep 2022 12:30:00 +0000 /?p=18475 What sets Riverbed IQ apart from other observability solutions? We end our customer presentation with a list of the key differentiators that make Riverbed IQ’s ecosystem “a big deal,” according to a recent analyst briefing with IDC. With that in mind, here are the ones IDC also considers important for standing out in the hyper-competitive observability market:

Differentiator #1: Unlock the power of full-fidelity telemetry

Riverbed IQ’s approach to unified observability begins with the full-fidelity telemetry that our market-leading NPM and DEM products rely on. While today we leverage network, infrastructure, and end user experience metrics, the future will bring support for APM and device metrics. Because we capture everything and we don’t sample, and because Riverbed IQ analyzes over 10 million data points per minute, you’ll never miss an impactful performance problem.

Differentiator #2: Apply intelligence to problem detection

Monitoring for the past decade has relied on rule-based alerting. It’s time-consuming to set up, and the thresholds are seldom reviewed often enough to remain meaningful. This often leads to over-alerting: a high volume of alerts, many of which are false positives rather than impactful issues.

Riverbed IQ removes the need for rule-based alerting by using machine learning and logic built by a data science team. The analytics in Riverbed IQ learn what is normal for a specific device, interface, or application, and then pass the problem on to automated investigations only when something is outside normal behavior and likely to create a performance issue.

Riverbed IQ leverages AI and machine learning (ML) to correlate and accurately identify cross-domain insights to surface only business-impacting events. It applies 10,000+ correlations per minute across devices, locations, and applications and displays the associated events by “Most Impacted Users, Locations and Apps” so IT can quickly see the worst problems and their impact on the business.
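The idea of learning a per-device baseline instead of setting a fixed threshold can be sketched with a simple statistical test. This is illustrative only, with made-up data; Riverbed IQ’s actual models are far more sophisticated than a z-score check.

```python
# Illustrative sketch of baseline-driven anomaly detection (not Riverbed IQ's
# actual algorithm): flag values that deviate strongly from a metric's own
# learned baseline rather than a hand-set static threshold.
import statistics

def is_anomalous(history, new_value, z_threshold=3.0):
    """Return True if new_value deviates strongly from this metric's baseline."""
    mean = statistics.mean(history)
    stdev = statistics.pstdev(history)
    if stdev == 0:
        return new_value != mean
    return abs(new_value - mean) / stdev > z_threshold

# Hypothetical per-interface utilization history: normally ~40%, small noise.
history = [38, 41, 40, 39, 42, 40, 41, 39, 40, 40]
print(is_anomalous(history, 41))  # within normal variation -> False
print(is_anomalous(history, 95))  # far outside the learned baseline -> True
```

The same threshold (three standard deviations) adapts automatically to each device: a noisy interface tolerates larger swings, while a steady one flags even modest deviations.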

Differentiator #3: Automate the investigation process​

Riverbed IQ also leverages automated, investigative workflows to handle the scale and complexity of today’s IT environments.​ These low-code runbooks replicate the best practices of expert IT teams. Pre-built runbooks gather evidence, build context, and set priorities to enable IT teams to save time, reduce escalations, and turn knowledge that resides in the minds of a few experts into knowledge that is usable by all IT. By spreading the burden for troubleshooting across the entire team, your highly skilled experts can now focus on high-value digital transformation projects rather than spending all day troubleshooting.

Differentiator #4: Codify expert knowledge​

Riverbed IQ codifies the institutional knowledge that resides with IT experts and turns it into automated runbooks that can easily be customized to your organization’s specific requirements. These automated troubleshooting workflows (or runbooks) enable more IT staff to troubleshoot effectively. By spreading the burden across more people, Riverbed IQ reduces the risk that expert resources get burnt out and empower all IT staff to learn faster and take on more responsibility.

Riverbed IQ codifies institutional knowledge into low-code runbooks to automate investigation processes.
Riverbed IQ codifies institutional knowledge into low-code runbooks to automate investigation processes.

Differentiator #5: Empower NetOps in hybrid work environments​

Prior to the pandemic, when users worked in branch offices, the NetOps team was responsible for identifying network access and performance issues for end users accessing business applications. Surprisingly, that didn’t change when users went remote. NetOps is still responsible for identifying network access and performance issues for end users, even though they are blind to remote work problems.

By leveraging Riverbed end user experience metrics, Riverbed IQ removes these blind spots to enable network teams to:

  • Establish the scope and severity of remote work issues so that they can prioritize and determine whether to escalate.
  • Determine the root cause, whether it’s an ISP, CASB, application, or security issue, and estimate when the issue might be resolved (for example, an ISP issue typically takes more time to resolve than an internal issue).
  • Document the incident, understand its impact on end users, and communicate to the affected users.

Differentiator #6: Focus on what’s important

The combination of AI/ML and codified runbooks is unique. We’ve spoken with hundreds of customers who have been burned before by black box AI/ML, where they have no insight into why they got the results they did. By pairing the results of Riverbed IQ’s AI/ML correlations with our transparent runbooks, IT can see exactly why Riverbed IQ highlighted the issues it did. They can trust the output because they can always customize runbooks to their organization’s needs.

No competitive solution offers this combination of capabilities that leads to quicker resolutions, without having to escalate to your IT experts as often.

Differentiator #7: Riverbed Unified Observability Platform

Next in the list of differentiators is the Riverbed Unified Observability Platform, which provides comprehensive, standards-based cloud-native capabilities to enable Riverbed engineers to quickly create new Riverbed unified observability services and customers to deploy and administer them.

Deployed on Azure, the Riverbed Unified Observability Platform supports a suite of SaaS-based observability tools that IT can deploy quickly, administer securely, and scale seamlessly. The Riverbed platform centralizes authentication, privacy, and provisioning so IT can efficiently administer multiple observability services. It provides capabilities for ingesting, correlating, and storing massive volumes of data that supports observability use cases for today’s highly distributed IT infrastructures. With advanced AI and ML-powered analysis and the workflow engines, the Riverbed platform enables new services that streamline repetitive tasks so IT can deliver better digital experience.

Riverbed IQ is built on the Riverbed Unified Observability Platform. This enables faster development of new services through reuse of common modules, and lets IT deploy quickly, administer securely, and scale seamlessly.

About Riverbed IQ

To summarize, Riverbed IQ is a cloud-native, SaaS-delivered, open and programmable solution for Unified Observability that empowers all IT staff to identify and fix problems efficiently. It uses full-fidelity end user experience and network performance data to gain a complete picture of your environment. It applies AI and machine learning (ML) to correlate disparate data streams and identify business-impacting events. This intelligence also informs investigative runbooks that replicate the troubleshooting workflows of IT experts. The investigative runbooks gather additional context, filter out noise, and set priorities—reducing the volume of alerts to the most business-impacting, and empowering staff at all skill levels to identify and solve problems fast.

To learn more about Riverbed IQ’s key differentiators, visit www.riverbed.com/riverbed-iq.

]]>
Riverbed IQ: Solve Problems Fast at Any IT Skill Level https://www.riverbed.com/blogs/alluvio-iq-solve-problems-fast/ Tue, 13 Sep 2022 12:17:28 +0000 /?p=18844 IT has a problem–well, quite a few problems. Monitoring tools are supposed to help network teams find and fix tech problems, but all too often, they simply offer a flood of data and alerts that lacks context and fails to provide actionable insights.

As a result, IT professionals spend a lot of time in war rooms trying to figure out how to solve problems and are often forced to turn to a few highly skilled, senior-level individuals who understand how to manually investigate and troubleshoot issues. These experienced experts are in short supply, and their time is better spent on helping implement strategic initiatives rather than having to figure out why the network is down again.

There’s also the challenge of disparate, siloed tools that fail to give IT a holistic view of the technology enabling seamless digital experiences. But finally, there’s a solution that can unite IT teams—Riverbed IQ. Discover how this new unified observability platform is the long-hoped-for solution to the many challenges plaguing IT teams.

What is Riverbed IQ?

Riverbed IQ is a cloud-native, SaaS-delivered unified observability product that correlates data across Riverbed Network Observability and Riverbed Aternity Digital Experience Management to detect and resolve critical events, even in hybrid work and hybrid cloud environments. Riverbed IQ achieves this by analyzing 10+ million data points per minute. By capturing all data points rather than relying on sampled data, you’ll never miss a performance problem. Best of all, Riverbed IQ surfaces context-rich data so teams can quickly understand the problem and how to solve it.

Below, we explore how Riverbed IQ:

  • Heavily reduces the volume of alerts IT teams receive.
  • Delivers context-rich, actionable insights that empower staff at all levels to solve problems faster and without escalating.
  • Provides investigative workflows to automate the process of gathering contextual evidence.

Applying Intelligence to Problem Detection

While competing tools correlate based solely on time or keywords, Riverbed IQ applies over 10,000 correlations per minute across time series, devices, locations, and applications to provide greater insights. And unlike rule-based products that are often improperly defined and applied to single metrics, Riverbed IQ applies different models to a range of metrics using AI-powered baselining, thresholds, change detection and correlations.
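The baselining-plus-threshold idea described above can be sketched in a few lines. This is a generic illustration of a rolling baseline with a standard-deviation threshold, not Riverbed IQ's actual models:

```python
from collections import deque
import statistics

class Baseline:
    """Rolling baseline over a sliding window; flags values that
    deviate more than `k` standard deviations from the window mean."""
    def __init__(self, window=60, k=3.0):
        self.values = deque(maxlen=window)
        self.k = k

    def observe(self, x):
        # Only judge once the window has enough history.
        anomalous = False
        if len(self.values) >= 10:
            mean = statistics.fmean(self.values)
            stdev = statistics.pstdev(self.values) or 1e-9
            anomalous = abs(x - mean) > self.k * stdev
        self.values.append(x)
        return anomalous
```

In practice a system like this would run one such model per metric, per device, per location, with the correlation layer grouping simultaneous anomalies into a single event.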

What this means for IT teams: IT teams can be more proactive about identifying and fixing issues before they can frustrate users, ensuring smooth digital experiences.

What it means for the business: Riverbed’s intelligent automation facilitates quicker resolutions by providing the context IT needs to troubleshoot more easily and effectively. IT professionals can spend more time implementing strategic initiatives that add value to the business, rather than using their time to keep current technologies up and running.

Democratize Knowledge Through Scripted Investigations

Senior-level IT professionals are a wealth of knowledge, and best understand how to work out and troubleshoot issues. All too often, level 1 and 2 staff must turn to them for help in troubleshooting. But not anymore.

Riverbed IQ codifies its expert troubleshooting knowledge so junior IT professionals no longer have to escalate. It features automated investigative workflows designed to replicate the best practices of expert IT teams. These no-code runbooks are customizable so additional workflows can be created using a highly graphical, easy-to-use interface.
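Conceptually, a codified runbook is an ordered set of investigation steps that each enrich a shared context before a priority is set. A minimal sketch follows, with step names, fields, and thresholds invented for illustration rather than taken from Riverbed IQ's actual runbook engine:

```python
# Hypothetical runbook steps: each reads from and writes to a shared
# context dict, mirroring "gather evidence, build context, set priority".

def check_device_health(ctx):
    # Evidence step: is the device itself under stress?
    ctx["device_ok"] = ctx.get("cpu_pct", 0) < 90

def check_path(ctx):
    # Evidence step: is the network path healthy?
    ctx["path_ok"] = ctx.get("packet_loss_pct", 0) < 1.0

def set_priority(ctx):
    # Triage step: escalate only when evidence points to a real problem.
    ctx["priority"] = "high" if not (ctx["device_ok"] and ctx["path_ok"]) else "low"

def run_runbook(alert, steps):
    ctx = dict(alert)
    for step in steps:
        step(ctx)
    return ctx

result = run_runbook({"cpu_pct": 95, "packet_loss_pct": 0.2},
                     [check_device_health, check_path, set_priority])
```

The value of codifying the workflow is that a level-1 engineer receives the enriched context and priority directly, instead of gathering each piece of evidence by hand.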

What this means for IT teams: These pre-built runbooks gather evidence, build context, and set priorities to accelerate mean time to resolution (MTTR), reduce escalations, and turn knowledge that resides in the minds of a few experts into knowledge that is usable by everyone within IT.

What it means for the business: It allows senior-level staff to reclaim their time so they can focus on high-priority projects that can take the business to the next level.

All the Information IT Needs in One Place

According to a recent IDC survey, “54% of organizations use six or more discrete tools for IT monitoring and management. Yet, 60% of respondents agree that most monitoring tools serve narrow requirements and fail to enable a unified and complete view of current operating conditions.”

Simply put, IT teams are using too many tools and still don’t receive the precise information they need to take action. In fact, the tools are drowning them in unusable data and signals, contributing to alert fatigue.

Riverbed IQ remedies the persistent headache of too much data. It’s a single, comprehensive solution that leverages AI and ML to unify and correlate network performance and end-user experience data.

What it means for IT teams: A significant reduction in the volume of alerts and a single source of truth that surfaces the most business-critical events. The solution reduces time spent in war rooms, finger-pointing, and excessive escalations, resulting in happier and more productive IT teams.

What it means for the business: Businesses can rely on the power of Riverbed IQ and immediately realize a return on investment.

Riverbed IQ Promotes Happier Teams and Customers

From excessive alerts to tribal knowledge, Riverbed IQ reduces the pain points IT teams encounter so they can better improve digital experiences for customers and employees, making everyone happier all around.

Discover how Riverbed IQ can positively impact your business and IT team by requesting a demo today.

]]>
Transform Data into Actionable Insights to Empower Digital Experiences https://www.riverbed.com/blogs/transform-data-into-actionable-insights/ Tue, 13 Sep 2022 12:15:54 +0000 /?p=18856 Today we are living and working in a world that is digital-first and hybrid by design, with cloud, SaaS and legacy technologies working together, and employees working from everywhere.

In this world, a click is everything. That action comes with intent and expectation—of a flawless digital experience. These experiences are the heartbeat of the fierce and competitive landscape we all work in. And when digital services fail to deliver a flawless experience, it can impact your brand, and undermine your ability to achieve important objectives tied to revenue, cost, productivity and risk.

In this complex and distributed environment, many IT teams are finding it more challenging to deliver seamless digital experiences to customers and employees. IT organizations are overwhelmed by massive amounts of data and alerts flooding them from siloed tools that provide little context or actionable insight when issues occur. As a result, IT teams rely on a few highly skilled individuals, who are in short supply and high demand, to manually investigate and troubleshoot issues.

This is one of the industry’s most daunting problems: how to provide seamless digital experiences that are high performing and secure in a hybrid world of distributed users and applications, exploding data, and soaring IT complexity.

Although observability was meant to solve these problems, current solutions fall short—failing to capture all relevant telemetry, and instead sampling data to cope with the scale of today’s distributed environments.

Until Now. 

Riverbed saw the need for a differentiated approach to solve the challenges resulting from this IT complexity and to go beyond the basics of monitoring, testing, and management. For the past 18 months, we’ve been investing in a unified approach to observability—unifying IT data, insights and actions to empower IT to deliver exceptional digital experiences to users everywhere. Today, we’re proud to introduce you to Riverbed IQ—our new cloud native SaaS-delivered unified observability solution.

Riverbed IQ transforms the overabundance of data and alerts into actionable insights and intelligent automation for IT organizations. Powered by AI/ML correlation, Riverbed IQ’s scripted investigations replicate expert IT workflows to gather event context, filter noise, and identify the most business-impacting events to act on. With full stack, full-fidelity telemetry, intelligent correlation, and workflow automation, Riverbed IQ delivers actionable insights that empower all IT skill levels to resolve problems quickly and improve digital service quality. Enabling IT organizations to “shift left” allows all staff to do the job of more experienced IT experts, ultimately freeing up resources to focus on strategic business initiatives.

Riverbed IQ is the first service to be delivered on the Riverbed Unified Observability Platform—a secure, highly available and scalable SaaS platform for cloud-native observability services. Riverbed IQ and the Platform are part of the Riverbed portfolio, which also includes industry-leading visibility tools for network performance management (NPM), IT infrastructure monitoring (ITIM) and Digital Experience Management (DEM), which encompasses application performance management (APM) and end user experience monitoring (EUEM).

Together, the Riverbed Unified Observability Platform and Riverbed IQ enable faster, more effective decision-making across business and IT. To learn more about Riverbed IQ, our approach to Unified Observability and how we can help you deliver on the click and the digital promise behind it, visit Riverbed IQ.

]]>
Improve Your Employee’s Experience With Microsoft Teams Through Riverbed Unified Observability https://www.riverbed.com/blogs/improve-employees-experience-with-ms-teams/ Wed, 31 Aug 2022 22:20:00 +0000 /?p=18585

Have you had a poor experience with Microsoft Teams, such as call drops, bad call quality, crashes or high resource consumption? In the post-pandemic world, MS Teams has become one of the key productivity tools most of us use to collaborate with colleagues and customers. To stay productive and efficient in a hybrid workforce, the MS Teams experience should be best-in-class wherever users connect from.

On the 21st of July, MS Teams customers faced a global outage that impacted users mostly in Australia, New Zealand and neighbouring ASEAN countries. Did your IT team detect the outage before users noticed? How did your IT team mitigate the influx of service desk tickets about this outage?

Riverbed can help you improve overall MS Teams performance through visibility, proactively communicating to users, and auto-fixing the common MS Teams issues.

For now, let’s start with the basics—understanding how users access the application.

Where are users accessing MS Teams from? 

Users could be accessing Microsoft Teams from different locations, offices or even their favorite café. It is helpful to group users based on where they connect from: the company headquarters, branch offices or remote locations.

Riverbed recommends monitoring at two points to analyze the performance and user experience on MS Teams using Riverbed Unified Observability:

  1. Point of distribution: Monitoring MS Teams call quality and performance from the network by capturing packets and flows, and polling through SNMP.
  2. Point of consumption: Monitoring MS Teams call quality and performance from user devices such as laptops, desktops, VDI, etc.

Point of distribution: network monitoring 

Riverbed provides a holistic view of every call made on Microsoft Teams. The dashboard below shows the distribution of call quality, identifies the jitter and latency over time and analyses the corporate and non-corporate traffic going through your LAN and WAN.

Riverbed Alluvio Aternity dashboard provides a holistic view

Riverbed provides a hop-by-hop analysis of communication across user machines, switches, routers, the public internet and backend servers. Riverbed also analyses the traffic when remote users are directly connected to the server through the internet. This way, we know which server is being used and whether it could be causing latency issues. For example, if the user is based in Australia and the application communicates through a server in the US, the user will experience high jitter and high latency due to the distance the packets must travel. IT teams can then redirect this traffic through a local server for better performance. Having visibility is the key to targeting what and where the fix should be done!

Riverbed Alluvio Aternity dashboards give visibility

Another way to monitor call quality is by analysing network packets for each call. Riverbed can identify the MOS CQ, post dial delay, and jitter on each call.
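For reference, the per-call jitter metric mentioned above is conventionally computed as RTP interarrival jitter per RFC 3550 (a smoothed estimate of packet delay variation). A minimal sketch, not Riverbed's internal implementation:

```python
def rtp_jitter(arrival_times, rtp_timestamps):
    """Interarrival jitter per RFC 3550: J += (|D| - J) / 16, where D is
    the change in relative transit time between consecutive packets.
    Both sequences must use the same time units (e.g. milliseconds)."""
    j = 0.0
    for i in range(1, len(arrival_times)):
        # D compares how much the arrival spacing differs from the
        # sender's timestamp spacing for this pair of packets.
        d = (arrival_times[i] - arrival_times[i - 1]) - \
            (rtp_timestamps[i] - rtp_timestamps[i - 1])
        j += (abs(d) - j) / 16.0
    return j
```

Perfectly regular 20 ms packet spacing yields zero jitter; any queuing delay on the path shows up as a non-zero, smoothed value.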

We can also look into Quality of Service (QoS) and prioritise voice and video traffic, just as airlines have a priority lane for Business and First-class passengers at airports. We need to prioritise voice and video traffic on the network for MS Teams users.
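One common way an application claims that priority lane is DSCP marking, which network QoS policies can then act on. A sketch, assuming a Linux host and a network that honours the marking end to end:

```python
import socket

def dscp_to_tos(dscp):
    # DSCP occupies the top six bits of the IP TOS byte.
    return dscp << 2

def open_voice_socket():
    """UDP socket marked DSCP EF (46), the standard marking for voice,
    so QoS-aware switches and routers can queue its traffic first."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.setsockopt(socket.IPPROTO_IP, socket.IP_TOS, dscp_to_tos(46))
    return sock
```

Marking only helps if the devices along the path are configured to trust and act on it; monitoring tools can verify the markings survive the journey by inspecting captured packets.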

Riverbed Alluvio Aternity shows the channel breakdown by Codec

Riverbed can help identify the access points where users experienced poor MS Teams call quality in the corporate network. All these metrics can help you identify the source of poor performance and enhance the user experience.

Riverbed Alluvio Aternity dashboard identifies access points

Riverbed Alluvio Aternity dashboard showing Call Rating per BSSID

Point of consumption: user experience in device monitoring

Riverbed gives you all-round performance and experience insights for Microsoft Teams across your organisation. Riverbed can provide visibility on usage, activity response times, audio and video call quality, resource consumption of app, and crashes and errors.

Riverbed can show the number of calls with high latency, jitter, and packet loss out of the total calls made. You can easily identify call quality trends and investigate each user call that showed poor quality. Each dot in the dashboard below represents a user call.

Riverbed Alluvio Aternity dashboard makes it easy to identify call quality trends

Riverbed shows insights on the response time of discrete user activities, such as launching the app, joining a meeting, or sending a chat, along with a breakdown of the response time. You can easily identify the domain that contributes most to the delay: client device, network, or backend. It can also show commonalities among users impacted by the slowness, such as business location, server, etc.

Riverbed Alluvio Aternity dashboard shows impacted user commonalities

Riverbed can provide visibility into the resources MS Teams consumes on devices. For most of our customers, we found that MS Teams consumes at least 1GB of memory on average.

Riverbed can monitor crashes and capture error codes to help IT teams find the root cause and identify commonalities among impacted users by software version, device model, connectivity, and more.

Riverbed Alluvio Aternity dashboard shows impacted user commonalities

MS Teams outage detection

A global outage was reported by Microsoft Teams users on the 21st of July, 2022 around 11:15AM AEST and lasted for about 2.5 hours. Before users started to notice, Riverbed Aternity detected the change in app behaviour and sent email notifications to the IT team about the low usage. This outage alert is powered by our machine learning capability called Anomaly Detection.
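A sustained-drop detector of this general shape can be sketched as follows. This is a toy illustration of flagging a sudden, sustained fall in usage against a trailing baseline, not Aternity's actual Anomaly Detection algorithm:

```python
def detect_outage(usage, window=12, drop_ratio=0.5, sustained=3):
    """Return the index where usage first stays below `drop_ratio` times
    the trailing-window average for `sustained` consecutive samples,
    or None if no such drop occurs."""
    below = 0
    for i in range(window, len(usage)):
        baseline = sum(usage[i - window:i]) / window
        if usage[i] < drop_ratio * baseline:
            below += 1
            if below >= sustained:
                return i - sustained + 1  # start of the sustained drop
        else:
            below = 0  # a single healthy sample resets the streak
    return None
```

Requiring the drop to be sustained for several samples is what keeps a momentary dip (one user closing Teams) from triggering an outage alert.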

Riverbed Alluvio Aternity sends outage alerts

The incident, detailed below, shows the traffic highlighted in yellow as the normal usage time of MS Teams. Then, at the time of the outage, there was a sudden drop in usage time and number of users, highlighted in purple.

Riverbed Alluvio Aternity dashboard details incidents

Riverbed Aternity detected the outage and alerted the IT team. This helped the IT team to take action before anyone noticed the issue, allowing them to proactively communicate to users that an outage may impact them and that IT will provide an update once the issue is resolved.

Alluvio by Riverbed allows IT to put up outage alerts

This proactive engagement from IT resulted in:

  • Mitigation of the potential increase in service desk tickets, as users did not need to create tickets to inform IT that they were impacted.
  • Improved user satisfaction, since IT communicated the issue before users noticed it.

Auto-remediation of devices

Riverbed can help fix issues, without the need to remotely connect to user devices, through Auto-Remediation tied to alert detection. For example, Riverbed Aternity can clear the cache to help L1 support teams fix common issues with Microsoft Teams, with no disruption to user productivity!

Alluvio by Riverbed mitigates IT Service Desk issues

Below is a list of MS Teams issues commonly resolved by removing the MS Teams cache:

  • Teams status showing incorrectly
  • Display pictures not showing
  • Background images not showing
  • Unable to log in to Teams
  • Unable to find Teams groups
  • Team members showing incorrectly compared to the web client
  • Incorrect phone numbers displayed
  • Microsoft Teams slowness

As we move from remote working to hybrid work environments, it is increasingly critical that we use unified observability to proactively improve application performance and the user experience, no matter where users are.

If you are interested in learning more about how Riverbed can help you improve the overall performance and user experience in MS Teams, get in touch with us and let’s have a conversation.

]]>
DEJ’s 2022 IT Performance Management Study: The Key Role of Observability https://www.riverbed.com/blogs/it-performance-management-observability/ Fri, 26 Aug 2022 12:35:00 +0000 /?p=18532
DEJ's IT performance management study names top vendors for 2022
This is part 2 in the series summarizing the key points of the July 2022 Digital Enterprise Journal (DEJ) market study “24 Key Areas Shaping IT Performance Markets in 2022.” Read part 1 of this IT performance management series here.

As background, DEJ based their report on survey results from over 3,300 organizations around a variety of IT performance management topics. Register here to receive a complimentary copy of the DEJ Top 20 IT Performance Monitoring Vendors report. Riverbed is prominently featured as one of the top 20 vendors, and as a Leader in eight key areas.

 

DEJ's IT performance management study shows Riverbed's alignment in key areas
Riverbed scored a “Leader” position in eight key areas of capabilities in the DEJ report.

The importance and definition of “visibility” for IT performance management

DEJ’s study shows that 64% of organizations have either deployed observability solutions or are considering it. Yet for many, the difference between observability and monitoring is still not clear.

DEJ lists the percentages of orgs citing the importance of full visibility
DEJ’s survey data shows the importance of “full visibility” across their entire IT environments.

As DEJ says, organizations need to assess their environments, rethink their monitoring approach, and ensure they eliminate “blind spots” that can deteriorate business performance. This is exactly what Riverbed’s vision for Unified Observability is all about. Check out this video to learn more.

Observability is more than just APM

IDC’s complimentary report on “The Shift to Unified Observability”
The market tends to use observability as an evolution of Application Performance Monitoring (APM). Used in the context of addressing the challenges of managing cloud-native environments, observability tools can help DevOps and Site Reliability Engineers (SREs) address key use cases.

But cloud-native environments are not the only highly distributed infrastructure for which IT is responsible. And this has an impact on the definition of observability within IT performance management. With tens of thousands of employees working remotely, the digital workplace has become a massive distributed system. With complex hybrid networks, network teams also need observability to address the challenges of managing modern environments.

This broader application of observability has been validated by analyst firm IDC in its recently published survey of 1,400 IT professionals. As covered in a blog summarizing IDC’s research, the definitions and use cases of observability extend far beyond just DevOps and cloud-native environments. Register for a complimentary copy of IDC’s report, “The Shift to Unified Observability.” Note that Riverbed is a sponsor of this research.

Putting everything in the business context with IT performance management

Organizations are deploying new technologies and redefining their approach to IT performance management. DEJ’s research shows that 70% of them reported that the tools they are using do not provide business context. The research also shows that organizations are losing millions by not aligning software initiatives to business outcomes. This is a process issue, not an IT performance management technology issue.

DEJ shows the key capabilities orgs are looking to deploy
DEJ’s research shows the importance of business context for prioritizing IT spend, visibility into the application delivery chain, and prioritizing the response to incidents, among other areas.

Providing insight into the business impact of IT performance has always been a priority for Riverbed. The Riverbed Aternity Digital Experience Management Platform measures digital experience as employees interact with applications in the context of a business workflow, such as “process a claim” or “look up a patient record.” In this way, IT can track the impact of slow performance on employee productivity.

Aternity User Journey Intelligence (UJI) enables application owners to measure the revenue impact of improving page performance on their customer-facing websites. Based on actual transaction data, Aternity UJI enables IT to conduct “what if” analysis to show the potential benefit of improving web page load time.

Aternity User Journey Intelligence dashboard shows data to help IT make informed investment decisions
Aternity User Journey Intelligence shows the impact of improving web page performance on conversion rates, order value and page views so IT can make informed investment decisions based on business outcomes.

Optimization and visibility into inefficiencies

Business and IT executives surveyed by DEJ reported improving efficiency as the #1 business goal for 2022. In order to achieve this goal, organizations need to gain visibility into the areas where they are experiencing inefficiencies. As shown in the graphic below, lack of visibility causes overspending in a variety of areas, including performance, underutilized assets, engineering staff costs and cloud services, among others.

DEJ shows the areas where orgs are looking to reduce inefficiencies
Lack of visibility into how resources are being used leads to excessive spend on performance, underutilized assets, engineering staff costs and cloud services, among other factors.

Customers use Aternity for smarter decisions about IT asset cost reduction. Because Aternity measures actual employee experience, it enables digital workplace leaders to employ a “smart device refresh” policy. Rather than replacing employee devices based solely on the age of the device, IT teams can replace only those devices which no longer provide an adequate user experience. For example, for some employees, a five-year old laptop may still provide excellent user experience. There’s no need to replace it. This approach is especially useful in an era where IT budgets aren’t increasing, and the chip shortage is interfering with supply chains. For example, one global bank used Aternity to reduce the cost of device upgrades by $10M a year through a smart refresh policy.
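The "smart refresh" decision described above can be illustrated with a toy policy: replace a device when its measured experience is poor, not merely because it is old. Thresholds and field names here are invented for illustration, not Aternity's actual scoring:

```python
def should_refresh(age_years, experience_score,
                   min_score=70, hard_age_limit=7):
    """Refresh on poor measured experience regardless of age;
    otherwise only refresh devices past a hard age limit."""
    if experience_score < min_score:
        return True
    return age_years >= hard_age_limit

# A five-year-old laptop with a good score stays; a two-year-old
# laptop with a poor score gets refreshed.
fleet = [
    {"id": "laptop-1", "age_years": 5, "experience_score": 88},
    {"id": "laptop-2", "age_years": 2, "experience_score": 55},
]
to_refresh = [d["id"] for d in fleet
              if should_refresh(d["age_years"], d["experience_score"])]
```

The point of the policy is visible in the example: age alone would have replaced the wrong machine.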

Get started today

If you’re not yet an Aternity customer, you can explore these capabilities by requesting a demo running in your environment. You’ll see how your organization compares to the market with benchmarking insights from millions of endpoints monitored via Aternity SaaS. You’ll see how your Service Desk can drive down costs and improve service with AI-driven automated remediation. And you’ll get a view of employee experience for every app running in your environment – even SaaS and Shadow IT.

]]>
How Real-Time Operations Monitoring Powered Seamless Operations Across Mines and Drilling Sites https://www.riverbed.com/blogs/seamless-operations-across-mines-and-drilling/ Tue, 23 Aug 2022 22:40:00 +0000 /?p=18289 The mining, oil and gas and other resources industries are strong pillars of global economies. Many countries are heavily reliant on these sectors, and companies in this space extract metric tons of minerals, base metals, and resources every day, generating revenues in excess of $1.8 trillion.

Automation, Internet of Things (IoT) devices and other applications are increasingly optimising mining processes like the flow of ores, transport, and extraction. By doing so, companies are increasing the profitability of their operations as well as making mining more efficient. But to automate these processes effectively, reliable and efficient network connectivity is crucial. One of the biggest challenges facing the resources industry, and keeping it from operating efficiently, is connectivity.

Most exploration and production sites are located in remote areas, where there is no readily available access to network infrastructure for digital communications or operations. Whilst operators navigate these connectivity challenges, these sites are pushing adoption of new digital technologies and IoT workflows, both to stay competitive and to meet increasingly stringent regulatory and compliance requirements. This means a growing assortment of network devices all competing for precious little bandwidth, leading to network congestion, with failure becoming a recurring problem.

Riverbed has partnered with several resources companies across continents to resolve their network infrastructure challenges, maximise the returns on their technology investments, and help them transition to modern smart operations.

The technology challenges weighing down the mining industry  

Regardless of the type of sub-surface or mining operation, in my experience, the majority of challenges operators face fall into a few distinct categories. Some of the most common ones include:

  • Issues with integrating onsite IoT devices to the corporate network for real-time monitoring and decision making
  • Lack of a single source of truth that can provide a bird’s eye view of operations status across the network
  • Lapses in real-time infrastructure monitoring
  • Meeting varying compliance requirements for operational health and safety in different regions
  • Discovering where exactly to dig, drill or excavate to gain the maximum amount of viable crude oil or ore

Resources companies across the board are investing millions of dollars to develop technology solutions for these areas, but you’ll notice all these areas have one thing in common: they are all heavily dependent on having a responsive and stable underlying network to support them. Network connectivity and real-time monitoring have increasingly been used to support high-precision drilling, blast monitoring, and analytics to assist mining operations to optimise ore recovery.

Riverbed to the rescue

We have implemented NPM solutions for several customers within the resources sector to help them realise ROI on their IoT and automation investments, while also improving the efficiency of their overall operations and enhancing the employee experience. One such example is the work we carried out for a mining transportation giant. This company operates across 150 sites globally, and in many respects, are ahead of the curve in technology. They are already using autonomous haulage to carry tons of ore out of mines to processing plants while also transporting huge amounts of hazardous materials to mining sites. As you can imagine, real-time tracking of these transport services carrying such risky cargo is a must. It’s not just about monitoring their location; safety and operations managers need to know the quantity of ores or hazardous goods being transported at any given time. They also need to check if the necessary health and safety protocols are being followed through loading, transportation, and post-unloading stages.

Riverbed NPM helped them gain deeper visibility into their transport chain, so that at any given point they know the exact status of their operations, from mining to transportation through to processing.

Another example that comes to mind was an oil drilling company for whom we implemented Riverbed NPM, Aternity and Arc GIS. This company operates rigs connected via geostationary VSAT satellites and utilises SaaS and on-prem applications running locally on servers that report back to data centre based applications. The rigs operate on a relatively low bandwidth between 1-2 Mbps and high latency of up to 750 milliseconds. Despite such low bandwidth and high latencies, we helped automate the monitoring of IT/OT systems and sensors across the infrastructure, so operators could assess if valves across multiple pipelines were open or shut when pumping out crude oil and if fuel was flowing as required. We helped them integrate all their sensors, devices, and operational technology onsite to give them an overarching view of their operations with Riverbed NPM. All this information is made available to them on a real-time dashboard powered by Riverbed NPM.

The results of network transformation

Our customers say their work-life has improved dramatically. They now look back and wonder how they were operating without Riverbed NPM. Some of the most prominent benefits our resources industry customers have realised are:

  • Greatly improved response times for resolving network and infrastructure issues
  • Holistic visibility into the state of infrastructure at any given time
  • Reduced time to identify and troubleshoot issues
  • De-risking of their infrastructure
  • Ensuring operational status of health and safety critical technology systems
  • Achieving operational efficiency across the network
  • Enhancing the precision of operational planning due to increased network visibility
  • Setting the stage for increased automation and emerging technologies

It always helps to engage an expert to assess the state of things within your network infrastructure and plan out necessary actions. Get in touch with a Riverbed consultant for a no-obligation consultation to explore how your organisation can transform its network to run critical operations seamlessly and reliably. 

]]>
DEJ’s 2022 IT Performance Management Study: Top Lessons for DEM https://www.riverbed.com/blogs/it-performance-management-dem-lessons/ Thu, 18 Aug 2022 12:30:00 +0000 /?p=18353 DEJ's IT performance management study names top vendors for 2022
In July 2022, Digital Enterprise Journal (DEJ) published a market study titled “24 Key Areas Shaping IT Performance Markets in 2022.” DEJ based the report on survey results from over 3,300 organizations on a variety of IT performance management topics. Register here to receive a copy of the DEJ Top 20 IT Performance Monitoring Vendors report.

Recruiting and retaining the right talent, aligning people resources with business goals, reducing time spent on addressing performance incidents, and visibility into technology adoption by employees are some of the key focus areas. 57% of organizations see automation as the key enabler for closing the modernization skills gap in managing IT Operations. Riverbed is prominently featured as one of the key vendors in IT performance management.

The top IT performance management needs: correlating IT performance to business outcomes

The report summarizes DEJ’s survey on the importance of hundreds of technology capabilities. Topping the list of IT performance management needs, 84% of organizations selected “correlating IT performance to business outcomes.” DEJ reports that what organizations are really looking for is a capability that connects operational improvements to business outcomes in a clear and measurable way. The study shows a 32% increase over the last 18 months in the number of organizations using “ability to quantify the business impact” as the key selection criterion. Riverbed scored as a “Leader” in this area, as well as in seven other key capability areas.

DEJ's IT performance management study shows Riverbed's alignment in key areas
Riverbed scored a “Leader” position in eight key areas of capabilities in the DEJ report.

Enabling unique customer experiences with IT performance management

Creating and managing differentiating customer experiences is the key goal for 77% of digital businesses. As the graphic below shows, over the last 18 months, there has been a 41% increase in the mentions of enabling new and unique customer experiences as a key driver for investing in IT performance management technologies.

DEJ lists key drivers of investments in IT performance management
Over the last 18 months, there has been a 41% increase in the “enabling new and unique customer experiences” as the key driver for IT performance management investments

The Riverbed Aternity Digital Experience Management platform takes a unique approach to managing digital experience from the user’s perspective. Through our “full-spectrum DEM,” we enable IT to measure and manage the digital experience of BOTH employees AND customers. Aternity has been cited for this unique approach by other analyst firms, such as Forrester in its recently published Forrester Wave™ End-User Experience Management Report, Q3 2022.

Register to obtain a complimentary copy of the Forrester EUEM Wave Report.

War for talent

It’s no surprise that organizations are finding it hard to attract and retain top talent. Much has been written about the “Great Resignation” and the flexibility in work practices that newly empowered employees demand from their organizations. Survey data gathered by DEJ shows an increase of 2.5 times in the number of respondents who say that finding and retaining employees has become more difficult over the past three years.

As the table below shows, 30% of organizations reported employee churn due to digital services issues. Human Resources teams and Digital Workplace teams now coordinate more closely than ever to ensure they provide their employees an amazing technology experience. This is especially important in hybrid work situations in which many of the factors which can affect employee experience are beyond the direct control of IT. When employees work remotely, Wi-Fi signal strength, network bandwidth provided by the ISP, and the performance of SaaS-delivered apps all affect the user experience. IT performance management vendors like Riverbed help address these issues.

DEJ shares factors contributing to current challenges in retaining employees
Several factors contribute to the challenges in finding and retaining employees in general and IT experts in particular.

Digital workplace teams use Aternity’s capabilities for automated remediation to proactively identify and resolve issues before they affect employee experience. Self-healing capabilities enable IT teams to “shift left” and resolve issues at lower levels without escalating. Integration with ITSM tools like ServiceNow enables IT to incorporate Aternity into current workflow processes. These capabilities reduce costs, improve service, and enable IT teams to focus on the right priorities, which leads to better employee satisfaction.

Aternity has automated remediation and self healing capabilities
Aternity’s automated remediation capabilities can be tailored to the requirements of your IT organization.
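The “shift left” pattern described above can be sketched in a few lines. This is an illustrative stand-in, not Aternity's implementation; the event types and remediation actions are hypothetical. The idea is simply: try a mapped self-healing action first, and escalate only when no action exists or the action fails.

```python
def remediate(event, actions, escalate):
    """Try a mapped self-healing action first; escalate only if no
    action is mapped or the action itself fails."""
    action = actions.get(event["type"])
    if action is None:
        return escalate(event)
    try:
        action(event)
        return "auto-resolved"
    except Exception:
        return escalate(event)

def restart_agent(event):
    """Placeholder for a real remediation script (e.g. restart a hung agent)."""

escalated = []
def open_ticket(event):
    """Placeholder for handing the event to the service desk workflow."""
    escalated.append(event)
    return "escalated"

actions = {"high_cpu": restart_agent}
print(remediate({"type": "high_cpu", "host": "laptop-042"}, actions, open_ticket))
print(remediate({"type": "disk_full", "host": "laptop-042"}, actions, open_ticket))
```

The design point is that escalation becomes the fallback path rather than the default, which is what lets lower support tiers absorb routine incidents.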

Employee experience

Monitoring employee experience has become top of mind for many organizations, and as the graphic below shows, employee turnover rates are 61% lower for organizations which monitor employee experience.

For organizations that monitor employee experience, employee turnover is 61% lower than for those that do not.

In an era of rising wages and historically low unemployment rates, companies must take every action they can to hold on to top talent—especially given the challenges of ensuring a positive digital experience in “work from anywhere” environments. Digital experience management not only ensures a positive employee experience, but it also enables IT to proactively identify and resolve issues, even without employees having to call the Service Desk. Better service means happier employees, which means less frustration and less employee turnover.

With Riverbed Aternity, service desk teams can monitor employee digital experience for EVERY app in the portfolio and isolate the source of delay to client device, network, or backend.

Get started with IT performance management today

If you’re not yet an Aternity customer, you can explore these capabilities by requesting a demo running in your environment. You’ll see how your organization compares to the market with benchmarking insights from millions of endpoints monitored via Aternity SaaS. You’ll see how your Service Desk can drive down costs and improve service with AI-driven automated remediation. And you’ll get a view of employee experience for every app running in your environment—even SaaS and Shadow IT.

]]>
6 Proven Strategies to Protect Networking Teams from Burnout https://www.riverbed.com/blogs/protect-networking-teams-from-burnout/ Sun, 14 Aug 2022 22:00:39 +0000 /?p=18292 Is your team feeling overworked, undervalued and frustrated? You’re not alone: the pandemic has put more pressure on almost all of us, especially on those in charge of maintaining network performance. As well as dealing with relentless demands for more bandwidth, network and IT teams have a host of other problems on their plate. Many are already at full capacity, and are now being asked to manage increasingly complex hybrid environments with tools that are no longer fit for purpose. It’s no surprise that burnout and attrition within networking teams are on the rise. How do you protect your networking teams from burnout?

The response to this crisis typically comes in the form of well-meant gestures: free lunches, or a set of guidelines that are high on good intentions but low on substance. These are band-aid solutions that don’t address the underlying problem. Businesses need to support and listen to staff and improve processes and tools to reduce the stress associated with network management. The key to that lies in first understanding the root cause of burnout and what it looks like.

Learn to identify stress when you see it

The corporate world has a mental health problem, and IT teams have it worse than most. In a survey by Harvard Business School, 84% of workers reported at least one workplace factor that had a negative impact on their mental health. Among Australian tech workers, the problem was pronounced: over half would not recommend their workplace, while three-quarters said they’d experienced stress at work that made them less productive.

Individuals struggling with their mental health in the workplace may go into survival, or “fight or flight” mode. They may become less productive, be increasingly absent or feel less engaged. They may display anxiety, anger or uncharacteristic behaviour. But in the hustle and bustle of everyday tasks, these problems are not always easy to spot.

Look beyond the symptoms

To help employees, IT leaders must start by listening. That may come in the form of one-to-ones, in which managers ask workers not just “How are you?” but also “How can I help?”—and actively listen to their responses. Survey tools can help you build wider, more consistent feedback, and application and network logs can identify the technical blockers that are holding individuals back.

While your research may uncover company-specific issues, you’re also very likely to come across these common IT complaints:

  • Overwork: Long hours are common in many roles, and if a network outage hits, you can forget about a work-life balance.
  • Churn: With the job market stretched and conditions often challenging, teams can feel they are in a constant state of flux—particularly in startups, which lose staff every 1.2 years. Churn means lower morale and more overwork, as the team strains to pick up the slack.
  • Unrealistic expectations: Timelines for projects are often needlessly optimistic, and may not factor in extra tasks, such as employee onboarding and basic maintenance.
  • Manual tasks: Network and application issues are often raised via user help-desk requests because IT staff do not have the right tools or visibility to identify and resolve them early. The result? A time-consuming game of catch-up that never ends. Across most organisations, portals and tools are rarely unified, which means workers have to keep switching between different tools to get a handle on the big picture. The resulting work can quickly become repetitive and frustrating.

Once you’ve found the source of the problem, you can start to solve it

Whether it’s through one-on-one feedback or wider data, these learnings should help you prioritise issues, and make a case for the best way to solve them on an individual or structural level.

At company level, that may mean appointing a leader who is directly responsible for mental health, rather than treating it as a general HR responsibility. It might mean encouraging managers to include mental health checks into every catch-up and share their own experiences. By encouraging both top-down and bottom-up approaches to mental health, you can help make it part of both daily conversations and long-term strategy.

Other actionable steps may include better resourcing, setting boundaries around projects, using ticketing or Agile processes to break workflows into manageable pieces, and working to clarify job expectations.

Giving workers the agency to choose exactly where and how they tackle their workload is another crucial shift. It’s increasingly clear that remote and hybrid work (involving a mix of office work and remote work via both local networks and cloud services) can deliver real benefits. In a Riverbed survey, 94% of respondents agreed that a hybrid work environment helped organisations recruit talent and remain competitive, with greater employee happiness among the main benefits. These flexible work practices can be a huge benefit to IT staff, but unless they’re planned properly and backed by the right tools, they will only exacerbate issues that network teams are all too familiar with.

IT workers need tools to suit the modern age

The more complex and widely distributed the IT environment, the more strain is placed on networks and applications. And the wider your organisation’s perimeter, the more attack surfaces cybercriminals have to exploit. Without the right tools to manage hybrid applications and networks, IT staff may be left screaming in frustration.

These issues are particularly damaging because they combine a feeling of powerlessness—since engineers struggling to get networks up to speed again may have little choice but to use tools that aren’t fit for purpose—with pressure from other staff who are desperate to get crucial operations back online. Network engineers and IT teams need tools, technology and training that match the demands placed on them.

Provide the right network support

Slow networks, poor monitoring, limited metrics and regular outages are common problems that heap pressure on even the most patient IT teams. They may be particularly noticeable when your organisation uses platforms and monitoring that are a poor fit for the existing network architecture. However, it may not be necessary to overhaul the entire network. Depending on your situation, network performance can be improved by:

  • Optimising performance via application performance management platforms and application acceleration.
  • Using best-in-class network performance management and monitoring to ensure tools and hardware are working at peak capability.
  • Using software-defined WAN to increase efficiency while reducing bandwidth use.
  • Producing timely, relevant dashboards that can be customised and shared with different stakeholders.

Integrated, end-to-end platforms that manage multiple functions will be far easier to deal with than separate solutions. They should offer the flexibility to remedy problems and accelerate functions across multiple networks. And they should be able to produce and share data in real-time. The right alerts and metrics give IT staff a crucial advantage, and the chance to spot an issue before it turns into a network crisis that will hammer both your business and your team’s mental health.

Protecting networks and the teams that manage them

Workplace mental health issues can impact individuals whatever their role, but as we’ve seen, the specific pressures that make life difficult for IT staff have worsened in recent years. Companies must give IT staff and network engineers the right support and the right tools. That means listening and undertaking concrete actions to build a sustainable workplace. But it also means giving workers the right software and hardware to remove blockers and frustrations and help them keep data flowing.

Riverbed Unified Network Performance Monitoring brings together device monitoring, flow monitoring, and full packet capture and analysis solutions. These solutions are tightly integrated so that you can more quickly troubleshoot complex performance issues. They also feed into the Riverbed Portal, which combines them into collated, easy-to-use dashboards that streamline the analysis of complex problems. Book a consultation with a specialist here.

]]>
Application Acceleration for Today’s Distributed Enterprise https://www.riverbed.com/blogs/application-acceleration-distributed-enterprise-today/ Thu, 11 Aug 2022 12:30:14 +0000 /?p=18219 Today’s IT teams are challenged like never before—expected to support work from anywhere and provide secure, fast access to needed applications from any location. Then there’s the matter of where those applications are based, which is complicated as it varies from app to app. This makes providing the acceleration needed to drive employee productivity on those apps more challenging than ever. And let’s not forget the challenges presented by the network these applications run on, which becomes increasingly distributed and complex as it evolves. Networks no longer support just on-premises MPLS solutions, but mobility and internet-based applications as well.

The enterprise network needs a differentiated solution for networking, connectivity, and acceleration for every app.

 

It’s relatively easy to find vendors to address one issue or another. But, how can any one of them handle the complex set of issues you face? Could there possibly be a company with a holistic solution? Could that company address such a highly complex, multi-faceted challenge? You can dream, right?

Fortunately, the answer is yes! There IS one company uniquely qualified to provide a holistic solution—better yet, fast, agile, and secure acceleration of any app over any network, to users anywhere. We are that company. We’re in the business of application acceleration. Our solutions are trusted by 95% of the Fortune 100 as well as 83% of the Forbes 500. To learn more about our application acceleration solutions, watch this.

Riverbed optimizes network performance & accelerates applications

Our acceleration solutions are based on 15+ years of industry leadership and innovation. They boost end-user digital experience and productivity by enabling up to 33x faster app performance anywhere. Bandwidth consumption is also reduced by up to 95%—even under sub-optimal network conditions.

Application Acceleration Benefits

 

Riverbed maximizes cloud value

Our Application Acceleration solution can speed migration and access to workloads for multiple IaaS platforms. This includes Microsoft Azure, AWS, Nutanix, and Oracle Cloud. We also accelerate cloud-to-data-center replication flows by 50x or more through proven data transport and application streamlining innovations. Our fully managed cloud service accelerates SaaS performance by overcoming network inhibitors such as latency, congestion, and the unpredictable last mile of today’s mobile workforce for leading SaaS applications. These include Microsoft 365, Salesforce, ServiceNow, Box, etc.

Riverbed accelerates app performance for today’s remote workforce

Riverbed Application Acceleration boosts performance by 10x or greater direct from the user desktop. Workers get the application performance they need no matter where they’re working. We extend best-in-class WAN Optimization and the industry’s only application acceleration to remote users. We provide fast, secure access to on-premises IaaS and SaaS-based applications. And, we do this across any network.

Riverbed speeds video content delivery for today’s dynamic workforce

Riverbed provides a reliable, secure, and easy-to-deploy video distribution solution. And even better, we do this without the need to change or upgrade any existing network infrastructure. Our scalable, cloud-based platform speeds the delivery of bandwidth-hungry video content directly to users by up to 70%. We can also reduce bandwidth by up to 99%.

Riverbed Acceleration boosts performance, productivity, and digital experience.
Riverbed Acceleration boosts performance, productivity, and digital experience.

To learn more about our application acceleration solutions, go here.

 

]]>
Unified Observability Is the Solution IT Has Been Waiting For https://www.riverbed.com/blogs/unified-observability-solution-it-has-been-waiting-for/ Tue, 19 Jul 2022 12:35:57 +0000 /?p=18226 For years, IT teams have relied on observability tools to (theoretically) provide intelligence and insights into operating conditions within an organization’s digital infrastructure. But most of these tools have come with significant shortcomings that leave IT teams wanting more.

Riverbed recently worked with consulting firm IDC to survey over 1,400 IT professionals across the world to determine the current state of observability solutions, what IT professionals want to get out of observability, and how they’re planning to invest in these solutions.  

This blog highlights some of the key takeaways from the report, including how IT teams can achieve true unified observability. If you want to jump straight into the full report, you can download The Shift to Unified Observability in Management: Reasons, Requirements, and Returns.

The current state of observability  

IDC’s survey reveals that over 90% of IT organizations currently use observability solutions. But when we drill down further, we see that IT teams aren’t exactly thrilled with current observability offerings.  

  • 61% of IT teams agree their productivity and collaboration is limited by specialized tools and siloed data views.  
  • 60% of IT teams believe their monitoring tools serve narrow requirements and fail to enable a unified and complete view into current operating conditions. 
  • 59% of IT teams must manually troubleshoot issues to identify root causes and determine specific remedies. 
  • 54% of organizations already use six or more discrete tools for IT monitoring and management. 

Current solutions simply aren’t robust or unified enough, which leads to blind spots in visibility and forces IT teams to waste a lot of time troubleshooting problems. 

Simply put, most observability tools on the market today are ineffective at addressing the challenges of today’s distributed IT environments.

Unified Observability merges silos 

When IDC asked the 1,400 survey respondents what is driving the need to unify observability across all IT domains (applications, network, infrastructure, cloud, end-user services, smart end devices), the number one response was “Improving IT teamwork and productivity across domains.”

For too long, discrete monitoring and performance tools have failed to connect the dots between disparate users, teams, and networks. This problem has only been exacerbated by the shift to remote and hybrid work which has added layers of complexity to an already complex IT environment.  

The Riverbed Unified Observability portfolio helps break down these silos by unifying data, insights, and actions across IT. With full-fidelity user experience, application, and network performance data on every transaction across the digital enterprise, Riverbed can apply AI and ML to correlate data streams and alerts and provide actionable insights across the business.

Unified Observability makes for happier IT talent   

Survey results indicate that Unified Observability and its ability to leverage automation in remediating tech issues would help improve working conditions within IT teams. Consider the following: 

58% of respondents agree that their organization’s most expert staff spend far too much time on tactical responsibilities. Meanwhile, 56% also agreed their organization struggles to hire and retain highly skilled IT staff.   

That is a recipe for disaster, as organizations are forced to spend their hard-to-find, hard-to-retain staff on responsibilities far below their skill (and pay) level. It makes sense, then, that IDC research reveals IT managers have a strong desire to redirect staff from tactical duties to strategic responsibilities.

Riverbed Unified Observability can help organizations do just that. It begins with capturing full-fidelity user experience, application, and network performance data on every transaction across the digital ecosystem. It then applies AI and ML to contextually correlate disparate data streams and provide actionable insights. With these actionable insights you can automate the investigative workflows of IT experts, empowering your entire IT staff to solve complex problems quickly and accurately.  
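As a toy illustration of what “contextually correlating disparate data streams” can mean at its simplest (this is not Riverbed's AI/ML approach, just a time-window grouping to show the idea), alerts from different domains that fire close together can be bundled into a single candidate incident for an expert to review:

```python
from datetime import datetime, timedelta

def correlate(alerts, window=timedelta(minutes=5)):
    """Group alerts whose timestamps fall within `window` of the first
    alert in the group -- a crude stand-in for cross-domain correlation."""
    alerts = sorted(alerts, key=lambda a: a["time"])
    groups, current = [], []
    for alert in alerts:
        if current and alert["time"] - current[0]["time"] > window:
            groups.append(current)
            current = []
        current.append(alert)
    if current:
        groups.append(current)
    return groups

alerts = [
    {"domain": "network",  "time": datetime(2022, 7, 1, 9, 0),   "msg": "link saturation"},
    {"domain": "app",      "time": datetime(2022, 7, 1, 9, 2),   "msg": "slow transactions"},
    {"domain": "endpoint", "time": datetime(2022, 7, 1, 9, 3),   "msg": "poor UX score"},
    {"domain": "network",  "time": datetime(2022, 7, 1, 14, 30), "msg": "link flap"},
]
groups = correlate(alerts)
print(len(groups), len(groups[0]))  # the three 09:00-09:03 alerts merge into one group
```

Collapsing the three morning alerts into one incident is the essence of reducing alert fatigue: the triager sees one probable root cause instead of three unrelated pages.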

The need for Unified Observability 

Over 70% of the survey respondents believe Unified Observability is critical to delivering the best possible digital experiences for customers and employees. At the same time, 60% say that the lack of Unified Observability restricts the ability of IT teams to meet business requirements. And 59% say the absence of Unified Observability makes their job and the job of their staff and peers more difficult. 

Most IT organizations understand the promise of truly unified observability solutions but until now have been forced to patch together disparate monitoring and performance tools. With Riverbed Unified Observability, organizations can finally eliminate the data silos, resource-intensive war rooms, and alert fatigue that have plagued discrete monitoring tools for years. They can truly enable cross-domain decision-making, apply expert knowledge more broadly, and continuously improve digital service quality for their customers and employees.

Take a deep dive into the IDC survey by downloading the report: The Shift to Unified Observability in Management: Reasons, Requirements, and Returns. 

]]>
How Riverbed Aternity Supported Essential Eight Compliance for a NSW Government Agency https://www.riverbed.com/blogs/how-alluvio-aternity-essential-eight-compliance-nsw-government-agency/ Fri, 01 Jul 2022 00:00:00 +0000 /?p=18158 As soon as the world went into lockdown, digital transformation became a top priority. Many organisations saw two years of transformation initiatives carried out in as little as two months. This rapid adoption of digital technologies enabled businesses and governments to keep the show running and helped cushion economies worldwide. But through this period of rapid change, organisations have become more vulnerable to cyber threats. Adversaries feed on uncertainty, and the sudden transition to new ways of working provided them with the perfect opportunity. Many countries witnessed a steep rise in cyber breaches, and security has become a hot topic in management circles. The digital technologies are not to blame, but the lack of a security framework around them is. Customers often ask about how Riverbed approaches cybersecurity, so let’s take a deeper look at a real-world example.

How Riverbed is helping government organisations become more secure

The Australian Cyber Security Centre’s (ACSC) Essential Eight lays out a clear security game plan for government organisations. The Essential Eight is a series of baseline mitigation strategies drawn from the ACSC’s ‘Strategies to Mitigate Cyber Security Incidents’ guidance. Implementing these strategies as a minimum makes it much harder for adversaries to compromise systems. Mapping their activities to the Essential Eight framework gives organisations a path to level up their security and address ongoing upgrade requirements.

Government agencies across Australia have been mandated to comply with the Essential Eight sooner rather than later. But mandates aside, the Essential Eight goes a long way towards improving your cybersecurity posture. This layer of security controls won’t hamper your ongoing processes; it will act as a shield that protects you and your communities against adversaries. Compliance will ensure sustainability, protect your credibility, and elevate your role as a government agency.

Three key aims underpin the Essential Eight:

  • Enhance resilience against cyber attacks
  • Increase customer trust
  • Maintain data sovereignty

It’s important to realise that maintaining security is an ongoing process. We are working closely with several government agencies and helping them implement key network technologies so they can provide uninterrupted services to Australian citizens safely and reliably. One notable project I personally worked on was for a large government agency in NSW.

As the first step to compliance with the Essential Eight, we began investigating devices in their existing network and ensuring we tied up loose ends while keeping the end-user experience intact. When we say loose ends, there are a lot of different possibilities. For example, in the case of this government agency, some of the issues we found included:

  • Unauthorised applications installed on some devices
  • Some devices running older versions of some applications
  • Users not cleared for admin-level access had full access anyway, allowing them to install unauthorised applications freely

Clearly, they needed a deeper dive into their network and security systems, and a solution that could give them whole-of-network visibility. The Riverbed Aternity suite was the perfect fit for the job. The benefits of implementing Aternity are manifold, so let me walk you through the specific issues we resolved by using it.

Implementing application controls

We began with the first of the Essential Eight mitigation strategies: application control. Aternity identified 454 of the 20,000 devices we scanned as having unauthorised software installed. We quickly remediated this risk by uninstalling the unauthorised applications.

Aternity Application Control
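Conceptually, the application-control check above is an allowlist comparison per device. A minimal sketch follows (the device names and allowlist are invented for illustration; Aternity's real inventory data is far richer):

```python
def unauthorised(installed_by_device, allowlist):
    """Return devices carrying software outside the approved allowlist."""
    flagged = {}
    for device, apps in installed_by_device.items():
        extras = sorted(set(apps) - allowlist)
        if extras:
            flagged[device] = extras
    return flagged

allowlist = {"Chrome", "Webex", "Zoom", "Microsoft Office"}
inventory = {
    "LT-0001": ["Chrome", "Microsoft Office"],
    "LT-0002": ["Chrome", "BitTorrent"],  # unauthorised install
}
print(unauthorised(inventory, allowlist))  # {'LT-0002': ['BitTorrent']}
```

Run fleet-wide, the flagged map is exactly the remediation worklist: which devices to visit and which packages to uninstall.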

Restricting admin access

The next task was to restrict admin access. Aternity identified 542 unauthorised user devices that had been granted admin access. This revelation was shocking for the government agency; it was like having the keys to the kingdom out in the wild. Users who requested admin access to install a particular piece of software for work purposes had been granted permanent access. Once this issue was identified, the agency revoked the access of unauthorised users and began granting only temporary access to verified users.

Aternity admin access

Patching applications and the OS

Essential Eight also prescribes using the latest versions of applications and installing patches as they become available. While most devices on the network were running up-to-date applications, a few were not. We found 55 versions of Webex, 45 versions of Zoom and 38 versions of Firefox across the machines, which was a considerable security risk.

Aternity application report

Strengthening Wi-Fi security 

When Aternity scanned all the endpoints, it reported security failures across 64% of Wi-Fi access points (1,721 access points in total). Aternity monitors all endpoints and can also show where users access the internet from. Some were using unsecured Wi-Fi at cafes, libraries or hotels. Among users connecting to the agency SSID, many devices failed the Wi-Fi security check. Our analysis showed the problem was concentrated in certain areas, so we could identify specific users and help them mitigate the issue.

Aternity Wi-Fi security

Updating AV check status 

Aternity monitors all agents, including security agents, on each machine in our client’s network. We completed 52,788 AV scans in 14 days. On further analysis of the status-check results, we quickly identified security threats on 54 devices, detected by Symantec. 44% of the threats were due to “Trojan.Malscript”.

Aternity AV status check
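The breakdown by signature (such as the 44% attributed to “Trojan.Malscript”) is a simple share-of-total calculation over the scan results. A small sketch, with invented detection counts chosen only so that Trojan.Malscript comes out near 44%:

```python
from collections import Counter

def threat_share(detections):
    """Percentage share of each threat signature across AV detections."""
    counts = Counter(d["signature"] for d in detections)
    total = sum(counts.values())
    return {sig: round(100 * n / total) for sig, n in counts.items()}

# Hypothetical detections across the flagged devices (counts are made up)
detections = (
    [{"signature": "Trojan.Malscript"}] * 24
    + [{"signature": "Adware.Gen"}] * 18
    + [{"signature": "PUA.Downloader"}] * 12
)
print(threat_share(detections))
```

A share table like this is what lets a security team prioritise: one dominant signature usually points to a single infection vector worth chasing first.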

Implementing proactive monitoring

Proactive monitoring enables anomaly detection and automated alerts. We also integrated Aternity with ServiceNow for this client. When Aternity detects anomalies, it automatically creates a ticket in ServiceNow, and the issue is fixed as per standard SLAs to minimise the impact on users.
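The anomaly-to-ticket handoff described above typically goes through ServiceNow's Table API (`POST /api/now/table/incident`). The sketch below is a hedged illustration, not the integration Aternity ships: the instance URL, field mapping, and urgency rule are assumptions for the example.

```python
import json
import urllib.request

SN_INSTANCE = "https://example.service-now.com"  # hypothetical instance URL

def build_incident(anomaly):
    """Map a detected anomaly onto ServiceNow incident fields."""
    return {
        "short_description": f"{anomaly['metric']} anomaly on {anomaly['host']}",
        "description": json.dumps(anomaly),
        "urgency": "2" if anomaly["severity"] == "high" else "3",
        "category": "network",
    }

def post_incident(record, auth_header):
    """POST the record to the ServiceNow Table API (not called here)."""
    req = urllib.request.Request(
        f"{SN_INSTANCE}/api/now/table/incident",
        data=json.dumps(record).encode(),
        headers={"Content-Type": "application/json", "Authorization": auth_header},
        method="POST",
    )
    return urllib.request.urlopen(req)

record = build_incident({"metric": "latency", "host": "rig-07", "severity": "high"})
print(record["short_description"])
```

Because the ticket is created the moment the anomaly fires, the SLA clock starts without waiting for a user to notice and call the service desk.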

With Riverbed Aternity, finding loose ends, fixing them, ensuring Essential Eight compliance, and improving customer experience became effortless. We have laid a secure foundation for their network and user devices. Now, they can monitor security threats, mitigate them on an ongoing basis, and follow up on other Essential Eight prescribed best practices to maintain their security posture. With Riverbed Aternity by their side, they are well-equipped for the challenge.

Have you reviewed the risks to your organisation? 

A survey of Australian businesses showed that 76% were victims of cyber attacks in 2021 due to a lack of cyber preparedness. The Australian Cyber Security Centre (ACSC) recorded a 13% increase in reports of cyber threats, which comes to more than 67,000 reports. However, when I meet with clients, it is reassuring that most senior executives and government agencies realise the need for a comprehensive security program that touches all aspects of the organisation.

Essential Eight compliance gives your end customers the confidence that their data is safe and in responsible hands. The rewards make it well worth the investment. Whatever your organisation’s landscape, Riverbed Aternity fits well into every network type and delivers on your unique cybersecurity needs. Having a trusted cyber security partner can go a long way in securing your network. Speak with us today to explore how Riverbed Aternity can help your organisation.

]]>
Under Pressure: How Network Performance Takes A Mental Toll on IT https://www.riverbed.com/blogs/network-performance-takes-mental-toll/ Sun, 26 Jun 2022 22:00:00 +0000 /?p=18147 In our increasingly connected world, network slowdowns and outages can cripple a business. Outages hit organisations’ operations, reputation and profits, and the pressure to get the online wheels turning again is immense. The stress falls squarely on IT teams, and the impact on individuals’ mental health can be brutal. Companies must offer better support to teams that may be stretched thin even before an outage strikes.

Downtime can be catastrophic

No one is safe from network outages. Apple, the BBC, Coinbase and Reddit have all suffered in recent years. In October 2021, a seven-hour outage cost Facebook $100 million. Two months later, Amazon was out of action for hours, leaving customers unable to operate their networked fridges, doorbells, and speakers, and leaving thousands of robot vacuum cleaners to twiddle their smart thumbs.

Network outages can result from power failure, network congestion, cyberattacks, human error or configuration issues. They may be widespread (like the March DNS incident that saw 15,000 Australian websites taken offline) or specific to your organisation. Either way, they’re expensive, costing larger corporations an average of $144,000 per hour in revenue loss, and smaller organisations (those with fewer than 20,000 employees) $2,000 per hour.
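
Those per-hour figures make for quick back-of-the-envelope estimates. A minimal sketch, covering revenue loss only and ignoring recovery and reputational costs:

```python
def outage_cost(hours, cost_per_hour):
    """Rough revenue loss from an outage: duration x hourly loss."""
    return hours * cost_per_hour

# Figures quoted above: $144,000/hour for larger corporations,
# $2,000/hour for smaller organisations (fewer than 20,000 employees).
print(f"${outage_cost(7, 144_000):,}")  # a 7-hour outage at a large corporation
print(f"${outage_cost(7, 2_000):,}")    # the same outage at a smaller organisation
```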

The cost of downtime is rising year on year. Lost revenue, reduced productivity, customer complaints, regulatory issues and reputational damage can all throw your organisation into a tailspin. A blocker to one process can ripple right through your organisation, leaving staff baffled and senior executives fuming. And panicked stakeholders will be looking in one direction for resolution: the IT team.

Network teams are already feeling the strain

The pressure to get networks running again as your organisation haemorrhages money would be bad enough if it landed on an empty to-do list. But many IT teams are already overworked, under-resourced, and plagued by employee churn. Workers may be expected to deal with tickets at pace while procuring hardware and software, helping less technically minded staff, and advising on new technologies.

That pressure has been exacerbated by rising cybercrime and new COVID protocols, while the rise in remote work demands more bandwidth from IT infrastructure and the expectation of timely troubleshooting. Outage emergencies put the heat on people who may already be near boiling point.

Network outages pile on the pressure

Network outages can hammer your organisation’s bottom line commercially, but there will be a psychological impact for many workers, too. Internet addiction is recognised as a condition by the World Health Organisation, and for many of us being deprived of the internet feels deeply personal, more like losing a limb than having a tech problem. Being cut off from networks, key projects and messaging services can be deeply frustrating for employees. That great wave of emotion comes crashing down just as IT workers need to be at their most focused and methodical.

Network outages will generally mean team members being pulled out of key tasks, many of which are time-sensitive. Forget that once-urgent project, remote work or project visibility: even employers who are sympathetic to a work-life balance will crack the whip. Network engineers may need to travel to remote sites, expose themselves to COVID risks or spend long hours desperately tracing the problem while the clock ticks and panicked communications rain down from executives. Unfortunately, at this stage, it’s not just about fixing the technical problems: stakeholder management becomes a major factor. Individuals whose attention to detail and concentration makes them relish coding projects may suddenly be roped into a high-pressure environment that not only requires a quick technical response but also needs diplomacy and the ability to manage stakeholder expectations.

Recovery may be complex and frustrating

Network outages may be relatively short-lived, especially if you have a viable secondary connection. But all too often, recovery is complex and frustrating.

A single failure may trigger multiple issues, affecting different offices, partner organisations, and online portals in different ways. For network teams, that means reviewing everything from IoT endpoints to data packets and cloud-service infrastructure to trace the source of the problem. Outages can also result from cyber-attacks, operation errors, surges, network congestion, loose connections, or cables damaged by fire or water.

Teams may find themselves scrambling through dashboards and logs to get a holistic picture. Systems may need to be rebooted remotely. Weighing up possible causes and results of corrective action is hard at the best of times and is even harder while the business breathes down your neck. Without clear guidance, staff may use unsecured public wi-fi or shadow IT to keep projects on track, exposing data to theft and opening up your network to malware attacks—and the risk of further outages—further down the track.

Network outages can have a profound mental health impact

To get a sense of just how profound the impact of a network outage can be on IT teams, it can be revealing to consider it in the light of six classic causes of burnout:

  1. Workload: dealing with a network outage can take up all the hours in the day.
  2. Lack of control: outages are sudden, and the pressure to resolve them quickly gives workers very little agency.
  3. Lack of recognition: resolving an outage might be met with fanfare—or with cries of “What took you so long?”
  4. Poor relationships: friction is inevitable with emotions running high.
  5. Lack of fairness: the outage may not be anyone’s fault, but IT is likely to shoulder the blame.
  6. Values mismatch: your security team may be preaching safety first, while sales want channels reopening as soon as possible. Guess who’s caught in the middle? That’s right, the network team.

Preparing your organisation for network outages

So how can IT teams be better supported? One solution is to make network outages rarer. Better monitoring can mean you identify problems earlier and take steps to resolve them. The best Network Performance Monitoring (NPM) tools integrate device and flow monitoring with full-packet capture and analysis solutions, allowing you to assess data flow, security threats, and network issues. Smart, real-time dashboards take the strain out of assessment and troubleshooting.

Other changes that can help you stave off network outages include using a backup connection, installing an uninterruptible power supply (UPS), and improving your organisation’s cybersecurity posture (particularly to mitigate Distributed Denial of Service attacks).

These measures will reduce the risk, but you should still ensure you have a clear, frequently updated disaster recovery plan—and that plan needs to be shared and agreed upon by relevant stakeholders.

Measures to improve staff wellbeing can make a real difference to mental health (and staff retention), but they need to address fundamental business processes rather than superficial signs of burnout. There’s no point in offering workers more time off if their workload remains unmanageable. And if you want to learn from the trauma of network outages, you should listen to the individuals who have worked to solve them, hear their pain points, and assess their resource needs.

Managing and preventing network failure

When an outage does occur, the pressure on IT teams can be unbearable, and that has an inevitable impact on mental health. Appropriate measures such as Network Performance Monitoring can help reduce the risk of an outage and give your network teams the tools they need to quickly resolve problems when they occur. With the right tools and policies, your organisation can support IT staff to quickly resolve network performance issues, even in the eye of the storm.

Riverbed Unified NPM unifies infrastructure monitoring, flow monitoring, and full packet capture and analysis solutions. These solutions are tightly integrated so that your teams can troubleshoot complex performance issues more quickly. They also integrate with the Riverbed Portal, which provides collated, easy-to-use dashboards to streamline the analysis of complex problems. Book a consultation with a specialist here.

]]>
Complete Wi-Fi Monitoring with Riverbed https://www.riverbed.com/blogs/complete-wi-fi-monitoring-with-riverbed/ Thu, 19 May 2022 14:39:48 +0000 /?p=17906 In our previous blog post, we highlighted the necessity for Wi-Fi (Wireless LAN) monitoring. We also highlighted some of our newest capabilities in Riverbed NetIM to monitor Wi-Fi. In this post, we aim to dive deeper and establish that Riverbed monitoring tools provide the definitive, complete picture for effectively monitoring Wi-Fi performance.

Wi-Fi performance problems

Wi-Fi performance problems that intermingle with application and network issues are expensive to isolate and resolve. If your business depends on strong Wi-Fi performance, you simply must have proper visibility into the various moving parts of the Wi-Fi infrastructure. Riverbed’s promise of full-fidelity observability is not complete until we cover monitoring for this important business asset: Wi-Fi.

How can Riverbed help?

Consider the Wi-Fi infrastructure as LWAPs (lightweight access points) and WLCs (wireless LAN controllers). For these, Riverbed NetIM can provide numerous health metrics across your entire fleet, from basic up/down indicators to radio-level signal-to-noise ratio measurements, with all makes and models of WLCs and APs monitored via a single tool. In addition, after the COVID-19 pandemic, people started to work from anywhere, often over Wi-Fi infrastructure the enterprise has little or no control over. This created a new challenge for enterprises everywhere: monitoring the Wi-Fi performance of end users’ devices. So, to complete the picture beyond the infrastructure side, Riverbed’s End-User Monitoring (Digital Experience Monitoring) solution, Aternity, provides agents that can be installed to obtain Wi-Fi analytics from end users’ devices.

Wi-Fi infrastructure monitoring

The estate of Wi-Fi access points and controllers in most large businesses can be quite expansive. In certain cases, each wireless controller can manage a thousand or more access points, and the provisioned landscape can change quickly as the business grows and changes. Monitoring the health of its various components is an important step toward protecting your investment in Wi-Fi.

Infrastructure side: inventory search/reports

NetIM is not only a monitoring tool: our customers also use it as an intelligent inventory database, with a powerful search engine (pictured below) and a REST API on top. And it is far from a passive inventory: NetIM performs health checks on every device in its inventory and, based on live SNMP and ICMP polling results, can automatically identify and absorb new devices and age out decommissioned ones. Along with that, there are out-of-the-box reports to summarize the vendor types, models, and OS versions of your Wi-Fi fleet.

NetIM Searching Inventory
NetIM Inventory Report
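
The absorb/age-out behaviour described above boils down to a simple rule over poll results. A minimal sketch of the idea (the three-miss threshold and data model are illustrative assumptions, not NetIM's implementation):

```python
from dataclasses import dataclass

MAX_MISSED_POLLS = 3  # illustrative threshold, not a NetIM setting

@dataclass
class Device:
    name: str
    missed_polls: int = 0

def update_inventory(inventory, poll_results):
    """Absorb newly seen devices and age out devices that keep failing
    SNMP/ICMP polls. `poll_results` maps device name -> responded (bool)."""
    for name, responded in poll_results.items():
        dev = inventory.setdefault(name, Device(name))
        dev.missed_polls = 0 if responded else dev.missed_polls + 1
    # Age out devices that missed too many consecutive polls.
    for name in [n for n, d in inventory.items()
                 if d.missed_polls >= MAX_MISSED_POLLS]:
        del inventory[name]
    return inventory

inventory = {}
update_inventory(inventory, {"wlc-01": True, "ap-101": True})  # both absorbed
for _ in range(3):
    update_inventory(inventory, {"wlc-01": True, "ap-101": False})
print(sorted(inventory))  # → ['wlc-01']  (ap-101 aged out after 3 missed polls)
```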

Infrastructure side: monitoring Wi-Fi health

Basic health metrics mentioned below can be easily obtained across the Wi-Fi infrastructure using Riverbed NetIM.

  • Access points operationally up/down
  • Wireless Controller CPU Utilization %
  • Wireless Controller Memory Utilization %
  • Number of APs connected
  • Number of active WLANs

Infrastructure side: capacity issues

Capacity issues can seep into your infrastructure at the radio level: a sparse distribution of APs, for example, can lead to overloaded APs and oversubscribed Wi-Fi channels. Riverbed NetIM can provide the metrics below for your APs to better manage ever-changing capacity needs across the Wi-Fi fleet.

  • Channel Utilization %
  • Channel Rx Utilization %
  • Channel Tx Utilization %
  • Channel User Count
  • Maximum Allowed Clients
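
To show how metrics like these might be combined in practice, here is a minimal sketch that flags potentially oversubscribed APs. The field names and thresholds are illustrative assumptions, not NetIM's schema:

```python
def overloaded_aps(ap_metrics, util_threshold=80.0, client_headroom=0.9):
    """Flag APs whose channel utilization or client count suggests
    oversubscription. Metric names mirror the list above (assumed schema)."""
    flagged = []
    for ap in ap_metrics:
        near_client_cap = (ap["channel_user_count"]
                           >= client_headroom * ap["max_allowed_clients"])
        if ap["channel_utilization_pct"] > util_threshold or near_client_cap:
            flagged.append(ap["name"])
    return flagged

aps = [
    {"name": "ap-lobby", "channel_utilization_pct": 92.5,
     "channel_user_count": 40, "max_allowed_clients": 100},
    {"name": "ap-floor3", "channel_utilization_pct": 35.0,
     "channel_user_count": 95, "max_allowed_clients": 100},
    {"name": "ap-lab", "channel_utilization_pct": 20.0,
     "channel_user_count": 5, "max_allowed_clients": 100},
]
print(overloaded_aps(aps))  # → ['ap-lobby', 'ap-floor3']
```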

Infrastructure side: RF interference effects

Co-channel interference (CCI) can be one of the biggest enemies, but general radio interference can be quite prevalent as well in busy cities and office spaces. Once again, Riverbed NetIM can provide the metrics to enable visibility into these problems. I used the metric below during my Riverbed consulting days to help customers figure out just how significant radio interference and SNR quality were at various sites.

  • Poor SNR Clients
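
The metric above rests on a simple relationship: SNR in dB is the received signal level minus the noise floor, both in dBm. A minimal sketch of how such a count might be derived (the client fields and the 20 dB rule-of-thumb threshold are illustrative assumptions, not NetIM's implementation):

```python
def snr_db(signal_dbm, noise_dbm):
    """SNR in dB is simply signal power minus noise floor (both in dBm)."""
    return signal_dbm - noise_dbm

def poor_snr_clients(clients, threshold_db=20):
    """List clients below an SNR threshold; 20 dB is a common rule-of-thumb
    floor for reliable Wi-Fi, used here as an assumption."""
    return [c["mac"] for c in clients
            if snr_db(c["signal_dbm"], c["noise_dbm"]) < threshold_db]

clients = [
    {"mac": "aa:bb:cc:00:00:01", "signal_dbm": -55, "noise_dbm": -92},  # SNR 37 dB
    {"mac": "aa:bb:cc:00:00:02", "signal_dbm": -78, "noise_dbm": -90},  # SNR 12 dB
]
print(poor_snr_clients(clients))  # → ['aa:bb:cc:00:00:02']
```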

Infrastructure side: client mobility

If you have WLC Mobility enabled in your Wi-Fi deployment, you will find these metrics useful, specifically if you have IoT devices that are highly mobile in a localized space. There are more mobility metrics available inside Riverbed NetIM; below is a curated list of some mobility metrics related to Wi-Fi.

  • Total Hand-off Requests
  • Total Hand-off Requests Sent
  • Total Hand-off Denied Received

End-User Wi-Fi experience monitoring

In cases where a DEM agent can be installed on a user’s device, the breadth and depth of visibility and analysis available from the Riverbed Aternity DEM solution is unmatched. Below is a curated list of Wi-Fi performance dashboards available from Aternity.

End-user side: which band is experiencing the best data rates (2.4G / 5.0G / AC)?

Below is a sample dashboard from Aternity. It clearly shows the throughput of all Wi-Fi bands utilized by users, with the charts stacked together for easy comparison.

Aternity Wi-Fi Bands Performance

End-user side: which clients prefer more advanced Wi-Fi bands?

Sometimes clients can connect to access points using newer protocols and Wi-Fi bands. Such cases are reported in the screen below in the Wi-Fi dashboard.

Aternity Device Proclivity For New Protocols

End-user side: which bands are experiencing worse RSSI (2.4G / 5.0G / AC)?

The all-important RSSI, split by frequency band and signal quality (Poor/Okay/Good/Excellent), is available in this section of the dashboard.

Aternity Wi-Fi Bands by RSSI
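
Splitting readings into Poor/Okay/Good/Excellent bands is just a bucketing of the RSSI value in dBm. A minimal sketch, using common rule-of-thumb cut-offs rather than Aternity's documented thresholds:

```python
def rssi_quality(rssi_dbm):
    """Bucket an RSSI reading (dBm) into quality bands. The cut-off values
    are common rules of thumb, not Aternity's actual thresholds."""
    if rssi_dbm >= -55:
        return "Excellent"
    if rssi_dbm >= -65:
        return "Good"
    if rssi_dbm >= -75:
        return "Okay"
    return "Poor"

readings = [-48, -60, -72, -84]
print([rssi_quality(r) for r in readings])
# → ['Excellent', 'Good', 'Okay', 'Poor']
```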

End-user side: which SSIDs are experiencing the worst RSSI?

Who are the worst-hit users affected by bad RSSI? What were their throughput speeds? Answers to such questions are readily available in this dashboard view.

Aternity Wi-Fi SSIDs By Worst RSSI

End-user side: track usage characteristics of users’ Wi-Fi

Picking the right metrics to view together for comparison and contrast is an effective method for proper root-cause analysis of problems. Therefore, we have a section of the Wi-Fi dashboard that showcases the user’s experience for each band by RSSI and throughput.

Aternity Wi-Fi User's Usage

End-user side: Wi-Fi encryption used by users’ devices—determine whether they might be vulnerable to intrusion or attack

A full view of the various encryption technologies used by users’ devices is a must. For that, the dashboard below showcases the encryption technology used by your users’ Wi-Fi connections.

Aternity Wi-Fi Encryption Dashboard

Conclusion

Riverbed performance monitoring tools make a wealth of Wi-Fi monitoring data available: NetIM on the infrastructure side and Aternity on the end-user side. Together, they give businesses complete visibility for Wi-Fi performance monitoring. So make sure you have the metrics and analytics in place, and that you are prepared for Wi-Fi performance problems, which confound most teams and are expensive to bear and resolve.

]]>
Riverbed Empowering the Experience https://www.riverbed.com/blogs/empower-the-experience/ Wed, 27 Apr 2022 00:01:12 +0000 /?p=17959 Riverbed Empowering the Experience - Dan Smoot

Today marks an exciting new chapter for Riverbed, our partners, and our customers. With the launch of our new brand and a unified observability portfolio and strategy that unifies data, insights and actions across IT, we are embarking on a mission to enable organizations everywhere to deliver seamless digital experiences and drive enterprise performance. This launch reflects the evolution of the Company, our technology, and our intent to disrupt the market.

We are capitalizing on our trusted brand, and the dynamic growth and market momentum of our visibility solutions, to drive a differentiated approach to observability that solves one of the industry’s most daunting problems: how to provide seamless digital experiences that are high performing and secure in a distributed and hybrid world.

Let me set the stage. Today, users are everywhere. Applications and their components are everywhere. Data is everywhere and increasingly growing in terms of volume, variety, and velocity. In fact, data is projected to reach 180 zettabytes in 2025, up 3x from 2020. Modern IT architectures are exponentially more complex, making it difficult for IT to manage performance effectively and proactively.

Yet every click represents an activity that is vital to your organization and there is a relentless expectation for a flawless digital experience. The quality of these experiences is the heartbeat of the fiercely competitive digital-first world we live and work in.

When issues occur, IT is overwhelmed by massive amounts of data and alerts from siloed tools that provide little context or actionable insights. Troubleshooting requires war rooms and the expertise of highly skilled IT staff to manually connect and interpret data across domains. And when tools limit or sample data, IT may not even be aware of other potential issues or opportunities for proactive improvement.

Observability is meant to solve these problems, but current solutions fall short. Even so-called “full-stack” observability solutions fail to capture all relevant telemetry and sample data to deal with the scale of today’s distributed environment. Most solutions only collect three or four types of data and are limited to DevOps or Site Reliability Engineers for cloud-native use cases. And they offer nothing beyond the alert, so IT still relies on their resident expert to manually investigate events.

Enter Riverbed—a different, unique and superior approach to observability. Our unified observability portfolio unifies data, insights, and actions across all domains and environments, enabling IT to cut through massive complexity and data noise to provide seamless digital experiences that drive enterprise performance for both the employee experience (EX) and customer experience (CX).

The Riverbed portfolio leverages our industry-leading visibility tools (available today) for network performance management (NPM), IT infrastructure monitoring (ITIM), and digital experience management (DEM)—application performance management (APM) and end-user experience monitoring (EUEM), used by thousands of customers around the world. Unlike other observability solutions that limit or sample data, the Riverbed portfolio vision for unified observability is to capture full-fidelity user experience, application, and network performance data on every transaction across the digital ecosystem, and then apply AI and ML to contextually correlate disparate data streams and to provide the most accurate and actionable insights. This intelligence will empower IT staff at all skill levels to solve problems fast. Visit our Riverbed page to learn more and read today’s press announcement.

Complementing the Riverbed portfolio, Riverbed Acceleration solutions provide fast, agile, secure acceleration of any app over any network to users, whether mobile, remote, or on-premises. Built on decades of WAN optimization leadership and innovation, Riverbed’s industry-leading acceleration portfolio delivers cloud, SaaS, client and eCDN (video streaming) applications at peak speeds, overcoming network speed bumps such as latency, congestion, and suboptimal last-mile conditions to empower the hybrid workforce. Additionally, Riverbed’s enterprise SD-WAN provides best-in-class performance, agility, and management of MPLS, LTE, broadband and Internet-based networks.

Only Riverbed provides the collective richness of telemetry, insight, and intelligent automation, from network to app to end user that illuminates and then accelerates every interaction. With the powerful combination of our Riverbed Unified Observability and Acceleration solutions, IT teams are empowered to provide a seamless digital experience for customers and employees, and end-to-end performance for the business.

We’re looking forward to helping support our customers on this digital journey.

Together, let’s Empower the Experience.

Read more on our new brand here.

]]>
A ‘Brand’ New Day for Riverbed… Meet Alluvio! https://www.riverbed.com/blogs/brand-new-day-riverbed-meet-alluvio/ Wed, 27 Apr 2022 00:01:10 +0000 /?p=17975 Today is literally a ‘brand’ new day for Riverbed. It’s a moment we’ve been preparing for over the last several months, and we are eager to unveil it to our customers, partners and the world.

First, as you’ll see in our CEO Dan Smoot’s blog and press announcement, Riverbed is launching a broad strategy to bring unified observability to customers globally and accelerate growth. As part of the strategy, we’re developing an expanded unified observability portfolio to unify data, insights and actions to solve one of the industry’s most daunting problems: how to provide seamless digital experiences that are high performing and secure in a hybrid world of highly distributed users and applications, exploding data and soaring IT complexity.

In conjunction with this announcement, I’m pleased to share that we’re launching the new Riverbed brand, including the introduction of Alluvio, our portfolio for unified observability. With a fresh and vibrant visual identity, and a sharpened articulation of our solutions, the brand refresh reflects the evolution of the Company, our technology, and the momentum we are driving in the market.

The launch of Riverbed’s new brand identity and Unified Observability strategy comes nine months after Riverbed reunited with Aternity—which had been operating independently—to capitalize on the tremendous market opportunity around unified visibility and observability. We initially went to market as Riverbed | Aternity to signify the unification of these companies and our industry-leading solutions. Collectively, the companies’ intense focus on NPM and DEM (Aternity), delivering actionable insights on performance and acceleration, has positioned Riverbed to fully capitalize in both the Unified Observability and Acceleration markets.

Now is the right moment to emerge as the new Riverbed—a Company that is visionary but grounded; agile yet proven; dynamic while trustworthy. We understand that every click brings an expectation of a flawless digital experience. And Riverbed enables organizations to transform data into actionable insights and accelerate performance for a seamless digital experience.

Riverbed will go to market with two exciting product lines—Alluvio and Riverbed Acceleration.

Alluvio pays homage to Riverbed, while also underscoring our unified observability value proposition. The name Alluvio derives from alluvium—the place where riverbeds unite and create the most nutrient-rich environment to mine for gold—with the ‘o’ standing for observability. Metaphorically, it represents the coming together of discrete IT telemetry streams (network, application, end users) where insights that are hard to find, but worth their weight in gold, reside. The ‘o’ also represents how we apply observability as a process to harness the value across the streams of telemetry, ultimately finding the “gold” for our customers across the flood of data in their IT ecosystems.

Our Alluvio unified observability portfolio of solutions helps customers find that gold as fast as possible, turning actionable insights into business value so companies can stay competitive and productive and satisfy users’ fierce appetite for seamless digital experiences.

Our second portfolio is Riverbed Acceleration, which provides fast, agile, secure acceleration of any app over any network to users anywhere. Built on decades of WAN optimization leadership and innovation, Riverbed’s industry-leading Acceleration portfolio delivers cloud, SaaS, client and eCDN (video streaming) acceleration, as well as enterprise-grade SD-WAN.

When we bring these solutions together, Riverbed enables organizations to illuminate, accelerate and empower the digital experience. As we usher in the new Riverbed, I welcome you to view our new brand, and learn more about Riverbed. We look forward to continuing to deliver on our brand promise and helping our customers empower the experience across their organizations.

]]>
Why You Need Wireless LAN Monitoring https://www.riverbed.com/blogs/why-you-need-wireless-lan-monitoring/ Thu, 07 Apr 2022 19:08:31 +0000 /?p=17814 As employees begin returning to the office and enterprises adopt hybrid work policies, enterprise IT teams are being forced to accommodate a more unpredictable workforce. To provide more flexibility and foster collaboration, many enterprises have done away with assigned desks and offices in favor of hoteling and more communal work areas. This has placed an emphasis on the need for strong and reliable Wireless LAN Monitoring to ensure mobile and unpredictable employees maintain constant, uninterrupted wireless connectivity.

Let’s start at the beginning. First: what is Wireless LAN? Wireless LAN is a cordless computer network that links multiple devices using wireless communication to form a local area network within a specific space, such as an office. Basically, when you’re sitting at your desk and move to a conference room for an important meeting, Wireless LAN is the reason you stay connected to the closest and most effective access point.

To address the need for consistent Wireless LAN health, Riverbed has released NetIM 2.5, giving users predefined support to identify and fix Wi-Fi stability issues. NetIM 2.5 achieves this through insights into access point status and quantity by model, OS version, and controller.

In this blog, we explore the importance and benefits of Wireless LAN monitoring and how the availability of Riverbed NetIM 2.5 will improve your Wireless LAN monitoring capabilities.

Why you need Wireless LAN monitoring

Every time a user’s connection falters, their productivity takes a hit. Not only does this demand time and attention to diagnose and fix the issue, but it also takes time away from an employee that would have otherwise been spent on important business-related tasks. This can cause significant employee and customer frustration, stalled projects, and loss of revenue. In fact, employees lose an average of 71 productive hours annually because of poor network connectivity. And whether you have two or 20 access points, connection issues can be hard to diagnose, especially since the primary form of connectivity is wireless, not tangible cords and cables.

So, if your network connection falters, do you know how to identify which access point is causing the issue?

Wireless LAN monitoring provides visibility into which controllers are being accessed across your device network, and which users are connected to specific access points. If the quality of your Wi-Fi connection suddenly decreases or is lost altogether, a Wireless LAN monitoring tool can find and mitigate the connectivity issue efficiently and effectively.

The data provided by Wireless LAN monitoring—which includes information on access point status, quantity, etc.—helps IT teams to answer questions like:

  • Do any access points have stability issues?
  • Are issues correlated with specific models or OS versions?
  • Do issues occur at specific times of the day?
  • Are issues correlated with the number of active clients for the access point?
  • Do we have too many clients connected to an access point?
  • Are any access points down? If so, how many clients were connected before they went down?

To fully realize the benefits of Wireless LAN monitoring, you need an easy-to-use platform that continuously identifies access point issues and provides proactive solutions.

Introducing: Riverbed NetIM 2.5

NetIM provides a scalable network and server infrastructure monitoring platform to help customers detect, diagnose, and troubleshoot infrastructure availability, performance, and configuration-related problems and outages. Network traffic data can be displayed within NetIM to help IT understand how device outages/slowdowns are affecting broader network and application performance. It is often combined with NetProfiler and AppResponse as part of Riverbed’s Network Observability. NetIM provides visibility into your devices (physical, virtual or cloud) giving insight into the health and status of your network environment, translating to what your user is experiencing.

In the latest update to Riverbed NetIM 2.5, Wireless LAN metrics are available with predefined support for Cisco and HP-Aruba Wireless LAN Controllers, along with Wireless Access Point views.

The update also features new security-related capabilities to ensure that NetIM operates within the security parameters of customers’ IT environments, including TLS 1.3 support for all communication activities and the ability to add or update SSL certificates via the web UI.

Learn more about the benefits and new security features of Riverbed NetIM 2.5.

]]>
Auto Discover Internal Web Apps with Riverbed AppResponse https://www.riverbed.com/blogs/auto-discover-internal-web-apps-with-riverbed-appresponse/ Thu, 31 Mar 2022 15:30:00 +0000 /?p=17673 Riverbed® AppResponse™ speeds the identification, diagnosis, and resolution of your most difficult network and application problems. A key component of AppResponse is its application analysis. It provides specialized analysis modules that deliver focused visibility into 60+ TCP and UDP applications, web transactions, SQL database transactions, Citrix, and VoIP and video apps.

Although AppResponse users are generally responsible for the upkeep of the network infrastructure and its core connectivity and transport functions, they are not part of IT teams that make the decisions that determine which applications use network resources. It’s therefore quite common for AppResponse users to want to “discover” which applications and protocols are present in the network. Often, they are surprised by what they see!

Web applications constitute an increasing share of mission-critical apps. Some AppResponse customers choose not to use the Web Transaction Analysis (WTA) module for very detailed performance analysis of HTTP/S, but they still want the Application Stream Analysis (ASA) module to tell them which apps are present in the network. The ASA module primarily looks at fields in the IP and TCP/UDP headers. These fields do not have enough information to recognize internal web apps because that information is only present deep in the bowels of HTTP/S that ASA does not analyze. This creates the following problem: How do AppResponse customers using (only) ASA know which traffic on the network belongs to important internal web applications?

Discovering internal Web apps

Good news! A new feature, called Discovered Service Names, enables the ASA module to identify internal web apps by extracting the service name from HTTP CONNECT messages, the TLS SNI field, or X.509 certificates. Public web apps (like SAP, Google, etc.) that are already tracked by DPI aren’t discovered by this feature. The URL app is not automatically created; the user must choose to explicitly monitor an application. This feature is disabled by default.

The AppResponse ASA Discover Service Names feature identifies critical internal web apps for monitoring and analysis.

How does it work?

By inspecting the SNI (Server Name Indication) field in SSL/TLS handshakes, AppResponse ASA can classify web traffic. SNI is an addition to the TLS encryption protocol that enables a client device to specify the domain name it is trying to reach in the first step of the TLS handshake, preventing common name mismatch errors. Using SNI, AppResponse ASA is now able to classify internal web applications traffic more accurately by inspecting the contents in SSL/TLS handshakes, in addition to fields in the IP and TCP/UDP headers.
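For readers curious about where the SNI actually lives, here is a minimal, illustrative Python sketch of pulling the server name out of a raw TLS ClientHello record. This is not Riverbed’s implementation (AppResponse performs this classification internally); it simply shows the field layout that such inspection relies on.

```python
import struct

def extract_sni(record: bytes):
    """Extract the SNI hostname from a raw TLS ClientHello record, or None."""
    if len(record) < 5 or record[0] != 0x16:      # not a TLS handshake record
        return None
    hs = record[5:]                               # handshake message
    if not hs or hs[0] != 0x01:                   # not a ClientHello
        return None
    pos = 4                                       # skip msg type (1) + length (3)
    pos += 2 + 32                                 # client version + random
    sid_len = hs[pos]; pos += 1 + sid_len         # session ID
    cs_len = struct.unpack_from("!H", hs, pos)[0]
    pos += 2 + cs_len                             # cipher suites
    comp_len = hs[pos]; pos += 1 + comp_len       # compression methods
    ext_total = struct.unpack_from("!H", hs, pos)[0]; pos += 2
    end = pos + ext_total
    while pos + 4 <= end:                         # walk the extensions
        ext_type, ext_len = struct.unpack_from("!HH", hs, pos); pos += 4
        if ext_type == 0x0000:                    # server_name extension
            # ext data: list length (2) + name type (1) + name length (2) + name
            name_len = struct.unpack_from("!H", hs, pos + 3)[0]
            return hs[pos + 5 : pos + 5 + name_len].decode("ascii")
        pos += ext_len
    return None
```

Because the ClientHello is sent before encryption begins, this field is readable on the wire even for TLS traffic that is never decrypted.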

AppResponse can also classify internal web traffic by matching the content of the HTTP Connect message that browsers send to web proxies, including CASBs that act as web proxies. HTTP Connect messages will typically contain the name of the web application or web service, e.g., hr.company.com, booking.company.com, etc. This lets ASA accurately classify traffic into internal web applications.
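As a rough illustration of that matching (a hypothetical helper, not AppResponse code), extracting the service name from an HTTP CONNECT request line is straightforward:

```python
def service_from_connect(request_line: str):
    """Pull the internal service name out of an HTTP CONNECT request line."""
    parts = request_line.split()
    if len(parts) != 3 or parts[0].upper() != "CONNECT":
        return None
    # CONNECT targets are authority-form, e.g. "hr.company.com:443" -> keep the host
    return parts[1].rsplit(":", 1)[0]
```

For example, `service_from_connect("CONNECT hr.company.com:443 HTTP/1.1")` yields `hr.company.com`, which can then be mapped to an internal application.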

TLS 1.3 traffic decryption (PFS API)

Web applications dominate the mission-critical traffic that AppResponse sees in most of its in-production deployments. As a result, WTA is the second most used analysis module. It delivers vital insight into network and application behavior for web applications, e.g., seeing web traffic organized by user sessions, auto-stitched web pages and their network, client, and server delays, and analyzing web server behavior by looking for unexpected HTTP Status Codes and so on.

All mission-critical web apps use either TLS 1.2 or 1.3 encryption. AppResponse must decrypt these application packets to calculate and derive the statistics whose analysis (via Insights, Navigator, or Transaction Search) delivers the deep visibility AppResponse users expect. Because TLS 1.3 mandates the use of encryption algorithms that guarantee Perfect Forward Secrecy (PFS), any man-in-the-middle network appliance that intercepts TLS 1.3 packets cannot decrypt them even if it has access to the private keys. As a result, we’ve added TLS 1.3 support to our PFS API, enabling the decryption of traffic encrypted with TLS 1.3 in addition to the previously supported SSL and TLS 1.2. We still need an external source (like an F5 load balancer) to send the keys. For more information on how the AppResponse PFS API works, check out this blog: Riverbed AppResponse Adds SSL/TLS Analysis and PFS API.
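The reason an external source is required is that with PFS every session has its own secrets. A widely used interchange format for such per-session secrets is the NSS key log format (the format behind `SSLKEYLOGFILE`); the sketch below parses that format purely for illustration. Note this is an assumption for teaching purposes, not the actual PFS API ingestion format.

```python
def parse_keylog(text: str):
    """Parse NSS key-log lines into {(label, client_random): secret} entries.

    TLS 1.3 lines carry per-session traffic secrets such as
    CLIENT_TRAFFIC_SECRET_0; TLS 1.2 lines use the CLIENT_RANDOM label.
    """
    sessions = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):      # skip comments and blanks
            continue
        label, client_random, secret = line.split()
        sessions[(label, bytes.fromhex(client_random))] = bytes.fromhex(secret)
    return sessions
```

The key point the format makes visible: each secret is tied to one session via the client random, which is why a passive decryptor must keep receiving fresh keys for every new connection.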

Backup and restore

An AppResponse deployed in production is a source of network and application performance data – both packets and metadata/metrics derived from DPI. The metadata/metrics span anywhere from a few weeks to several months to a few years. A deployed AppResponse also contains a lot of user customizations in the form of configurations, e.g., Host Group definitions, app definitions, traffic policy definitions, WTA page analysis rules, etc.

The new Backup and Restore feature is found under System Settings.

Performance data includes aggregate data (ASA, WTA, DBA, VOIP, CXA), alert events, scheduled reports, and system metrics. AppResponse customers want data loss protection in place for this information set. Together, an AppResponse’s configuration and forensic data represent very valuable information.

AppResponse can back up and restore both configuration and performance data to local and remote backup servers. You can initiate a backup manually, or create a schedule to automate when backups are performed, e.g., after working hours or on weekends. However, you must restore to the same software version and hardware.

Faster transaction analysis

What makes AppResponse transaction data invaluable is that it’s never capped. AppResponse power users turn to HD Insights and Transaction Search when they need to analyze all types of network and application behavior, including occasional low-volume communication – for example, a security use case like finding the IP address that generated just a few bytes of traffic six hours ago and went quiet after that.

We continue to make writing and querying faster when accessing critical transaction data (e.g., 1-min. summaries of each TCP connection, detailed summary metrics of each HTTP/S request-response pair, 1-min. summaries of each voice/video media stream, 1-min. summaries of every SQL query-response pair).

We addressed this in a two-part project. The first part was delivered in 11.12 and updated the write-once, read-many data store we use for transaction data to a newer database version. This release delivers the second part by optimizing the structure of the underlying database tables to better exploit data sparseness and speed up highly selective record queries.

Support for VMware ESXi 7.0

For customers who operate private clouds based on VMware ESXi, we have long offered the option of deploying a virtual AppResponse appliance as a guest VM for NPM packet analysis. Like any other OS platform, the ESXi hypervisor evolves over time with newer release versions. In this release, we added support for VMware ESXi version 7.0. We continue to support versions 6.5 and 6.7 but dropped support for ESXi 6.0.

To summarize, Riverbed AppResponse provides the ability to extract valuable network and application performance information using real-time packet analysis, and to do it at peak scale. We continue to enhance this capability by:

  • Improving AppResponse’s built-in intelligence to auto-recognize enterprise-internal web apps,
  • Enabling decryption of TLS 1.3 using our PFS API,
  • Delivering performance improvements for packet write-to-disk and HD Insight transaction queries.


]]>
Windows 11 Transformation Journey – A Seamless Migration With Riverbed | Aternity in Four Easy Steps https://www.riverbed.com/blogs/windows-11-transformation-journey-a-seamless-migration-riverbed/ Wed, 30 Mar 2022 18:00:00 +0000 /?p=17772 Microsoft released Windows 11 to the public on 5th October last year with many exciting new features and security enhancements. Many organizations are planning or are currently executing an enterprise-wide transition to Windows 11. Let’s take some time to get familiar with the technology, what it means for your organization, and how you can plan for the transition. After all, you do not want to risk missing out on the benefits of Windows 11, but you also need to avoid impacting user productivity and business continuity.

Security benefits of Windows 11

A massive push for organizations to enable a remote workforce and fast-tracked digital transformation initiatives have left many vulnerable to cyber-attacks. Fortunately, new features from Windows 11 are raising the bar for security. Microsoft’s latest software adopts the Zero Trust model to protect enterprise customer data and ecosystems. These enhanced security features also mean that there might be a need to upgrade existing PC hardware to run Windows 11. In fact, our State of Digital Experience Q1 2022 report found that one-third of enterprise devices will need to be replaced or upgraded in order to run Windows 11.

Planning your Windows 11 transition

Enterprise-wide Windows 11 upgrades can be a massive undertaking, and if you do not have end-to-end visibility of the entire process and how it impacts your end-users, then your transition can quickly become overwhelming. The key lies in smart planning; using data-driven actionable insights will provide critical support during your transformation journey. Armed with insights from planning to execution to completion stages, your transition will be a lot smoother, cost-efficient and less disruptive.

Whether you choose to wait it out or want to get going already, you will have to upgrade to Windows 11 at some point. However, you can take steps to streamline the transition and make it relatively straightforward. There are specialized tools that can help you gain the visibility you need for a seamless transition, like the Digital Experience Management platform from Riverbed | Aternity.

Our Digital Experience Management platform takes a four-step approach to migration, which helps achieve end-to-end visibility of your transformation program. It also delivers key insights into user experience post-migration.

Riverbed | Aternity’s four-step approach

We are setting a higher bar for endpoint digital experience management when migrating to Windows 11 with Riverbed | Aternity.

Step 1: Plan

We have discussed planning for a Windows 11 transition in a detailed blog post here. To summarize, the focus here is to get a holistic view of your asset inventory and understand how it affects your bottom line. With the help of Riverbed | Aternity, you can establish your organization’s readiness for the upgrade.

  • Identify if the hardware is compatible with Windows 11 minimum system requirements. For example, if Jane Doe from accounting has been using the same laptop since she joined the company six years ago, her machine may not meet Windows 11 requirements. The IT team needs to check each device across business functions to see if it meets the minimum system requirements of Windows 11; if not, it will be recommended for device replacement/upgrade.
  • Establish a baseline to compare user experience with the current Windows 10 device. This baseline will help compare how Jane Doe is adjusting to the new user interface (UI) and if she is able to carry on with her daily job activities without any hiccups.
  • Understand which PCs and business functions are running legacy/non-Windows 11 supported applications. For example, does Jane Doe use Internet Explorer to run an accounting application? Windows 11 does not support Internet Explorer, so Jane will have to use the application in another browser, and we need to understand if the application will be compatible with that new browser.
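As a sketch of the kind of readiness check Step 1 performs: Windows 11’s published minimums include a 64-bit CPU, TPM 2.0, UEFI Secure Boot, 4 GB of RAM, and 64 GB of storage. The helper below is hypothetical (not an Aternity API) and simply shows how an inventory record could be screened against those minimums.

```python
# Windows 11 published minimum requirements (illustrative subset).
MIN_RAM_GB = 4
MIN_STORAGE_GB = 64

def win11_ready(device: dict) -> list:
    """Return the list of requirements a device fails (empty list = ready)."""
    failures = []
    if device.get("ram_gb", 0) < MIN_RAM_GB:
        failures.append("ram")
    if device.get("storage_gb", 0) < MIN_STORAGE_GB:
        failures.append("storage")
    if device.get("tpm_version", 0) < 2.0:        # TPM 2.0 required
        failures.append("tpm")
    if not device.get("secure_boot", False):      # UEFI Secure Boot required
        failures.append("secure_boot")
    if not device.get("cpu_64bit", False):        # 64-bit CPU required
        failures.append("cpu")
    return failures
```

Jane Doe’s six-year-old laptop would likely fail on the TPM check, flagging it for replacement before the rollout begins.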

Step 2: Execute

Once you know what your asset inventory looks like, form a small pilot user group with users pooled from various business functions and upgrade their PCs to Windows 11. Test all the applications used across business functions and monitor the OS performance for users in the pilot group.

  • Replace old PCs with those compatible with Windows 11 and/or upgrade OS on pilot devices.
  • Track the deployment of Windows 11 OS and required drivers on nominated pilot group devices. You can see the device names, usernames, upgrade date for all pilot users and the total number of devices migrated in a snapshot.
  • Run suitable self-updating tasks on the endpoint devices to ensure they’re Windows 11 ready.

Step 3: Validate

Compare User Experience and Device Health between the Windows 11 pilot group and current Windows 10 users. Once validated, roll out Windows 11 to users across business functions in a phased approach. Riverbed | Aternity reports metrics such as Resource Utilization, App Response Time, Stability Index and App Crashes, empowering the IT team to weed out issues at the granular level for every device in the network.
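The validation comparison boils down to checking pilot-group metrics against the Windows 10 baseline established in Step 1. A hypothetical sketch (Aternity computes these comparisons in its dashboards; this is only to show the idea), assuming every metric is “lower is better”:

```python
def regressions(baseline: dict, pilot: dict, tolerance: float = 0.10) -> dict:
    """Flag metrics where the pilot group is worse than baseline by > tolerance.

    Assumes lower-is-better metrics (e.g., response time, crash count) and
    returns {metric: fractional_regression} for anything over the tolerance.
    """
    flagged = {}
    for metric, base in baseline.items():
        value = pilot.get(metric)
        if value is not None and base > 0 and (value - base) / base > tolerance:
            flagged[metric] = round((value - base) / base, 2)
    return flagged
```

An empty result means the pilot group’s experience held up, and the phased rollout can proceed; a non-empty result pinpoints which metric regressed and by how much.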

Step 4: Communicate

Now that you have rolled out Windows 11 to most of the users in your organization, it is time to take stock of the transformation project and share insights with various stakeholders.

  • Share project progress, failures, and successes with management
  • With Riverbed | Aternity, you can also compare your own organization’s Windows 11 user experience with that of your peers globally or within your industry

Why choose Riverbed | Aternity Digital Experience Management (DEM) platform?

The Riverbed | Aternity Digital Experience Management (DEM) platform goes beyond endpoint management to contextualize data across every enterprise endpoint, application and transaction to inform remediation, drive down costs and improve productivity. Riverbed | Aternity has more of what matters to help enterprise users migrate to Windows 11 and beyond:

  • End-user experience monitoring
  • IT service benchmarking
  • Application and desktop performance monitoring
  • User journey intelligence

It will be a mammoth task for enterprise IT teams to migrate devices to Windows 11, but the benefits of the new OS make the effort well worth it. Change can be difficult, but Riverbed | Aternity makes your transformation journey easier.

Before your organization decides to kick off the transition to Windows 11, speak with a Riverbed | Aternity consultant to help you plan your strategy for a seamless migration.

]]>
Monitoring for Country-Specific Traffic https://www.riverbed.com/blogs/monitoring-for-country-specific-traffic/ Wed, 23 Mar 2022 11:30:00 +0000 /?p=17725 As in past years, financially motivated attacks continue to be the most common; likewise, actors categorized as “organized crime” continue to be the top threat actors.1 Most of these attacks come from a handful of countries: China, Russia, Turkey, the United States, etc.2

Riverbed Network Performance Management (NPM) solutions can identify and alert on traffic coming from countries where your organization may not normally do business, e.g., North Korea. However, once this traffic is identified, the IT Operations or SecOps team must determine if that traffic is legitimate or suspicious.

CIDRs & Host Groups screen traffic

Here’s how a financial services company recently started to screen traffic coming from the Russian Federation. They use Riverbed AppResponse, packet-based application analysis, and Riverbed NetProfiler, full-fidelity flow monitoring.

The ITOps team, with the help of their Riverbed SE, started by putting together a list of the CIDR blocks for the Russian Federation, then separating them into 12 Host Groups. Host Groups allow you to manage similar objects together. These 12 Host Groups were added to both AppResponse and NetProfiler.
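The screening logic underneath a Host Group is essentially CIDR membership testing. Here’s an illustrative sketch using Python’s `ipaddress` module; the CIDR blocks shown are placeholders, not the actual Russian Federation list.

```python
import ipaddress

# Illustrative host groups only -- the real list is far longer.
HOST_GROUPS = {
    "RU-Block-01": ["5.8.0.0/21", "5.16.0.0/13"],
    "RU-Block-02": ["31.8.0.0/16"],
}

def match_host_group(ip: str, groups: dict = HOST_GROUPS):
    """Return the name of the first host group whose CIDRs contain ip, else None."""
    addr = ipaddress.ip_address(ip)
    for name, cidrs in groups.items():
        if any(addr in ipaddress.ip_network(cidr) for cidr in cidrs):
            return name
    return None
```

In AppResponse and NetProfiler this mapping happens at ingest time, so every flow and packet is tagged with its group and can be filtered, reported, and alerted on by group name.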

Next, the ITOps team set up monitoring at the port level. Immediately, they started to see traffic from the Russian Federation! Tweaking settings helped determine whether the traffic was suspicious and required further investigation by SecOps. Here are some of the features they used:

  1. Network Monitoring – receiving traffic information from any combination of sources. Aggregating, de-duplicating, and processing traffic data to prepare it for network behavior analytics. Behavior analytics builds profiles of typical network behavior for specified times so it can identify unusual changes that indicate performance or security issues.
  2. Event Detection – analyzing compliance with service policies, performance and availability policies, security policies, and user-defined policies. Assigns each security policy violation event a severity rating number based on the likelihood of being a threat to network performance, availability, or security.
  3. Alert Generation – checking the severity of each network event against a set of user-defined tolerance levels or alerting thresholds. When the severity of an event exceeds a tolerance or alerting threshold, NetProfiler alerts users to the existence of the event by indicating an alert condition and displaying information about the event.
  4. Notification – automatically sending email, SMTP, or SMS alert messages to designated security or operations management personnel or systems.
  5. Event Reporting – saving details of all events that triggered alerts. Event detail reports can be viewed on the NetProfiler user interface or retrieved by remote management systems for analysis.
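Steps 2 through 4 above amount to rating events, comparing them against per-policy tolerance levels, and emitting notifications for violations. A toy sketch of that flow (not NetProfiler’s implementation):

```python
def generate_alerts(events, thresholds):
    """Compare each event's severity to its policy's tolerance level and
    return notification messages for the violations that exceed it."""
    alerts = []
    for event in events:  # e.g. {"policy": "security", "severity": 87, "source": ...}
        limit = thresholds.get(event["policy"], 100)   # default: never alert
        if event["severity"] >= limit:
            alerts.append(f"ALERT [{event['policy']}] severity "
                          f"{event['severity']} from {event['source']}")
    return alerts
```

The design point this illustrates: tolerance levels are user-defined per policy, so the same severity score can trigger an alert for a security policy while staying quiet for a performance policy.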

Setting User-Defined Policies

The next step for this company is to leverage user-defined policies. User-Defined Policies is customizable event detection that lets you configure your own alerts based on hosts, ports, interfaces, and response time.

This financial services company is planning to create policies to alert when traffic from any of the 12 Host Groups hits any sensitive servers or on ports associated with mission-critical applications. User-defined policies will simplify the identification of suspect traffic since only internal employees should be accessing these servers.

Fig. 1. This policy example alerts on non-encrypted connections to/from PCI-regulated servers. The alert identifies the source of insecure connections and creates a virtual firewall between nodes without having to deploy inline devices. Note that thresholds can be set on a variety of parameters.

If you are interested in leveraging these capabilities, check out this video that explains how to create a user-defined policy to proactively monitor high-risk subnets.

If you’d like the Russian Federation CIDR blocks with instructions on how to import them as Host Groups in AppResponse and NetProfiler, see this Knowledge Base article on Riverbed Support.


1  verizon.com/dbir/

2  https://www.govtech.com/security/hacking-top-ten.html

]]>
Accelerating Enterprise Video Delivery https://www.riverbed.com/blogs/accelerating-enterprise-video-delivery/ Tue, 22 Mar 2022 12:35:00 +0000 /?p=17734 Content delivery networks, or CDNs, have been used by content providers for years to deliver high-quality video to people’s homes over an unpredictable public internet. Video consumes a lot of bandwidth, and high-definition audio requires low latency with minimal jitter. So, to deliver great video and clean audio, CDNs deployed content servers in strategic points of presence. That brought the content closer to the customer and solved the problem of delivering a bandwidth-intensive application over the public internet.

Today’s dynamic, hybrid workplace is facing a very similar problem. Enterprise organizations rely on video content — live, recorded, and collaborative — to operate efficiently and effectively. Even the most mission-critical activities now rely on video conferencing and reliable, high-quality video content.

The problem, however, is how much bandwidth video consumes at the local branch office or at someone’s home. Especially in locations where many end-users consume the same content, a network connection can be overwhelmed and the quality of the video stream will suffer. This is where eCDNs (enterprise content delivery networks) shine.

The Riverbed eCDN Accelerator

The Riverbed eCDN Accelerator solution solves this problem by mimicking a CDN’s distribution of content within a local region. Instead of placing a rack of content servers in data centers every 100 square miles, Riverbed eCDN Accelerator uses WebRTC peer-to-peer technology to deliver a single stream to a dedicated computer in one location, which then distributes the content to its local peers.

With this method in place, WAN (wide area network) traffic for video content is reduced by up to 99%, while video delivery is sped up by up to 70%. This means video distribution can scale without needing additional service provider connections or locally installed hardware.
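The arithmetic behind a figure like 99% is simple: one WAN stream per site instead of one per viewer. An illustrative helper (hypothetical, for intuition only):

```python
def wan_stream_reduction(sites: int, viewers_per_site: int) -> float:
    """Fraction of WAN video streams eliminated when each site pulls one
    stream and redistributes it to local peers, versus one stream per viewer."""
    without_ecdn = sites * viewers_per_site   # every viewer crosses the WAN
    with_ecdn = sites                         # one stream per site
    return 1 - with_ecdn / without_ecdn
```

For example, 10 sites with 100 viewers each would cut WAN streams from 1,000 to 10, a 99% reduction.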

End-users working in the same office will immediately benefit from the eCDN peering relationship, and so will end-users working remotely over a VPN connection or when video traffic is being backhauled. In fact, it doesn’t matter where the source of the video content originates. The solution works just as well with video content delivered from the cloud or a SaaS provider.

In the graphic below, notice the difference in the volume of streams between a network using Riverbed’s eCDN Accelerator and one that isn’t.


A cloud-based solution

The eCDN solution is cloud-based with computers peering with each other via a browser or a software agent, not a piece of hardware. This provides several benefits to both IT operations and to the end-user.

First, computers peer with each other via a browser for certain applications, making zero-touch deployment quick and easy. Other applications benefit from an agent, which can also be deployed via a browser, eliminating the need for manual installations or local hardware.

Policy is pushed from the cloud, giving IT centralized control over the environment. From there, IT can manage the ports that are used, manage the cache size on each local computer, filter the bitrate for video streams, and configure location-based eCDN parameters to accommodate security requirements.

Second, for the end-user, there’s nothing to do but enjoy high-quality video. Deployment and ongoing operation happen behind the scenes, so for an end-user consuming recorded or live video content, it’s completely hands-off with instant results.

Video Consumption is on the Rise

Streaming of live video on Microsoft Teams, live virtual events hosted on platforms like ON24, and recorded video-on-demand has ballooned in the last few years. In fact, 82% of all IP traffic is expected to be internet video by the end of this year.

Without a solid enterprise content delivery solution in place, this increase will overwhelm many internet connections and crush the video quality needed to conduct business in today’s world while also impacting other applications running across the same networks.

Riverbed’s eCDN Accelerator is a powerful solution that improves the digital experience for end-users and ensures optimal video delivery.

For more information, reach out to your Riverbed representative and visit riverbed.com to learn more.

]]>
Determining If NetProfiler Is Oversubscribed https://www.riverbed.com/blogs/determining-if-netprofiler-is-oversubscribed/ Fri, 04 Feb 2022 16:30:00 +0000 /?p=17615 When your Riverbed NetProfiler is oversubscribed and out of capacity, it’s not providing a complete view of your network performance. The result is not just gaps in visibility but potentially overall network blindness. NetProfiler can become oversubscribed because of normal network growth – new users, devices, acquisitions, and so on – that adds new flows to your network.

Incomplete, oversubscribed flow collection can lead to an unreliable and misleading understanding of the following types of problems:

  • Bandwidth problems
  • Top Talkers on a specific interface
  • Unauthorized application usage
  • WAN utilization issues
  • Incorrect QoS tagging
  • Wireless bandwidth usage
  • Zero-day threat analysis
  • DNS attack identification
  • Data exfiltration detection
  • Computer worm viruses

Therefore, it is essential to check on your flow rates on a regular basis to ensure you still have the correct number of flows for your current environment.

How to know if NetProfiler is oversubscribed

You can determine your flow status by going to the NetProfiler or Flow Gateway ADMINISTRATION link at the top of the screen, and then clicking on SYSTEM. As shown in Figure 1, the Flow Capacity Usage and Raw Flows Processed/Over Limit will show you how many flows you are processing.

These graphs can show up to about 1 million flows over your limit. Below 1 million excess flows, the graphs show exactly how far over you are; beyond that, the system cannot calculate the excess and simply drops the flows.

Figure 1. The Flow Capacity Usage and Raw Flows Processed/Over Limit graphs will show you how many flows you are processing and if you have exceeded your limit.

NetProfiler requires all flows in order to provide complete, comprehensive, and accurate monitoring and reporting. Once you exceed your flow limit, NetProfiler will not be able to capture all of the relevant data. This will result in incomplete monitoring, and likely inaccurate insights and false results. Without this full-flow data, the risks and impacts are consistent with the risks associated with not having NetProfiler capabilities at all. For these reasons, it’s important to maintain appropriate flow licenses.
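The over-limit accounting described above can be sketched as follows. This is an illustration of the behavior, not NetProfiler code; the 1-million-flow display cap comes from the graph behavior noted earlier.

```python
DISPLAY_CAP = 1_000_000  # graphs stop measuring ~1M flows over the limit

def flow_status(processed: int, licensed: int) -> dict:
    """Summarize flow licensing headroom the way the capacity graphs do."""
    over = max(0, processed - licensed)
    return {
        "oversubscribed": over > 0,
        # Beyond the display cap the excess is no longer measured; flows drop.
        "reported_over": min(over, DISPLAY_CAP),
        "excess_measured": over <= DISPLAY_CAP,
    }
```

The practical takeaway: once `excess_measured` would be false, you no longer even know how badly oversubscribed you are, which is why regular checks against your licensed flow rate matter.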

I’ve reached the max 20M Flows of my legacy NetProfiler, now what?

Riverbed offers a trade-up program to assist you in moving from legacy and xx70 versions of NetProfiler to the latest xx80 versions. Both the virtual and appliance versions of NetProfiler xx80 currently max out at 30 million flows per minute, a 33% increase over legacy solutions. Plus, the xx80 Riverbed Flow Gateway has also increased in performance and scale, supporting up to 6 million flows per minute (FPM).

What are the compelling reasons to renew?

If you are on a NetProfiler that is EOS, you’ve missed out on a lot of performance and scale updates including:

  • An upper limit of 30M FPM for NetProfiler
  • 6M FPM limit for Flow Gateway and a high availability configuration
  • 2000 preferred interfaces
  • Seamless growth path from 10K to 30M FPM by simply adding licenses and Expansion modules
  • 10GB NIC for Flow Gateway and Dispatcher

Some of the features you may have missed: AWS and Azure deployment, AD Connector 3.0, IPv6 support, restricted reporting, integrations with NetIM and Aternity, SAML, Flow Gateway buffering.

Other great features include enhanced cloud flow support, a really cool modern home screen, and new Google-like search.

Benefits of a properly working NetProfiler

Network Performance Management (NPM) can help you keep your network and applications running at peak performance. Here are a few ways Riverbed NetProfiler, as part of the Riverbed Unified Network Performance Management platform, can help:

  1. Proactively prevent slowdowns and outages to minimize downtime and the effect it can have on your users and your business outcomes.
  2. Resolve performance issues faster because you have all the insights you need at your fingertips.
  3. Eliminate network blind spots by providing visibility into on-premises, virtual, and cloud environments in the same unified dashboards.
  4. Improve user experience. Networks are designed to deliver applications to users. User experience should always be measured and monitored to ensure ample performance.
  5. Enhance network security. Network visibility delivers the crucial insights to detect, investigate and mitigate advanced threats that bypass typical preventative measures.
  6. Increase IT collaboration. Disparate IT teams can share common network data to solve performance issues faster and without the finger-pointing.

Ask your Riverbed partner or account team for more information on xx80 trade-ups or how to order more flows. Click here to learn more about Riverbed NetProfiler.

]]>
Facebook and Slack Outages Show That Visibility Is Mission Critical https://www.riverbed.com/blogs/facebook-and-slack-outages-show-e2e-observability-is-mission-critical/ Tue, 21 Dec 2021 21:00:00 +0000 /?p=17477 In early October, two major outages related to DNS configuration changes affected the customer experience for the users of digital giants Slack and Facebook. The failure of Facebook and its WhatsApp and Instagram services extended over several hours and was catastrophic in nature–a routine maintenance change effectively took down Facebook’s global backbone. The company was forced to communicate with its own staff and customers via its rival service Twitter.

The Slack outage was less widespread, affecting only a proportion of corporate users for up to 24 hours, but it too was the result of an erroneous maintenance command. In both incidents, time to recovery was extended, as DNS servers had to be rapidly reconfigured, BGP routes replicated across the internet, and multiple data centres powered up again. In Facebook’s case, this put an extensive strain on power systems.

Mistakes like these do and always will happen–so what’s the best way to mitigate them and minimize outages when they occur? You have two choices: reactive or proactive.

1. Wait for customers to complain (or leave)

The Facebook outage made front-page news because it affected so many individuals and businesses. For many smaller organisations, Facebook and Instagram are their primary digital connection with their customers, often because they’re cheaper and easier to maintain than a standard website, even if they have one. One example is the number of retailers and restaurants offering click-and-collect or delivery during lockdown via their Instagram accounts. Influencers–one of today’s growth businesses–would also have lost revenues.

In some countries, WhatsApp has become the de facto call and SMS service provider—even for government departments. Inability to access it (and its stored contact details) would have put many millions of people out of touch.

Further, Facebook is used for authentication to access other online services, making it the ‘digital front end’ for millions of other businesses. It is also the greatest connector of family and friends in the western world. While a single outage is unlikely to lose the behemoth a large swag of disciples, few digital platforms are as resilient, especially where there are alternatives.

Overall, Facebook’s outage is variously estimated to have cost the company US$60-100 million in ad revenue and wiped US$40 billion off its market capitalisation. Other estimates reckon the outage could have cost the wider economy hundreds of millions each hour.

You are unlikely to have quite as many customers dependent on your digital services as Facebook, but such a catastrophic failure could cost your business considerable revenues. And worse, you could lose customers for good. Banking customers, for example, often operate accounts with multiple providers. If your service goes down, it could be the last straw that will see them walk.

2. Be proactive through early warning and diagnosis

Whether an outage is due to erroneous commands, as in these cases, or due to hacking, you need the tools to pinpoint the precise issues so you can fix them fast. As Facebook’s engineers reported, “All of this happened very fast. And as our engineers worked to figure out what was happening and why, they faced two large obstacles: first, it was not possible to access our data centers through our normal means because their networks were down, and second, the total loss of DNS broke many of the internal tools we’d normally use to investigate and resolve outages like this.”

Border gateway protocol (BGP), for example, can go down in just 90 seconds–or potentially sub-second, depending on how it’s deployed. Using Riverbed’s Unified Network Performance Monitoring platform of integrated online services, you can set synthetics at the packet level to post alarms if any changes occur. NetIM can monitor BGP passively, while AppResponse can look at packets to detect failure. This enables you to be on the front foot–before people complain.

In the 11.12 release of AppResponse, we’ve introduced DNS Reporting and Alerting. AppResponse 11.12 includes brand-new DNS analysis, which previously required inspection using tools like SteelCentral Packet Analyzer or Wireshark. These new insights allow us to identify problems with DNS performance as well as compliance. This means we can identify quickly and accurately which clients are making which queries to which DNS servers, and whether those queries are answered.

The AppResponse DNS policies also allow us to identify when we see changes in our DNS traffic profiles. For example, we can alert on clients making connections to foreign DNS servers as an indicator of compromise. Another example could be increased DNS timeouts or errors.
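A user could approximate the “foreign DNS server” check like this outside the product (a hypothetical sketch; AppResponse implements this as a policy on observed traffic):

```python
# Hypothetical internal resolvers an organization has approved.
APPROVED_RESOLVERS = {"10.0.0.53", "10.0.1.53"}

def foreign_dns_queries(flows):
    """Flag DNS queries sent to resolvers outside the approved set --
    a possible indicator of compromise, as described above."""
    return [f for f in flows
            if f["dst_port"] == 53 and f["server"] not in APPROVED_RESOLVERS]
```

Any hit here is worth a look: legitimate clients should only ever query the organization’s own resolvers.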

These new features ship with AppResponse 11.12 and are included if you are running the ASA feature license.

Here’s an example of the types of metrics you will find with the DNS Servers Insight.

Stronger security makes it harder

As Facebook found, the strong security measures they have in place slowed their ability to bounce back up: “We’ve done extensive work hardening our systems to prevent unauthorized access, and it was interesting to see how that hardening slowed us down as we tried to recover from an outage caused not by malicious activity, but an error of our own making.”

As I wrote in my recent blog, Customer Experience Lessons from the Akamai Outage, major outages highlight the importance of redundancy for essential services like global load balancing. Moreover, they emphasise the need for end-to-end visibility to pinpoint any network, application or third-party service fault within minutes rather than hours. In today’s economy, digital customer experience and business continuity are what it’s all about.

]]>
Log4J Threat Hunting with NetProfiler and AppResponse https://www.riverbed.com/blogs/log4j-threat-hunting-with-netprofiler-and-appresponse/ Wed, 15 Dec 2021 17:29:45 +0000 /?p=17530 A recently discovered vulnerability in the Java logging utility Log4J (CVE-2021-44228)1 enables remote code execution exploits in a variety of common software. This happens through the download and execution of malicious code embedded in the Java utility, sometimes nested in such a way making it difficult to identify.

Compared to a more directed malware campaign, this vulnerability has many potential exploits. However, Microsoft is maintaining a list of IPs believed to be taking advantage of this vulnerability as detected in their Azure service. Keep in mind, though, that because these bad actors are also scanning systems that are not vulnerable, we should be careful to examine positive indicators closely. The scanners are looking for vulnerable systems, and so receiving an incoming communication from them is not as conclusive as an outgoing communication would be.
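The inbound-versus-outbound distinction can be expressed as a simple triage rule. This is an illustrative sketch, not a NetProfiler feature: flows are matched against a threat-feed IP set, and inbound probes (scans) are reported separately from outbound connections (possible callbacks), since the latter is the far stronger signal. The addresses and the "10." internal prefix are made up.

```python
# Hypothetical threat-feed entries and internal address prefix.
THREAT_FEED = {"198.51.100.23", "203.0.113.99"}
INTERNAL_PREFIX = "10."

def triage(flows):
    """flows: iterable of (src_ip, dst_ip) tuples."""
    inbound, outbound = [], []
    for src, dst in flows:
        if src in THREAT_FEED and dst.startswith(INTERNAL_PREFIX):
            inbound.append((src, dst))    # a scan hitting us: weak indicator
        elif src.startswith(INTERNAL_PREFIX) and dst in THREAT_FEED:
            outbound.append((src, dst))   # we called out: strong indicator
    return inbound, outbound

flows = [("198.51.100.23", "10.1.2.3"),   # inbound scan
         ("10.1.2.3", "203.0.113.99"),    # outbound callback
         ("10.1.2.4", "10.1.2.5")]        # internal traffic, ignored
inbound, outbound = triage(flows)
```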

Because this vulnerability can affect so many systems, it’s very important to examine network history immediately. We can use Riverbed NetProfiler to identify flows and hosts affected by this threat both now and in the past, and Riverbed AppResponse to analyze web application traffic to identify the vulnerability in action.

Log4J Threat Hunting with NetProfiler

Using NetProfiler with the Advanced Security Module, we can go back in time to verify exposure to the Log4J vulnerability. NetProfiler uses a frequently updated threat feed as its source for vulnerability information including the specific criteria to run against newly captured and stored flow data.

In the graphic below, notice that we can select Log4Shell Known Exploits from the threat feed and run a report to see if we are impacted, when the threat first appeared on the network, and which hosts are affected.

Notice in the graphic below that we’re running a report for the last week.

Traffic Report showing a week’s worth of traffic. New Connections can be suspicious traffic.

Some experts believe this vulnerability first appeared around December 1, so we can extend our search further back in time. In the graphic below we’re searching back to December 1.

Extending our Traffic Report back to Dec. 1 when Log4J is thought to have started.

Log4J Threat Hunting with AppResponse

The Web Transaction Analyzer module, or WTA, within AppResponse 11 allows us to search for the Log4J vulnerability from an application perspective using byte patterns to search the HTTP header and payload.

In the graphic below, notice that within WTA we can use some custom variables to search within the body, URL, and header of application traffic and report back if any of the conditions are true. Here we’re looking specifically for the JNDI lookup since it’s a key part of the exploit mechanism used by the vulnerability. These conditions could be extended to exclude certain source IPs that are legitimately running vulnerability scans as part of your security posture.
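A simplified stand-in for those byte patterns can be written as a regular expression (the actual WTA configuration differs, so treat this purely as a sketch of the idea). Exploit strings often obfuscate the token, e.g. `${${lower:j}ndi:ldap://...}`, so the heuristic tolerates filler between the letters of "jndi".

```python
import re

# Heuristic pattern: "${" followed by the letters j, n, d, i with limited
# filler between them, case-insensitively. Deliberately loose; a real
# deployment would tune it and exclude authorized scanner IPs.
JNDI_RE = re.compile(r"\$\{.{0,30}?j.{0,20}?n.{0,20}?d.{0,20}?i",
                     re.IGNORECASE | re.DOTALL)

def is_suspicious(http_field: str) -> bool:
    """Check a URL, header value, or body fragment for a JNDI lookup."""
    return JNDI_RE.search(http_field) is not None
```

Because it is intentionally loose, a pattern like this can false-positive on benign `${...}` strings, which is why excluding known-legitimate sources, as mentioned above, matters in practice.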

Using AppResponse WTA to set custom variables to detect certain conditions within the body, URL, and header of application traffic.

Mitigation Steps

After a thorough scan using NetProfiler and AppResponse, also consider these steps to protect systems from the Log4J vulnerability:

  • Enable application-layer firewalls such as AWS WAF, which can significantly reduce risk by protecting cloud-based and some SaaS applications
  • Block outbound LDAP to the public internet
  • For Log4J versions 2.10 and later, set log4j2.formatMsgNoLookups to true
  • And of course, always remember to install the latest patches on both internet-facing and internal systems

The old tech adage “you can’t secure what you can’t see” has never been more poignant than now. Visibility is the cornerstone of network security, and by using powerful visibility tools such as Riverbed NetProfiler and AppResponse, we can gain a deep and wide awareness of everything going on in our network both in real-time and historically.

 

1 https://www.cisa.gov/news/2021/12/11/statement-cisa-director-easterly-log4j-vulnerability

]]>
Riverbed AppResponse Adds Zoom, Teams, DNS and New TCP Analysis https://www.riverbed.com/blogs/riverbed-appresponse-adds-zoom-teams-dns-new-tcp-analysis/ Thu, 09 Dec 2021 13:45:00 +0000 /?p=17455 With the vast bulk of the world’s white-collar workforce working from home during the COVID-19 pandemic, there was an explosion in demand for communication tools like Zoom, Slack, and Microsoft Teams. The recent Riverbed|Aternity Hybrid Work Global Survey of business and IT leaders found that 83% say at least a quarter of their workforce will work remotely at least part of the time even after the pandemic. In fact, 84% believe that hybrid work will have a lasting and positive impact on society and the world.

While most organizations are not fully prepared to deliver a seamless hybrid experience, nearly all have adopted some remote communication and collaboration tools. Despite this, a surprising 31% say they still lack what they need for collaboration and virtual relationship building.

Zoom is the leading communications app with 300 million daily meeting participants, both paid and free,1 while Microsoft Teams hit 250 million monthly active users in July 2021.2

The one problem with these and other collaboration solutions is that they are bandwidth-intensive and often suffer from performance issues, especially when users are working from low-bandwidth home networks.

Zoom and Teams

To solve this problem, Riverbed AppResponse added full visibility into all Zoom and Teams media, voice and video. The AppResponse Unified Communications Analysis (UCA) module can now auto-detect Zoom and Microsoft Teams media streams. Customers can now better support Zoom and Teams audio and video traffic and surface the full complement of quality metrics, such as MOS-CQ, MOS-V, Jitter, packet loss, new channel rate, and more. These details will help IT diagnose call and video quality issues.

"AppResponse

DNS

The Domain Name System (DNS) is like a phonebook for the Internet. It translates a web domain name into an IP address and vice versa. People access websites through domain names, like Riverbed.com or Aternity.com, while web browsers interact through Internet Protocol (IP) addresses. In short, DNS translates domain names to IP addresses so browsers can load Internet resources.
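A minimal illustration of that lookup step: resolve a name to its IP addresses and time how long the resolver takes. This uses the operating system's resolver via `getaddrinfo`; AppResponse observes the same request/response exchange on the wire. The domain in the comment is just an example.

```python
import socket
import time

def timed_lookup(name: str, port: int = 443):
    """Resolve a hostname and return (sorted unique addresses, latency in ms)."""
    start = time.monotonic()
    addrs = sorted({info[4][0] for info in socket.getaddrinfo(name, port)})
    elapsed_ms = (time.monotonic() - start) * 1000
    return addrs, elapsed_ms

# Example (requires network access):
# addrs, ms = timed_lookup("riverbed.com")
```

High values of that latency, or failures from `getaddrinfo`, are exactly the "high DNS latency" and "can't find server" symptoms the new Insights are designed to surface.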

AppResponse 11.12 added three new DNS Insights to the Application Stream Analysis (ASA) Module to help troubleshoot DNS performance and security issues, such as:

  • DNS service outages, i.e., “Can’t find server”
  • High DNS latency or high load times
  • DoS or DDoS attacks that can bring down the service

The new DNS Insights include:

  • DNS Servers—analyzes all DNS servers. The types of metrics on this chart include All DNS Traffic with DNS Timeouts and All Errors, Slowest DNS Servers, DNS Servers with Errors, DNS Servers with Timeouts.
  • DNS Server—shows results for an individual server. The metrics in the chart include DNS Requests & Responses, Response Time TruePlot, DNS Timeouts, DNS Errors, and a new graph called Top Queried Domains.
  • DNS Transactions—helps you understand the performance of individual DNS transactions. The types of metrics you can utilize include DNS Errors, DNS Response Time, Slowest DNS Transactions, DNS Timeouts, Query Names, Client Groups, Client IPs, Server IPs, Opscodes, Query Types, and a GeoMap.
Here’s an example of the types of metrics you will find with the DNS Servers Insight.

TCP Metrics

TCP is the heartbeat of the network; it’s the protocol used by nearly all modern applications. The AppResponse ASA module functions like an MRI for TCP, providing rich detail on TCP-based apps. In fact, ASA calculates more than 60 health and activity metrics for TCP.

This release is taking our TCP analysis to the next level by adding TCP Receive Window, TCP Zero Window, and TCP Out-of-Order to the ASA module. These new TCP metrics enable NetOps users to diagnose these serious problems without having to dive into the packets using Riverbed Packet Analyzer Plus or Wireshark.

  • TCP Receive Window—is the amount of free space in the client’s receive buffer. This field tells the sender how much data can be sent before an acknowledgment is received. If the receiver cannot process data as fast as it arrives, the receive buffer gradually fills and the TCP window shrinks. This alerts the sender that it needs to reduce the amount of data sent, giving the receiver time to clear the buffer.
  • TCP Zero Window—happens when the client says, “I don’t have any available buffer space, stop sending data.” This tells the TCP sender to stop sending data. Typically, this indicates that the network is delivering traffic faster than the receiver can process it. When the client begins to digest the data, it will let the server know to resume sending.
  • TCP Out-of-Order—occurs when a packet has a sequence number lower than the previously received packet. If too many packets arrive out of order, TCP may retransmit them, just as it does for dropped packets. As such, the impact of out-of-order packets can be similar to packet loss.
These are the three new TCP Insight pull-down menu options.
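The three conditions above can be illustrated with a toy analysis over simplified packet records rather than a real capture (AppResponse computes these at the packet level; the field names here are invented for the example).

```python
def analyze_tcp(packets):
    """packets: list of dicts with 'seq' (sequence number) and 'window'
    (receiver-advertised window, in bytes), in arrival order."""
    zero_windows = sum(1 for p in packets if p["window"] == 0)
    # An out-of-order segment has a lower sequence number than its predecessor.
    out_of_order = sum(1 for prev, cur in zip(packets, packets[1:])
                       if cur["seq"] < prev["seq"])
    return {"zero_windows": zero_windows,
            "out_of_order": out_of_order,
            "min_window": min(p["window"] for p in packets)}

stream = [{"seq": 100, "window": 65535},
          {"seq": 200, "window": 1200},   # receive window shrinking
          {"seq": 400, "window": 0},      # zero window: receiver overwhelmed
          {"seq": 300, "window": 5000}]   # out-of-order segment
```

A shrinking window, a zero-window event, and an out-of-order segment each show up clearly even in this tiny stream, which is the kind of signal the new ASA metrics surface without a packet-by-packet dive.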

Other New Features

Other features released in AppResponse 11.12 include:

  • The ability to search TLS Handshake and DNS Transactions, in addition to previously supported Web and Database queries
  • The ability to import/export custom Insights
  • Business Hour profiles for Insights and Navigator, in addition to previously supported Scheduled Reports and Policies
  • Remote out-of-band system access and management for AppResponse xx80 appliances
  • RFC 5425 encrypted Syslog notifications
  • Performance improvements for HD (High-Def) queries

If it wasn’t clear, the theme for this AppResponse release was application and network intelligence—the ability to extract useful network and application performance information via real-time packet analysis. It’s our hope these new features make your life easier and your network safer and higher performing.

Riverbed AppResponse customers with support contracts can download AppResponse version 11.12 for free from the Riverbed Support Site. Otherwise, click here for more information.

 

1 https://www.businessofapps.com/data/zoom-statistics/

2 https://www.zdnet.com/article/microsoft-teams-hits-250-million-monthly-active-user-milestone/

]]>
A High-Performing Hybrid Workplace: Are You Ready or Too Late? Executive Insights from Our Hybrid Work Global Survey https://www.riverbed.com/blogs/high-performing-hybrid-workplace-executive-insights-hybrid-work-global-survey-2021/ Mon, 01 Nov 2021 18:59:32 +0000 /?p=17437 Hybrid work is the new norm and critical to organizational success—but the question is, are you ready to support the hybrid workplace? To assess the benefits and challenges of a hybrid workplace and the role technology plays in enabling or impacting its long-term success, Riverbed | Aternity conducted a global survey across eight countries in September 2021 of nearly 1,500 business decision-makers and IT decision-makers. The findings are eye-opening and a reality check for us all. A full 83% of decision-makers believe 25%+ of their workforce will be hybrid post-pandemic and 42% say 50%+ will be hybrid.

We get it: hybrid work is important and here to stay. But shifting to a high-performing hybrid work model is challenging and elusive for most, with only 32% believing they are completely prepared to support the shift to hybrid work. What do we do? Address both human- and technology-related barriers NOW!

As the survey noted, 80% of business decision-makers believe technology disruptions negatively affect them, their teams, and employee job satisfaction. To gain the maximum benefits from hybrid work, organizations must invest in technologies and modernize their IT environment. Under-investing in technologies that ensure IT services are performing and secure can have severe consequences to business success and the employee experience.

Now the good news. More than 90% of respondents agree hybrid work helps with recruiting talent and competitiveness and 84% agree hybrid work will have a lasting and positive impact on society and the world. So, they are investing in critical capabilities such as end-to-end visibility, cybersecurity and acceleration technologies to enable long-term success. This is important, as the need for end-to-end visibility and actionable insights intensifies in a hybrid workplace. And when networks, digital services and SaaS applications operate at peak performance, so do employees and the business.

Are you ready? Take a look at the full Riverbed | Aternity Hybrid Work Global Survey 2021 and discover key executive insights and investment areas to create a high-performing hybrid workplace.

For the success of your business, happiness of your employees and satisfaction of your customers, do it today, before it is too late.

]]>
Visibility and Performance from the Client to the Cloud: Riverbed at Microsoft Ignite 2021 https://www.riverbed.com/blogs/visibility-performance-client-cloud-riverbed-microsoft-ignite-2021/ Wed, 27 Oct 2021 21:00:00 +0000 /?p=17422 Today, we can work from almost anywhere. Sometimes we’re at home, sometimes we’re at the office, and other times we’re in a coffee shop or an airport. This poses two significant problems for IT departments:

  • First, how can we have visibility into a network we don’t manage?
  • Second, how can we ensure peak application performance over a network we don’t own?

Riverbed’s visibility and performance solutions address these modern problems head-on to provide end-to-end visibility from the client to the cloud as well as optimal application performance no matter where people connect to do their work.

The two sides of a conversation are no longer static. In the past, end-users were grouped together at an office managed by the IT department, and applications were down the hall in a server room or in a data center owned by the organization. That’s not at all the case anymore. Now, end-users could be virtually anywhere there’s an internet connection, and they’re accessing applications that live in the cloud.

We’ll be addressing these two problems during three short demonstrations prepared for Microsoft Ignite 2021. Read on for details on what we’ll be covering in each of the demos below:

Aternity provides remote worker visibility

In the first scenario, we’ll see how Aternity gives us historical visibility into client computers when users are working remotely over a VPN. Aternity’s VPN usage and trends dashboards provide very specific information on how end-users are connecting, over which VPNs, and how those connections perform.

Application Acceleration ensures peak performance of on-premises applications

In our second scenario, we’ll see Riverbed Application Acceleration in action as it dramatically improves the performance of an on-premises application. Because it’s an agent-based solution, we can provide the benefits of application acceleration regardless of where end-users are located.

SaaS Accelerator optimizes SaaS performance for remote workers

In our last scenario, we’ll see a real-time demonstration of how Riverbed SaaS Accelerator optimizes Microsoft Azure and SharePoint traffic, again regardless of where end-users work. In this case, our end-users are remote, and they’re accessing applications in the cloud, delivered by our Microsoft SaaS provider. Riverbed SaaS Accelerator was designed for this very scenario – ensuring top performance of SaaS applications for end-users at the office or working remotely.

For today’s hybrid workforce, a simple internet connection just isn’t enough anymore. We don’t manage the network many of our end users are connecting to, but we still need the visibility and application performance we had when everyone worked at the office. As the workplace continues to be in a state of flux, our visibility and performance solutions help to keep your end users and applications productive from anywhere.

Check out Riverbed’s virtual booth at Microsoft Ignite 2021, where you can watch our demo videos and learn more about our solutions. To register, visit the Microsoft Ignite Registration page here.

]]>
Visibility, Actionable Insights and Performance for the Modern, Hybrid Enterprise https://www.riverbed.com/blogs/visibility-actionable-insights-performance-modern-hybrid-enterprise/ Mon, 04 Oct 2021 14:35:50 +0000 /?p=17410 This is an exciting week, as we host the Riverbed Global User Conference with thousands from both the Riverbed and Aternity user communities joining us at our annual conference.

As we look forward, the New Horizon is upon us—it’s digital-first and hybrid by design, with cloud, SaaS and legacy technologies working together and employees collaborating and engaging with customers anywhere, anytime on any device.

To achieve the needs of today’s enterprise, businesses and customers are demanding IT organizations take the following steps to successfully move forward:

  • Embrace a hybrid culture
  • Modernize IT
  • Leverage end-to-end visibility to improve security postures

Embrace a hybrid culture

Embrace a hybrid culture: hybrid networks, hybrid workplace and hybrid workstyles. There is an acceleration of network and application usage by users internal and external to every organization. IT must deliver seamless virtual and physical experiences that are consistent, reliable and secure for all employees and customers, regardless of when and where they work or how they choose to connect.

Modernize IT

Digital acceleration requires IT modernization and the use of public and private cloud infrastructure. The challenge is how to combine legacy environments with new cloud infrastructure and validate the efficacy of this digital transformation. With the goal of creating modern, hybrid cloud environments, you must overcome the complexity of operating resources on-premises, in the cloud and at the edge. This requires investing in technologies that give you end-to-end visibility and control—empowering your teams to deliver rock-solid, secure performance and digital experience.

Leverage end-to-end visibility to secure performance and digital experience

There have never been greater challenges for IT than the current complexity of the network and the aggressiveness of cyberattacks. IT teams must gain full visibility and control over the security and performance of their modern, hybrid environment. To accomplish this, organizations must capture full-fidelity data—not samples—across networks, apps and end-users, which is exactly what our visibility solutions enable.

End-to-end visibility, and the rich, broad set of data it provides, is more important than ever in the modern hybrid enterprise for ensuring productivity, end-user experience, high-quality digital experiences and security. But this treasure trove of data is most valuable when analyzed in context to deliver actionable insights to the many stakeholders driving organizational goals—transforming IT operators into technology leaders who connect insights to business outcomes.

Our vision is to deliver actionable insights that extend from the technology infrastructure through the network all the way to the customer to protect and extract the value behind every click.

To help you maximize this transition, we’re hyper-focused on bringing unique value in two critical areas: Network Performance and Acceleration, and End-to-End Visibility.

Network performance and acceleration

Riverbed | Aternity application acceleration and WAN optimization technologies overcome the effects of latency, bandwidth saturation and network contention to ensure the fastest, most reliable delivery of any application—including popular SaaS applications—to any user, regardless of location or network type. Delivering performance is vital in a hybrid environment. Today there is a race to the edge. The agility and efficiencies gained by investments in cloud and SaaS won’t matter if your mobile and remote workforce productivity suffers as a result of unacceptable application performance. Networks must optimize to the edge and accelerate applications; on-prem to the edge, cloud to the edge and application to the edge while securing edge access.

End-to-end visibility

Today, there is an abundance of data being generated. Many organizations are burdened with mountains of siloed data from disparate monitoring tools that are difficult to normalize and interpret for immediate action. More digital devices and sensors, plus broader, more complex networks, mean more data sources. Decision-makers are being inundated by data and alerts while still lacking business intelligence and actionable insights—needs that are even more pressing with hybrid networks, cloud and dispersed workforces. However, pulling all that siloed data from the user, the application and the network up into the cloud allows you to contextualize it and deliver actionable insights.

We recently made a strategic decision to bring more closely together the best-in-breed assets of Riverbed and Aternity (previously a division of Riverbed) to bring to our customers the most comprehensive end-to-end visibility solution in the industry. This is powerful and brings together full-fidelity visibility that will enable us to deliver unified observability of your entire digital ecosystem: application, network, servers, cloud, devices.

If you are an organization that is seeking to Optimize Performance—Maximize Productivity—Reduce Risk—Eliminate Waste—Improve Customer Experience—you need actionable insights that extend from the technology infrastructure through the network all the way to the customer to protect the value behind every click. Technology organizations, especially IT, become invaluable when they can gather information that can be easily understood and used to make faster, better, more accurate decisions—fueling innovation, which drives the business and organizational performance. IT can be an accelerator and not an inhibitor to your organization’s productivity.

Unified observability provides a single source of truth for your data, delivering actionable insights that transform IT operators into technology leaders who can drive value for the business by delivering:

  • Frictionless Performance
  • Unyielding Productivity and Efficiency
  • Seamless business continuity

We are here to help our customers and the market prepare for this new horizon that is hybrid and all about digital experience and performance. Join us this week at the Riverbed Global User Conference to learn more about our strategy, vision and how you can master the New Horizon.

 

]]>
Riverbed at Networking Field Day 26: Demonstrating End-to-End Visibility from the Client to the Cloud https://www.riverbed.com/blogs/demonstrating-visibility-client-to-the-cloud/ Thu, 30 Sep 2021 15:41:00 +0000 /?p=17375 Riverbed has presented at Networking Field Day a bunch of times, but for the most recent event, we took a different approach than usual. We wanted to show a real example of how Riverbed’s solution provides both deep and wide visibility from the client to the cloud.

Rather than present 100 PowerPoint slides, we walked through troubleshooting an actual application performance problem. We still had a slide here and there to introduce the tool we’d use in that segment, but other than that we wanted our presentation to be as much demo as possible.

We built an environment of real client computers running over a real SD-WAN to real web servers in three AWS regions. We connected everything to our SQL backend, and we stood up internal and external computers running internal and public DNS. And to set the stage for our presentation, we purposefully caused poor application performance of our demo web application.

Level 1 helpdesk

I started with Portal, similar to how a level 1 helpdesk person would. We immediately saw a problem with our AWS East region and no indications that there was a problem with our SD-WAN. So, just like in an actual troubleshooting workflow, I escalated the ticket to the next engineer.

Visibility from the client perspective

Jon Hodgson, VP of Product Marketing at Aternity, analyzed the client-side with Aternity. Aternity uses agents installed locally on endpoints, whether those be workstations, mobile devices, servers, or even containers. Jon used the Aternity dashboard and DXI, or the Digital Experience Index, to confirm poor application performance on all computers, but he also discovered an unauthorized crypto miner on three machines.

Investigating a security breach

This was a security breach, so it was time to escalate to John Murphy, Technical Director at Riverbed, who played the role of a security engineer. John used NetProfiler to dig into the crypto miner application flows to determine where they were going, when they started, and what else on our network was infected. We believe that visibility is the foundation for robust network security, so to us it’s only natural to incorporate automated security investigation functions into our flow analyzer.

Though John got some great info in terms of the breach, he didn’t find the root cause of our application performance problem. So he escalated the ticket to the network team to see if there was a problem with the network itself.

Escalating to the network team

Brandon Carroll, Director of Technical Evangelists, used NetIM to look at the path in between clients and AWS. SD-WAN gateways looked healthy, core switches looked fine, and all our regions showed green in the dashboard. It was time to get more granular, so Brandon introduced Riverbed’s synthetic testing tool, built right into NetIM.

Several tests were already running – in this case, HTTP tests which monitored successes, failures, and response times to our web servers. The metrics didn’t look good. Response times were high, and success rates were around 80%. And using some synthetic monitoring tests he created on the fly, he began to see strange DNS issues.

With this red flag, Brandon escalated the ticket to our last engineer, Vince Berk, CTO at Riverbed. Vince used AppResponse to analyze the specific TCP connections between our clients, DNS servers, and web servers.

Digging deep to find the root cause

AppResponse is a powerful analytics tool. It gives us the macro view of how applications are doing using visualizations of server response time, retransmission delay, connection failure and setup time, and an entire host of metrics that can be looked at individually or taken together as the application’s User Response Time. And since AppResponse gathers every single packet we throw at it, it’s also a full-fidelity visibility tool down to the most granular micro level.

And that’s exactly how Vince used AppResponse. He analyzed TCP flows, looked at individual packets, and ultimately found that DNS wasn’t load-balancing but was instead pointing all requests to the AWS East region. All this unexpected traffic overwhelmed our AWS East web server which negatively affected the performance of our application.

Remember that Portal is our macro view and usually our first step in troubleshooting; with the right dashboards in place, the helpdesk might have spotted this root cause right away.

You can visit our NFD26 presentation, to see each of our visibility tools used independently to analyze different pieces of the puzzle.

Riverbed’s end-to-end visibility solution operates at the macro level to provide high-level metrics of application performance, but when it’s time to roll up your sleeves and get into the weeds, our tools provide the depth and breadth of end-to-end visibility at the micro level from the client, through the network, and to the cloud.

Visit Tech Field Day’s event page to watch our entire presentation at Networking Field Day 26, and visit the Riverbed Community to join in on the discussion!

 

 

]]>
Riverbed NetIM Revamps Alerting https://www.riverbed.com/blogs/riverbed-netim-revamps-alerting/ Wed, 22 Sep 2021 15:30:00 +0000 /?p=17338 If I had to pick one theme for the Riverbed NetIM infrastructure monitoring release (version 2.4), it would have to be new and improved alerting. We’ve reimagined the alerts page, adding the notion of active alerts. NetIM also started on the journey toward alert suppression with Site-based Gateway suppression.

Tangential to the topic of alerting, NetIM added a new Synthetic Test Object View Page and an IP SLA Views Page. The new Synthetic Test Object View page has four tabs that show results, alerts, browse configurations, and metrics. And, the IP SLA Views page allows you to view, navigate and search through all the IP SLA test results you’ve collected via polling.

This is a big and exciting release, so let’s dig into details and explore the ins and outs.

Re-imagined alerts page

The NetIM alerts page and alerts banner have been reimagined from the backend and the frontend. What we are providing now is a view and count of what is in active threshold violation. This is the “right now” view. It’s not time-based. We also provide aggregation of the counts, views into the counts, and the ability to filter the active alerts and the counts. Note that the time-based Legacy Alerts page is still available.

The alerts page is organized into three sections. The top section is the alert banner that aggregates alerts in three ways:

  • Alert Counts by Severity
  • Affected Objects currently in Alert
  • Count of Alert Profiles that are triggering active threshold violations

We have multiple tabs that you can use to slice and dice and view the alerts that are in an active state, for example, Alert Counts by Object Type, Metric Class and Metric Name. You can also aggregate and view Affected Objects in Alert and Affected Geographic Region/Country. Another view NetIM provides is Alert Count by Alert Profile. This provides you with information on which of your defined alerts are causing the devices or objects to be in alert at that time.

There are lots of features in the Active Alerts view. The Active Alert Table has filtering. You can launch a Quick View of the metric. You can search and perform grouping within the table. You can customize the columns per user, and you can download the entire table to CSV.

NetIM also gives you two historical alert views so you can see when things went into alert first and when you had the most things in alert.

The new Alerts Manager page reimagines how alerts are handled in NetIM 2.4.

Alert suppression

Alerts can be suppressed to avoid generating too many alert entries, or too many non-actionable alerts, when the triggering condition occurs often. Site-based gateway alert suppression, for example, suppresses all related device and interface alerts when the gateway is down, so you only receive the one gateway alert. Once the site gateway is fixed, most, if not all, of the device and interface alerts should clear. And you don’t have all these extraneous alerts hiding the true issue.

To put it another way: if all configured site gateways are down, then all notifications for all devices & interfaces configured for that site are suppressed. If at least one gateway device is up for a site, then all configured notifications are sent.
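The suppression rule above can be sketched as a small predicate. This is an illustrative Python sketch of the described logic, not NetIM’s actual implementation; the function name and data shape are mine:

```python
def suppress_site_notifications(gateway_status):
    """Decide whether to suppress device/interface notifications for a site.

    gateway_status: dict mapping each configured site gateway to "up"/"down".
    Suppress only when the site has gateways configured and ALL are down.
    """
    if not gateway_status:  # no gateways configured for the site: never suppress
        return False
    return all(status == "down" for status in gateway_status.values())

# One gateway still up: notifications are sent as usual.
print(suppress_site_notifications({"gw1": "down", "gw2": "up"}))    # False
# Every gateway down: only the gateway alert is raised; the rest are suppressed.
print(suppress_site_notifications({"gw1": "down", "gw2": "down"}))  # True
```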

You can configure Site Gateway suppression by going to Settings/Organize. You must also configure the corresponding Notification setting.

IP SLA Views page

NetIM still supports Cisco and Juniper IP SLA tests, with polling configured at the device level. What’s changed is the IP SLA Views page and the IP SLA tabs for the source and destination devices of each test, as well as for the associated site and group. The Views page is launched from the ‘More’ menu item and allows you to view and navigate through all the IP SLA tests in your managed network. It supports search, filtering, a Metric Quick View, and drill-down to the source and target devices and to the site. Additionally, in the section below the table, you can view Top-N tests by various metrics.

NetIM 2.4 adds an IP SLA Views page so you can easily view all your test results in one place.

You can now view IP SLA tests scoped to a device, site, or group. Finally, we added a dedicated page for each IP SLA test that provides the test configuration and the test metrics.

Synthetic Test Object View page

The Synthetic Test Object View page has four tabs that show at-a-glance results, browse configurations, associated alarms, and metrics. In addition, the Synthetic Test TCP Port and other configuration properties are now available within Portal.

The Synthetic Test Object View page shows at-a-glance results, browser configs, associated alarms and metrics.

Reporting enhancements

NetIM now supports business hours, i.e., multiple discontinuous time periods such as Monday to Friday, 9:00 a.m. to 5:00 p.m. In Performance Summary Reports, you can select All Hours, Business Hours, or Non-Business Hours to filter the timeframe of any report.
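As a rough illustration of how such a business-hours filter classifies samples, here is a minimal sketch. The 9-to-5, Monday-to-Friday window is just the example above, and this code is mine, not NetIM’s:

```python
from datetime import datetime

def in_business_hours(ts: datetime) -> bool:
    # Monday (0) through Friday (4), 9:00 a.m. up to (not including) 5:00 p.m.
    return ts.weekday() < 5 and 9 <= ts.hour < 17

print(in_business_hours(datetime(2021, 9, 22, 10, 30)))  # Wednesday 10:30 -> True
print(in_business_hours(datetime(2021, 9, 26, 2, 0)))    # Sunday 02:00   -> False
```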

The Performance Summary Report now includes Component Type support, expanding reporting from the core base objects to their components. In the report form, you select the Component Type from the drop-down list, then the relevant Metric Class for that Component Type, and finally the relevant metrics.

AWS C2S

NetIM supports AWS C2S, extremely secure cloud computing for the U.S. Intelligence Community. The AWS Secret Region can operate workloads up to the Secret U.S. security classification level. Cloud security at AWS is the highest priority. AWS customers benefit from data center and network architecture built to meet the requirements of the most security-sensitive organizations.

Out-of-the-box metrics

This release added new out-of-the-box metrics, including a slew of Wireless LAN Controller metrics, F5 Load Balancer System Throughput, Group Status, and, of course, Site Gateways Status.

To summarize, NetIM 2.4 is a vital release that totally revamps how NetIM manages alerting. It also takes a major step forward in managing and reporting on synthetic and IP SLA tests, among an array of other updates.

Existing Riverbed NetIM customers with support contracts can download version 2.4 from the Riverbed Support Site. Customers running NetIM 1.x and NetCollector customers can easily upgrade to NetIM. Just ask your account manager for details.

]]>
Riverbed NetProfiler: Easier to Use for More Users https://www.riverbed.com/blogs/riverbed-netprofiler-easier-use-more-users/ https://www.riverbed.com/blogs/riverbed-netprofiler-easier-use-more-users/#comments Tue, 07 Sep 2021 15:30:00 +0000 /?p=17206 New features provide a modern UI, free-form search and security improvements

Riverbed NetProfiler recently introduced new features that enrich information sharing and simplify its UI with a new home screen and free-form search. It also improved security and enhanced cloud visibility, supporting native Azure NSG Flow Logs and augmenting support for AWS VPC Flow Logs (learn more about these updates here).

The theme of this NetProfiler release is to make it easier to use for new users, helpdesk staff, tier 1 and 2 support users, and even users of other Riverbed NPM solutions. By simplifying and modernizing the user interface and menus, and making the look and feel more consistent with the rest of the Riverbed NPM product line, we want more users to be able to use it more of the time.

New Home Screen  

The Network and Applications Overview insight is the new home screen. It helps new or infrequent users quickly understand how the network and applications are performing, what issues need attention, and how issues are trending. Users can easily search or contextually drill deeper into the data. 

These at-a-glance performance summaries are customizable on a per-user basis. You can toggle between last-hour, last-day, and last-week timeframes, and the insight loads quickly, ensuring fast responses to performance queries.

New NetProfiler home screen simplifies troubleshooting for NetOps and SecOps users.

The Network and Applications Overview insight consists of four widgets: 

  1. Summary widget has high-level network statistics and counts with optional comparison timeframes and trends. Metrics are configurable via Column Chooser and you can edit the appearance of the Summary widget. 
  2. Top Talkers Sankey widget shows top hosts and what apps they are using on the network. It displays both traffic flows and volume, which is shown proportionally through the width of the arrows. You can choose Host to Application mapping or the reverse, Application to Host. Hover over any flow for the metric value and any flow details. 
  3. Traffic Volume widget displays traffic in a time series with a time comparison using the same timeframe as the Summary widget.  
  4. Cards widgets—There are six card widget slots that are individually configurable: 
    • Watched card allows for watching up to three different objects per widget for a select set of metrics.
    • Alerts cards display alert counts for different types of NetProfiler alerts, e.g., performance or security alerts.
    • New Hosts, New Applications, New Ports show the top objects that were not previously seen in NetProfiler. 
    • Hosts, Interfaces, Ports, Applications show top objects with a select set of metrics; these cards offer a launching pad for deeper drill-down.

Free-form search 

The feature that will change users’ lives the most is the new Google-like search. Comparable to AppResponse search, the search bar sits right on the banner. With this new search, you can look up an IP address or an interface without having to know where that information lives in NetProfiler or having prior knowledge of specific NetProfiler workflows. The Search Results page shows you not only a list of relevant reporting queries and links but the definitions too.

This free-form search feature uses type-ahead and autocomplete to show relevant suggestions. Tabs allow you to limit the results to a particular object type. Providing multiple results in a tabbed format helps you quickly find what you are searching for.  

NetProfiler Tabbed Search Results

Now you can also use the search field to look for a Host IP, CIDR or wildcard. A Host Data Search will provide a Host Information Report in the Search Results if data is found for that host. 

Improved security 

TLS 1.3 is the newest version of Transport Layer Security and provides reliable encryption for data sent over the internet. TLS 1.3 dropped support for older, less secure cryptographic features and is faster and more secure than TLS 1.2, among other improvements. One of the changes that makes TLS 1.3 faster is an updated handshake: TLS 1.3 handshakes require only one round trip (back-and-forth communication) instead of two, shortening the process by a few milliseconds. As a result, it’s quickly becoming the latest standard for HTTPS encryption.

NetProfiler 10.20 now supports TLS 1.3 for its services, including syslogs. Out of the box, new systems are installed with a minimum of TLS 1.2 and 2048-bit cipher certificates. TLS allows client/server applications to communicate over the Internet in a way that is designed to prevent eavesdropping, tampering, and message forgery.
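For example, a client connecting to such a service can enforce the same TLS floor on its side. This sketch uses Python’s standard ssl module and is illustrative only, not part of NetProfiler:

```python
import ssl

# Build a client-side context that refuses anything older than TLS 1.2,
# mirroring the "minimum of TLS 1.2" default described above.
ctx = ssl.create_default_context()
ctx.minimum_version = ssl.TLSVersion.TLSv1_2

# Where the underlying OpenSSL supports it, TLS 1.3 is negotiated
# automatically; no extra configuration is needed.
assert ctx.minimum_version == ssl.TLSVersion.TLSv1_2
```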

NetProfiler Simplifies 

To summarize, this release simplifies NetProfiler’s user interface and how you interact with it. It aligns more closely with the look and feel of the Riverbed NPM product family, especially AppResponse, so you can more easily switch between tools. It provides new and improved charting and graphics, and simplifies where possible to better serve helpdesk and tier 1 and 2 support users, while still serving Riverbed’s traditional power users. To learn about NetProfiler’s cloud updates, check out my blog, “Riverbed NPM Enhances Cloud Visibility.”

NetProfiler customers with a current support contract can download version 10.20 from the Riverbed Support site. Otherwise, click here for more information.  

]]>
https://www.riverbed.com/blogs/riverbed-netprofiler-easier-use-more-users/feed/ 1
Riverbed NPM Enhances Cloud Visibility with Support for Azure NSG Flow Logs https://www.riverbed.com/blogs/riverbed-npm-enhances-cloud-visibility-supports-azure-nsg-flow-logs/ Wed, 01 Sep 2021 12:31:13 +0000 /?p=17196 Cloud adoption was expanding rapidly even before COVID-19. During, and even after the pandemic, cloud plans and adoption increased even faster to adapt to work-from-home needs and to increase resiliency.  

Multi-cloud continues to be the dominant cloud strategy, implemented by more than three-quarters (76%) of organizations.1 Analyst firm ESG defines multi-cloud as using more than one IaaS provider. Also, the use of infrastructure as a service (IaaS) has almost doubled in the last five years, from 42% in 2017 to 78% in 2021.2

So clearly, today’s new normal is multi-cloud and hybrid networks, with an almost endless array of cloud-based business applications and workloads. As a result, enterprises are grappling with the unpredictable performance of cloud workloads and its impact on overall business productivity. Moreover, mapping all the relationships across apps, hardware, and networking devices for each IT-delivered service is notoriously difficult, especially in a rapidly evolving cloud environment. Therefore, it’s no surprise that 51% of organizations cite understanding app dependencies as the top cloud migration challenge. Further, 45% view the ability to assess on-premises vs. cloud costs as a top challenge.3

Support for Azure NSG Flow Logs 

This release of Riverbed NetProfiler (v10.20) does its part to jump on the cloud bandwagon and to address some of these challenges. It now supports the ingestion of Azure NSG Flow Logs, the native mechanism of flow generation offered by the Azure platform. Azure NSG Flow records are collected and exported to our Azure Function. 

Using this Azure flow data, NetProfiler provides two specific Azure cloud reports:  

  • Azure NSG Flow Information  
  • Azure Billable Data Transfer 

The Azure NSG Flow Information Report provides rich visibility into usage in the cloud. It shows applications, hosts, and conversations by VNETs, Regions, and Availability Zones. Most importantly, it can map any application relationships across the network for any service, addressing that top concern. NetProfiler’s extensive traffic reporting can also be used to report on and to study Azure NSG Flow log data. 

Azure NSG Information Report
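Under the hood, each NSG flow log record carries comma-separated flow tuples (timestamp, source/destination IP and port, protocol, direction, and allow/deny decision). As a rough sketch of the kind of parsing a collector must do, assuming the version 1 tuple layout; the field names and sample values are mine:

```python
from collections import namedtuple

# Field names are illustrative; the tuple order follows the NSG v1 layout:
# timestamp, src IP, dst IP, src port, dst port, protocol (T/U),
# direction (I/O for inbound/outbound), decision (A/D for allow/deny).
FlowTuple = namedtuple(
    "FlowTuple",
    "timestamp src_ip dst_ip src_port dst_port protocol direction decision",
)

def parse_flow_tuple(raw: str) -> FlowTuple:
    return FlowTuple(*raw.split(",")[:8])

flow = parse_flow_tuple("1625097600,10.0.0.4,10.1.0.5,44321,443,T,O,A")
print(flow.direction, flow.decision)  # O A  (an allowed outbound flow)
```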

On the other hand, the Azure Billable Data Transfer report helps you understand where cloud costs are incurred so you can make better plans and decisions to minimize them. It provides visibility into traffic volumes by Azure pricing policies. For example, it lets you know how much traffic is egressing the cloud – the most expensive type of cloud data – versus how much is traversing VNETs, the next tier of pricing. Knowing how traffic flows across VNETs, regions, and cloud egress also helps determine whether services and their dependencies are efficiently deployed, or whether there are more efficiencies to be had. By placing different services in the same VNET or the same region, you gain pricing and latency efficiencies.

Azure NSG Billable Data Transfer Report

Together these reports help answer the tough questions: 

  • What apps are running in the cloud? 
  • How’s the cloud network performing? 
  • Who’s talking to whom? 
  • How and where is traffic flowing through the cloud?  
  • Which VNETs, Regions, and Availability Zones are experiencing the most traffic? 
  • Are apps and services efficiently deployed? 
  • Is any traffic leaving the cloud? 
  • Where are you incurring costs? And how can you save money?

The new Azure reports are located at Reports->All Reports->Cloud Reports. Except for the Azure vs AWS terminology differences, the reports are similar to their AWS counterparts. 

AWS updates 

In NetProfiler 10.14 (August 2018) we introduced AWS VPC Flow Log support. It required customers to manually configure and maintain AWS hostgroups (Region/AZ/VPC) to run the AWS visibility reports, which could be a laborious and error-prone process.

With recent improvements made by AWS to their AWS VPC Flow logs, NetProfiler utilizes those improvements to automate the groupings. NetProfiler polls the AWS Management Console for the metadata and populates the corresponding AWS hostgroup definitions. However, there are two requirements for this polling to work: 

  • It requires outbound Internet access from NetProfiler to your AWS management console. 
  • And, you cannot have overlapping CIDR definitions. 
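Conceptually, the automated grouping maps each host into a Region/AZ/VPC hostgroup by its address, which is also why overlapping CIDRs are disallowed: a host would otherwise match more than one group. A hypothetical sketch using Python’s standard ipaddress module; the CIDRs and group names are invented:

```python
import ipaddress

# Hypothetical metadata of the kind polled from the AWS Management Console:
# subnet CIDR -> (Region, Availability Zone, VPC) hostgroup.
cidr_groups = {
    "10.0.0.0/16": ("us-east-1", "us-east-1a", "vpc-app"),
    "10.1.0.0/16": ("eu-west-1", "eu-west-1b", "vpc-db"),
}
networks = {ipaddress.ip_network(c): g for c, g in cidr_groups.items()}

# The no-overlap requirement: any two configured CIDRs must be disjoint.
nets = list(networks)
assert not any(a.overlaps(b) for i, a in enumerate(nets) for b in nets[i + 1:])

def hostgroup_for(ip: str):
    addr = ipaddress.ip_address(ip)
    for net, group in networks.items():
        if addr in net:
            return group
    return None  # host falls outside every configured hostgroup

print(hostgroup_for("10.1.2.3"))  # ('eu-west-1', 'eu-west-1b', 'vpc-db')
```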

Lastly, by popular demand, we added a new widget called “Billable Data Transfer between VPCs in the same Region” to the AWS Billable Data Transfer Report, along with a comparable version for Azure. I think the title makes it pretty clear what data this widget provides!

To sum it up, NetProfiler 10.20 is an important release. In addition to these cloud enhancements, we made a slew of other updates, including a new easy-to-use homepage, free-form search, security updates, and more.

If you’re an existing customer, you can download the latest version of NetProfiler from the Riverbed Support Site. If you are new to NetProfiler, contact Riverbed sales.

 1 ESG Master Survey Results, Technology Spending Intentions Survey, March 2019. 

2 ESG Master Survey Results, Technology Spending Intentions Survey, Dec 2020.

3 Flexera 

]]>
Using the Network to Contain Supply Chain Attacks https://www.riverbed.com/blogs/using-network-contain-supply-chain-attacks/ Fri, 06 Aug 2021 12:30:00 +0000 /?p=17188 These days we’re hearing more and more about ‘supply chain attacks’. That’s when a component of an application has a weakness with the potential to make the entire system or service vulnerable.

Consider a soft drinks manufacturer. If a competitor wanted to damage its market share, rather than targeting the bottling plant, it would be easier to target the supplier making the bottle caps. Loss of fizz, unhappy customers switching to the ‘other’ cola—all achieved without needing to hack highly guarded systems and the ‘secret recipe’.

Lurking in Linux in plain sight

On 10 June 2021, a security specialist reported a serious bug that had been sitting in Linux code for seven years. Located in polkit, an ‘under the hood’ system service used by default in many Linux distributions, it effectively allows an unprivileged user to assume administration rights. It’s also quite easy to execute with just a few command lines.

Obviously, the first thing for any organization using the relevant releases is to close this dangerous breach with a patch. But, given it took the extensive Open-Source community seven years to spot, how can you know if and when it was exploited on your own systems?

It’s just one example of the potential vulnerabilities you may not be aware of within your application infrastructure—and it won’t be the last. Many applications encompass a thousand or more components, and you can’t possibly test them all against your own security posture. Components are built by product managers and developers of varying skill, so there is plenty of scope for human error, or for someone deliberately creating a back door in a service or software product made up of multiple components. Until a new zero-day is announced, there won’t be patches available. Until then, you’re running blind.

The security community is well aware of the risks. So-called white hats have been deliberately publishing benign look-alike packages whose names are typos of popular software components; when developers unwittingly include one of these similarly named packages, they are alerted to that fact. The intent is to highlight the problems and risks of supply chain vulnerabilities.

How does Unified NPM help?

Riverbed’s Unified Network Performance Monitoring (NPM) platform is typically used by NetOps and application teams to troubleshoot, pinpoint, and resolve performance issues, whatever their cause. But it is also proving invaluable to a growing number of SecOps teams by enabling them to go back and collect empirical evidence of data breaches in order to deal with any consequences.

Because Unified NPM records all data flows, all of the time and maintains historical records, it makes it easy to go back and see whether any data was breached after an event. It does this by recording ‘indicators of compromise,’ which may be IP addresses associated with an attack, or command and control mode activities indicating where attacks are coming from.

Essentially, Unified NPM retains comprehensive flight data that enables you to discover in the future both if and how your security has been impacted.

Making unknown unknowns known

Another vulnerability resulting from supply chain attacks is endpoint software. Unless you only allow users to access corporate applications via strictly controlled SOEs (standard operating environments), you have no way of managing what people are using—devices, services or applications—and potentially bringing into your environment. In the current ‘from-any-device, from-anywhere’ world and considering the prevalence of Shadow IT, it is extremely difficult to know your level of risk.

At least with Unified NPM deployed, you will have the ability to identify indicators of compromise, enabling you to spot and investigate external reconnaissance of your systems or illegitimate data exfiltration. In addition to proactively reducing the impact of performance issues across your environment—on-premises or in the cloud—it’s another extremely useful weapon in your cybersecurity armory.

If you’d like to know more about the security potentials of Network Performance Monitoring, our recent webinar Why Network and Security Monitoring are Merging is available on demand.

]]>
Detection vs. Protection: Painting a Complete Picture of Your Security Position with Unified NPM https://www.riverbed.com/blogs/detection-vs-protection-unified-visibility-for-cybersecurity/ https://www.riverbed.com/blogs/detection-vs-protection-unified-visibility-for-cybersecurity/#comments Mon, 02 Aug 2021 14:49:00 +0000 /?p=17140 I’ve spent 20 years trying to help people understand IT problems (and solutions) and to dispel confusion. I really enjoy finding new ways to map IT to the physical world and analogies that turn on that lightbulb in people’s minds. My favorite analogy today is describing how network Performance is a huge part of ensuring cybersecurity for your business.

First, we need to clear up one thing: the way we approach security needs to change, because it’s a matter of WHEN, not IF, your network and data will be attacked. We have seen a huge rise in ransomware attacks. We have also seen major supply chain attacks. What does this tell us? Even if you follow the best security principles and have excellent perimeter security solutions in place—you are still at risk. If you download a digitally-signed, verified software patch that happens to contain malware, the attackers are in. There isn’t much your perimeter security tools can do to help. You have effectively, if unwittingly, opened the door to the attack.

Now that attackers are in the network, how do we know they are there and what they are doing? Here is my analogy: think of an art gallery with priceless works hanging on the walls. The gallery has:

  • An outer wall or fence (firewalls)
  • External doors (controlled internet connectivity)
  • Security personnel at each entry point (IPS/IDS systems)
  • Internal doors that permit or deny entry to secure areas (application security)
  • Cameras (particularly around high-value items), and
  • Sensors that detect motion, pressure, etc.

The gallery is designed to have people come and go as they please, with the perimeter security teams checking visitors for potential risks (bag searches, etc.) and tracking their arrival (logbooks, camera systems, etc.). Vehicles arriving at the loading bay will undergo additional checks on arrival and departure.

Security Guard and Visitors at Art Gallery

It is normal and expected to have people standing a few feet from a Van Gogh masterpiece at 3pm on a Thursday and the museum security will not be alerted by that. However, if someone were detected in the same place at 2am on a Sunday morning, this would raise the alarm as abnormal behavior.

If someone got into the gallery and removed an item from the wall, we would spot that it was missing the following day by noticing the gap in the exhibition. But what if the intruder stole a second item, swapping it with a forgery? There would be no gap on the wall to alert us. A gallery has plenty of cameras, though, revealing the intruders’ actions.

Back to the world of IT…

If we assume that the perimeter security solutions merely make it harder to access the network and that we are going to be attacked, understanding the attackers’ actions within the network is crucial to both detecting the damage and preparing a recovery plan.

The sensors and the cameras are the equivalent of Network Observability tools, alerting us to unusual activity (the 2am Sunday moment) on the network and telling us where people have been and what they have been doing (the forgery swap). It’s like having a recording so you can play back the whole incident.

If we think of a scene in a film where thieves move acrobatically between laser beams across a room, the sensors and the cameras in the room are there to detect the activity, not stop the heist. You could easily walk past the cameras and through the beams, take the painting off the wall and walk out again. NPM is the same—it is not a security tool, it does not stop the attack, but it does alert you when abnormal behavior occurs.

IT security threats come in all shapes and sizes, and there are attacks that you can’t really protect against, such as state-sponsored activity. Others are just hard to secure against.

You have users on the network (just like a gallery has staff and visitors) and you expect them to be there—in fact, you want them to come in! They need to access systems and data to do their jobs. Hopefully, you have security tools in place to check the identity of the users and allow them access to the right places (applications and data).

What if a user who has legitimate access to a system starts to engage in malicious activity? Would your perimeter security tools detect this? Perhaps not. However, because NPM understands normal behavior on the network, it can alert you to abnormal behavior, too. Perhaps the user usually transfers a few hundred MBs a day, in the office, between 9 and 5, Monday to Friday. But suddenly, they access 10GB on a Sunday afternoon from home. What are they doing with this data? Perhaps it’s nothing, just a mistake, or maybe they are going to sell it to a competitor or take it to a new company. Either way, it is an anomaly that needs to be investigated.
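The kind of baseline check described above can be caricatured in a few lines. A real NPM baseline is far more sophisticated (per-metric, time-of-day aware), but the principle of flagging large deviations from normal is the same; the numbers below are invented:

```python
from statistics import mean, stdev

def is_anomalous(history_mb, observed_mb, sigmas=3.0):
    """Flag an observation more than `sigmas` standard deviations from the
    user's historical baseline (a toy stand-in for behavioral baselining)."""
    mu, sd = mean(history_mb), stdev(history_mb)
    return abs(observed_mb - mu) > sigmas * max(sd, 1e-9)

# A user who normally moves a few hundred MB a day...
baseline = [220, 310, 180, 260, 240, 290, 205]
print(is_anomalous(baseline, 250))     # False: within the normal range
print(is_anomalous(baseline, 10_000))  # True: the 10 GB Sunday transfer
```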

As a final thought, if you are subject to ransomware attacks and systems are encrypted and data is stolen, you have to report the breach to the relevant authorities and may be exposed to significant fines. These attacks are typically two-fold now: 1) pay to get access to the data and 2) pay to stop the stolen payload from being released to the public. You need to know exactly where the attackers went and what they did, and this may help you make the decision on whether to pay the ransom or not.

In summary, security threats are going to happen. Attacks come in a range of types and traditional security measures may not protect you. To better prepare for the inevitable, it’s vital you have complete visibility of all activity on the network to detect rogue behavior and enable a quick recovery. And, as an added benefit, NPM tools (as a primary function) also track the performance of applications on the network helping to give your users the best possible performance.

Unified NPM from Riverbed

Networks are mission-critical to business success. Digital businesses need secure, reliable networks more than ever before. But, with today’s hybrid cloud architectures, maintaining a high-performing and secure network requires a broad view across IT domains.

Relying on a hodgepodge of narrowly focused, siloed performance monitoring tools does not provide the breadth and depth needed to diagnose complex network performance problems. Unified NPM gathers all packets, all flows, all device metrics—all the time. The solution maintains visibility across all environments, on-premises and cloud, to enable business-centric views across all your domains. It also integrates with end-user experience and application performance monitoring so that you can understand the impact of network performance on critical business initiatives.

Identify, remediate and protect against cybersecurity threats

Today’s enterprises, with modern applications migrating from the data centre to cloud and SaaS platforms, are facing an uphill battle when it comes to cybersecurity. Despite heightened awareness, high-profile breaches continue to occur at alarming rates.

In order to quickly diagnose and respond to a full range of attacks, IT teams need visibility to identify threats of all shapes and sizes, from campus to cloud. Riverbed’s full-fidelity network security solution provides essential visibility and empowers users with fast, secure connectivity to the resources they depend on for business execution. The results: stronger security and better business performance.

]]>
https://www.riverbed.com/blogs/detection-vs-protection-unified-visibility-for-cybersecurity/feed/ 1
Customer Experience Lessons from the Akamai Outage https://www.riverbed.com/blogs/customer-experience-lessons-akamai-outage/ Wed, 28 Jul 2021 12:30:00 +0000 /?p=17154 On Saturday, 17 June 2021, a small configuration error by a usually ‘invisible’ cloud service provider had a massive impact on some of the world’s leading businesses. The Reserve Bank of Australia plus three of the Big Four were severely affected, along with Australia Post and Virgin Australia. Online services halted, staff couldn’t access the internet, contact centres went down, planes couldn’t take off–evaporating end-user experience and damaging brand reputation with their customers.

What happened?

Big brands are constant targets for a range of ideological, political, commercial or sheer criminal reasons. They must remain proactive against persistent cyber threats, including Distributed Denial of Service (DDoS) attacks originating from anywhere in the world. DDoS scrubbing is a powerful form of defence, and Prolexic from US-based global content delivery network (CDN) Akamai is a leading choice.

Prolexic monitors traffic entering large networks—such as web queries or mobile apps—then establishes whether it is valid or malignant. If valid, traffic is forwarded to the network of the bank, airline or other business. If not considered valid, the traffic isn’t allowed in.

Unfortunately, an erroneous value in a routing table caused a failure in Prolexic which affected around 500 organisations globally. Some were automatically rerouted, while for others it was a manual operation.

All up, it took from around 30 to 120 minutes for services to be restored, causing widespread angst and frustration for the customers of affected brands. All-points apologies via social media were reputation-damaging. “We’re aware some of you are experiencing difficulties accessing our services and we’re urgently investigating,” tweeted CBA. “We’ll be back soon… We are currently experiencing a system outage which is impacting our website and Guest Contact Centre,” said Virgin Australia. For some consumers, it might even have been the last straw, causing them to switch providers.

How would Unified NPM have helped?

Customers with Riverbed’s Unified NPM platform have the advantage of visibility in both directions: up and down. The cause of the fault would quickly have been placed outside the network, as no traffic would have been detected in the GRE tunnel. In other words, “Everything’s fine, but there’s no load!” This would have sped remediation by simply turning off the Akamai DDoS scrubber or switching over to another one.

Unified NPM is able to protect customer experience by monitoring all key metrics—packets, flows and device data—all of the time. This gives you end-to-end visibility to:

  1. Understand what normal looks like. How much traffic should we be expecting? Where is that traffic coming from, or not coming from?
  2. Baseline the traffic leveraging passive (packets/flows) and active (synthetics).
  3. Alert on KPI deviations to help isolate the problem.
  4. Implement a mitigation or business continuity strategy.

This level of granularity gives NetOps and SecOps teams quantitative, empirical evidence of precisely where faults lie, so they can be remediated fast. If, as in the Prolexic case, the fault lies beyond the network, the indicated service provider can be alerted and its services diverted or switched off.

Unified NPM also provides valuable forensic information after an event. Once systems are up and running again, you have solid evidence to use in the development of mitigation tactics internally between teams and with your external service providers—with the objective of avoiding such outages in the future.

What have we learned?

The Akamai incident highlights the importance of redundancy for an essential service like DDoS scrubbing and a ready-to-go mitigation strategy. Once network and applications teams worked out that Akamai was the problem, they could have switched to an internal DDoS scrubber. In fact, many organisations principally use these less costly options and only switch to cloud providers like Akamai and Fastly when they are overwhelmed by a high level of incoming threats.

Network, application and security engineers could have been saved extended, high-intensity troubleshooting on a Saturday afternoon, if they had been able to pinpoint the fault in minutes rather than hours. Most importantly, faster recovery would have meant fewer consumers suffering a poor customer experience.

If you’d like to know more about Network Performance Monitoring, our recent webinar The Art of Troubleshooting is Back! is now available on-demand.

]]>
Expanding Gig Economy Raises Security Concerns https://www.riverbed.com/blogs/expanding-gig-economy-raises-security-concerns/ Fri, 16 Jul 2021 14:40:00 +0000 /?p=17171 COVID-19 has fundamentally changed traditional labor models and employment conditions. Many 9-5 office workers, having proved they can be just as productive working from home, expect flexible arrangements to continue post-pandemic, including the option to work from anywhere. And, at a time when all organizations are carefully managing human capital expenses, the demand for gig workers to fill resource gaps grew at an exponential rate. In fact, amid the pandemic, 23 million new participants—in the US alone— joined the gig economy to supplement their income or to become full-time independent workers.

According to a study by ADP Research Institute, the gig economy accounts for a third of the world’s working population and includes a wide variety of positions. Whether hiring artistic labor or deep technical expertise, or arranging for the short-term help of personal assistants, the gig economy enables organizations to be increasingly nimble and efficient in making use of outside talent at just the right times with as few hurdles or delays as possible.

As the demand for alternative labor arrangements grows, the use of software and web-based platforms to facilitate and automate gig work has evolved. Early examples include the use of technology to facilitate peer-to-peer transactions (e.g., Airbnb, Uber). Today, gig platforms support a wide array of digital transactions involving the exchange of goods and services, as well as sensitive data.

Gig workers are unique insider threats

While the benefits of the gig economy are evident for both employers and workers, the practice of hiring outside talent or leveraging unvetted platforms fundamentally clashes with the business imperative to monitor and safeguard sensitive data. Existing large-scale breaches of corporate networks have been tied to outside contractor and vendor firms. For example:

  • In 2013, the large-scale hack of retailer Target was traced back to their HVAC vendor
  • In 2018, cybersecurity firm BitSight found that over 8% of healthcare and wellness contractors had disclosed a data breach since January 2016, along with 5.6% of aerospace and defense firms
  • In 2020, a ransomware attack on Visser Precision exposed NDA and product plans for Tesla and SpaceX

In these cases, firms rather than individuals were implicated, but the threat is clear: trusted insiders of any stripe pose a security risk. Unfortunately, gig workers, who require remote access to corporate data to do their work, are the least visible to security teams.

To complicate matters further, gig workers often use their own equipment and network connections to perform work for multiple companies at the same time. This means traditional visibility instrumentation such as client agents or VPNs may be restricted. Direct oversight in many cases is not feasible, resulting in a reliance on automation to provision, facilitate, and de-provision appropriate network and application access.

Machine learning has become increasingly utilized to help security teams grapple with increasing scale and decreasing visibility. Here too, gig work poses unique problems: how does one produce behavioral baselines for an actor who only uses the network for a few days or weeks and then never again? Once produced, how can they be effectively managed and utilized?
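One pragmatic answer, sketched below purely as an illustration (the role grouping and the traffic figures are invented), is to baseline the population rather than the individual: a brand-new contractor is compared against peers in the same role instead of against a personal history that doesn’t exist yet:

```python
from statistics import mean, stdev

# Invented peer-group data: daily MB downloaded by established workers
# who hold the same role as the new gig hire.
peer_group = {"designer": [120, 90, 150, 110, 100, 130, 95, 140]}

def unusual_for_role(role, value, k=3.0):
    """Flag activity that falls outside the peer group's norm,
    so no per-individual history is required."""
    peers = peer_group[role]
    mu, sigma = mean(peers), stdev(peers)
    return abs(value - mu) > k * sigma

# A gig designer downloads 2 GB on day one -- anomalous for the role:
print(unusual_for_role("designer", 2000))  # -> True
print(unusual_for_role("designer", 125))   # -> False
```

Peer-group baselines also age out gracefully: when the contractor leaves, there is no stale per-user model to manage.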

Are your security controls adequate?

Despite these challenges, organizations still need effective strategies to determine whether their data is safe and to feel confident that they can identify and deal with any threats.

Emerging security approaches such as Secure Access Service Edge (SASE) and Zero-Trust Network Access (ZTNA), coupled with well-defined, role-based access control (RBAC) will be necessary to effectively manage gig workers according to principles of least access. But provisioning access is only part of the security story.
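The principle of least access with automatic de-provisioning can be illustrated with a toy policy check. The worker ID, resource names, and dates below are hypothetical, and a real deployment would enforce this in a ZTNA broker or IAM system rather than application code:

```python
from datetime import datetime

# Hypothetical grants: each one is scoped to specific resources and
# time-boxed, so access de-provisions itself when the engagement ends.
GRANTS = {
    "gig-designer-042": {
        "resources": {"assets-bucket", "design-project"},
        "expires": datetime(2021, 8, 1),
    },
}

def allowed(worker, resource, now):
    """Least access: deny unless the worker has an unexpired grant
    that explicitly names the resource."""
    grant = GRANTS.get(worker)
    return bool(grant) and resource in grant["resources"] and now < grant["expires"]

now = datetime(2021, 7, 15)
print(allowed("gig-designer-042", "assets-bucket", now))  # -> True: in scope, in term
print(allowed("gig-designer-042", "payroll-db", now))     # -> False: outside role scope
print(allowed("gig-designer-042", "assets-bucket",
              datetime(2021, 8, 14)))                     # -> False: engagement over
```

The key design choice is that denial is the default: nothing the grant doesn’t explicitly name, and nothing after the engagement’s end date, is ever reachable.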

Network performance monitoring has always been a critical component of ensuring that security controls are effective. New sources of telemetry will be needed to complete the picture, coupling events from SASE components with traditional packets and flows to trace interactions from start to finish. Policy-aware visibility and population-based machine learning techniques will be needed to help analysts make sense of what they’re looking at—alongside, perhaps, techniques not yet dreamed up.

In addition to technology-based controls, organizations should establish clear, contractually imposed requirements for gig workers, covering everything from antivirus software on their laptops to expectations for handling data upon finishing their assignments. Essentially, when it comes to gig workers, organizations can’t sacrifice proper vetting and due diligence for speed.

Flexible, distributed work is here to stay

The gig economy has brought dynamic growth to companies and flexible opportunities to workers. But business and IT leaders need to be prepared for the visibility and security challenges posed by gig workers—as well as their own employees who are working remotely—because these trends represent the future of work.

At Riverbed, we see our role as trusted visibility advisor to our customers to help guide them through the challenges of maintaining visibility—and thus security and auditability—while staying nimble. We continually monitor, plan and innovate to address these trends so that our customers can take full advantage of modern work practices, as well as transformative technologies, without giving up control over security and performance.

]]>
Simple, Secure SSL Certificate Management at Scale https://www.riverbed.com/blogs/making-ssl-certificate-management-simple-secure/ Thu, 01 Jul 2021 22:31:00 +0000 /?p=16957 SSL and TLS traffic are among the most common forms of secure network traffic in today’s enterprise. The Riverbed Application Acceleration solution has been ensuring optimal service delivery of SSL and TLS traffic for years. Our solution optimizes SaaS application traffic, internal traffic, and even traffic used for service chaining with CASBs, IDS solutions, and so on. On one side of our bookended solution is a SteelHead appliance in a data center or in the cloud, and on the other end is a SteelHead in the branch or installed as an agent on an end-user’s computer. However, creating, deploying, and managing the certificates we need for each internal or external HTTPS application can be a lot of management overhead for a network operations team.

Optimizing secure traffic

When we optimize SSL and TLS traffic, all these components need to be part of the organization’s PKI, or in other words, the method we use to secure digital communication. Typically, that’s done by using certificates deployed on the server-side SteelHead and the branch SteelHead. And, each HTTPS application uses its own unique certificates.

Think about how many and how often new applications get rolled out these days—especially SaaS applications. That means manually installing certificates and updating expiring certificates whenever there’s a change or a new application is deployed.
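To get a feel for that overhead, consider just one slice of it: tracking which certificates are due for renewal. A minimal sketch, with an invented inventory of hostnames and expiry dates:

```python
from datetime import datetime

# Invented inventory: one certificate per HTTPS application,
# keyed by hostname, with its notAfter date.
inventory = {
    "sharepoint.example.com": "2021-07-20",
    "crm.example.com": "2022-01-05",
    "intranet.example.com": "2021-07-05",
}

def expiring_soon(inventory, today, days=30):
    """Return the hosts whose certificates need manual renewal
    within the next `days` days."""
    cutoff = today.toordinal() + days
    return sorted(
        host
        for host, not_after in inventory.items()
        if datetime.strptime(not_after, "%Y-%m-%d").toordinal() <= cutoff
    )

print(expiring_soon(inventory, datetime(2021, 7, 1)))
# -> ['intranet.example.com', 'sharepoint.example.com']
```

Multiply this bookkeeping by every application, every SteelHead pair, and every renewal cycle, and the appeal of automating certificate lifecycle management becomes obvious.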

Simplifying certificate management

To solve this, we’ve integrated a certificate management component into the Client Accelerator agent already installed locally on an end-user’s computer. With this simple software update, the Client Accelerator has the ability to generate, host, and manage the certificates we need.

There’s no longer a requirement to host certificates on the server-side SteelHead. There’s also no longer the management overhead of manually creating, configuring, and storing certificates. And since certificates can be generated locally right on the computer, we eliminate the need for a central certificate authority.

We still use the Client Accelerator controller to manage all the agents deployed in the organization, but now we also use it to manage the certificate peering, certificate rules, and installation packages. What we end up with is a simplified, modular, and largely automated method for managing all the growing number of certificates we need to optimize SSL and TLS traffic.

Optimizing SSL and TLS traffic is a no-brainer. It’s one of the most common types of secure traffic on the network, and we’ve been doing it for years. And, with Riverbed’s latest update to the Client Accelerator agent, we’ve removed the complexity and overhead of managing certificates, making it that much easier to deliver SSL and TLS traffic at peak performance.

Check out this video for a detailed dive into the solution.

To learn more about other ways in which you can strengthen your security posture with Riverbed, visit: riverbed.com/security

]]>
Riverbed + Microsoft: A Force Multiplier for Public Sector Mission Success https://www.riverbed.com/blogs/force-multiplier-for-public-sector-mission-success-microsoft-partnership/ Thu, 24 Jun 2021 21:51:06 +0000 /?p=17111 The public sector is built on collaboration. The vast services of the nation would cease to exist if it weren’t for the collective effort between the public and the private sector to rally around the mission of governing.

This unique dynamic is something the Riverbed Public Sector Team thrives on, because it forces us to be extraordinarily collaborative and creative in both our approaches and our solution offerings. Public sector networks are some of the most complex in the world. There is no downtime, and the words “life or death” often have very real implications.

This has taught us to look beyond the network needs and strategically approach customer engagements with partners like Microsoft who bring complementary solutions to bear and ensure our public sector customers have the network and applications needed to drive their missions forward. In short, we’re better together.

This is critical because agencies and organizations throughout the public sector are in the middle of massive digital transformation initiatives, either planned or brought on by the pandemic. They’re modernizing legacy infrastructures, adapting how they deliver and deploy IT resources, and moving staggering amounts of data, workloads, and applications into the cloud.

This transformation was accelerated at a breathtaking pace during the pandemic. The entire public sector displayed incredible resilience in adapting their networks to ensure that their dispersed users could continue to provide, and in many cases expand, critical services here at home and across the globe.

As Microsoft CEO Satya Nadella recently said, “organizations underwent two years of digital transformation in two months.”

Ensuring mission success requires partners that share a unified vision of what it takes to help agencies accelerate out of the pandemic, build on silver-lining successes, and transform from where they are now to where they want to be tomorrow. We’ve been incredibly fortunate to have an active and engaged partner in Microsoft in this endeavor. I think many of our experiences over the past year could serve as a valuable lens for other customer engagements across both Riverbed and Microsoft.

Partnerships Proven by the Pandemic

Almost overnight, entire public sector agencies were accessing networks through TICs designed for a fraction of those users, laying bare the inefficiencies of legacy network environments.

While agencies were able to nimbly adapt and leverage Microsoft’s SaaS-deployed Modern Workplace, Collaboration, and Cloud solutions, many quickly discovered that those solutions were only as capable as the networks they traverse. Networks that hadn’t been designed for SaaS or cloud suffered from reduced visibility, network congestion, latency, poor application performance, and poor user experiences.

This is where our partnership and telework solutions, including Unified NPM, SteelHead, and Client Accelerator, were able to act as a force multiplier for success. We were not only able to give agencies crucial visibility into their network, applications, and users, but we allowed them to unlock desperately needed capacity on their overburdened networks so they could optimize the performance of their Microsoft solutions.

If you want to dive deeper into some tangible examples of this partnership in action, I encourage you to view our Pairing IT Investments Webinar: How Riverbed and Microsoft Create Greater Value for Public Sector Networks where public sector leaders from both Riverbed and Microsoft outline the strengths of our partnership and how our complementary solutions have prepared federal agencies for post-pandemic realities.

Jointly, we’ve enabled our public sector customers not only to respond to the crisis in front of them, but also to reimagine telework for the future and the role of the network as a driver of mission success.

The Government-from-Anywhere is Here to Stay

So what is next?  Whether public sector employees continue to work-from-anywhere, return to the office in reduced capacity, leverage hoteling and collaboration spaces, or a combination of all, it’s safe to say that a hybrid environment is here to stay.

This is truly a paradigm shift in the public sector because the benefits of telework are not just anecdotal, the data is clear. The public sector was not only more productive teleworking during the pandemic, but they were more engaged, collaborative, and content. So how do we pivot from the short-term strategies that got the job done, to long-term solutions that are sustainable, capable, and easily implementable?

Recently, we gathered leaders from across the federal landscape at Riverbed’s Network Transformation Summit where we dissected this very question. The consensus among the federal CIO, CTO, and CISO speakers was that even with the rapid onset of the pandemic and shifting demands of agency networks, the public sector workforce not only survived, but thrived!

The benefits of telework are simply too large to ignore. It allows agencies to reduce costs, improve delivery of services, recruit and retain a modern workforce, and create new avenues for citizen engagement.

Even the Department of Defense, an agency that has historically been risk averse toward telework, conceded during a panel session with Javier Vasquez, Microsoft’s GM of Technology and Solutions, that it is looking at telework as a permanent part of the agency’s operational plan.

It should be noted that reimagining what it takes to ensure mission success isn’t just being discussed at the federal level. We recently co-hosted a Government Technology Webinar with Microsoft where we examined how SLED organizations fared in responding to the pandemic. Their challenges and outlooks were extraordinarily similar to those of their federal counterparts. As they prepare for the next stage of the pandemic, they too are evaluating what worked, what didn’t, and how their networks can adapt, evolve, and improve delivery of services to users and citizens alike.

Better Together

As agencies redefine and reimagine operations for a post-pandemic reality and look beyond the network needs to ensure mission success, it’s more critical than ever to maximize current IT investments and to proactively deploy solutions that enable the peak performance and productivity of the network, applications, and users alike.

For over 12 years, Riverbed and Microsoft have worked together to deliver seamless network and app performance, visibility, security, and the successful convergence of legacy architectures and the cloud. We’re stronger together and we’re fortunate to have the honor of providing innovative public sector solutions to reimagine Government-from-Anywhere and ensure mission success.

]]>
Agentless Monitoring of AWS VPC Traffic with AppResponse Cloud https://www.riverbed.com/blogs/agentless-monitoring-of-aws-vpc-traffic-with-appresponse-cloud/ Wed, 16 Jun 2021 15:13:00 +0000 /?p=17016 We are in a time where companies already accept the importance of cloud-driven transformation, but gaining insights across cloud services, applications and infrastructure is still a challenge. There are cloud visibility solutions that are based on agents, which duplicate packets and send network traffic to the monitoring application. These agents are sometimes hard to deploy and manage and often degrade performance. But with Riverbed AppResponse and the AWS VPC Traffic Mirroring feature, users can now gain insight and access to network traffic in a cloud-native way without using these packet-forwarding agents.

In this article, I am going to cover how you can configure AWS VPC mirror sessions with AppResponse Cloud.

AWS VPC Traffic Mirroring Concepts

With the traffic mirroring feature, you can copy network traffic from an attached ENI in an EC2 instance and send traffic to monitoring appliances. There are four key elements of traffic mirroring:

  1. Source: A network resource in a particular VPC. In our case, it will be an ENI, whose traffic we want to monitor.
  2. Target: The destination for mirrored traffic. It can be an ENI or network load balancer.
  3. Filter: A set of rules that define the traffic that is copied in a traffic mirror session.
  4. Session: An entity that describes traffic mirroring from a source to a target using filters.
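For readers who prefer automation over the console, these same elements map directly onto the EC2 API (create_traffic_mirror_target, create_traffic_mirror_filter, create_traffic_mirror_filter_rule, and create_traffic_mirror_session). The sketch below only assembles the filter-rule parameters for the ports used later in this article; in a real call, the filter ID, ENI IDs, and CIDR ranges would be your own:

```python
# Sketch: build the per-port accept rules used in Step 2. In a real
# deployment, each dict would be passed to boto3's
# ec2.create_traffic_mirror_filter_rule(TrafficMirrorFilterId=..., **rule).
def mirror_filter_rules(ports, direction="ingress"):
    """One accept rule per TCP port, numbered in evaluation order."""
    return [
        {
            "TrafficDirection": direction,
            "RuleNumber": i,
            "RuleAction": "accept",
            "Protocol": 6,  # TCP
            "SourceCidrBlock": "0.0.0.0/0",       # illustrative: match any source
            "DestinationCidrBlock": "0.0.0.0/0",  # illustrative: match any destination
            "DestinationPortRange": {"FromPort": p, "ToPort": p},
        }
        for i, p in enumerate(ports, start=1)
    ]

# The filter from this article: SSH, HTTP, and HTTPS
rules = mirror_filter_rules([22, 80, 443])
print([r["DestinationPortRange"]["FromPort"] for r in rules])  # -> [22, 80, 443]
```

The console steps that follow configure exactly these elements by hand.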

I will use the AWS Console for the mirror session configuration. For my test setup, I have AppResponse with a valid license and a test instance whose traffic I want to monitor. Please follow the steps below:

Step 1. Create AWS VPC Traffic Mirror Target

Choose the network interface target type and select the ENI of the AppResponse instance. Click Create.


Step 2. Create AWS VPC Traffic Mirror Filter

You can create filters for your traffic in this step. I have created filters for traffic on ports 22, 80, and 443. You can also monitor all traffic.


Step 3. Create AWS VPC Traffic Mirror Session

In this step, I will create a mirror session with the mirror source set to the ENI of the test instance. You may notice that the mirror target and mirror filter have the values we created in previous steps. After entering the required data, click Create.


Step 4. Verify Traffic in AppResponse

I ran HTTP traffic in my test setup. As I have a filter for HTTP traffic, I should see that traffic in AppResponse. There are two ways to verify traffic in AppResponse.

  1. Insights at Home Screen: Click on the home screen and you should see insights similar to those shown below. You can check the Applications tab to verify that the traffic is HTTP, and the Server IPs tab to verify that the server is an EC2 instance.

Insights for Top Apps in the Applications Tab

Insights for Top Server IPs in the Server IPs Tab

  2. Navigator > Apps Stats: Navigate to the Navigator > Apps section, where you can monitor traffic stats for the mirrored traffic.

Summary

Today, customers have to install and manage third-party agents to monitor network traffic in their VPCs. These agents impose additional operational and performance costs. In this article, I have provided the steps to configure the AWS VPC traffic mirroring feature with AppResponse, showing how it’s possible to monitor traffic without these agents.

]]>
Parallels Between the Roman Empire and Zero Trust Network Security https://www.riverbed.com/blogs/roman-empire-and-zero-trust-network-security/ Mon, 14 Jun 2021 20:39:18 +0000 /?p=17088 It seems that the term “zero-trust” is emerging as the latest buzzword in network security and cybersecurity communities. To explain it, one can look to the Days of Antiquity, at the height of the Roman Empire when its borders encompassed most of Europe, Northeast Africa and the Middle East. Much of the early years of the Empire was focused on what was known as “Preclusive Security,” which was an expansionist approach of fighting opponents either in their own lands or at a heavily fortified border.

The problem was that as the Empire expanded, so did its borders, which increasingly proved difficult to staff and resupply with loyal legionnaires, and ultimately became significantly harder to defend. Once invaders like Attila the Hun were able to breach the heavily guarded border, there was little that stood in their way from nearly capturing both Constantinople and Rome.

These challenges associated with the ever-sprawling border precipitated a shift in the Empire’s strategy to what’s called “defense-in-depth,” which established a series of lightly-defended sentry posts at the borders instead of heavily fortified outposts.

While the border may not have been hardened any longer, the sentry posts served as the eyes and ears of the Empire. In the event of an enemy invasion, instead of holding their ground and fighting their opponents at the border, sentries retreated to reinforced positions within their own territory for a better chance to repel invaders.

Fast Forward Two Millennia

In the 1980s and beyond, we began applying this same defense-in-depth philosophy to our IT networks, layering protection and redundancies to reduce vulnerabilities, instead of a hardened border. In “those days of antiquity” with .rhosts files and unencrypted telnet protocols, often simply penetrating the firewall could lead to a total compromise of an entire network.

As our networks evolved into their modern-day software-as-a-service-heavy, hybrid-cloud equivalents, much like the Romans, we find our networks further at the edge than ever before. Many contend that they are so far-flung and distributed that it is difficult to clearly define a border to defend.

Nemo Sine Vitio Est (No One is Without Fault) – Seneca the Younger

At its core, zero trust is the idea that your networks are already compromised. From simple malware running cryptominers to advanced foreign nation-state attackers who are carefully working to stay hidden to sabotage or steal your data, much like Attila, the invaders are inside your networks.

Complicating matters is that for every line of code written worldwide, new vulnerabilities may be introduced, hackers create more capable malware, and the number of possible attacks, backdoors and persistence tricks grows as well.

The defenses that we have traditionally erected—like firewalls, UTMs, IDS/IPS, and malware filters—remain critical but are no longer sufficient without greater visibility. While they create barriers and tripwires, a zero-trust environment requires acknowledging that these will be scaled, circumvented, and tip-toed around to gain access to your networks. Think of these traditional static defenses as barriers that force your adversary to change their behavior, giving you a chance to identify them. This only works, however, if you are paying attention.

Despite the efforts to protect, visibility is often poor in dispersed, hybrid, network environments. Without either a well-defined border to defend or cybersecurity sentries keeping watch, it may be difficult to determine exactly when or where intruders have penetrated your networks.

It should not escape anyone that the complex supply chain SUNBURST attack from last year went undiscovered for the better part of a year despite compromising dozens, if not hundreds, of organizations and agencies. The alarm bells simply did not go off, as the attack vectors were never seen.

Nil Desperandum (Never Despair) – Horace

So how does one defend a sprawling network with shifting borders and an ever-increasing number of ways in which the adversary may slip in and stay in? It takes a paradigm shift in thinking and approach.

With the network border blurry at best, we no longer have a single and convenient point of telemetry collection to force the attacker in the open. Instead, we must rely on a patchwork of overlapping barriers and telemetry sources over the entire network stack.

Endpoint detection solutions must be combined with endpoint forensics and log collection. Infrastructure as a service requires a more traditional firewall approach while enabling the capture of packets and flows for cyber hunting. SaaS solutions will increasingly need to expose usage and security APIs to detect and gain insight into potential adversarial behavior.

The mantra of the next decade is going to be overlapping angles—do not deploy a defensive solution without sources of forensic visibility. Apply policy on the endpoint, the data center, IaaS and SaaS while collecting, storing and creating visibility angles on all.

Visibility telemetry, much like the Roman sentries of yesteryear, is the eyes and ears of the cyber hunter. This is how we spot the most dangerous threat of all: the one that knows how to stay hidden.

]]>
The Transformative Power of Technology: Understanding the Business and Economic Impact of Digitization https://www.riverbed.com/blogs/the-transformative-power-of-technology/ Tue, 01 Jun 2021 13:10:14 +0000 /?p=17070 According to the Harvard Business Review, only 23% of companies are non-digital, with few, if any, products or operations that depend on digital technologies. The vast majority of organizations are technology organizations. They have seen the benefit of automating tasks with computer-based systems, monitoring manufacturing environments with IP-based tools, migrating applications to the cloud, and using technology to streamline business processes.

Technology is no longer viewed as a cost center. It has become integral to almost every facet of business. Today, it’s not about embracing the latest and greatest technology for its own sake. Instead, today’s digital transformation is about evaluating how technology can help businesses do things faster, better, and cheaper.

In a 2020 Deloitte survey, digitally mature companies were three times more likely to report annual net revenue growth significantly above their industry average—across industries. And it’s not just about top line growth. Digital technologies create economic value in multiple ways. Here are three examples based on my professional experience:

Shifting from CAPEX to OPEX

Early in my career I worked with a large law firm in the New York area that wanted to go paperless. The goal was to reduce how much space they used to store thousands of boxes of files. At first glance that may seem like a minor initiative, but they owned a commercial building in a New York City suburb for just this purpose. The cost of the mortgage and maintenance was a huge drain on the business, and the logistics of moving and searching for files resulted in an incredible amount of lost time.

In other words, needing so much physical space along with this glaring inefficiency in their operational workflow was costing them money.  

Their end goal wasn’t to adopt a new technology. No one cared about cloud-based file storage. Their goal was to reduce costs and improve business processes. Technology, in this case, was a means to decrease expenses thereby increasing the law firm’s monthly bottom line.

The law firm had no desire to build out a data storage solution because it was too expensive. However, they were immediately able to see the direct benefit of deploying a cloud-based storage solution that saved them the enormous cost of the building and the cost of implementing a physical data storage solution of their own.

For many organizations, technology isn’t a profit center in the sense that it directly generates profit for the company. Instead, technology is a means of decreasing the cost of doing business in the first place. In the case of this law firm, I saw them transition from seeing technology as a capital expense to an operational one.

Improving Efficiency

This same idea applies to non-business entities, too. Several years ago, I helped design and implement a sensor network for a city’s wastewater treatment facility. The initiative was prompted by a major failure of one of the main intake pumps the year before. The root cause of that failure boiled down to unreliable, tedious, manual inspection of each treatment basin and the associated pumps and machinery.

The new sensors were IP-based, both wireless and wired, some with LTE backup connections. Readings would be taken programmatically and continuously relayed to a centralized sensor management system. Almost no manual intervention would be necessary. The sensor rollout included new infrastructure, collaboration endpoints, ruggedized tablets for plant workers, and an on-premises sensor management system with cloud backup.

The result was a highly efficient, reliable, and safe mechanism to manage the city’s entire facility. No one who ran the treatment facility cared about the cloud-based disaster recovery design. No one cared about the latest silicon chip the sensors used. No one cared what methods we used to collect packet information for the visibility tools. Instead, plant managers cared about reducing risk and improving operational workflow.

The results were an immediate decrease in incidents, far fewer calls to the pump manufacturer’s TAC, and visibility into systems operations they never had before. 

Competing on the World Stage

The examples above involved large organizations. However, remember that today all organizations are technology organizations. Consider a financial services firm in upstate New York with only 14 employees. The only way to survive during the recent pandemic was to rethink how they used technology to compete with much larger companies and generate more profit for the business.

We often think of financial services companies as huge organizations that span the world and have the most sophisticated technology running behind the scenes. However, there are also many small companies—even sole proprietors—that offer many of the same services, and these small companies have to find a way to compete with some of the largest financial services names in the world.

My goal was to work with this small company of 14 to do just that. We developed a new web platform with self-service functionality for their customers. Managing one’s own financial account isn’t a luxury anymore, it’s a standard. And part of the new platform was a collaboration solution for customers to engage a financial expert in a high-definition video chat from the comfort and safety of their home. For a small company to offer these features put them on the same stage as the global companies they competed with.

We also moved as many applications as we could to the cloud so that all 14 financial experts could sell and process transactions for any product, for any customer, from any location. This small company now had the ability to sell the same products their huge competitors offered, and they could serve their customers quickly, reliably, and with that special touch only a small company could provide.

There was a lot of new technology as part of that project. We used the latest hardware, software, and cloud solutions. However, all of it was centered on one thing—making the company more competitive and ultimately creating more revenue.

They saw an immediate benefit to sales, a dramatic increase to inbound leads, higher customer retention, and they were able to expand their portfolio of financial products.

This small upstate New York company is not alone, either. In fact, according to a McKinsey report in 2020, 38% of executives plan to invest in technology to make it their competitive advantage.

Digital Transformation to Transform the Business

Digital transformation used to be centered on the latest and greatest technology. Maybe it was upgrading analog to VoIP. Perhaps it was installing a new wireless network. Those technologies in themselves may be great, but today, the question isn’t how sophisticated or cutting edge a technology is.

Today’s concern is laser-focused on how we can use that technology to improve business operations, increase efficiency, decrease unnecessary expenses, and generate revenue. In other words, today’s digital transformation recognizes that technology is no longer a cost center even for the smallest organizations. Indeed, it’s one of the main tools we have to help businesses do things faster, better, and cheaper.

]]>
Securely Optimize SMB Traffic with the Riverbed WinSec Controller https://www.riverbed.com/blogs/securely-optimize-smb-traffic-riverbed-winsec-controller/ Mon, 17 May 2021 15:30:00 +0000 /?p=16936 Server Message Block (SMB) traffic is a very common type of network traffic in most organizations, and it’s one of the most common types optimized by Riverbed’s application acceleration technology. For years we’ve been able to ensure optimal delivery of SMB traffic using our SteelHead WAN Optimization solution. However, dealing with SMB in a Windows domain poses some problems.

A security and administrative problem

SMB optimization requires the server-side SteelHead to interact with the domain controller as a Tier 0 device. Many domain admins consider this a security and operational concern.

The Microsoft Active Directory Administrative Tier Model (recently renamed the Enterprise Access Model) is used to organize domain elements. The framework is made up of three tiers:

  • Tier 0 is made up of the most valued and secure elements of a Windows domain. Normally these are domain controllers, ADFS, and the organization’s PKI.
  • Tier 1 devices are domain-joined servers and domain admin accounts with reduced privileges. These could be application and database servers, but they could also be a variety of cloud services as well.
  • Tier 2 comprises the remaining domain-joined elements such as workstations and user accounts. Tier 2 elements are considered the least secure and, by extension, the least valuable in the operation of the domain.

For SMB optimization to function, the SteelHead appliance needs to interact with the domain controller as a Tier 0 device right alongside domain controllers.

SMB optimization also requires the SteelHead to use the replication user account to communicate with the domain controller. The replication user account has elevated privileges within a Windows domain compared to standard user and computer accounts or mundane utility accounts. It’s not best practice for a network device to use this type of account, especially when that device isn’t managed by domain administrators.

This leads to our second problem.

A SteelHead appliance is normally managed by the network operations team, not domain administrators.

This poses a problem for the overall IT operational workflow. Normally, Tier 0 devices are managed by domain administrators.

The solution

Riverbed solves these problems by introducing a proxy in between the domain controller and SteelHead appliance.

The WinSec Controller is a completely dedicated, non-network appliance that interacts with the domain controller as a Tier 0 entity. It isn’t used for unrelated daily network operations tasks, and it’s meant to be managed by a domain administrator.

To optimize SMB, the SteelHead intercepts the authorization request the client computer makes to the file server. Then the SteelHead interacts with the domain controller as a Tier 0 device, using the replication user account to retrieve the file server’s key from the domain controller. With the server key, the SteelHead can decrypt the user session key, then the SMB flow, and ultimately optimize the traffic.

Sitting between the SteelHead appliance and the domain controller, the WinSec Controller proxies requests and responses between the server-side SteelHead and the domain controller. And, to secure communication between server-side SteelHead and the WinSec Controller, we use a standard IPsec tunnel.

Currently, the WinSec Controller has a physical form factor only, though there are plans to develop a virtual deployment option with complete feature parity.

SteelHead WAN Optimization appliances are the cornerstone of SMB traffic optimization. However, maintaining proper operational, administrative, and security workflows is also extremely important. The WinSec Controller gives us the opportunity to accommodate our Windows, systems, and security teams while at the same time providing the same level of optimization we’ve benefited from for years.

Watch the video below to learn more about Riverbed’s WinSec Controller solution.

]]>
NetIM Simplifies Alert Notifications For Splunk Users https://www.riverbed.com/blogs/netim-simplifies-alert-notifications-for-splunk-users/ Thu, 13 May 2021 20:01:00 +0000 /?p=16915 Application performance is significantly influenced by the performance of underlying infrastructure. IT organizations constantly monitor alerts originating from thousands of network nodes to ensure the highest degree of performance. Riverbed NetIM and Splunk integration allows enterprises using Splunk’s data platform for operational and security intelligence to ingest infrastructure alerts easily.

Built on a microservices architecture and a Kafka messaging framework, NetIM delivers the scale and performance necessary to monitor large hybrid enterprise infrastructure. NetIM simplifies operational workflows and day-to-day monitoring with a plethora of advanced capabilities, some of which include:

Splunk Alert Notification

Customers can send infrastructure alerts to Splunk Enterprise or Splunk Cloud through HTTP Event Collector (HEC) APIs. Splunk integration allows NetIM to consolidate infrastructure alerts for Security Ops, IT Ops, and DevOps workflows. NetIM provides an out-of-the-box template for Splunk notification and also provides the flexibility to customize the Splunk template to meet specific business needs.

NetIM Splunk Integration
Customizable Splunk Alert Notification Template
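As a hedged sketch of the HEC side of this integration, the snippet below builds a Splunk HTTP Event Collector request for a NetIM-style alert. The endpoint path and `Authorization: Splunk <token>` header format are standard Splunk HEC; the host, token, sourcetype, and alert fields are illustrative placeholders, not NetIM's actual template.

```python
import json

# Standard Splunk HEC endpoint shape; host and token are placeholders.
HEC_URL = "https://splunk.example.com:8088/services/collector/event"
HEC_TOKEN = "00000000-0000-0000-0000-000000000000"

def build_hec_request(alert: dict) -> tuple[dict, bytes]:
    """Return (headers, body) for a Splunk HEC event POST."""
    headers = {
        "Authorization": f"Splunk {HEC_TOKEN}",
        "Content-Type": "application/json",
    }
    body = json.dumps({
        "sourcetype": "netim:alert",  # assumed sourcetype naming
        "event": alert,               # the alert payload itself
    }).encode("utf-8")
    return headers, body

headers, body = build_hec_request(
    {"device": "core-sw-01", "severity": "critical", "metric": "cpu", "value": 97}
)
# To send: urllib.request.urlopen(Request(HEC_URL, data=body, headers=headers))
```

A customized template, as described above, would change the shape of the `event` dictionary without altering the HEC transport.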

Windows Visibility

Gain deep visibility into Windows environments by gathering instrumentation and telemetry from Windows computing systems. NetIM supports Windows Management Instrumentation (WMI) using PowerShell and aggregates Windows system information with other network metrics such as those obtained through SNMP or CLI. Data from NetIM can be presented through Riverbed Portal, a comprehensive operations dashboard across the hybrid enterprise.

NetIM WMI Data Collection
NetIM WMI Data Collection with PowerShell
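To make the collection path concrete, here is a small sketch (in Python, not NetIM's actual implementation) of how a collector might invoke the real `Get-CimInstance` PowerShell cmdlet and fold the parsed output in with SNMP-derived metrics. The record layout and metric names are invented for illustration; only the cmdlet and the `Win32_OperatingSystem` properties are real.

```python
import json

def cim_command(cim_class: str) -> str:
    """PowerShell one-liner that emits a CIM/WMI class as JSON."""
    return f'powershell -NoProfile -Command "Get-CimInstance {cim_class} | ConvertTo-Json"'

def merge_telemetry(snmp_metrics: dict, wmi_json: str) -> dict:
    """Combine SNMP metrics with parsed WMI output into one device record."""
    wmi = json.loads(wmi_json)
    return {
        **snmp_metrics,
        "os": wmi.get("Caption"),                    # real Win32_OperatingSystem property
        "free_mem_kb": wmi.get("FreePhysicalMemory"),  # real property, in KB
    }

# On a real Windows host the JSON would come from running
# cim_command("Win32_OperatingSystem") via subprocess; here we use a canned sample.
sample = '{"Caption": "Microsoft Windows Server 2019", "FreePhysicalMemory": 4194304}'
record = merge_telemetry({"if_util_pct": 12.5}, sample)
```

The merged record is the kind of aggregate view a dashboard like Riverbed Portal could then present alongside SNMP and CLI data.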

Self Diagnostics

When NetIM deviates from its expected performance, the root cause can be within the NetIM application or hosted container environment. Isolating the cause is especially challenging when microservices are distributed across multiple physical hosts. NetIM provides self diagnostic tools to isolate issues across guest and host environments or even at the individual container or microservice level for faster resolution.

Analytics

NetIM provides powerful analytics capabilities and simplifies troubleshooting through automation, real-time monitoring, and anomaly and violation detection. NetIM’s unique health scoring system for every device and interface can quickly communicate health status based on multiple metrics. Site-level or group-level summarized scores allow users to see global health status at a glance, spanning the entire enterprise. NetIM’s intelligent analytics algorithms guide users to where performance issues originate, saving time and effort.

Automation

NetIM provides both northbound and southbound APIs to integrate with other IT systems and automate everyday tasks. Through these APIs, IT teams can automate adding, deleting, and updating devices, groups, interfaces and many other functions. By automating repeated and structured tasks, IT staff have more time to focus on projects of strategic importance to the organization.
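As a hedged illustration of this kind of automation, the sketch below generates REST calls to onboard devices in bulk. The base URL, endpoint path, and field names are hypothetical, invented purely for illustration; the actual NetIM API documentation defines the real resource names and authentication.

```python
import json

# Hypothetical northbound REST API for bulk device onboarding.
API_BASE = "https://netim.example.com/api/netim/v1"  # invented base URL

def device_requests(inventory: list[dict]) -> list[tuple[str, str, str]]:
    """Build (method, url, body) triples, one POST per device to add."""
    reqs = []
    for dev in inventory:
        body = json.dumps({"name": dev["name"], "accessAddress": dev["ip"]})
        reqs.append(("POST", f"{API_BASE}/devices", body))
    return reqs

reqs = device_requests([
    {"name": "edge-rtr-01", "ip": "10.0.0.1"},
    {"name": "edge-rtr-02", "ip": "10.0.0.2"},
])
# Each triple would then be issued with an HTTP client plus credentials,
# and the same pattern extends to deletes and updates.
```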

NetIM is built from the ground up to tackle the complexities of monitoring hybrid enterprise infrastructure. NetIM simplifies NetOps, SecOps, and DevOps workflows with capabilities such as Splunk integration, WMI-based Windows visibility, and self-diagnostic tools. NetIM is a component of the Riverbed Network Performance solution, which tightly integrates device monitoring, flow monitoring, and full packet capture and analysis for faster troubleshooting of complex performance problems.
]]>
The Challenges of Enterprise Monitoring with TLS and PFS https://www.riverbed.com/blogs/enterprise-monitoring-with-tls-and-pfs/ Tue, 04 May 2021 19:47:38 +0000 /?p=16689 One thing that’s certain today is that network security is a moving target. As attackers become more sophisticated, there is a need to adjust the protocols we use and offer better data protection for end-to-end communication. This is true in the case of TLS. It’s long been the practice of many vendor products to load the server private key on an in-path network device so that the device can decrypt the payload in transit, do whatever it needs to, and re-encrypt and send it along its way. Most security organizations don’t recommend this practice; however, it’s interesting to note that many security vendors themselves use this method to provide IPS/IDS and other functionality.

So, what adjustments have been made in TLS to improve overall security? In previous versions of TLS, up to TLS 1.2, Perfect Forward Secrecy (PFS), also known as forward secrecy, is optional, not mandatory. In TLS 1.3, PFS becomes a mandatory function of the protocol and must be used in all sessions. This is significant because PFS negates the ability to load the server private keys on the in-path devices to perform decryption. Before getting too far along, let’s cover a few TLS points from a high level.

What is TLS?

For most who find this article, you’ll probably be familiar with TLS. TLS stands for Transport Layer Security. It’s a protocol that sits behind the scenes and often doesn’t get the credit for the work it does. When you navigate to a secure website and the URL has “https” at the beginning, it’s TLS that gives you the “s.” In fact, some may even refer to it as an SSL (Secure Sockets Layer) connection, but it’s been TLS for quite some time now. The idea behind TLS is that it provides a secure channel between two peers. The secure channel provides three essential elements:

  1. Authentication
  2. Confidentiality
  3. Integrity

Authentication happens per direction. The server side is always authenticated; the client side is optional. This happens via asymmetric crypto algorithms like RSA or ECDSA, but that’s beyond the scope of this article. Confidentiality is another way of saying “encrypted.” The Integrity portion of TLS is used to ensure that data can’t be modified.

There are several major differences between TLS 1.2 and TLS 1.3, namely that Static RSA and Diffie-Hellman cipher suites have been removed in TLS 1.3 and now all public-key exchange mechanisms provide forward secrecy. This begs the question, “What is PFS?”

PFS is a specific key agreement protocol that ensures your session keys aren’t compromised even if the server’s private key is. Each time a set of peers communicate using PFS, a unique session key is generated. This happens for every session that a user initiates. The session key is used to decrypt traffic.

The way it works without PFS is that during session establishment, a Pre-Master Secret is generated by the client using the server’s public key. It is then sent to the server, and the server can decrypt it using its private key. From there, each side generates a symmetric session key known as the Master Secret. This is used to encrypt data during the session. If an attacker gets the server’s private key, it can also recover the Pre-Master Secret and generate the Master Secret, meaning it can decrypt any captured session, past or future, that was established with that key.

To oversimplify the function of PFS, the client and server use Diffie-Hellman to generate and exchange new session keys with each session. Make sense from a security perspective? It should. PFS makes it much more difficult to get at user traffic and that’s the goal. But what does that mean to enterprise IT?
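The per-session key generation described above can be sketched with a toy finite-field Diffie-Hellman exchange. The modulus below is deliberately tiny and insecure (real TLS 1.3 deployments use groups such as X25519); the point is only that each session derives its secret from fresh ephemeral values, so no long-term key can reconstruct it.

```python
import secrets

# Toy Diffie-Hellman: each call simulates one TLS session's ephemeral exchange.
P = 4294967291  # small prime modulus -- insecure, illustration only
G = 5           # generator

def session_secret() -> int:
    """Run one ephemeral exchange and return the shared session secret."""
    a = secrets.randbelow(P - 2) + 1   # client's ephemeral private value
    b = secrets.randbelow(P - 2) + 1   # server's ephemeral private value
    A, B = pow(G, a, P), pow(G, b, P)  # public values sent in the clear
    client_view, server_view = pow(B, a, P), pow(A, b, P)
    assert client_view == server_view  # both sides agree on the secret
    return client_view

# Two sessions yield two independent secrets; stealing a long-term server
# key later reveals nothing about either one.
k1, k2 = session_secret(), session_secret()
```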

TLS, PFS, and the Impact on Enterprise IT

With the idea of PFS in mind, what’s the impact on Enterprise IT? First off, the task of not only securing traffic within an enterprise but also providing the required performance and monitoring of said traffic puts things in a bit of a grey area. We don’t want to let attackers get to the data so we protect it, but we also need to see the traffic to monitor it and accelerate it. So, what is an enterprise to do?

Traditionally, we’ve used a proxy to rip open the packets, do something to the packets, pack them back together and securely forward them. If you’ve worked with firewalls and IPS devices, then this isn’t a new concept. In fact, it’s something that is quite common in today’s networks. Considering the idea of performance and security monitoring, the process is no different.

Still, even though security departments make use of this traditional method of exposing encrypted data, it often comes with pushback from IT security departments when the request to do so comes from the infrastructure and monitoring teams. A visibility platform should be able to employ the same tricks as our security products (Riverbed NPM solutions do for what it’s worth), and up until TLS 1.3 came about they could.

But now we have a new hurdle to tackle. TLS 1.3 is making its way onto the scene and the traditional methods of exposing the data are no longer an option. In fact, since PFS is optional in TLS 1.2, if it’s used, current NPM solutions will have problems even prior to TLS 1.3 being rolled out. Why? Because session keys are ephemeral, each covering only a limited span of traffic. This makes packet inspection very challenging.

Addressing the Challenge with AppResponse

Despite the challenges posed by advancements in security protocols, here at Riverbed we have continued to look for new ways to overcome them. Before AppResponse 11.9, we supported RSA Key Exchange. In this scenario, private keys get uploaded to decrypt the traffic; the pre-master secret is transmitted, and we decrypt it using the provided private key. This is pretty much how any vendor would do it.

However, in some cases with TLS 1.2 and down the road with TLS 1.3 this is no longer a possibility. Therefore, we’ve started to make significant changes to how we handle the decryption of packets. With our feature enhancements we can still provide deep visibility and performance monitoring for today’s IT organizations while maintaining the level of security that TLS 1.2 provides. How can we do this?

The PFS API

When a Diffie-Hellman key exchange is used, a unique master secret is used for each session. Using integration partners, we can retrieve the required keys, giving us the ability to decrypt and inspect the traffic. A new PFS API in AppResponse communicates with external sources and retrieves the Master and Session Keys. These external sources could be a load balancer or an SSL proxy. I won’t go into the details here, but it solves the problem in an elegant way. A glimpse of the functionality is seen in the image below: the PFS API in AppResponse is always on, retrieving keys from the external source as sessions are established.

PFS API
The PFS API Process for Retrieving Master and Session Keys.
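One common interchange format for key material like this is the NSS key log format (the same format browsers emit via `SSLKEYLOGFILE`), where each TLS 1.2 entry maps a handshake’s client random to its master secret. The sketch below is an assumption about the kind of key material being exchanged, not AppResponse’s documented API; it simply indexes such entries so captured sessions can be matched to their keys.

```python
# NSS key log format, one entry per line (TLS 1.2):
#   CLIENT_RANDOM <64-hex client_random> <96-hex master_secret>

def parse_keylog(text: str) -> dict[str, str]:
    """Map client_random -> master secret for CLIENT_RANDOM entries."""
    keys = {}
    for line in text.splitlines():
        parts = line.split()
        if len(parts) == 3 and parts[0] == "CLIENT_RANDOM":
            keys[parts[1]] = parts[2]
    return keys

# Illustrative entry with placeholder hex values.
log = "CLIENT_RANDOM " + "ab" * 32 + " " + "cd" * 48
secrets_by_random = parse_keylog(log)
```

A monitoring tool holding such an index can look up the master secret for any captured handshake by its client random and decrypt that session.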

As the traffic is being sent to AppResponse, you might want to buffer it until the keys are received. If not, you’re going to lose visibility into those packets. For this reason, there’s a new feature that you can toggle on as seen below.

Buffering for PFS option in AppResponse

PFS processing in AppResponse may require a lot of data to be buffered while waiting for the key. For this reason, buffering is not enabled by default even if SSL encryption is enabled. Finally, if you disable PFS then all the keys received by the REST API will be discarded.

Wrap-up

It’s evident that the world is changing. When it changes in the name of security, enterprises can’t compromise visibility. Riverbed understands this and continues to provide innovative ways to address these unique challenges. To take a detailed look at Riverbed AppResponse and the capabilities discussed in this blog, watch our webcast Breakthrough Visibility & Performance for Encrypted Apps & Traffic and let your packets, whether sent in plain text or encrypted with TLS, start providing actionable answers to what’s happening in your network.

]]>
NetSecOps: 5 Reasons to Unify Your Network and Security Operations Teams https://www.riverbed.com/blogs/netsecops-5-reasons-to-unify-your-network-and-security-operations-teams/ Thu, 22 Apr 2021 23:31:00 +0000 /?p=16871 For decades, network operations and security operations teams have functioned separately. That’s starting to change, and with good reason, even though their fundamental goals can seem diametrically opposed. The network team focuses on ensuring access to applications and services, while the security team focuses on locking down data and limiting connectivity. But EMA has found strong evidence that over the last couple of years network operations teams are working more closely than ever with IT security teams. In fact, 63% of enterprises have formalized collaboration between the network team and the security team.

Figure 1. Relationships between today’s network ops and security ops teams. Only 37% can be considered NetSecOps!

What’s more, they found a very strong correlation between close NetOps and SecOps (henceforth NetSecOps) collaboration and overall network operations success. Successful teams are very likely to have converged groups or share integrated tools and processes. Bridging the gap between groups is more likely to ensure a secure, highly performant network. Here are five reasons why your Network Operations and Security Operations teams should collaborate in a more formal manner:

1. Better network performance. Data shows that organizations with unified NetSecOps teams spend less time on reactive troubleshooting and more time on proactive problem prevention. This enables collaborative teams to focus on improving network performance, leading to a better user experience and business results.

Security system problems and security incidents are common root causes of IT service problems, so by joining forces, NetSecOps teams are also better equipped to root out security problems that affect network performance. For example, DDoS issues that take the network offline might formerly be considered a network issue but can now be properly diagnosed as a security problem and can be mitigated more quickly.

2. Accelerated security incident detection and response. Unified NetSecOps teams (36%) and teams that share tools and processes (27%) are focused more often on accelerating security incident detection and response. These teams identify and respond more quickly to incidents and breaches than separate NetOps and SecOps teams. Together they can investigate malware, breaches, and misconfigurations that can affect both security and performance. Surprisingly, infrastructure management (SNMP, WMI, etc.) is a key tool of unified teams: it can detect unusual activity on a network device, such as saturation of an interface by an attack or a misconfiguration, yet it is not typically found in the security toolbox.

3. Cost efficiency. A side benefit of this collaboration is both operational and capital cost efficiency. By sharing tools (full-fidelity flow monitoring, packet capture and analysis, network infrastructure monitoring, NACs, etc.), teams buy one solution rather than two very similar products. That also means there’s only one support contract and fewer devices to support in the data center (lower power costs, less rack space, etc.). It’s a win all around: faster, more secure network performance at a cost savings!

4. Faster response to business change. When two teams are comfortable working together, they get comfortable planning for changes together—like cloud migrations or work for home (WFH). Integrated plans are always more comprehensive and reduce the risk that change introduces, which brings me to my last benefit…

5. Risk reduction. When network operations and security operations work well together, the outcome is risk reduction. This is the ultimate measure of success for any NetSecOps team. As the adage goes, many hands make light work. Even if NetOps aren’t complete security experts, they are bound to notice some issues, because they are covering different ground, often in the network’s deepest recesses. And, as we established above, when incident detection and response are accelerated, malware and the like stay in the network for a shorter time. All of this is goodness when you have more brains thinking about security.

The benefits of unifying your NetOps and SecOps teams should be clear by now. Ensure your integrated NetSecOps team has the tools to enable full visibility from cloud to edge, assuring your network is always secure and high performing. Your enterprise-wide visibility toolset should include:

  • Full-fidelity flow data (no sampling)—enterprise-wide traffic and security visibility; behavioral analysis; threat detection; threat hunting
  • Packet data, not just metrics—network and application analysis; forensic analysis; encrypted traffic analysis; certificates analysis, etc.
  • Infrastructure management (SNMP, WMI, CLI, API, synthetics, etc.)—performance metrics; configuration/change management; device compliance; path analysis; network diagrams

To learn more about unifying your network and security operations teams and the benefits you can achieve, check out this EMA analyst paper, The Convergence of Network and Security Operations.

]]>
The Elevated Role of IT: Driving Business Forward and Beyond COVID-19 https://www.riverbed.com/blogs/the-elevated-role-of-it/ Wed, 21 Apr 2021 13:46:38 +0000 /?p=16885 At the end of April, I will celebrate my one year anniversary at Riverbed. I joined the company when Riverbed had already started working remotely due to the COVID-19 pandemic. To date, I have only met one person face-to-face—our CEO, Rich McBee—who I met when I was interviewing to join Riverbed. I don’t have an employee company badge (an extreme rarity in technology) and I’ve only met my team and fellow leaders at Riverbed via computer screens and cell phones. But today, almost 365 days later, I can honestly say that this last year has been both a challenging and incredibly rewarding journey. So, what are some of my biggest takeaways?

IT plays a critical role to make everything work together

As the CIO of Riverbed, my role is to make sure that IT systems that run the business operate smoothly and that we meet our deliverables to our customers. I also need to make sure our systems work for every employee—regardless of their location.

With employees becoming acclimatized to the idea of work from home (WFH), as well as meeting and transacting online, organizations will shift to WFH or work from anywhere (WFA) as a norm rather than as an exception. That’s why the connection to company resources via a laptop is so vital. If your employees are not connected consistently and securely, they’re unable to collaborate and be productive.

As with most organizations, the use of video and audio-conferencing tools at Riverbed has increased significantly. As a result, we’ve ramped up our technology infrastructure to account for the surge. We’ve also increased our investment in bandwidth expansion, network equipment, and software that leverages cloud services.

Digital transformation has been accelerated by 10x

Most IT organizations have a short-term action plan and a long-term strategy. But COVID-19 served as a forcing function for both simultaneously. Digital transformation strategies that were five years forward needed to be implemented in weeks. And having a strategy in which IT sits directly with the CEO is the only way to truly drive business forward at such a rapid pace.

When we look back on last year, we realize the milestones we accomplished and the radical change that was accelerated to connect people with technology. At Riverbed, we accelerated our cloud-first strategy to operate more effectively. I would have never dreamed that it was possible to work 100% remote in a matter of weeks, but having a framework in place enabled the transition to happen quickly. And because Riverbed helps customers with network and security operations, as well as accelerating access to networks and applications regardless of a user’s location, we’re also leveraging our own solutions to help ensure user productivity and the overall performance of our business.

A remote team is a more productive and happier team

You may think of IT as technology forward, but in the end we are a people-driven function. This last year as we faced incredible pressure to keep our IT systems driving the business, I’ve seen a huge performance increase in our team. The time we used to spend commuting is now spent closer to home and that work/life balance makes for a happier workforce that is just as productive.

As I look back to the last year, I’m very proud of the work that we’ve accomplished at Riverbed. Nothing impacted our ability to run productively and meet all of our customers’ deliverables. It wasn’t easy, but our team has left a lasting impression on the business and our employees. I just can’t wait to meet my fellow Riverbed-ers face-to-face in the coming months.

]]>
The Do’s and Don’ts of Marketing in a Crisis https://www.riverbed.com/blogs/b2b-marketing-in-a-crisis/ Mon, 05 Apr 2021 14:41:56 +0000 /?p=16845 In March 2020, I was in London meeting with customers, partners and teammates. Upon my return to the Bay Area, organizations were preparing for what was believed to be a brief lockdown to slow the spread of COVID-19. Weeks turned into months, and here we are, more than a year later, still dealing with the virus and its effects on the global economy.

When a prolonged crisis like a pandemic takes place, market conditions are highly unpredictable—making it a real challenge to create and implement relevant marketing strategies. Here are my thoughts on how B2B marketers can maintain relevance and drive stability and growth for their organizations during times of uncertainty:

1. Don’t stop marketing

In the early days of the pandemic, the fear of being perceived as “exploiting a crisis” caused many organizations to pull back their marketing efforts. The fact is, during difficult times, it’s essential to remain visible in your customers’ minds. The key is to be relevant and get laser focused on what your customers need, WHEN they need it.

For example, in late Q1 and through Q2 2020, we concentrated exclusively on marketing and selling our remote work solutions to our existing customers. At that time, they were all dealing with the IT challenges of enabling remote work for their entire workforce. What they needed were solutions that gave them visibility and control over network and application performance so their work-from-home employees could stay productive.

With their immediate needs met, our customers began looking at the long-term ramifications of the pandemic: the future of work, digital acceleration, cloud transformation, business resiliency and network security. So, in the second half of 2020, the timing was right to broaden our marketing activities. We expanded our target audience, enabled our partners, and launched new campaigns around the visibility and performance solutions germane to those concerns.

Even if your organization is in a situation where your customers are not buying right now, keeping your brand in front of them improves perception. Just sending a reassuring message or helpful resource goes a long way in establishing trust and loyalty.

2. Do reevaluate your go-to-market mix

According to McKinsey’s B2B Decision-Maker Pulse Survey, 96 percent of businesses have changed their go-to-market model since the pandemic hit, with the overwhelming majority turning to multiple forms of digital engagement with customers.

With digital and remote engagement proving to be as effective as, or more effective than, traditional field sales, it’s imperative that sales and marketing leaders reevaluate their go-to-market mix through the lens of their buyers’ digital experience.

For marketers, this means taking a close look at the effectiveness of virtual events, online content, and digital channels such as social media, search, and email. And as your primary channel for engagement, pay special attention to your websites. Are they working for you 24×7? Are you harnessing the behavioral data that is generated online to personalize and optimize engagements?

We have a long way to go at Riverbed in this regard, but we’re making progress by adding conversational marketing, hiring more data analysts, and implementing AI and ML powered technologies to automate customer/prospect engagement across our websites and through our sales organization.

3. Do double-down on account-based marketing

In an economic downturn, when everyone is asked to do more with less, use data to guide your spending decisions. Last year, we took the time to conduct an extensive analysis of our installed base to inform our GTM and RTM models for 2021. This data also proved valuable in making choices on where to invest our marketing dollars.

What the analysis found was that the law of the vital few is true. Within our base is a set of really loyal customers who continue to buy from us, year after year. These “franchise” customers have all been impacted by the pandemic, but in different ways. Some were hit hard, others thrived, and every one of them is transforming in some way to adapt to their current environment and an uncertain future.

That’s why we made the decision to further invest in targeted account-based marketing into these franchise customers as well as a set of “look-alike” accounts. In our case, it was critical that we developed a deeper understanding of each customer’s unique situation so that we could help our sales counterparts accelerate, solidify, or increase opportunities. ABM gives you an ability to do full lifecycle marketing, not just top-of-the-funnel acquisition. And that’s the ultimate goal of marketing.

4. Do strengthen your agility

During unpredictable times, agility and flexibility are key to responding to constantly and rapidly changing customer needs. It’s essential that organizations place a premium on building these skills within their teams, investing in technologies or process improvements that enable faster decision making, and streamlining operations for maximum efficiency.

I’m still amazed at how our employees demonstrated their ability to pivot based on new priorities, try new tactics, and double down on those that worked or move on from those that didn’t. This mindset, and their ability to navigate ambiguity, enabled us to execute and drive results, even while making the transition to working from home.

5. Don’t deprioritize innovation

Riverbed was founded on the creation of a new technology category: WAN optimization. Our product called SteelHead was so entrenched into our brand identity that to this day, customers say “Oh yes, I have a Riverbed on my network.”

When you are a category creator, you win. Followers may have some success, but they will never come close to being the market leader. However, category relevance changes over time. That’s why it’s so important for organizations to always prioritize innovation, even in times of crisis.

According to a 2021 Chief Outsiders Survey, nearly half of CMOs believe the pandemic created as many, if not more, opportunities for businesses than it eliminated. Think about how many industries have already transformed their business models—retail, restaurants, entertainment, healthcare, just to name a few. Yes, the pandemic wreaked havoc, but disruption fuels innovation.

As marketers, it’s our job to know what customers are thinking about now AND what they will be thinking about in the next 1-3 years. This insight is key to both reevaluating existing portfolios and identifying new categories that seize opportunities created by changing customers’ needs.

Closing thoughts

During the worst of the pandemic, the value of marketing in driving business priorities such as brand awareness and customer retention was evident. Now, as we look to the future, we must improve our ability to understand our buyers’ mindset, provide an exceptional brand experience, and respond with agility to drive innovation and growth.

Our customers are accelerating their digital transformation initiatives to stay competitive in this new business and economic environment. I’m looking forward to engaging with them on ways Riverbed can add value in their journey—and hopefully soon, gathering with customers, partners and teammates, in person!

]]>
4 Trends Guiding HR in 2021 and Beyond https://www.riverbed.com/blogs/4-trends-guiding-hr-in-2021-and-beyond/ Fri, 26 Mar 2021 19:05:30 +0000 /?p=16811 It’s hard to believe that it’s been over a year since the onset of the global pandemic. I remember thinking initially that our offices would be shut down for a couple of weeks, maybe a month, and then life and work would return to normal. But 365+ days later, we’re still battling COVID-19 and employees are still primarily working from home.

The pandemic presents challenges HR leaders have never faced before—challenges made more complex by constantly evolving requirements and restrictions that differ city by city, state by state, country by country. There was no playbook for how to reengineer every aspect of the employee experience in a pandemic, yet that is exactly what HR teams have had to do.

Even as we begin to recover, it’s clear that COVID-19 will have a lasting impact on how organizations operate and manage their workforce. HR leaders need to prepare for new realities facing businesses and evolve their people strategies accordingly. Here are four global work trends guiding 2021 and beyond:

Trend #1: Significant and permanent increase in remote work

Pre-pandemic, an estimated five percent of full-time employees with office jobs worked from home at least three or more days per week. Now that many organizations have experienced the benefits of remote work (cost savings, increased productivity, improved recruitment and retention, etc.), that figure is expected to be at least 40% one year after the pandemic subsides.1

In flexible, hybrid work models, having adequate and reliable technology is essential to employee engagement and productivity. HR and IT teams must work together to provide remote work solutions that provide seamless and secure access to the resources employees need to perform their jobs, no matter where they work.

Be sensitive, however, to technology burnout. The phenomenon of “Zoom fatigue” is real. I’ve encouraged my team to vary their communication methods (voice, text, email, instant message) and to schedule 45-minute video conference meetings so that there’s time for breaks in between.

Trend #2: Greater focus on employee wellness programs

In a recent Global Human Capital Trends survey, 80% of business leaders identified well-being as their top-ranked priority for organizational performance and success.2  That’s no surprise given the abnormal difficulties of 2020. The pandemic, economic uncertainty, political turmoil, social injustices, and natural disasters have taken an enormous toll on us—mentally and physically.

People are like icebergs in that you can’t see what’s beneath the surface. We recognized that our employees would need additional resources to help them cope with these crises, as well as the added stress of making the transition to work from home. We enhanced our global wellness program to address the “whole person” with new services covering mental fitness, financial well-being, and confidential counseling that extends to every member of an employee’s household.

Trend #3: Spotlight on diversity and inclusion (D&I)

History tells us that in times of crisis, D&I initiatives are at risk as businesses focus on their most pressing needs. But that certainly wasn’t the case in 2020. As the world combatted COVID-19, a massive protest against systemic racism and social injustice erupted, prompting business leaders to take a serious look at their organizations’ D&I practices.

Riverbed has always been a place of inclusion, diversity and community, but we know we can and must do more. With full support from our Board of Directors and senior leadership, we’ve extended our D&I programs to include a special task force focused on creating new opportunities for employees to connect and get involved. This task force also looks at D&I barriers within our recruitment, retention, advancement and onboarding practices.

With so many organizations repledging their commitment to diversity and inclusion last year, the spotlight will be on how these organizations, and the business community at large, can make an impact on issues of racism and social inequity, both within and beyond the workplace.

Trend #4: Untethering talent from location

In a virtual world, enabled by the right technology, talent plans are no longer restricted by location or a candidate’s willingness to move. This means employers can source the best talent from anywhere in the world and reduce costs associated with relocation and office setup. And it means more opportunities for job seekers, who perhaps live in rural or more remote areas of the world, to pursue roles that were once off limits to them because of where they call home.

Larger talent pools won’t necessarily make recruiting easier, especially in the tech sector, where there continues to be fierce competition to attract and retain talent. This is why factors such as culture, honest and empathetic leadership, and proven resilience are so important. These are the differentiators that give employers an advantage over the competition.

Looking back to move forward

While the pandemic has been difficult for all of us, we can find positive outcomes. At Riverbed, we’ve reached even higher levels of frequency and transparency in our communications. We’ve significantly advanced our diversity and inclusion efforts, which are core to our company culture and values. We’ve quickly transformed our learning and development courses, exceeding pre-pandemic enrollment. And through it all, we’ve kept our employees’ safety and well-being front and center.

Eventually, as restrictions are lifted, we’ll begin the complex task of returning to the workplace—a workplace that will be quite different than the one we left some 365 days ago. It will be a gradual process that safeguards employees in every way and acknowledges varying levels of personal readiness.

Looking back, I’m inspired by the resilience we’ve shown as an organization. How we have become closer to each other despite being physically apart. And I am confident in moving forward, knowing that it’s in our DNA to power through any challenge that comes our way.

 

1. Source: The Conference Board online survey of 330 HR executives, September 14 and 25, 2020, published as Adapting to the Reimagined Workplace: Human Capital Responses to the COVID-19 Pandemic

2. https://www2.deloitte.com/us/en/insights/focus/human-capital-trends/2020/designing-work-employee-well-being.html

]]>
Alerting and Troubleshooting Network Performance with Unified NPM https://www.riverbed.com/blogs/alerting-and-troubleshooting-network-performance-with-unified-npm/ Thu, 25 Mar 2021 21:59:00 +0000 /?p=16746 Unfortunately, IT is often the last to know when there’s a problem with the network. In fact, it’s often the end users experiencing problems who alert the help desk. A proactive IT department needs to be able to detect incidents as they happen, and ideally anticipate a potential issue before it happens. The result is a reduced time to resolution, an improvement to overall uptime, and the ability to solve a smaller problem before it turns into a catastrophe.

Riverbed’s network visibility portfolio brings together sophisticated monitoring, alerting, and troubleshooting capabilities all under one banner. NetIM, NetProfiler, and Portal integrate together seamlessly to bring you a robust solution that can detect and alert on very specific network anomalies and application performance issues.

Visibility for the Underlying Infrastructure

NetIM is focused on the actual network infrastructure itself. In other words, it looks at what’s going on with the routers, switches, firewalls, and all the devices that underpin your applications. With NetIM, you can monitor device health and drill down into specific metrics such as interface errors, packet drops, CPU utilization, link utilization, and packet discards.

NetIM leverages a variety of approaches for real-time monitoring, including:

  • Device APIs
  • SNMP
  • Device CLI
  • Syslog
  • WMI
  • Synthetic testing

With the information coming from the network, NetIM can then build robust visualizations of your network topology to help you understand the path an application takes through the network.
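To make one of those telemetry sources concrete, the sketch below parses an RFC 3164-style syslog line and flags error-or-worse events. The parsing logic, sample message, and alert threshold are generic illustrations, not NetIM’s actual implementation.

```python
import re

# RFC 3164-style syslog line, e.g. "<165>Mar 25 21:59:00 core-sw1 IF-MIB: eth0 link down"
SYSLOG_RE = re.compile(
    r"^<(?P<pri>\d{1,3})>(?P<ts>\w{3} +\d+ \d\d:\d\d:\d\d) (?P<host>\S+) (?P<msg>.*)$"
)

def parse_syslog(line: str) -> dict:
    """Split a syslog line into facility, severity, host, and message."""
    m = SYSLOG_RE.match(line)
    if not m:
        raise ValueError("unrecognized syslog line")
    pri = int(m.group("pri"))
    return {
        "facility": pri // 8,   # e.g. 20 = local4
        "severity": pri % 8,    # 0 = emergency ... 7 = debug
        "host": m.group("host"),
        "message": m.group("msg"),
    }

def should_alert(event: dict, threshold: int = 3) -> bool:
    """Alert on severity 'error' (3) or worse (lower numbers are more severe)."""
    return event["severity"] <= threshold

event = parse_syslog("<165>Mar 25 21:59:00 core-sw1 IF-MIB: eth0 link down")
# PRI 165 = facility 20 (local4), severity 5 (notice) -> below the alert threshold
```

A real collector would, of course, also handle RFC 5424 messages, timestamps with years, and relay chains; the point here is just how a priority value decomposes into facility and severity for alerting decisions.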

Monitoring Applications

NetProfiler is a very powerful monitoring platform you can use to analyze application flows. It combines flow data and packet-based flow metrics to provide full-fidelity traffic monitoring. NetProfiler takes you beyond the underlying infrastructure to provide behavioral analytics such as baseline traffic patterns and dependency mapping.

NetProfiler captures network data and flow records using NetFlow and sFlow, and it also collects AWS VPC Flow Logs, IPFIX records, and data directly from Riverbed tools such as the Riverbed NPM Agent and AppResponse.

For cloud visibility, NetProfiler can be deployed in AWS and Azure. And whether NetProfiler is deployed in the cloud or on premises, it can always collect flow telemetry information from resources in public cloud.
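To show what that cloud flow telemetry looks like, here is a minimal sketch that parses default-format (version 2) AWS VPC Flow Log records and totals bytes per conversation, the kind of top-talkers rollup NetProfiler automates at scale. The log records and addresses below are invented for illustration.

```python
from collections import defaultdict

# Field order of the default (version 2) VPC Flow Log record format.
FIELDS = ["version", "account_id", "interface_id", "srcaddr", "dstaddr",
          "srcport", "dstport", "protocol", "packets", "bytes",
          "start", "end", "action", "log_status"]

def parse_record(line: str) -> dict:
    """Split one space-delimited flow log record into named, typed fields."""
    rec = dict(zip(FIELDS, line.split()))
    for k in ("srcport", "dstport", "protocol", "packets", "bytes", "start", "end"):
        rec[k] = int(rec[k])
    return rec

def bytes_by_conversation(lines):
    """Aggregate accepted bytes per (src, dst) pair -- a crude top-talkers view."""
    totals = defaultdict(int)
    for line in lines:
        rec = parse_record(line)
        if rec["action"] == "ACCEPT":
            totals[(rec["srcaddr"], rec["dstaddr"])] += rec["bytes"]
    return dict(totals)

logs = [
    "2 123456789010 eni-0a1b2c3d 10.0.1.5 10.0.2.7 49152 443 6 20 4249 1418530010 1418530070 ACCEPT OK",
    "2 123456789010 eni-0a1b2c3d 10.0.1.5 10.0.2.7 49153 443 6 10 1200 1418530010 1418530070 ACCEPT OK",
    "2 123456789010 eni-0a1b2c3d 10.0.9.9 10.0.2.7 40000 22 6 1 60 1418530010 1418530070 REJECT OK",
]
totals = bytes_by_conversation(logs)
# {('10.0.1.5', '10.0.2.7'): 5449} -- the rejected SSH attempt is excluded
```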

Bringing Everything Together

Portal is the dashboard that brings it all together in one place. It aggregates telemetry from NetIM and NetProfiler, and also integrates with other Riverbed visibility tools including AppResponse, UCExpert, Aternity EUEM, and Aternity APM.

With Portal, you have an active launchpad of interactive dashboards, application discovery mechanisms, and network path visualizations. A network operator can begin their daily monitoring and troubleshooting with Portal, a single source of truth for what’s going on in the environment.

Remember that Riverbed’s visibility portfolio is a collection of powerful tools that integrate with each other. Each individual component is designed to focus on one aspect of NPM. Together, they cover all the bases of network and application performance monitoring. And with Portal as our visibility homepage, they work together as one visibility solution.

Watch the video below to walk through a scenario in which end users are reporting bad network performance. Using Portal, NetIM, and NetProfiler, we diagnose the problem and discover a specific application overloading the WAN interface of the branch router.

Watch Video

 

 

]]>
Adapting Work Environments for a New Normal https://www.riverbed.com/blogs/adapting-work-environments-for-a-new-normal/ Sun, 21 Mar 2021 12:48:03 +0000 /?p=16795 I can’t believe it has been 12 months since the world locked down from the pandemic. A year ago, I was just finishing up a business trip from Europe and was back in my San Francisco office when we decided to close the office for two weeks. We all packed up and prepared to work remotely for the next few weeks, but then two weeks quickly became two months, then four, etc. We all know the story. Fast forward and here we are today still working from home one year later. It is amazing what we have learned to adapt and change the way we work and how we engage with our customers.

Virtually overnight, we enabled our employees to safely work from home and introduced new capabilities to make everyone productive. Company plans were suddenly challenged, and significant amounts of time were unexpectedly spent understanding, and guessing at, the implications of the pandemic. There was no playbook for what to do when one billion workers suddenly began working entirely from home because their offices had closed, while other businesses shut down completely. For businesses around the globe, discussions of growth and expansion were replaced with discussions contemplating furloughs, layoffs and pay cuts in an effort to maintain business continuity. I think we all underestimated the increase in our personal workloads, the reality of Zoom fatigue, and how quickly it would set in, but we have continued to adjust and thrive. The ability to pivot in times of crisis is something we have all mastered this past year and is a lesson we will not soon forget.

While some markets felt pain, other markets flourished. Cloud and security became top of mind. Companies began to accelerate their cloud-first strategies to address the growing demands of a nomadic workforce and rethought their future physical infrastructure requirements, including real estate. A new normal was forming that will change the way we work moving forward. For Riverbed, that meant accelerating our strategy to move from an appliance-heavy business to an end-user performance business. Our Client Accelerator product enabled users to have an in-office experience accessing applications while they worked from home or anywhere, without negatively impacting productivity.

Companies were also faced with a new challenge: visibility of their infrastructure and new risks of vulnerabilities with everyone away from the safe haven of their office. Increasing demands on monitoring networks and applications put pressure on IT organizations to provide solutions to ensure business continuity and security. Riverbed’s Network Performance solutions helped customers overcome these challenges and, more importantly, helped customers through challenges when other products proved to be insufficient or exposed vulnerabilities.

The word “hybrid” has taken on a new meaning this past year and we have proven that many industries can and will work from anywhere. The need to create a secure and seamless experience for employees and customers, both in the office and remotely, will remain part of the fabric of our work life going forward.

Another lesson from the pandemic was adapting sales and customer relationships to this new environment. Creating a connection with the customer without being face to face can be challenging. However, new creative communication vehicles became critical for developing and maintaining customer connections, and establishing value in a virtual environment became the new art. Through virtual customer events, sales kickoffs, partner summits and customer executive sponsorships, we were able to stay ahead of the curve and provide our sales teams and customers what they needed to be successful. We began to master virtual events and create experiences that could scale much broader than being trapped in a single location. This allowed us to share experiences with broader audiences that may not have an opportunity to travel even in normal circumstances.

The pandemic has been challenging for everyone on both a personal and professional level. It has forever changed us, the way we live and the way we work. As leaders in the business, the crisis has changed the way we communicate, engage and lead in times of ambiguity and uncertainty, with a higher sense of empathy and purpose. As we look ahead to a world where the pandemic is not at the forefront, I know there is a new normal on the horizon, and we have learned a great deal this year that will help us all navigate what’s next. Finally, we must all take note of our experiences and pass our learnings on to the next generation of business leaders to prepare them for something that we did not have the opportunity to prepare for ourselves.

]]>
Troubleshoot Financial Services Applications with AppResponse https://www.riverbed.com/blogs/troubleshoot-financial-services-apps-with-appresponse/ Fri, 19 Mar 2021 12:30:00 +0000 /?p=16658 The financial services industry has led the way in network security and data encryption, but it also relies on frequent transactions that need to go through no matter what. Troubleshooting financial services applications can be difficult because it means having visibility into traffic that’s usually encrypted.

Riverbed’s Network Performance solution can decrypt certain types of traffic, giving network operators the visibility they need to troubleshoot problems quickly. AppResponse, part of our Unified NPM suite, focuses on application performance monitoring and provides continuous full-fidelity packet capture of targeted applications. With AppResponse, no data is lost.

Network visibility derived from a completely reliable packet capture is very powerful, but AppResponse can go further by decrypting certain PFS, SSL, and TLS traffic. This gives network operators the ability to troubleshoot problems with traffic that they would otherwise be blind to.

Decrypting Application Traffic

First, IT provides the private server key to AppResponse, which can then derive the session key and decrypt non-PFS traffic in real time. And though non-PFS cipher suites are actively discouraged today, they’re still commonly used in enterprise environments.

Next, to decrypt traffic that does use PFS, AppResponse exposes an API that allows an external entity such as an SSL proxy to send ephemeral keys to it. Typically, this means deploying software agents to Linux and Windows systems, which then forward the ephemeral session keys to AppResponse. We can also run a relatively simple script on an F5 load balancer to send the necessary keys.
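The general mechanism for exporting ephemeral TLS secrets is well standardized. The sketch below formats and parses entries in the NSS key-log format (the same `CLIENT_RANDOM` format Wireshark consumes via SSLKEYLOGFILE). AppResponse’s own key-ingest API is not shown here; this simply illustrates what a per-session secret export looks like, with invented key material.

```python
import binascii

def keylog_line(client_random: bytes, master_secret: bytes) -> str:
    """Format one NSS key-log entry: CLIENT_RANDOM <32-byte random> <48-byte secret>."""
    if len(client_random) != 32 or len(master_secret) != 48:
        raise ValueError("TLS 1.2 client_random is 32 bytes, master_secret is 48 bytes")
    return "CLIENT_RANDOM {} {}".format(
        binascii.hexlify(client_random).decode(),
        binascii.hexlify(master_secret).decode(),
    )

def parse_keylog_line(line: str):
    """Recover the binary client random and master secret from a key-log entry."""
    label, rand_hex, secret_hex = line.split()
    if label != "CLIENT_RANDOM":
        raise ValueError("unsupported key-log label")
    return binascii.unhexlify(rand_hex), binascii.unhexlify(secret_hex)

# Invented key material for illustration only
rand, secret = b"\x01" * 32, b"\x02" * 48
line = keylog_line(rand, secret)
# The client random in the entry lets a decryptor match the secret to a captured session.
```

A decryption engine keys each entry by the client random seen in the captured TLS handshake, which is why per-session exports like this work even when the server’s long-term private key is useless against PFS cipher suites.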

AppResponse isn’t able to decrypt all public web traffic, but for internal applications it can see what’s happening with encrypted traffic on a transaction-by-transaction basis. Whether the issue is with the network, the server environment, or the end-user’s client, AppResponse can provide granular visibility into every component of an IP conversation.

Finding Correlation with AppResponse

When we open AppResponse, we start with a view of all traffic. We can locate our applications on the list in the lower left of the page, or we can open the Insights menu and select Applications there.

View all traffic and applications from the main screen of AppResponse

If AppResponse has the API key and/or server key, it will be able to show a network operator details for a secure application with full fidelity and granularity. For example, notice in the image below that we can see the transaction metrics of encrypted application traffic including page times, payload transfer times, and server response times.

See the transaction metrics of encrypted application traffic

We can also visualize patterns in network activity, which is a great way to see if there’s a correlation between specific metrics and application behavior. We call this a TruePlot visualization and it can be modified to focus on specific metrics or date ranges.

Use TruePlot visualization to see correlation between specific metrics and application behavior.

Correlation is just a clue, though, so from AppResponse a network operator can select a single transaction and launch Transaction Analyzer, a companion tool that allows us to look at every single step in an IP conversation.

Going Deeper with Transaction Analyzer

Transaction Analyzer can look at specific protocols and applications to provide a readout of everything going on between two hosts. For example, a financial services organization experiencing slow application performance can start with AppResponse to identify the possible cause of the behavior. Then they can use Transaction Analyzer to drill down into the back-and-forth communication between the specific client and server. Look at the image below and notice how easily we can focus on one transaction.

Transaction Analyzer makes it easy to drill down into the back-and-forth communication between the specific client and server.

Because AppResponse stores all of the packets captured when an application is in use, we can use Transaction Analyzer to get as granular as we need to. Transaction Analyzer works in real time, so we can also use it to take ad-hoc traces between any hosts in the network as the problem is happening.

Network security and data encryption are certainly important to the financial services industry, but so is the ability to resolve application performance problems as quickly as possible. Blind spots in network activity aren’t an option when every application transaction is money.

AppResponse and Transaction Analyzer, two foundational components of the Riverbed Unified Network Performance Management solution, provide IT the ability to troubleshoot encrypted application problems in real time and keep the business moving.

Visit our content hub to learn more about our solutions for financial services organizations.

]]>
365 Days of 100% Remote Work https://www.riverbed.com/blogs/365-days-of-100-percent-remote-work/ Tue, 16 Mar 2021 19:22:32 +0000 /?p=16779 Just over a year ago today, I wrote a note to our employees that we were planning to close our offices for at least two weeks due to COVID-19. While we didn’t know it then, two weeks would turn into 52 weeks, and still counting.

As CEO of Riverbed, and in every leadership role throughout my career, I never went a week without working at headquarters, visiting another office, or meeting in person with customers, partners or investors. But for the last year, I have worked 100% from home, have not used a traditional office phone once, and only on a few occasions have seen a member of my leadership team in person. Our entire organization has been remote for a full year! After dozens of meeting interruptions from my dogs and 365 pots of coffee, here are some of my lessons learned and what’s next for the future of how we work.

People always come first

When the pandemic hit, our top priority was the health and safety of employees, customers, partners, and the community. And in times of crisis, the role and importance of communications is crucial. We communicated to employees frequently and transparently in town halls, Q&A forums, BU sessions, staff meetings, one-on-ones. Things were changing fast—by the hour at times. It was important to the leadership team to stay close to our people and help them through this challenging time. We also asked our employees to stay close to our partners and customers—many who were also dealing with challenging and stressful times at home and work. When you are there for your customers, partners and employees during tough times, they remember.

We worked 100% remote and stayed very productive

At the onset of the pandemic, we did an initial test with one business unit in early March, and a week later, every single employee was remote. I believed we’d be okay, but this was new territory for our company, our systems, managers and culture. What we experienced with our team is that when you have high performers, they perform and deliver regardless of location.

Fortunately, our company also offers software that helps. Client Accelerator delivers application acceleration to mobile workers by optimizing laptops and PCs; and SaaS Accelerator boosts the performance of popular SaaS applications such as Microsoft Office 365 and Salesforce by reducing network latency. These solutions, along with video and collaboration apps gave our employees an in-office experience at home and ensured our team was able to stay highly productive even in the midst of a pandemic where we were 100% remote.

Business does not stop in a pandemic

It may pause momentarily, but it keeps moving. You can take the opportunity or be paralyzed by it. Riverbed, like many others, took the challenge and navigated the business through the storm. There were good days and there were turbulent days. But we kept going, and I’m very proud of our team’s perseverance. We pivoted to meet the pressing needs of customers, leveraged video collaboration tools for sales calls, and did a lot of contingency planning.

For our customers, we placed greater focus on our work-from-anywhere solutions as well as our Network Performance offerings. As employees started working remotely, organizations needed greater visibility across the network to ensure users were up and running and that productivity and performance were not impacted. Additionally, enterprise and government customers are also finding that as more users are remote, the security perimeter greatly expands. Leveraging network visibility can play an important role in identifying and mitigating cybersecurity threats by helping with threat hunting, incident response and forensics.

We’re capable of doing a lot more than we think

When I worked at Mitel, we were planning for a day when collaboration tools in business would become ubiquitous and digitization would drastically change business models. We were beginning to see progress, but mass market adoption looked to be five years out. And then COVID hit, and a five-year build happened in THREE months! Businesses, healthcare organizations, government agencies, manufacturers, banks, retailers all evolved their business models, while supporting one billion remote workers overnight (up from approximately 350 million).

There were many heroes. As a CEO of a technology solutions provider, I can’t say enough about our customers—and how big of a role the CIO and IT organizations played in helping businesses and governments handle this massive and immediate change, and help maximize productivity and performance in their organizations during a very tough environment. Yes, we are capable of doing a lot more than we think.

The future of work will be different and better

Post COVID, many organizations will shift toward hybrid models, with employees increasingly remote or working from anywhere (#WFA). Offices won’t fully vanish. However, increasingly HQ and regional offices will become collaboration centers, with employees coming in for critical meetings or projects, using large hoteling and collaboration spaces. This is the direction Riverbed is moving. Prior to COVID, approximately 70% of our employees worked in offices full-time. After COVID, this will drop to 20%, with most employees working remote and coming into the office a day or two a week. While the percentages may vary by industry and region, and there are still roles that are best served in person, there is a clear shift toward hybrid and remote work, with 600 million people expected to work remote by 2024, up approximately 70% from pre-pandemic levels. And while traveling to meet customers and colleagues will still remain important, we will also find ourselves doing some of those meetings face to face using video.

What’s encouraging is the future of work will bring forward a number of benefits we always strived for—fewer commutes for better work/life balance and less impact on the environment; greater conveniences and experiences with new digital models; and the democratization of talent, where opportunities once out of touch due to location will be within reach for future generations. With the future of work, we are starting to unlock the true promise of technology.

In closing, this has been a very challenging year with a global pandemic that has impacted so many. But it has also taught us many lessons in business and life—what matters most, what we’re capable of, and how the future of work will be better for us and our world.

]]>
Troubleshoot Multi-cloud Applications with AppResponse https://www.riverbed.com/blogs/troubleshoot-multi-cloud-applications-with-appresponse/ Mon, 15 Mar 2021 12:30:00 +0000 /?p=16629 For years network visibility has been about looking at traffic flowing through switches and routers on our local and wide area networks. Those of us who were a little ambitious might also take a packet capture every so often only to spend the entire afternoon studying hundreds of lines in our capture file.

However, the way we do business today requires visibility beyond our switches and routers. We need visibility into what’s going on with our resources in the public cloud.

Hosting resources in Azure and AWS poses its own unique challenges to an IT department. It’s difficult to troubleshoot what can’t be seen, and that’s exactly where Riverbed’s AppResponse visibility solution comes in.

Visibility for the Cloud

IT departments don’t own the environment their cloud resources reside in. An IP conversation between an end user and a server in Azure will traverse the local network, the service provider’s network, and the cloud’s network.

It’s difficult to see what’s going on with servers and network devices when almost the entire environment belongs to someone else.   

AppResponse solves this problem by continuously capturing traffic between cloud-hosted applications and everything they communicate with. The packets never lie, so with a reliable, full-fidelity capture of activity at the most granular level, a network operator has both real-time and historical data down to microsecond intervals.

This works in hybrid cloud and multi-cloud environments as well. A hybrid cloud is a combination of a private data center and public cloud environment, and that has become a standard deployment method for many organizations. Multi-cloud environments introduce complexity by virtue of having multiple public cloud vendors often using disparate technologies.

AppResponse Cloud captures traffic in hybrid and multi-clouds just as well as it does for local resources because it focuses on packets, the purest and highest fidelity information that exists.

How it Works

AppResponse is deployed as a virtual machine in Azure or AWS and works with Riverbed Agents, AWS VPC Traffic Mirroring, Azure Virtual Network Taps, and various cloud brokers. In this way, AppResponse can capture all the packets flowing to and from a particular cloud-hosted application.

Keep in mind that Riverbed’s entire visibility portfolio is really one solution with several tools under the hood. AppResponse integrates with our other visibility tools such as NetProfiler, NetIM, Transaction Analyzer, Packet Analyzer, and Riverbed Portal. For external domain information, AppResponse links to ARIN WHOIS Search, Geotool, and a variety of third-party network tap aggregators.

When network operators open up the AppResponse dashboard, right away they’re presented with a wealth of information for their cloud applications. Drilling down is a matter of choosing an application, a time range, an IP address, or whatever information is relevant.

Network teams can view trends, patterns, baselines and also drill down into individual transactions and packets for root-cause analysis. Because they’re looking at individual packets between clients and application servers, they get a much better picture of what’s happening end-to-end. That means with AppResponse Cloud, they can determine if the problem is with the network, the application, or with the client.

Packet captures on LANs are certainly very powerful for troubleshooting, but being able to capture application traffic in hybrid and multi-cloud environments brings us to the next level. Today’s applications are hosted in on-premises data centers, in hybrid cloud, and in multi-cloud environments, so visibility means seeing what’s going on beyond just our switches and routers. AppResponse enables us to find the root cause of performance problems with our applications—no matter where they are.


]]>
AppResponse Cloud Supports Amazon Virtual Private Cloud https://www.riverbed.com/blogs/appresponse-cloud-supports-amazon-virtual-private-cloud/ Wed, 10 Mar 2021 13:30:00 +0000 /?p=16635 With users forced to work from home for nearly a year, and many never returning to the office, it should come as little surprise that infrastructure as a service (IaaS) grew 13.4% to $50.4 billion in 2020, according to Gartner.[1] The effects of the global economic downturn are intensifying organizations’ urgency to move off legacy infrastructure operating models, with most organizations turning to cloud system infrastructure services. In fact, almost 70% of organizations using cloud services today plan to increase their cloud spending in the wake of the disruption caused by COVID-19.[2]

Supports Observability-enabling Technology

Launched in 2019, Virtual Private Cloud (VPC) Traffic Mirroring allows AWS customers to gain native insight and access to the network traffic across their VPC infrastructure for network and application performance analysis, and threat monitoring. With this feature, customers can copy network traffic from an Elastic Network Interface (ENI) of supported compute instance types in their VPC and send it to Riverbed AppResponse for network and application analysis in order to monitor and troubleshoot performance issues.

AppResponse Cloud provides rich, unparalleled network and application visibility into AWS environments. It enables IT Operations to quickly pinpoint performance degradations and high latency in cloud and hybrid networks. AppResponse Cloud automatically identifies more than 2,500 applications for detailed application analysis, helping teams identify and troubleshoot network issues faster. AppResponse Cloud supports a number of packet sourcing options; chief among them is AWS-native VPC Traffic Mirroring. This allows you to replicate the network traffic from EC2 instances within your VPC to security and monitoring appliances for use cases such as content inspection, threat monitoring, and troubleshooting.

Amazon is expanding the availability of this critical observability-enabling technology. Until now, customers could only enable VPC Traffic Mirroring on their Nitro-based EC2 instances. With this announcement, customers can now enable VPC Traffic Mirroring on select non-Nitro instance types such as C4, D2, G3, G3s, H1, I3, M4, P2, P3, R4, X1 and X1e that use the Xen-based hypervisor (it is not supported on the T2, R3 and I2 instance types or previous-generation instances). This feature is available in all 22 regions where VPC Traffic Mirroring is currently supported.
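As a quick illustration of the support matrix above, the eligibility check can be sketched in a few lines. This is a hypothetical helper based solely on the instance families listed in this announcement, not a Riverbed or AWS API:

```python
# VPC Traffic Mirroring is supported on Nitro-based instances and on the
# Xen-based families listed above, but not on T2, R3, I2 or earlier generations.
SUPPORTED_XEN_FAMILIES = {"c4", "d2", "g3", "g3s", "h1", "i3",
                          "m4", "p2", "p3", "r4", "x1", "x1e"}
UNSUPPORTED_FAMILIES = {"t2", "r3", "i2"}

def supports_traffic_mirroring(instance_type: str, is_nitro: bool = False) -> bool:
    """Return True if this EC2 instance type can act as a Traffic Mirroring source."""
    if is_nitro:
        return True  # all Nitro-based instances were already supported
    family = instance_type.split(".")[0].lower()
    if family in UNSUPPORTED_FAMILIES:
        return False
    return family in SUPPORTED_XEN_FAMILIES

print(supports_traffic_mirroring("m4.xlarge"))  # True
print(supports_traffic_mirroring("t2.micro"))   # False
```

In practice you would query the instance metadata (or the EC2 API) rather than hard-code the list, since AWS continues to extend support.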

Public cloud-based infrastructure has become the dominant platform for delivering mission-critical IT applications and services. Broader availability of VPC Traffic Mirroring enables Riverbed AppResponse Cloud to keep up with that trend and deliver end-to-end network and application analysis for the growing diversity of cloud compute infrastructure.

To test drive AppResponse Cloud with VPC Traffic Mirroring, please contact Riverbed sales.


[1] https://www.gartner.com/en/newsroom/press-releases/2020-07-23-gartner-forecasts-worldwide-public-cloud-revenue-to-grow-6point3-percent-in-2020#:~:text=The%20second%2Dlargest%20market%20segment,of%20legacy%20infrastructure%20operating%20models.

[2] https://www.gartner.com/en/newsroom/press-releases/2020-11-17-gartner-forecasts-worldwide-public-cloud-end-user-spending-to-grow-18-percent-in-2021

]]>
Cyber Security Threat Hunting Using Network Performance Management Metrics https://www.riverbed.com/blogs/threat-hunting-using-network-performance-metrics/ Mon, 08 Mar 2021 09:30:38 +0000 /?p=16640 If you are familiar with Network Performance Management (NPM) metrics, you’ll recognize the following key performance indicators. But did you know that these same KPIs, along with many other metrics, are helpful for cyber security threat hunting?

  • Top-Talkers,
  • IP Addresses
  • Typical Port and Protocol Usage
  • HTTP Return Code Ratio
  • Traffic Volume Metrics, and many more…

Threat hunting is what cyber security analysts do—but they need data sources that can't be compromised, like full-fidelity network wire data or network flow data. Why network wire data? It is clean and consistent across the network. Attackers can manipulate logs and event sources, and break through deployed security infrastructure, but they can't manipulate network packet/wire data.

Let’s focus on two key aspects of cyber security:

1. Threat Hunting: Proactive threat identification applies new intelligence to existing data to discover unknown incidents.

What you should be looking for: Threat intelligence often contains network-based indicators such as IP addresses, domain names, signatures, URLs, and more. When these are known, existing data stores can be reviewed to determine if there were indications of the intel-informed activity that warrant further investigation.
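This intel-matching step reduces to intersecting observed network indicators with the intelligence feed. A minimal sketch, assuming indicators are simple strings (IPs or domains) and flow records have already been reduced to a set of observed values (all data here is hypothetical):

```python
def match_indicators(observed, intel_indicators):
    """Return the observed network indicators that also appear in a threat intel feed."""
    return sorted(set(observed) & set(intel_indicators))

# Hypothetical data: destinations seen in flow records vs. an intel feed.
observed = {"198.51.100.7", "example.org", "203.0.113.9"}
intel = {"203.0.113.9", "malicious.example"}
print(match_indicators(observed, intel))  # ['203.0.113.9']
```

Any hit is a starting point for deeper review of the surrounding packet and flow history, not proof of compromise on its own.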

2. Post-Incident Forensic Analysis: Reactive detection and response examines existing data to more fully understand a known incident.

What you should be looking for: Nearly every phase of an attack can include network activity. Understanding an attacker's actions during each phase can provide deep and valuable insight into their actions, intent, and capability.

Why Threat Hunting is Important

No evidence of compromise does not mean evidence of no compromise. Hackers are always busy trying to avoid detection. You don’t know today what you’ll need to know tomorrow! You need to investigate. If you are not putting telemetry in place, you don’t have a recording of what’s happening, which means you will not see who’s doing what, with whom, etc.

If you have a Network Performance Management background and are not a professional threat hunter, then let’s start by describing the phases of an attack and how the attacker sees your network. There are seven specific phases of cyber attacks, several of which include network activity:

  1. Reconnaissance (recon) to know the target
  2. Scanning to find something attackable
  3. Gaining an initial point of compromise into the target network to create a foothold and use it for a pivot point for additional recon and scanning
  4. Pillaging the network for valuable resources (e.g., useful info, internal DNS, username enumeration, passwords, other attackable machines)
  5. Extracting valuable data from the network (i.e., data exfiltration)
  6. Creating back doors to stay in the network, including creating listeners and/or backdoor C2 channels, installing software, maintaining persistent access
  7. Covering tracks by cleaning up logs, backing out of changes, and patching systems

You’ll notice many familiar KPIs related to network performance management. That’s because nearly every phase of a cyber attack can include network activity—which is why monitoring for traffic anomalies is a great starting point for threat hunting.

Practical Advice

Here are a few examples of how Riverbed Network Performance can help you leverage network KPIs for threat intelligence and hunting:

  • Top-Talking IP Addresses (data source: Full-Fidelity NetFlow)
    Existing usage: The list of hosts responsible for the highest volume of network communications in volume and/or connection count. Calculate this on a rolling daily/weekly/monthly/annual basis to account for periodic shifts in traffic patterns.
    Threat hunting usage: Unusually large spikes in traffic may suggest exfiltration activity, while spikes in connection attempts may suggest Command & Control activity.

  • Traffic Volume Metrics (data source: Full-Fidelity NetFlow)
    Existing usage: Maintaining traffic metrics on time-of-day, day-of-week, day-of-month, and similar bases. These will identify normative traffic patterns, making deviations easier to spot and investigate.
    Threat hunting usage: A sudden spike of traffic or connections during an overnight or weekend period (when there is typically little or no traffic) would be a clear anomaly of concern.

  • Top DNS Domains Queried (data source: Network Wire Data & Full-Fidelity NetFlow)
    Existing usage: The most frequently queried second-level domains based on internal clients' request activity.
    Threat hunting usage: In general, the behaviors of a given environment don't drastically change on a day-to-day basis, so the top 500-700 domains queried on any given day should not differ too much from the top 1,000 from the previous day. Any domain that rockets to the top of the list may suggest an event that requires attention, such as a new phishing campaign, C2 domain, or other anomaly.

  • Typical Port and Protocol Usage (data source: Full-Fidelity NetFlow)
    Existing usage: The list of ports and corresponding protocols that account for the most communication in terms of volume and/or connection count. Calculate this on a daily/weekly/monthly/annual basis to account for periodic shifts in traffic patterns.
    Threat hunting usage: As with top-talking IP addresses, knowing the typical port and protocol usage enables quick identification of anomalies that should be further explored for potentially suspicious activity.

  • HTTP GET vs POST Ratio (data source: Network Wire Data)
    Existing usage: The proportion of observed HTTP requests that use the GET, POST, or other methods. This ratio establishes a typical activity profile for HTTP traffic.
    Threat hunting usage: When the ratio skews too far from the normal baseline, it may suggest brute-force logins, SQL injection attempts, server feature probing, or other suspicious/malicious activity.
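The first two KPIs above can be sketched in a few lines of code. This is illustrative only, with hypothetical flow records and an arbitrary spike threshold, not the NetProfiler implementation:

```python
from collections import Counter

def top_talkers(flow_records, n=5):
    """Rank source IPs by total bytes sent, from (src_ip, bytes) flow records."""
    totals = Counter()
    for src_ip, nbytes in flow_records:
        totals[src_ip] += nbytes
    return totals.most_common(n)

def volume_spike(baseline_bytes, observed_bytes, threshold=3.0):
    """Flag a host whose observed volume exceeds `threshold` times its baseline."""
    return observed_bytes > threshold * baseline_bytes

# Hypothetical flow records: (source IP, bytes transferred)
flows = [("10.0.0.5", 4_000), ("10.0.0.9", 120_000), ("10.0.0.5", 6_000)]
print(top_talkers(flows, n=2))
print(volume_spike(baseline_bytes=30_000, observed_bytes=120_000))  # True
```

In a real deployment the baseline would be the rolling daily/weekly profile described above, so periodic traffic shifts don't trigger false alarms.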

Network forensics is a critical component for most modern incident response and threat hunting work. Network data can provide decisive insight into the human or automated communications within a compromised environment. Network forensic analysis techniques can be used in a traditional forensic capacity as well as for continuous incident response/threat hunting operations.

What you really need is complete data so threat hunting can be meaningful, not sampled data that retains only statistics. It's best to use Riverbed AppResponse and NetProfiler to start collecting full-fidelity network packet and network flow data for threat hunting.

Riverbed NetProfiler Advanced Security Module is a full-fidelity network flow solution that watches for changes in behavior. These changes could be new services on a sensitive host, connections to untrusted systems, or unexpected data movement. The network fingerprinting process creates a statistical profile of network connections to identify the abnormal sessions.
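The "new service on a sensitive host" case reduces to a set difference against the fingerprinted baseline. A toy sketch with hypothetical host/port data (not the NetProfiler API):

```python
def new_services(baseline, observed):
    """Return (host, port) pairs seen now but absent from the baseline profile."""
    return sorted(set(observed) - set(baseline))

# Hypothetical fingerprint: services normally offered by a sensitive host.
baseline = {("db01", 5432), ("db01", 22)}
observed = {("db01", 5432), ("db01", 22), ("db01", 4444)}
print(new_services(baseline, observed))  # [('db01', 4444)]
```

An unexpected listener like port 4444 on a database host is exactly the kind of behavioral change that warrants investigation as a potential backdoor or C2 channel.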

The threat hunting process is data- and time-intensive. Focus on filtering key assets, unique threat identifiers, or other known aspects in the search—these are great starting points for threat hunting!

]]>
Network Admin Nirvana: Fixing Performance Issues Before Users Notice https://www.riverbed.com/blogs/fix-performance-issues-before-users-notice/ Mon, 15 Feb 2021 00:00:00 +0000 /?p=16538 For many organisations, the status quo involves IT teams troubleshooting and firefighting network and application performance issues after users have called the help desk to complain about them.

This reactive modality can create a material impact on user productivity and has a knock-on effect on the organisation's customers and partners if the issue is pervasive. Unfortunately, this is not uncommon. In recent times, we have seen multiple banking outages prevent customers from accessing their funds, credit cards, payment systems, and online and mobile banking platforms due to technical glitches. Even worse, what if technical glitches prevented access to patient medical records or heavy machinery equipment?

The exponential growth of digital transformation will continue, but so will the demands on IT, making it ever more necessary for organisations to get proactive about the health of their network and application performance.

What would it mean to your business if you were able to spot trends and have the insights to forecast degradation of network and application performance?

Organisations forecast their performance to measure their business health and investments as standard practice. The same cadence should be implemented for the organisation’s IT ecosystem to be able to predict and identify potential performance problems before users even notice.

In his recent article on ensuring productivity of staff working from home, Pinpointing Application Performance Issues with Unified NPM, Leigh Finch discussed how the combination of technology tools, processes and people gives you an effective solution for overcoming challenges on an ongoing basis.

Getting all of the data all the time

Data is key. By continuously monitoring the right, accurate data, all the time, we can generate insights into IT ecosystem health and performance.

As Leigh said in his article, degraded application performance is not always a network problem. With Riverbed's Network Performance solution, you are able to continuously collect and monitor all packet data, flow data and device metrics and stitch the data together into meaningful insights.

Only an integrated (or unified) view of flow data, packet data and device metrics can provide the multiple perspectives needed to see the data from different angles. This telemetry consists of:

  • Network flow data, such as NetFlow, jFlow and IPFIX, exported from routers, firewalls, etc.
  • Device metrics, such as SNMP, WMI, etc., polled from virtually any network device
  • Packet data collected from on-premises, virtualised and cloud environments

That way nothing is missed—performance monitoring should be ‘always on,’ continuously monitoring everything everywhere.

Tools that sample data or do not scale across the entire IT ecosystem are likely to be inefficient or ineffective. Organisations should consider the potential impacts to the business if the right data is not there at the right time during periods of network and application degradation.

It’s better to have data that you don’t need than not to have data you do.

Getting the right data

To become more mature about dealing with performance health, the key is broader data visibility and multiple perspectives—with the right sort of insights from it to make actionable decisions. You want to anticipate and resolve degradation quickly. In other words, see it before your users, customers and partners do.

Imagine a non-typical scenario. A user calls the help desk to flag an issue. Instead of the help desk taking them through a routine of items to test, services to reboot, etc., they are able to tell the user that “we are aware of an issue with this application and we have identified the source. We expect to have it fixed in X minutes or Y hours.” The help desk can even let them know by email that a particular application is ‘acting up’ and under investigation to save them the trouble.

This shifts the paradigm, and the perception of IT's criticality to the business, so often overlooked, would positively change.

I’ve worked with a number of our customers who have made this paradigm shift and transitioned from being ‘reactive’ to ‘proactive’ using Riverbed’s Unified NPM solution.

Different teams and key stakeholders have visualisations of the right data: intelligent and insightful information that each needs—all rolled up into relevant fit-for-purpose dashboards, such as:

  • High-level overview for the C-execs
  • Business-centric and application views for lines of business owners
  • Application-centric and performance detail views for application owners
  • Network and device health views for network and infrastructure teams
  • Key performance metrics and trending analysis views for operational teams

This enables them to get very accurate about what’s actually going on so potential issues can be identified and resolved fast.

Stronger security is a bonus

Getting all of the data also enables you to spot anomalous or bizarre behaviour within your network, such as a user accessing a server they've never accessed before. This gives you a stronger security posture because you can detect an intrusion as it's actually happening, rather than waiting for IDS/IPS reports to come in, which only look for signatures of something outside your perimeter trying to get in. But that's a whole other topic for another article…

If you’d like to know more, our recent webinar Network Performance Metrics That Matter is available on demand.

]]>
Pinpointing Application Performance Issues with Unified NPM https://www.riverbed.com/blogs/pinpoint-app-performance-issues-with-unified-npm/ Thu, 11 Feb 2021 22:03:00 +0000 /?p=16511 With employees and business partners working from home due to the pandemic, non-enterprise internet connections have exacerbated the issue of application performance. If services to your premises are degraded, you have recourse to your communication service providers. But if your users are dependent on mobile broadband, ADSL or NBN links, network performance is far from guaranteed—which can lead to frustration and reduced productivity.

However, application performance issues are not always caused by communication links. Sometimes issues and delays are created by the applications themselves, or the third-party services they depend on. In other instances, organisations delivering application services to anyone off their network need to identify and eliminate application performance issues for their own competitive advantage.

The challenge is how to comprehensively monitor, pinpoint and diagnose any issues across network links, the application and its dependencies, to avoid unproductive finger pointing and lengthy resolution times.

‘It’s a network problem!’

Time-sensitive applications—from ERP systems to online shopping to trading platforms—are dependent on optimum performance. It’s easy to simply attribute performance issues to the network or internet access, even in these days of cheaper bandwidth. In fact, degraded application performance is often due to other issues—so throwing more bandwidth at your WAN or cloud service links won’t necessarily fix it.

It is best practice to closely monitor and troubleshoot performance bottlenecks to isolate the true causes. Think about applications that rely on an external service as part of their operations. If your application calls out to another service to check postcodes, ABNs, credit ratings or the like, you should have an SLA with that service provider. But if they are not meeting their SLAs, how can you quickly tell?

Breaking down a transaction: queries are sent to servers, which may query other servers, and responses flow back again before the final response is provided to the user. If there is an unacceptable delay at any of these stages, it is critical to pinpoint the exact stage so it can be resolved.
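Once per-stage timings have been measured (from packet timestamps or application logs), pinpointing the bottleneck is straightforward. A minimal sketch with hypothetical stage names and timings:

```python
def slowest_stage(stage_timings):
    """Given {stage: seconds of delay}, return the stage contributing the most delay."""
    return max(stage_timings, key=stage_timings.get)

# Hypothetical breakdown of one user transaction across its hops.
timings = {
    "client -> web front end": 0.04,
    "front end -> postcode service": 1.90,  # external dependency
    "front end -> database": 0.12,
}
print(slowest_stage(timings))  # front end -> postcode service
```

In this example the delay sits with the external postcode service, not the network, which is exactly the evidence you need when holding a third party to its SLA.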

Success = technology, process, people

As in many complex situations, overcoming challenges efficiently and continuously depends on a combination of factors working in concert.

To gain visibility across performance bottlenecks, you need the right technology. Riverbed’s Network Performance solution not only monitors network connections, it uses packet capture to be ‘application-aware’—providing you with a holistic view of the factors involved in application performance. When you can be accurate in your identification of issues, they’re faster to fix. Further, when you have evidence, you can ensure responsible third parties resolve them, without going through loops as fingers are pointed in all directions.

Stringent processes for monitoring, escalating, diagnosing and resolving issues—fast—are also essential. Over the years, Riverbed has developed the processes and methodologies for timely resolution and can share them with our customers.

Finally, your people need the right skills to go beyond network monitoring in these days of ‘working from anywhere’ to rapidly diagnose and resolve—or escalate to the right service provider—in real time. Again, Riverbed offers consulting services for skills transfer or ongoing support, as well as training services to get your team up to speed.

If you’d like to know more about the possibilities of application-aware network performance monitoring, our recent webinar Network Performance Metrics That Matter is available on demand and is a good start. Or, if you would like a demonstration of our Unified Network Performance Management solution, talk to your ICT service provider or contact us.

]]>
Evolving IT Operations to Support New Ways of Working https://www.riverbed.com/blogs/evolving-it-operations-to-support-a-hybrid-workplace/ Fri, 05 Feb 2021 16:19:21 +0000 /?p=16543 The challenges of working from home have caused organizations to reevaluate how they look at networks for enterprise workloads and hybrid workplaces. The range of at-home networks and devices now engaged in critical business operations has grown by an order of magnitude. With more diverse and dispersed operations, IT decision-making processes—and IT teams themselves—will need to evolve to meet new technical challenges, new attitudes towards privacy, and fundamentally new ways of working.

With this in mind, here are four actions organizations must take to support the future of work:

1. Invest in deeper structural changes

Up to this point, businesses have been learning as they go when it comes to optimizing the ability of their teams to work remotely. No one expected the massive disruption that COVID-19 caused, so there was never any detailed plan regarding how to optimize existing IT infrastructure for work-from-home environments. With no definitive end to the pandemic or the WFH experiment, many organizations opted for a patching approach—making small fixes as the need for them became obvious. This may have been acceptable at first, but as continual data breaches and security mishaps have taught us, a patching approach won’t cut it as a viable, long-term IT strategy.

Instead, organizations need to take a deeper look at their core operating models and invest in structural changes that will prepare them for the future of work. We're at a point where the scales are finally tipping, and decision makers recognize that the ROI for making these changes is far greater than continuing to make small fixes in the hopes that the old ways of working will return. This is an important moment in the story that began in March 2020 and we'll look back at it as a time when the 'winners' laid the groundwork necessary to emerge from the pandemic as truly evolved, resilient enterprises.

2. Move enterprise networks and workplace policy ‘closer to home’

What does this look like in practice? One fundamental change organizations will make is to offer WFH-conducive alternatives to in-office enterprise networks. While the concept of BYOD has been around for some time, its definition has changed with COVID-19. Working from home has created scenarios where individuals using two different devices may be regularly tapping into the same home network to access proprietary or otherwise sensitive information from two different organizations. Employees are also often using the same device and network for both personal and work-related tasks.

How do you ensure the security of proprietary data and separation of personal digital identities from professional digital identities? The answer may lie in dedicated 5G networks that remote employees can access from their personal devices. This gives companies a single dedicated network to focus their security efforts and may help keep personal data flows separate from enterprise-specific activity, while also addressing at-home bandwidth issues. With dedicated 5G networks or other solutions, hard boundaries (both for the network and for workplace policy) will need to be established between personal and professional digital identities. This will require new kinds of digital workplace norms, organization-wide understanding of security, and intelligent IT policy working together to ensure that employees are both protected and empowered in hybrid work environments.

3. Establish privacy as its own business category

Privacy has long been placed under the broader security umbrella when it comes to corporate policy, team responsibilities, and investment strategy. With the growing impact of GDPR and new conversations started by the shared experience of working from home, privacy considerations are branching out into their own category and sometimes even find themselves at odds with security interests. Going forward, these distinctions will become even clearer as organizations settle on the extent of visibility they can and will impose on employees working remotely.

Stronger consumer privacy rights, highlighted on the political stage by the Big Tech Senate hearings, may push employees to advocate for similar protections within their companies. This will create the need for more Chief Privacy Officers and privacy-focused teams down the chain of command that understand local regulations and the distinct challenges and sensitivities around privacy. These challenges will reinforce the need for the kind of distinct digital identities discussed earlier, and how organizations choose to articulate their privacy posture can have an impact on the company culture writ large.

4. Evaluate SD-WAN in the context of hybrid work environments

With changing employee expectations and many organizations now realizing that they can stay productive while working remotely, a shift to hybrid, mobile-first environments in many industries is inevitable. We'll see scenarios where employees go into the office once or twice a week, causing enterprises to want to rent, rather than own, much of their IT infrastructure. This will create a new demand for multi-tenant SD-WAN environments. Two primary capabilities of SD-WAN—connecting branches with data centers and onboarding to the internet—will need to be more deeply explored in the context of hybrid work environments. Whether SD-WAN deployments will slow remains to be seen. What is clear is that the relationships between IT teams, SD-WAN vendors, and other solution providers will need to evolve to meet the new needs of a hybrid workforce.

Looking back to look ahead

The changes and challenges of 2020 hit the enterprise at breakneck speed. While organizations have adapted quickly and admirably, many are still taking a thorough look at their performance. Rather than a sign of what's to come, the past year is an indication of what's already here, and here to stay. Decision makers will need to reflect quickly, develop clear strategies around privacy, BYOD, SD-WAN, and network performance, and then make investments to support their workforce as it continues to evolve.

]]>
Takeaway Only! The Parallels of Restaurants and IT During Lockdown https://www.riverbed.com/blogs/parallels-of-restaurants-and-it-during-lockdown/ Thu, 04 Feb 2021 16:52:01 +0000 /?p=16515 Like most Melburnians and other people around the world, visiting a local restaurant for a leisurely group meal is a strong part of my family’s social life. For much of 2020, this has been impossible as restaurants and cafes that have stayed open during lockdown bear ‘Takeaway Only!’ signs. This has led many of us to try to recreate the restaurant experience in our own homes.

With the mass evacuation of corporate offices so we can practice social distancing, our work is also in ‘takeaway only’ mode. With the implementation of work-from-home (WFH) initiatives, the initial focus of IT teams was on equipping staff with laptops and establishing access and security. As individuals, we had to work out the best place for necessary work equipment in our homes—often having to fit in with housemates, partners and children home-schooling.

Once equipped and settled, we were subject to unmonitored home internet connections of every description put under great pressure from the significant increase in video conferencing with our colleagues, as well as the conflicting needs of others in the household.

The question for IT teams turned to “How can we recreate an in-office experience for our WFH staff in terms of application performance and productivity?”

First look at the network

Many organisations accelerated their plans—or initiated new ones—to place workloads in the public cloud during the early months of 2020 in an effort to make them more accessible for distributed employees. As VPN and remote user infrastructure now had to support many more users, more consistently, there was increased reliance on IaaS and SaaS.

This involved reconfiguring links between these workloads and the enterprise data centre—making monitoring and management to optimise application performance in new configurations a critically important task. Access and security also required modification, increasing the amount of change and therefore potential performance degradation.

Clear visibility over a rapidly evolving network is essential to assure acceptable performance. Riverbed Network Performance tools are able to help you to understand just what the user experience should look like, and to identify where and why performance issues happen. This knowledge enables you to pinpoint actual or potential bottlenecks, providing the forensic evidence to present to your network service providers for rapid resolution.

Then look at remote connections

With infinitely greater numbers of employees now relying on their home connections to work productively, there is a simple way to deliver performance. Riverbed Application Acceleration solutions, including Client Accelerator and SaaS Accelerator, can provide an office-like experience to these workloads.

A word of caution about allowing users to accept poorer performance and productivity at home: they may simply accept that this is the way it has to be when working away from the office—but what is the cost to your business in lost productivity?

Poor performance should not be a given. One organisation that discovered this is Landform, a professional architecture and engineering services company. Landform had to factor in the real impact of reduced productivity on their revenues given they had the same wage bill, but their employees could produce less. The solution was Riverbed Client Accelerator on employee laptops and desktops, with SteelHead CX in the data centre. This enabled 92% faster opening of CAD files from home—in one case, down from 20 minutes to just 90 seconds.

Into the future

Much research indicates that this year's pandemic has escalated the move to working from anywhere—and that even when we can all go back to the office, some of us will spend less time there in the coming years. In fact, we are now at a point where many employees have settled into a pattern of WFH—so now is the time to address the productivity and performance of these new work habits. This means that the above-mentioned technologies will continue to help us ensure ongoing productivity for our people, wherever they choose to work.

Meanwhile, if you are working from home and enjoying your favourite takeaway food at the same time, try not to spill that laksa on your keyboard!

If you’d like a 60-day trial of our Client Accelerator solution, talk to your ICT service provider or visit our website.


]]>
New Data Security Challenges in the Rush to the Cloud https://www.riverbed.com/blogs/data-security-challenges-in-the-cloud/ Wed, 27 Jan 2021 14:51:16 +0000 /?p=16519 The challenges of working from home have caused organizations to reevaluate how they look at their networks and the data that lives on them. The range of at-home networks and BYO devices now engaged in critical business operations has grown exponentially, amplifying our reliance on cloud-based infrastructure and solutions and scattering our data into what is frequently the unknown.

In their rush to the cloud, enterprises will need to take into consideration three new data security challenges as they reevaluate where their data is and whether they have taken enough responsibility for it:

1. Cloud whiplash

Accelerated by the dramatic shift to remote work, organizations have been steadily moving all of their data outside the enterprise and into the cloud. What this means in reality is that all the data that makes up our digital enterprise is on someone else’s computer. With the rise of SaaS, the applications that serve as the foundation of our businesses are maintained by someone else, and although that generally ensures the security of the application, visibility into the data stored within is significantly diminished. Whereas in days past, a company had its own datacenters and computers, today the paths our corporate data takes are no longer owned by the company, and therefore no longer visible to it. And whether or not the infrastructure that is owned and operated by another company is monitored is frequently (and frighteningly) unknown.

We are already deeply relying on fundamental business applications like Office 365, Salesforce and Slack—the most used applications—moving to the cloud. Even the more tailored applications that don’t yet have a SaaS equivalent are moving from the corporate datacenter to IaaS to be consumed as a service.

As a result, we see enterprises starting to grapple with the complex questions of where their data is, who really has access to it, and how they might audit or track this. They will suddenly realize that their ability to govern data is limited at best, and that they have few processes in place to understand who is accessing what data and from where (internally and externally), and what the actual costs are. Visibility will become the new watchword.

2. Diminishing returns on cloud storage

As corporate entities, we generate an awful lot of data. Inevitably, the path of least resistance is to keep buying more and more storage to stuff all of our data into the cloud. And the reality is that all the data we create ends up stationary, i.e., “sitting around” and frequently untouched or unused for long periods of time. For example, just consider the SharePoint files of former employees. We lose sight of where that data really is, what’s happening to it, and whether or not someone may be moving it out of the organization.

We expect many enterprises will start to recognize that the path of least resistance that cloud storage represents—when not used thoughtfully and strategically—turns all that data into a liability. Companies will start to understand that we have passed the point of diminishing returns with a haphazard approach to cloud storage, both from a security and cost perspective.

In addition to acting on the understanding that not all data is worth paying to keep, especially considering its potential liability, enterprises will focus more than ever before on how they will apply cloud storage smartly, securely and affordably.

3. Think global, act local privacy

In the big picture, we have seen broad protection for consumer and individual privacy enacted through regulations like GDPR and CCPA that say people must be told what data is being collected about them. National measures in the United States have failed to pass so far, but we did see California forge ahead and New York and Massachusetts are considering following suit. But what will happen if a more progressive city, like San Francisco, decides that consumers need stronger protection of their personal data than California deemed acceptable?

We expect to see that some municipalities will begin to impose more restrictive data privacy laws than those adopted on a federal or state level. Companies that store consumer data in the cloud typically use very few, but very large, datacenters to hold all that information. Such companies, like Fitbit, may find themselves forced to find local datacenters so that they can meet new municipal requirements to do business in a city like San Francisco. In turn, we may see the large cloud service providers capitalize on this dynamic by starting microfacilities across many locations and regions in order to help their customers comply.

Doing a double-take

The changes and challenges of 2020 hit the enterprise at breakneck speed and accelerated a rush to the cloud. While organizations have adapted quickly and admirably, many will start to take a second look at what they’ve done with their data, and what they need to do going forward. In the coming years, we expect organizations will implement new ways of ensuring responsibility for data, wherever it lives.

]]>
The Future of End-to-End Network Management https://www.riverbed.com/blogs/future-of-end-to-end-network-management/ Wed, 20 Jan 2021 03:00:45 +0000 /?p=16490 Due to the global pandemic, enterprises have had to accelerate digital initiatives in a matter of weeks, rather than years, as a top priority to overhaul their business processes and transform services to deliver value to their customers and employees.

As organizations continue to support remote workforces and shift toward work-from-anywhere models and hybrid work environments, network technology will play a critical role in connecting every individual, device and organizational structure that together form the digital enterprise.

With this in mind, here are five trends that will shape the future of end-to-end network management:

1. Continued consolidation of the SD-WAN market

As markets begin to take shape and mature, it often becomes increasingly difficult for smaller players to compete as larger entities begin to invest more fully. As Covid-19 has elevated the importance of how we manage and operate networks for remote work, many smaller SD-WAN players now face increasing market pressures to enter acquisition deals with larger enterprises.

A primary example is the acquisition of SD-WAN vendor 128 Technology by Juniper Networks in October 2020, a move intended to bolster the latter’s networking portfolio. Larger vendors see significant potential for incremental business growth, in particular with big existing customers, and see acquisitions as a way to expand their roster of SD-WAN features and capabilities that they can use to expand existing service subscriptions.

In the coming year, the consolidation of SD-WAN vendors will continue as larger players such as Juniper, Cisco and HPE continue to buy up smaller players in the SD-WAN space that no longer have the resources to compete.

2. The rise of predictive operations

AI and ML have increasingly played an important role in approaches to network monitoring. We expect the value of analytics and the number of real-world implementations to continue to grow, especially for identifying active and potential threats in the job of securing the network.

The predictive capabilities of AI and ML are valuable not only for threats, but for operational purposes as well. Taken together, AI-enhanced security and operational capabilities can give us the ability to both recognize existing breaches and predict faults and threats before they happen, determining how they are likely to evolve over time. Significantly, this may open the door to predictive security suites within network performance management. Taking this concept of predictive operations a step further, we even see predictive analysis and rank analysis coming together, allowing us to rank predictions based on their likelihood.

3. The fall of static development

The Covid-19 pandemic has been a remarkable accelerant for the concept of remote work. Organizations of all kinds were pushed, essentially overnight, to connect their entire workforce and ensure business continuity. We realize that the new approaches to remote work—how each company has chosen and implemented technology solutions—may be permanent in some cases and temporary in others. Which technologies remain and what percentages of people work remotely versus in-office may vary, but it’s becoming increasingly evident that ‘anywhere’ is the new axis, rather than the branch.

Increasingly, we expect to see developers grasp this new reality and begin to leave static development behind. Developers will see limited return on the idea of developing solutions oriented toward the branch office and gravitate toward anywhere as their primary development environment. In doing so, they will need to consider the proliferation of entry points and end points, and are likely to make notable advances in securing “the anywhere.” In a sense, developers will adapt their thinking to accommodate the reality that every endpoint has become a microbranch. Developers will see the client as the new branch, finding new scenarios that optimize the capabilities of the client while also ensuring that new applications and services can be managed by IT from a single point of control.

4. The emergence of cross-vendor visibility

We regard visibility into the network, and its implications for the business overall, as essential for the new way of working. Being able to monitor and manage everything that happens on the network will continue to be a business-critical capability in the work-from-anywhere world. Providing comprehensive visibility will rapidly become a priority in the coming year, which will push a number of vendors to reach beyond the purview of their own solutions. We expect to see more and more companies developing solutions that offer visibility into other vendors’ solutions in 2021.

5. A new chapter in the client-to-cloud story

How well applications perform in the work-from-anywhere environment will continue to be a priority for businesses moving forward. A number of vendors have taken runs at accelerating applications in the past, from one end or the other, but with limited success. The power to accelerate applications is a claim we will see re-emerge in 2021, however, likely rolled into SDN offerings.

How the network delivers and handles applications has changed. Luckily, Riverbed was a very early mover in approaching application acceleration from both the data center side and the client side, neither of which is a simple proposition. The acceleration technologies developed for the data center and the branch can also be implemented on AWS or Azure, accelerating the cloud, or placed in front of a SaaS application like Office 365 or Salesforce. This bookends performance with acceleration in a real client-to-cloud approach. Client-to-cloud acceleration is a capability that many vendors will promote in the future, but few will be able to deliver it in a masterful way.

A year of change

2021 will be a year of rapid evolution for the networking technology that has become so fundamental for new ways of working and operating models in the Covid-19 era. With the whiplash shift to remote work somewhat stabilized, IT professionals will focus on the bigger picture and enduring opportunities that smarter network management holds. Seeing end-to-end, accelerating end-to-end, developing for end-to-end and innovating end-to-end will dominate the network for years to come.

]]>
Answering: “Am I Affected by SUNBURST?” https://www.riverbed.com/blogs/answering-am-i-affected-by-sunburst/ Tue, 12 Jan 2021 16:30:00 +0000 /?p=16409 When a high-profile hack or malware campaign hits the news, everyone’s first question is, “How do I know if I’m affected?” Security analyses and official guidance frequently contain indicators of compromise, but they rarely explain how to make use of them. Network visibility tools such as Riverbed NetProfiler and AppResponse can form an important part of any enterprise’s plan to scour its infrastructure for signs of compromise. This post references a recent, widely-reported cyberattack, SUNBURST, to illustrate how to use Riverbed NPM solutions to find and root out malicious actors based on common indicators of compromise.

Malware and similar malicious software must often use the network in executing a cyberattack, which may include communicating with Command and Control (C2) servers, downloading malicious payloads, uploading stolen data or spreading through the network. Oftentimes, these actions are designed to appear innocuous, but can still be identified as suspicious through indicators such as domain names, IP addresses, file names or unique ports. Enterprises can use these indicators to search their networks for malicious activity.

In December 2020, cybersecurity firm FireEye released an investigation of a global network intrusion campaign where hackers managed to insert a vulnerability within certain SolarWinds® Orion® Platform software builds and software updates released between March and June 2020. The Cybersecurity and Infrastructure Security Agency (CISA) followed suit with its own analysis and advisory. This cyberattack, also known as SUNBURST, has had pervasive reach due to its roots in the compromised vendor supply chain. Investigations published so far have included several indicators of compromise that are potentially of use.

Riverbed NetProfiler: long history, global reach

One of the biggest challenges in detecting compromises is that by the time the details and behavior of malware are known, it may have been weeks or months since that malware first started circulating. This is why flow tools, like Riverbed NetProfiler, are indispensable in looking for malware: it is possible to search historical network flow data for even small connections that would otherwise fly under the radar. In order to scale well, flow records must be sparse, but they contain IP addresses and ports and show how hosts connect to each other.

FireEye identified a number of IP addresses of forensic interest in relation to the SUNBURST cyberattack. NetProfiler customers can easily search for these hosts by scanning historical flow data. In the excerpt below, the traffic expression is simply built up from one or more host names:

Sunburst Traffic Expressions

Very often, further analysis turns up new IP addresses—either because new features of the malware have been discovered, or because the attackers have made changes to their infrastructure in response to news coverage. Using NetProfiler, it is easy to adjust the filter and check again, looking at as much history as possible.

Another indicator to watch for is the malicious use of cloud services. In the excerpt above, the first host listed is in Amazon AWS. Threat actors may use public services so that their IP addresses look more innocuous, and their use of such services tends to be short-lived. It is important to look for these indicators, keeping in mind that communications with public service hosts could be another service reusing the same IP. Time frame is key to understanding the risk, and malware analyses frequently include discussions about the times in which that malware was active: with respect to SUNBURST, FireEye’s countermeasures list includes a “First Seen” and “Last Seen” time frame ranging from February to December 2020.
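The idea generalizes beyond any one product: given exported flow records and a published IOC list, a few lines of scripting can surface every flow that touched a known-bad address inside the campaign’s active window. The sketch below uses a made-up record layout and placeholder IPs (documentation-range addresses, not real SUNBURST indicators):

```python
from datetime import datetime

# Hypothetical IOC list (documentation-range IPs, not real indicators)
IOC_IPS = {"203.0.113.10", "198.51.100.7"}

# Campaign active window reported by the analysis (example values)
FIRST_SEEN = datetime(2020, 2, 1)
LAST_SEEN = datetime(2020, 12, 31)

def suspicious_flows(flows):
    """Return flows that touch an IOC IP inside the active time frame.

    Each flow is a dict with 'src', 'dst', and 'start' (datetime) keys,
    a simplified stand-in for an exported flow record.
    """
    return [
        f for f in flows
        if (f["src"] in IOC_IPS or f["dst"] in IOC_IPS)
        and FIRST_SEEN <= f["start"] <= LAST_SEEN
    ]

flows = [
    {"src": "10.0.0.5", "dst": "203.0.113.10", "start": datetime(2020, 6, 1)},
    {"src": "10.0.0.5", "dst": "8.8.8.8", "start": datetime(2020, 6, 1)},
    {"src": "198.51.100.7", "dst": "10.0.0.9", "start": datetime(2021, 3, 1)},
]
print(len(suspicious_flows(flows)))  # 1: the second IOC hit falls outside the window
```

The time-window check matters as much as the IP match: a public-cloud address that was malicious in June may belong to an innocent tenant by the following year.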

Visualize the attack

NetProfiler’s ability to visualize patterns of connections over time is another key feature that can be used to better understand cybersecurity threats. In a typical forensic investigation, analysis of changing patterns of network behavior would be used once a host has been identified as potentially compromised—for example, after seeing a suspicious communication to a C2 server. In the example below, however, it is a particular appliance that has been compromised.

Using Riverbed NetProfiler, customers can filter communications involving the IP of the appliance in question, and then use the network graph to examine the connections the appliance makes within the customer’s network. Unusual external connections may represent new indicators of compromise or other assets belonging to the cyber-attacker. Unusual connections within the network may indicate behaviors such as reconnaissance, lateral movement or attempts to initiate secondary compromise.

NetProfiler lets you discover connections in your network
Fig. 1. NetProfiler lets you discover connections in your network.

Each suspicious connection illuminates a potential move within the network by the threat actor. Duration, size, and type can shed light on what purpose a connection might serve. Context, in the form of patterns displayed before infection, can help weed out ordinary connections and expose unusual ones. The availability of historical data in Riverbed NetProfiler means that customers never have to wonder if a pattern is usual or not: just go back further in the historical record to see.

Riverbed AppResponse: the truth is in the packets

For ground truth in network investigations, nothing beats actual copies of the packets being sent. There are many packets to trawl through, however, so making use of them requires forethought. Alerts can be set up based on suspicious transactions with considerably more depth than is available in flow records. Capture jobs can be created to have full access to potentially malicious traffic.

Reading through technical descriptions of malware behavior can yield useful results. These analyses frequently uncover useful indicators, in particular: first, the domain names used by adversaries, and second, packet captures of the communications themselves.

Leveraging DNS in your security search

The domain names can be useful in a number of ways. In the case of the SUNBURST cyberattack, a particular domain name, avsvmcloud[.]com, was identified as important to the attack progression. First, of course, Riverbed AppResponse customers can look up what IP addresses it currently and previously resolved to and search for those IP addresses in past and ongoing traffic. Riverbed Packet Analyzer Plus customers have the additional option of starting a capture job on UDP port 53 to examine DNS queries. Looking to see who is requesting DNS resolution of malicious domains in this way can be a powerful tool for quickly identifying affected hosts.

It is important when examining domain names to understand some of the ways adversaries use them. FireEye identified sandbox-detection behavior in the SUNBURST cyberattack, in which the malware generated domains in a loop and tried to resolve them, checking whether they resolved to local IP addresses (an indicator that the malware was in a monitored environment, in which case it would stop execution). Forming a watch list of such names is rarely a feasible way to search for a particular malware strain, but seeing randomized domain names in DNS requests is itself a red flag worth investigating.
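One common heuristic for spotting machine-generated names is character entropy: random-looking labels score higher than human-chosen ones. The sketch below is an illustrative stand-alone check, not a NetProfiler or AppResponse feature, and the 3.5 bits-per-character threshold is arbitrary:

```python
import math
from collections import Counter

def label_entropy(domain):
    """Shannon entropy (bits per character) of the leftmost DNS label.

    Algorithmically generated names tend to score higher than
    human-chosen ones.
    """
    label = domain.split(".")[0]
    counts = Counter(label)
    n = len(label)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

def looks_generated(domain, threshold=3.5):
    # Threshold is purely illustrative; tune against your own DNS traffic.
    return label_entropy(domain) > threshold

print(looks_generated("mail.example.com"))                  # False
print(looks_generated("k2hx9qv7t3zj81mwpc4r.example.com"))  # True
```

Entropy alone produces false positives (CDN hostnames, for instance), so in practice a check like this would only flag candidates for closer review.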

Just as web applications are enterprise-critical, they are critical to many malicious campaigns as well. HTTP is often used to transfer files, commands or other information. Although malicious actors can and do use custom or encrypted protocols, just as often, they use standard protocols for the same reasons that commercial developers do, including reliability and ease of development.

Saving a packet capture in Packet Analyzer Plus for off-line analysis
Fig. 2. Saving a packet capture in Packet Analyzer Plus for off-line analysis.

FireEye’s SUNBURST analysis provides several examples of the use of HTTP. It describes communication with C2 servers, including JSON payloads with a variety of fields: the key “EventType” is hardcoded to “Orion” and “EventName” to “EventManager.” The Riverbed AppResponse Web Transaction Analysis (WTA) module is very useful here. Just as Riverbed AppResponse customers can analyze business transactions, they can analyze the adversary’s transactions and search for indicative fields like these key/value pairs. Another analysis, by GuidePoint Security, identified a set of HTTP requests including “logoimagehandler.ashx” and query parameters such as “clazz” that indicate potential webshell communications.
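Conceptually, matching on those hardcoded fields is a simple conjunction test over decoded JSON bodies. The sketch below illustrates the idea against simplified payload strings; the field values come from FireEye’s published analysis, but the surrounding record handling is hypothetical:

```python
import json

# Key/value pairs reported as hardcoded in the C2 beacon payloads
SUSPICIOUS_FIELDS = {"EventType": "Orion", "EventName": "EventManager"}

def matches_iocs(raw_payload):
    """True if a JSON body contains every suspicious key/value pair."""
    try:
        body = json.loads(raw_payload)
    except (ValueError, TypeError):
        return False  # not JSON at all; nothing to match
    return all(body.get(k) == v for k, v in SUSPICIOUS_FIELDS.items())

beacon = '{"EventType": "Orion", "EventName": "EventManager", "userId": "x"}'
normal = '{"EventType": "Login", "EventName": "EventManager"}'
print(matches_iocs(beacon), matches_iocs(normal))  # True False
```

Requiring every pair to match, rather than any one, keeps the false-positive rate down when individual field values are common in legitimate traffic.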

AppResponse customers can also look for web transfers of files in the same way or web requests to malicious domains. Reading through the details of how malicious actors communicate will reveal what to watch for in their traffic.

Summary

While this post outlines indicators of network compromise specific to the SUNBURST cyberattack, the important lesson to learn here is not the indicators and security analytics tied to any one malware campaign. Instead, it is to learn how to read reports and analyses on malware to quickly identify key indicators that can be leveraged using the tools Riverbed NetProfiler and AppResponse customers already have. More specifically, IP addresses and domain names can be simple and reliable indicators of which network hosts to examine. Watch for key information such as the devices targeted and the time frames in which the malicious campaigns took place. And when in doubt, please reach out to Riverbed for help and advice.

]]>
Modern Use Cases for Application Acceleration https://www.riverbed.com/blogs/modern-use-cases-for-application-acceleration/ Thu, 07 Jan 2021 17:18:00 +0000 /?p=16405 Riverbed recently had the opportunity to speak with a panel of industry experts—bloggers, analysts, hardcore in-the-weeds technical folks. It was an opportunity to spread the word about what we’re doing and where we’re headed with network and application visibility and performance.

We kicked off with the theme of “work from anywhere,” so it was interesting to see the Tech Field Day 22 delegates in home offices, in comfy living room chairs, or, like me, in cold, unfinished basements. We used Zoom for the event which made the theme of our presentation more palpable than even the most colorful marketing slide.

Not long ago, when we were in branch offices, we had the benefit of sophisticated network tech on the backend to make our applications perform the way they should. There was QoS on our switches and routers, MPLS with strict SLAs, high bandwidth commercial-grade internet links, direct connections to cloud providers, and WAN optimization appliances.

Rest assured, all that technology is still there. The only issue is that these days, very few people are in the office to make use of it. And, this is why Riverbed’s Application Acceleration portfolio is so relevant today.

Technology for the Way We Work Today

Application Acceleration solves the problems caused by low-bandwidth broadband, DSL, satellite, LTE, and the typical connectivity we have outside the office. It improves application performance over any type of connection and for almost any application, whether it’s on-premises, in the cloud, or delivered as a SaaS app.

Look at Application Acceleration as a single technology that is applied in different ways based on where resources are. Sometimes resources are in traditional private data centers, often they’re hosted in public cloud, and today many apps are delivered by SaaS providers like Microsoft, Dropbox, Slack, and Salesforce.

For years, Riverbed has made those applications perform extremely well for someone in a branch office. We used an end-to-end solution with a SteelHead appliance at the branch and another SteelHead in the data center. The results were—and still are—pretty awesome.

Branch SteelHead at the Client Level

Today, we can replace the branch SteelHead with an agent that lives right on a client computer. That means a software version of Riverbed’s SteelHead appliance is with someone no matter where they are and no matter what kind of internet connection they have.

The Client Accelerator agent is very similar to a branch SteelHead, though it’s optimized for a single computer. It’s managed by the Client Accelerator Controller—a virtual machine deployed on premises or in the cloud. This way, an IT department can manage acceleration policies all from one place.

Using the Client Accelerator Controller, we create application acceleration policies that tell the agent what to do with certain traffic. The policies look a little like firewall rules because they use source and destination IPs and TCP ports to identify traffic, though we also use URL learning and correlate local processes with network activity.
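As a rough mental model, that kind of policy lookup behaves like a first-match walk over ordered rules, as the sketch below illustrates. The rule fields, networks, and actions here are invented for illustration and do not reflect Client Accelerator’s actual policy schema:

```python
from ipaddress import ip_address, ip_network

# Hypothetical acceleration policies, evaluated first-match like firewall rules
POLICIES = [
    {"dst_net": "10.20.0.0/16", "dst_port": 445, "action": "accelerate"},
    {"dst_net": "0.0.0.0/0",    "dst_port": 443, "action": "passthrough"},
]

def match_policy(dst_ip, dst_port):
    """Return the action of the first policy matching the destination."""
    for rule in POLICIES:
        if (ip_address(dst_ip) in ip_network(rule["dst_net"])
                and dst_port == rule["dst_port"]):
            return rule["action"]
    return "default"

print(match_policy("10.20.3.4", 445))  # accelerate
print(match_policy("52.96.0.1", 443))  # passthrough
```

Because evaluation stops at the first match, rule ordering matters: the specific file-sharing rule must precede the catch-all, just as in a firewall ruleset.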

The Application Acceleration Ecosystem

Riverbed offers three Application Acceleration solutions: 1) Client Accelerator, 2) Cloud Accelerator, and 3) SaaS Accelerator.

1) Client Accelerator

With Client Accelerator, we’re not accelerating a client computer. We’re accelerating the data transfer that an application relies on. The local agent communicates with the remote SteelHead to reduce bandwidth consumption on the local link. It’ll also identify applications running on that link and apply whatever acceleration policies it receives from the controller.

2) Cloud Accelerator

In the case of public cloud, the local Client Accelerator agent communicates with a virtual SteelHead in Azure, AWS, or Oracle Cloud. A network operator can control both ends, so we still have a bookended solution that dramatically improves application performance even for cloud-hosted apps.

3) SaaS Accelerator

SaaS Accelerator leverages the same technology under the hood, but because we don’t own SaaS applications or the data centers they live in, we approach it differently. We host SaaS Accelerator in Azure and offer it as a managed service. That means Riverbed is responsible for deployment and backend management of the application acceleration service instances.

Going Under The Hood

As application traffic goes back and forth between the client and the remote server, regardless of where it is, we can pick out unnecessary packets that we don’t need to send anymore once the stream is established. We look for frequently accessed data that we can cache locally using byte-level data deduplication and data referencing. That way, we cache chunks of data and tag them with markers so they can be looked up when the client makes a request for it.
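A toy version of that caching scheme can be sketched in a few lines: split the stream into chunks, hash each one, and send a short marker instead of any chunk the far side has already seen. Real implementations use content-defined, variable-size chunking and far more compact encodings; the fixed 64-byte chunks here are purely illustrative:

```python
import hashlib

CHUNK = 64  # bytes per chunk; real systems use variable, content-defined chunks

def dedup_encode(data, cache):
    """Replace chunks already in the cache with short hash markers.

    Returns a list of ('ref', digest) or ('raw', bytes) segments, a toy
    version of byte-level deduplication with data referencing.
    """
    out = []
    for i in range(0, len(data), CHUNK):
        chunk = data[i:i + CHUNK]
        digest = hashlib.sha256(chunk).hexdigest()[:12]
        if digest in cache:
            out.append(("ref", digest))  # marker only; chunk is not resent
        else:
            cache[digest] = chunk
            out.append(("raw", chunk))
    return out

cache = {}
first = dedup_encode(b"A" * 64 + b"B" * 64, cache)   # both chunks sent raw
second = dedup_encode(b"A" * 64 + b"B" * 64, cache)  # both replaced by markers
print([kind for kind, _ in first], [kind for kind, _ in second])
```

The second transfer of the same data sends only the 12-character markers, which is where the dramatic bandwidth reduction on repeated content comes from.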

We also need to deal with the adverse effects of latency. We do that by regulating TCP window sizing, which provides a type of flow control. This makes the transfer of data much more efficient. We also repackage TCP payloads to make that back-and-forth communication between a client and a server more efficient. And because we have a local agent on a client computer, we can correlate specific application processes with local network activity. Ultimately, this helps reduce round trips, thereby reducing the effects of latency.
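Some back-of-envelope arithmetic shows why window sizing matters so much on high-latency links: a windowed protocol pays roughly one round trip per window’s worth of data, so transfer time grows with RTT and shrinks with window size. The figures below are illustrative, not measurements:

```python
import math

def transfer_time(size_bytes, window_bytes, rtt_s):
    """Rough lower bound: one RTT per window's worth of data.

    Ignores slow start and bandwidth limits; it isolates the effect
    of latency on a windowed protocol.
    """
    round_trips = math.ceil(size_bytes / window_bytes)
    return round_trips * rtt_s

size = 10 * 1024 * 1024  # 10 MB file
rtt = 0.08               # 80 ms, a plausible home-to-datacenter latency
print(transfer_time(size, 64 * 1024, rtt))    # ~12.8 s with a 64 KB window
print(transfer_time(size, 1024 * 1024, rtt))  # ~0.8 s with a 1 MB window
```

The same file, the same link, a 16x difference: that is why reducing round trips (and managing window size) dominates perceived performance over long-latency connections.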

Application Acceleration is an ecosystem of components that solves the problem of poor application performance due to mediocre, sometimes outright bad quality internet connections when we’re not in the office.

In response to how we all work today, Riverbed has taken a technology we’re already experts in and brought it right down to an individual computer. And we’ve also expanded that functionality right out to the cloud—whether that’s a private cloud, public cloud, or one of today’s most popular SaaS providers.

Check out the overview of Riverbed’s Application Acceleration solution below, and make sure to watch all of our presentations from Tech Field Day 22. Watch Video


]]>
Transaction Search: A New AppResponse Feature https://www.riverbed.com/blogs/transaction-search-new-appresponse-feature/ Tue, 05 Jan 2021 13:30:00 +0000 /?p=16306 Riverbed AppResponse offers high-definition (HD) transaction data that complements the typical aggregated metadata, both of which are available inside a single AppResponse appliance. This high-definition data provides a full-fidelity copy of every IP conversation, every TCP connection, every user web transaction, etc., giving you the details you need before you drill into packets. Transactions are also saved so that they are always available when you need them. Then there are out-of-the-box Insights that let you view that data with input criteria that filter on specific transactions, with options like which app, which IP, what browser, what return code, etc.

The new Riverbed AppResponse Transaction Search makes it easier to get the transaction data you need and provides more granular control over search parameters. In past versions of AppResponse, you chose from a limited set of pre-selected Insight filters using a classic report query workflow; you had to know what you wanted to search for.

The new Transaction Search works more like Google search and is a very natural way of searching for transaction data. You simply enter your query into the Criteria Bar. You can use any number of operators (and, >, >=, etc.) to refine your query. Just hit “search” to get your answer. One of the benefits of Transaction Search is that it also supports a bunch of new filters, including high-definition (HD) data and metric values.

Transaction Search results
Figure 1. Note the highlighted search query: “PageTime>=5 and NetworkBusyTimeNormalized >=0.01 and BrowserName in (Microsoft Internet Explorer, Other)”
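Under the hood, a criteria string like the one in Figure 1 amounts to a conjunction of predicates evaluated against each transaction record. The sketch below mimics that evaluation over hypothetical records; the field names are taken from the example query, but the data and evaluation logic are illustrative only:

```python
# Hypothetical transaction records with a few of the searchable fields
transactions = [
    {"PageTime": 7.2, "NetworkBusyTimeNormalized": 0.02, "BrowserName": "Other"},
    {"PageTime": 1.4, "NetworkBusyTimeNormalized": 0.20, "BrowserName": "Other"},
    {"PageTime": 9.0, "NetworkBusyTimeNormalized": 0.00, "BrowserName": "Chrome"},
]

# Each criterion becomes a predicate; the query is the AND of all of them.
criteria = [
    lambda t: t["PageTime"] >= 5,
    lambda t: t["NetworkBusyTimeNormalized"] >= 0.01,
    lambda t: t["BrowserName"] in ("Microsoft Internet Explorer", "Other"),
]

matches = [t for t in transactions if all(c(t) for c in criteria)]
print(len(matches))  # 1: only the first record satisfies all three criteria
```

Adding another clause to the query simply appends another predicate to the list, which is what makes free-form criteria composition so flexible.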


Displaying the results

As you can see in Figure 1, the search results page is broken into multiple sections:

  • The very top section lets you limit your search results, for example to the top 1000.
  • The top right section is the TruePlot graph, which plots every matching transaction. It supports time interval selection and matching counts.
  • Search results are in the transaction table below TruePlot.
  • The sidebar on the left shows the relationship between the results and what you searched for. You can click on any item in the sidebar to further refine or filter the results.

If we click on the top selection in the sidebar, “Page Families,” it will further refine the results so that you can explore them more deeply (see Figure 2).


Refined Search Results
Figure 2. When you click on a sidebar item, you refine the search results, as shown above. Notice in Figure 1 there were 20 matching transactions and in this search there are only 5.


If you are looking for Transaction Search, you can find it on the main menu. You’ll see a new menu item called Transactions when you upgrade your AppResponse to version 11.11. Currently, Transaction Search supports three data types:

  • Page View Search (WTA: Pages)
  • Web Request Search (WTA: PageObjects)
  • DB Query Search (DBA: Queries)

Another handy thing about this feature is that you can “search with assist.” As you type a search term, the system contextually auto-completes it. There is also no need to know the AppResponse data model, as you did with the old workflow. Transaction Search supports top groups, group-paths and drill-down groups. And it can show summary metrics for multiple groups or objects.

Here’s the complete list of both the WTA and DBA searches, just to show group and metric values that can be used as filters:


Web Page Search
Figure 3. The complete list of search terms for Web Page Search queries.
Web Request Search
Figure 4. The complete list of search terms for Web Request Search queries.
DB Query Search
Figure 5. The complete list of search terms for DB Query Search.


To summarize, Transaction Search simplifies the way you search for Page View, Web Request and Database queries by letting you use the new Criteria Bar to create free-form search queries. It’s a whole lot more flexible and powerful. You can search any and every transaction using a combination of more than 50 filter criteria, ranging from basic IPs and application names to transactions that exceed a performance threshold or a number of HTTP errors, and more.

If you’re an existing customer, you can download Transaction Search and other new features in Riverbed AppResponse 11.11 on the Riverbed Support Site. If you are new to AppResponse, contact Riverbed sales.

]]>
Maximizing Network and Application Performance: A TFD22 Recap https://www.riverbed.com/blogs/maximizing-network-application-performance-tfd22-recap/ Wed, 23 Dec 2020 13:31:53 +0000 /?p=16351 The last three months may have been a whirlwind of activity for you. Many organizations are trying to wind down projects, wrap up the spending on their budgets, and start the new year off right with new projects and plans to take on the world. Simultaneously, we’ve all been facing the same old story of working from home and making the best of the current world situation.  

Here at Riverbed, it’s been no different. The Technical Evangelist Team was busy at online events like ONUG Fall 2020, the Riverbed Global User Conference, and Tech Field Day 22. If you missed any of these events, I recommend you look at the videos from them. We shared live demonstrations. We presented several technical sessions showing you how to maximize the visibility and performance of networks and applications. We also spent a few hours with the famed Tech Field Day delegates, digging into our Unified NPM solution to show how it can help you spot issues now and how Riverbed thinks the future of this space will unfold.

It’s very apparent to me that people still regard Riverbed as a WAN Optimization company. I hear people refer to having a “Riverbed” in their network when they mean that they have a SteelHead Appliance in their network. The truth is that we still optimize the WAN, but that’s just a small fraction of the overall goodness that Riverbed provides. Our CEO, Rich McBee, is very clear in the video embedded below when he states that the Riverbed mission is to “help organizations maximize visibility and performance across networks and applications for all users anywhere they reside.” Watch Video

This mantra is the fundamental driver of Riverbed, and everything we do leads back to this.  

In this article, I want to highlight what we discussed at Tech Field Day 22 and why it’s crucial that organizations seriously consider their visibility posture going into 2021.

Understanding the portfolio

In the first two sessions, Phil Gervasi and I talked about our portfolio. If you don’t understand our product line, you need to watch these two videos. They don’t take you through specific hardware models, sizing and such; instead, they cover at a high level how our solution fits together. Here’s why it’s essential to visualize the solution.

At a high level, I look at it like this: two areas impact network and application performance. Let me explain in the following sections.

Latency, protocol shortcomings, chatty applications

Several factors contribute to latency, but a network operator can’t control all of them. The same is true with chatty applications and protocol shortcomings because you don’t necessarily control those attributes. For these scenarios, Riverbed provides acceleration solutions. Deploying these solutions in your branches, data centers, cloud, SaaS applications and endpoints, you get full coverage no matter where your users perform their work. Here’s Phil’s session:

Watch Video

Configuration, security, routing, hardware, and application issues

When it comes to configuration, security, routing, hardware, and application issues, we have more control. The catch, however, is that we must identify these issues before we can prevent them or stop them from impacting performance. For these scenarios, Riverbed provides a Unified NPM solution for the branch, data center, cloud, SaaS application and endpoint. Here’s my session:

Watch Video

Digging into Unified NPM

Now that you get what we are trying to do, let’s show you how we do it. For this, we turn to John Pittle, Vince Berk and Gwen Blum. In the following videos, we jump back and forth between John showing how we can identify issues right now, and Vince discussing how AI and ML will lend itself to the future. Sprinkled in there, Gwen does two demonstrations:

Watch Video

Takeaway from TFD22

So, let’s bring this back around to the point. Here at Riverbed, we do accelerate traffic, and that’s one way we help organizations maximize performance for their networks and applications. That’s not what we focused on at TFD22. No, at TFD22, we focused on the visibility aspect. So, the key takeaway from these videos is that here at Riverbed, we capture all the packets, all the flows, SNMP data, and more. By gathering all this data, we can then provide you with the best view of what’s going on in your network environment. This information allows you to fix configuration, security, routing, hardware, and application issues impacting your network and application performance. That’s why Rich opened the way he did. At Riverbed, we “help organizations maximize visibility and performance across networks and applications for all users anywhere they reside.”


]]>
Navigating the Lockdown Part 3: Back to the Office…or Not? https://www.riverbed.com/blogs/navigating-the-lockdown-back-to-the-office-or-not/ Mon, 21 Dec 2020 19:00:49 +0000 /?p=16355 In the third of a series of HR-focused blogs on Navigating the Lockdown, Riverbed’s HR Director for APJ, Ravi Abbott, looks at how the necessity to work from home has had some unexpected benefits. 

As the world carefully comes out of lockdown, many of us are seeing it in a different way. Do we go back to exactly how it was before the pandemic or do we take this opportunity to embrace lasting change?

If COVID-19 has shown us one thing, it’s our ability to adapt and respond during times of crisis. Projects that would normally take years to implement were rolled out in weeks. We achieved in days what would ordinarily take months.

Simply going back to our old way of life now would be a waste of that superhuman effort. So, how can we hold on to some of these changes for good?

WFH might be here to stay

Inhabiting office space versus working from home is currently a hot topic of discussion—but it is not a new concept.

A Gartner survey of 229 organisations found that 30% of employees were already working from home at least some of the time before the pandemic. Since COVID-19, that number has jumped to 80%. The world was already moving slowly towards a distributed workforce with more and more people working remotely. The pandemic just made it happen more quickly.

Riverbed CEO Rich McBee predicts that 15-20% of employees previously working out of an office will work remotely in the future. He believes there will be more focus on flexible working hours and ‘results-based’ work, instead of the number of hours spent in an office.

Companies are rethinking their investment in office space and instead looking at ways to enable employees with ‘at-office capability’ working from anywhere.

Physical versus virtual presence

At Riverbed, we drink our own champagne. When the pandemic hit and social distancing was enforced, our people continued to work remotely with the same capacity that they had in our offices. A survey of our employees taken at around two months into lockdown showed that the majority felt they were just as, if not more, productive at home than in the office.

In today’s world, collaborative technology is improving in leaps and bounds while domestic bandwidth is no longer a bottleneck. Increasing numbers of workers are from the ‘born-digital’ generations and perfectly comfortable with newer ways of socialising and working together in teams. All of this means that physical office space is becoming less and less relevant for progressive companies.

“The ‘individual cube’ of yesterday can be your home office,” says McBee. “It’s private, you’re working, you’re concentrated. Then, when it’s time to collaborate, the human-to-human interface will be done in a pseudo-office environment.”

A glimpse into the future

Despite all this, I think that the office will still have an important role to play in our post-pandemic lives. However, this time it’s going to look and feel different. Organisations will either move towards shared space options or redesign their current office layouts to allow for more collaboration and socialisation. Cubicles and closed offices will be a thing of the past.

Here at Riverbed, it’s a fundamental commitment to our people that we’ll balance the extraordinary work we do with their lives. Work life after lockdown may just be another way in which we can fulfill that promise to the exceptional people who work for us.

If you’d like to learn more about working at Riverbed, including current roles, visit our website.

]]>
Navigating the Lockdown Part 2: How Traditional Onboarding Has Changed https://www.riverbed.com/blogs/navigating-the-lockdown-onboarding-new-employees/ Thu, 17 Dec 2020 02:04:08 +0000 /?p=16335 In the second of a series of HR-focused blogs on Navigating the Lockdown, Riverbed’s Technical Recruiter for APJ, Mahesh Thyagaraj, looks at how onboarding new employees has evolved.

All organisations are currently facing unique challenges in their workplaces due to the outbreak of COVID-19. This said, it is critical that we continue to support and manage all new hires as normally and consistently as possible when they join the Riverbed family.

Our traditional onboarding process had new employees participate in a series of in-person meetings with HR, managers, leadership and team members to build their first impression of the company and its culture. Since March 2020, however, like many other businesses, Riverbed has had to onboard its new hires virtually. As a result, we’ve made a huge shift in our processes to adapt.

Going virtual

In order to successfully onboard new employees remotely, we pre-planned the virtual experience, making note of all the people they should meet, the tools and equipment required and the experiences each new employee must go through in order to fast-track their ramp up.

First, we ensured they had the hardware, software and information resources they’ll need on Day 1 by asking our IT team to set everything up in advance and deliver the equipment to the new employee’s home office. As soon as they’re on board, we make sure they understand how to use essential communication tools, online meeting solutions and file-sharing applications. We also brief them on who to go to with their different questions, and how to best contact those individuals whilst we’re all working remotely.

By preparing in advance, we can share our plan with the new employee and give them full visibility of their schedule for the first few weeks. We created a comprehensive resource page for new hires to access information on whatever they may need as they settle into working remotely in their new role at Riverbed.

We have numerous virtual social gatherings and the first port of call is to ensure that our new hires are added into these social groups so that they can get to know their colleagues on a more personal level.

Getting into the culture

As a new employee, understanding who you will be working with on a daily basis, and how to develop those relationships, is critical. We have worked hard to adapt our onboarding processes to allow strong bonds to develop within teams, despite lockdown conditions.

Each new hire is made aware of their team culture by having department-specific onboarding discussions about values and expectations, including providing them with links to our employee handbooks and company policies and procedures. Their manager will brief them on their new job responsibilities and discuss their learning and development plan and they’ll have regular virtual meetings with the rest of their team so they can feel comfortable with their colleagues and become a part of the Riverbed family!

Managing under lockdown

As our new employees settle into their daily routine, frequent catch-up calls are scheduled by their managers and colleagues. These calls keep managers apprised of how their new team member is settling in and make them aware of any help they may need. During these calls, managers check in to understand what their new employee needs to be successful in their new role, whether that’s support, resources, or additional work, and ensure those needs are met. Each employee will have different needs, and being attentive to these needs is our top priority as we onboard our new hires.

Managers set specific goals and expectations for their new hires outlining short and long-term goals and scheduling 1:1 meetings to discuss upcoming tasks and resolve potential concerns.

Help is at hand

All new Riverbed employees are assigned a Riverbuddy; we believe a supportive, caring and helpful culture is very beneficial. Providing a Riverbuddy to new employees helps them to settle in quickly and gives them someone to go to no matter what help they need or questions they have.

COVID-19 has created a uniquely challenging time for anyone starting a new job. That’s why we’ve taken all the measures we can to ensure a smooth onboarding for all new hires. Our aim is to induct newbies into the Riverbed family with a warm and informative virtual welcome and have them thriving in their new roles as quickly as possible!

What they say

Success of any new process is in its implementation, and we’re delighted to have had some encouraging feedback! Here are some testimonials from our new employees.

“Onboarding is both an exciting and an anxious period, especially when the whole world is going through a pandemic. But, from the very beginning of my journey with Riverbed, everyone has made me feel welcome.” 

“From the time of my interview to virtual onboarding and finally understanding the workflow of the organisation … The whole Riverbed experience has been amazing and I wholeheartedly thank each and every one who made it easy for me to join the organisation virtually from the comfort of my own home!”  

“The management architecture of Riverbed is clean and smooth. Team coordination is good and transparent. I wasn’t sure how the whole process of hiring could be done virtually, however the interaction and support from all the departments made it easy for me.” 

“Training and induction were organised in a well-planned manner and done online. I was introduced to the complete team online and received a very warm welcome. Since then I’ve had a number of conversations with everyone whether during training, team meetings or case troubleshooting help. My Manager, HR and team have been in continuous contact with me and provided all the support and guidance required.”

If you’d like to learn more about working at Riverbed, including current roles, visit our website.

]]>
Navigating the Lockdown Part 1: Maintaining a Positive Mindset Whilst Working from Home https://www.riverbed.com/blogs/navigating-the-lockdown-maintaining-a-positive-mindset/ Mon, 14 Dec 2020 17:18:39 +0000 /?p=16289 In the first of a series of HR-focused blogs on Navigating the Lockdown, Riverbed’s HR Operations Manager for APJ, Venkatesh Subramanian, looks at how employees can make the most of their time working from home.

The COVID-19 pandemic has all of us standing at a crossroads wondering how to get through the unprecedented time ahead. Quite suddenly, we have moved into a new workplace culture of ‘working from home’ away from our familiar office environment and colleagues. And not just for an occasional day avoiding distractions from a major project with a tight deadline—but for the foreseeable future. This calls for resilience, and procrastination must be avoided.

When it comes to maintaining a positive mindset, the choice has always been ours, individually. We either surrender to the global negativity and let it pile up on top of us, or take a positive stance by upping our commitment to contribute to the business we work for and support its customers through difficult times—with the bonus of developing invaluable experience as a result.

Avoid the negative

Believe firmly that you are not a victim of the situation you find yourself in—it’s an opportunity for strengthening your work, family and community commitments and interactions.

Limit the negative news you hear, watch and read. Be open to the brighter side of lockdown—many news outlets and social media sites are full of positive stories and ideas. Start evaluating yourself and your reactions, as self-awareness helps you discover a new part of yourself. Learn to walk away from distressing news.

Set your own limitations on how aware you should be of the news. We have to learn to not get caught up in the constant hourly and daily cycle on the developing pandemic with the potential anxiety that it can bring.

Practicing yoga and meditation keeps your body healthy and the mind calm—as does a physical exercise regime, even if it’s just walking in the local park or down the street with a neighbour, your children or dog. The state of your body and mind is key to conquering dread and anxiety.

Embrace the positive

Begin to look at the situation from a new perspective. Have the courage and the determination to lean towards the positive. While you might miss the physical companionship of your team, WFH also helps you learn and practice valuable time management skills.

Start prioritising your daily tasks and learn to give yourself some ‘self-time’ away from your busy work schedule and endless video conferences. Also factor in the time you spend on home-schooling your children, as well as necessary household tasks. In this dedicated self-time, focus on a hobby or activity that adds joy to your life and avoid electronic devices and media. The more you shift your focus to positive thoughts, the more potential fears and worries will dissipate.

Gratitude builds a feeling of joy within oneself. Be grateful for all that you have today. Your existence, momentary pleasures, work, social and family relationships are all part of your grateful list. When we are thankful, we release positive emotions, keeping us and those around us, positive and happy.

Last but not least, remember that we will only recover from the stress of this pandemic by constant effort and mindfulness. In the years ahead, when we look back on this uncertain time, we will pride ourselves on our ability to carve a positive path through tough times. So, let’s begin today!

Working at Riverbed Technology

The truth is, with IT teams across the world now supporting innumerable WFH employees and shifting more workloads into the cloud, there has never been a greater need for Riverbed’s solutions to maintain performance and manageability. With organisations expecting to see a 25% increase in employees working remotely even after COVID-19, the need for Riverbed’s technology will only increase in the months and years to come.

If you’d like to learn more about working at Riverbed, including current roles, visit our website.

]]>
Speed CAD File Downloads & Uploads for Your WFH Professionals https://www.riverbed.com/blogs/speed-cad-file-downloads-uploads-wfh-professionals/ Tue, 01 Dec 2020 13:25:44 +0000 /?p=16246 Why design professionals at architectural, engineering, construction and related firms struggle with running CAD applications remotely—and how to help them, fast.

Of the organizations I’ve worked with over the past few months, many of those experiencing particular difficulties with their people having to work at home come from the architectural, engineering, construction (AEC) and related sectors. With teams typically collaborating and sharing large Computer-Aided Design (CAD) files all day long, network performance is business-critical.

The WFH factor

When working in the office, the performance of local network connections is closely monitored for reliability. But once large numbers of staff were forced to work out of their homes early this year, productivity dropped due to unpredictable ‘last mile’ connections. Many professionals are also sharing single internet links with housemates, working partners and homeschooling children. These factors can significantly increase the time it takes to download and upload large CAD files.

This, in turn, has a clear connection to their firms’ profitability and reputation. Slower project delivery time means reduced margins—because labor costs are increased by the reduced productivity of highly paid employees or contractors. Deadlines are missed, clients are unimpressed, and repeat business becomes less certain.

Besides the business risks, having skilled professionals ‘watching paint dry’ as they wait for their work to cross to and from their data center, or those of business partners, is frustrating and demotivating.

So what’s the solution?

The performance of ‘heavy’ CAD data for users working from home is dependent on three factors: network congestion, network latency and network unpredictability. Removing these inhibitors is the way to give users a great experience—wherever they are working.

Heavy design data can really clog networks and, depending on the distance between the CAD application server and the user, latency can significantly impact application behavior. Add the unpredictability when every user is working over unique last-mile conditions, and all of these elements can really slow things down.

Riverbed Client Accelerator software on user laptops, combined with Riverbed SteelHead on the application server side, significantly speeds up the processes users are used to performing in the office, so they complete in seconds or minutes rather than hours. Essentially, some of the ‘interesting behavior’ that negatively impacts application performance over networks is eliminated, and users experience CAD as if the application were local.

When remote users are running Client Accelerator:

  1. Network congestion is reduced by eliminating up to 90% of the data which goes on round trips, backward and forward between the user’s device and the CAD application in your data center (or beyond to the cloud if you’re using SaaS).
  2. Network latency is mitigated to improve application performance by up to 33 times.
  3. Network unpredictability—not the least of challenges for WFH professionals—is thus reduced over last-mile connections.
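To see why eliminating round trips matters as much as shrinking data, here is a rough back-of-envelope sketch. All numbers (file size, uplink speed, latency, round-trip counts) are illustrative assumptions, not measured Riverbed results:

```python
# Back-of-envelope sketch: how data reduction plus fewer round trips shrink
# transfer time for a large CAD file. Every input here is an illustrative
# assumption, not a vendor benchmark.
uplink_mbps = 20.0     # hypothetical home-broadband uplink
rtt_s = 0.08           # hypothetical 80 ms round-trip latency
file_mb = 500.0        # hypothetical CAD file size
round_trips = 2000     # hypothetical chatty-protocol round trips

def transfer_time_s(size_mb, dedup_ratio, trips):
    # Serialization delay for the data actually sent, plus latency per round trip.
    serialization = (size_mb * (1 - dedup_ratio) * 8) / uplink_mbps
    return serialization + trips * rtt_s

baseline = transfer_time_s(file_mb, 0.0, round_trips)           # no acceleration
accelerated = transfer_time_s(file_mb, 0.9, round_trips // 20)  # 90% less data, far fewer trips
print(round(baseline), round(accelerated))  # 360 28
```

Under these assumptions the transfer drops from roughly six minutes to under half a minute, and notice that most of the remaining baseline time comes from round trips, not raw bandwidth, which is why latency mitigation is listed as its own factor above.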

Fortunately, Client Accelerator is a relatively simple and fast solution to trial then roll out to users for rapid results.

CAD file acceleration in action

One firm that has dramatically improved network performance for its WFH staff is US-based Landform Professional Services. The multi-disciplinary consulting firm delivers integrated site design services including civil engineering, landscape architecture, planning, urban design, and land surveying.

Wanting to enable staff to open large CAD files from home and remote sites as quickly as they do in its office, Landform deployed Riverbed Client Accelerator and experienced immediate improvements. After deploying a proof of concept in just 30 minutes, it experienced 92% faster opening of large CAD files from home (from 20 minutes down to 90 seconds). This resulted in daily savings of up to 2-3 hours per employee—achieving ROI in less than a month.

Another business benefiting from optimized performance for remote workers is British firm, Hilson Moran. With more than 250 people in five offices across the UK and the Middle East, this engineering consultancy plans, designs, manages and operates built assets for a range of clients.

Using Riverbed Client Accelerator (formerly called SteelHead Mobile), it has improved network efficiency by as much as 80%, encouraging greater team collaboration through productive remote working. According to Hilson Moran CFO Roger Waters-Duke, “staff can work out of the office, on-site, with a client. We can move data faster and with more resilience… even in locations with thin broadband.”

If you’d like to learn more and take a 60-day trial of Riverbed Client Accelerator, visit our website.


]]>
5 Key Takeaways from Riverbed’s Global User Conference https://www.riverbed.com/blogs/5-key-takeaways-from-riverbed-global-user-conference/ Thu, 19 Nov 2020 20:59:33 +0000 /?p=16223 Wow—what an event! In keeping with current norms of social distancing and remote work, Riverbed held its first virtual global user conference, and what a day it was! I want to thank everyone who attended and participated. It’s your interaction and knowledge sharing that made the conference a success. Additionally, I need to remind all attendees—and those who didn’t register—that more than 30 sessions and keynote replays from our conference are available on-demand, here.

The Riverbed Global User Conference theme “Maximizing Performance and Visibility for Any User, Network, App, Anywhere” couldn’t have been more timely given the impact the recent pandemic has had on organizations of every kind. With the reality of a work-from-anywhere world, and with it the expanding complexity of the network, the challenges IT teams are facing in our modern, digital era have never been greater.

Uniting subject matter experts from across many domains who all share a passion for delivering the best user experience and productivity for their constituencies, was core to the design of this conference. The SMEs took center stage and didn’t fail to deliver. Sharing the latest knowledge of how you develop a holistic, end-to-end view of exactly what’s happening from the user client—across hybrid networks—and to the cloud was foundational to truly understanding where we are now and where we must go. Building on that knowledge with AI and machine learning to provide actionable insights and forward visibility has become absolutely critical. Leveraging all of the above and providing guidance on how to apply numerous innovations in network and application acceleration—from client to cloud—brought the conversations full circle.

The day-long event was abuzz with energy, but there are five key takeaways that should be reinforced:

  1. Every business needs to accelerate its digital journey. Digital transformations were already well underway in 2019. With the new year came COVID-19, a global pandemic that would challenge our norms and shift digital transformation into overdrive, overnight. IT teams pivoted quickly and responded in herculean fashion, and there’s no turning back. The path forward for every operating model—from supply chains to service delivery to crisis management—is clearly to be led by digital transformation.
  2. Hybrid and multi-cloud adoption continue to grow rapidly. Collaboration tools and SaaS app usage are up significantly, including Office 365, Microsoft Teams, Zoom, and Slack. For example, Teams grew 70% to 75 million daily users in April, alone. The shift to the cloud continues as organizations move more workloads to IaaS and further adopt SaaS apps.
  3. With remote work growing at 50%+ on the heels of the pandemic, flexible work environments are the future. The ‘global experiment’ has made business leaders more comfortable with remote work (95%), and 80% of employees expect more work-from-anywhere flexibility. Overnight, nearly 1.1 billion people were working from home (up from ~350 million in 2019), and more than half of them aren’t returning to the office when the pandemic subsides.
  4. End-to-end visibility is required for our work-from-anywhere reality. Digital transformation, the expanding workplace and explosive growth in cloud/SaaS have only magnified the complexity of today’s network and extended the visibility challenge for IT organizations. And, the old adage has never been more appropriate: you can’t control what you don’t see. End-to-end visibility is more critical than ever. Riverbed has helped many of our customers through this time with the tools to provide better visibility across modern, complex networks. View any of the 15 sessions focused on Network and Application Visibility—ranging from a deep dive on packets and flows to machine learning and AI for NetOps and Troubleshooting—on demand, now!
  5. Performance must extend from the user client through the network and to the cloud—regardless of your team members’ location. Delivering optimization across the modern network includes acceleration for all environments and infrastructure—on-prem, cloud, SaaS, mobile/client—regardless of where the user is located. Riverbed saw 3x growth initially on Riverbed Client Accelerator, which boosts app performance on laptops for remote workers and approximately 100% QoQ growth in Q3 for our application acceleration solutions that include Client Accelerator and Riverbed SaaS Accelerator, which boosts performance of SaaS apps like O365, Microsoft Teams, Salesforce and ServiceNow. You can view any of the 10 sessions focused on Network and Application Performance—ranging from optimizing encrypted SSL traffic to best practices for O365 acceleration—on demand, now!

The Riverbed Global User Conference clearly reinforced just how valuable our peer experts can be (make sure to view the many great success stories customers shared during the event) and also how passionate the IT community is with respect to providing the best user experience and productivity for their teams.

This is exactly where Riverbed’s been focused for years, with you, and even more so today. Partner with us to help your business or government agency maximize visibility and performance—for any user, network, application. Anywhere your users reside.

]]>
New Riverbed Unified NPM Products Support Year of Change https://www.riverbed.com/blogs/new-riverbed-unified-npm-products/ Mon, 16 Nov 2020 13:15:00 +0000 /?p=16189 2020 was a year of change, much of it unanticipated as the COVID-19 pandemic swept across the planet.

The massive surge in work-from-home (WFH) affected hundreds of millions across the globe as more than half of the workforce was forced to shelter in place. IT teams toiled to get team members up, running, and secure in their WFH environments—wherever that might be.

Keeping their WFH teams productive became priority one—overnight. Only 67% of IT executives say this transition was smooth.[1] The smoothest transitions came from companies in technology, finance and business services sectors where they already had a high deployment of cloud apps and remote workers.

One of the surprising side effects of COVID-19 is that it is reinforcing the value of technology and potentially reshaping the work environment. In fact, 60% of IT executives believe COVID-19 will make their organizations more reliant on IT,[2] while 95% of business leaders report being comfortable with remote work and anticipating more of it in the future.[3]

Improvements Required for Remote Work

However, many IT improvements are still required for ongoing operations and the security of employees, wherever they decide to work. On-premises data centers will feel the long-term impact of COVID-19 and its emphasis on arms-length operations. According to the Riverbed Future of Work Global Survey, based on 700 respondents, plans include the greater use of public cloud services to replace data center infrastructure and improving visibility to manage hybrid application and network performance.

Also not surprisingly, organizations will increase their budget for cybersecurity products and services. Cybersecurity is always a top contender. However, this year they are responding not only to the increased number of remote workers accessing corporate data from offsite locations—raising concerns about network security, stolen devices, and data encryption—but also the increased threat level due to the rise in cybersecurity attacks attributed to COVID-19.

As a result of these trends, Riverbed announced new Unified NPM capabilities, which were showcased at the Riverbed Global User Conference 2020. Riverbed’s unified NPM solution makes it easier than ever to ensure application performance, network security and user experience across the hybrid IT landscape:

New Cloud and HA Deployments

Riverbed Unified NPM has long offered hybrid-normalized visibility, providing the same industry-leading visibility across on-premises, cloud, hybrid and multi-cloud environments. Our network flow monitoring lets you discover applications, hosts and conversations inside the cloud. It also helps you identify usage by VPCs, regions, and availability zones. This detailed visibility helps minimize costs by reducing inefficient or unnecessary traffic and lets you build more efficient cloud architectures.
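To make the idea of flow-based cloud visibility concrete, the sketch below aggregates traffic volume per VPC-to-VPC conversation. The record fields and values are hypothetical, not Riverbed's schema; it is a minimal illustration of the aggregation behind this kind of report.

```python
from collections import defaultdict

# Hypothetical flow records, as a flow collector might normalize them.
# Field names here are illustrative only, not a real product schema.
flows = [
    {"src_vpc": "vpc-app", "dst_vpc": "vpc-db", "region": "us-east-1", "bytes": 120_000},
    {"src_vpc": "vpc-app", "dst_vpc": "vpc-db", "region": "us-east-1", "bytes": 80_000},
    {"src_vpc": "vpc-app", "dst_vpc": "vpc-web", "region": "eu-west-1", "bytes": 50_000},
]

def bytes_by_conversation(records):
    """Aggregate traffic volume per (source VPC, destination VPC, region)."""
    totals = defaultdict(int)
    for r in records:
        totals[(r["src_vpc"], r["dst_vpc"], r["region"])] += r["bytes"]
    return dict(totals)

# Sorting by volume surfaces the conversations worth optimizing first.
top_talkers = sorted(bytes_by_conversation(flows).items(), key=lambda kv: -kv[1])
```

Ranking conversations by volume like this is what lets teams spot inefficient or unnecessary cross-VPC traffic and trim cloud transfer costs.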

What’s new is that you can now also deploy Riverbed NetProfiler, Flow Gateway and NetIM in the cloud (remember AppResponse Cloud is already deployable in the cloud). These solutions run in AWS, AWS GovCloud, Azure and Azure Government with all the bells and whistles of the on-premises version. Additionally, NetIM offers robust High Availability (HA) capabilities to ensure that real-time insight into the health and performance of IT infrastructure is reliable and always accessible.

Increased Insight into Encrypted Traffic

Nearly 90% of all Internet traffic is encrypted[4]—and it won't be long before nearly all Internet traffic in transit is encrypted. While this is great for privacy, it creates significant security blind spots. By leveraging encryption, attackers can bypass most inspection tools to deliver malware into the network. In fact, 71% of malware uses encryption to communicate covertly with command and control locations.[5]

To solve this challenge, Riverbed has introduced two new capabilities:

  1. A new PFS (Perfect Forward Secrecy) API enables symmetric key intercept integrations between Riverbed and two partners: Nubeva and The Load Balancer Crew (LBC). This technology allows Riverbed AppResponse users to gain visibility into TLS-encrypted application traffic for use in performance and security analysis.
  2. New TLS Analysis Insight reports for Riverbed AppResponse let you track, report on, and validate the integrity of SSL and TLS sessions, certificates and cipher suites for easy key maintenance and improved security.
TLS Handshake Insight makes it easy to determine which versions of SSL and TLS are being used and in which quantity.
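The core of a TLS version breakdown can be sketched in a few lines. A real tool decodes ClientHello/ServerHello messages captured from the wire; here the negotiated versions are assumed to be already extracted, and the data is hypothetical.

```python
from collections import Counter

# Hypothetical, already-decoded handshake records; in practice these would
# come from parsing TLS handshakes in captured packets.
handshakes = ["TLS 1.2", "TLS 1.3", "TLS 1.2", "TLS 1.0", "TLS 1.3", "TLS 1.3"]

def version_breakdown(records):
    """Tally negotiated protocol versions: the heart of a version report."""
    return Counter(records)

# Most common versions first, e.g. to spot lingering TLS 1.0 sessions
# that should be retired for security reasons.
report = version_breakdown(handshakes).most_common()
```

A report like this makes deprecated versions immediately visible so they can be prioritized for remediation.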

To read more about these capabilities, check out this blog Riverbed AppResponse Adds TLS Analysis and PFS API.

New Behavioral Analytics of Packets, Apps & Users

Lastly, a new set of powerful capabilities makes it easier for you to understand what is most important about your network by bringing relevant insights to the surface.

AppResponse Adaptive Thresholds use behavioral analytics to automatically detect and flag abnormal changes in server response times and total throughput. These new capabilities reduce the noise and overhead associated with unimportant or unactionable alerts so you won’t have to fiddle to find that perfect threshold. AppResponse does all the work for you, continuously. It’s always learning what’s normal, which means it proactively detects abnormal conditions, giving you early warning that something is amiss.
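A generic sketch of this kind of behavioral baselining, using an exponentially weighted moving average and variance; this illustrates the general technique only, not Riverbed's actual algorithm.

```python
def adaptive_alert(samples, alpha=0.2, k=3.0):
    """Flag sample indices that deviate more than k standard deviations
    from an exponentially weighted baseline. Generic sketch of adaptive
    thresholding, not any vendor's specific algorithm."""
    mean, var = samples[0], 0.0
    alerts = []
    for i, x in enumerate(samples[1:], start=1):
        std = var ** 0.5
        if std > 0 and abs(x - mean) > k * std:
            alerts.append(i)  # abnormal relative to the learned baseline
        diff = x - mean       # update the baseline after checking
        mean += alpha * diff
        var = (1 - alpha) * (var + alpha * diff * diff)
    return alerts

# A server response-time series (ms) with a sudden spike at the end:
spikes = adaptive_alert([100, 104, 97, 102, 99, 300])
```

Because the baseline keeps adapting, gradual shifts in normal behavior don't trigger alerts, while sudden deviations do.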

Adaptive Thresholds solves the problem of setting thresholds. It automatically learns the behavior and alerts on abnormal changes.

To read more about AppResponse Adaptive Thresholds, see the blog New AppResponse Adaptive Thresholds Reduces False Positives.

AD Connector 3.0 extracts user identity information from an Active Directory source, pulls it into Riverbed NetProfiler and makes it available for use within reports. Being able to resolve to the user name is useful for troubleshooting both security and performance problems, which is especially helpful when monitoring work-from-home environments.

See the related blog: NetProfiler Users Are More Than a Number With AD Connector 3.0.

Finally, NetProfiler expanded its integration with Riverbed SteelHead with support for custom application groups and enhanced reporting of inbound Network QoS (in addition to the already supported outbound QoS). These help you find and fix performance issues with a unified platform.

There’s a blog on this topic, too: Add Visibility to Your SteelHead to Optimize Network Performance.

To learn more about the Riverbed Unified NPM solutions, you can go to www.riverbed.com/npm.


[1] ESG, The Impact of the COVID-19 Pandemic on Remote Work, IT Spending, and Future Tech Strategies, 2020

[2] ESG, The Impact of the COVID-19 Pandemic on Remote Work, IT Spending, and Future Tech Strategies, 2020

[3] https://lp.buffer.com/state-of-remote-work-2020

[4] F5, Detect Encrypted Malware, 2020

[5] F5, Detect Encrypted Malware, 2020

]]>
Maximizing Visibility & Performance for a Work-From-Anywhere World https://www.riverbed.com/blogs/maximizing-visibility-performance-for-a-work-from-anywhere-world/ Thu, 05 Nov 2020 14:29:11 +0000 /?p=16146 When I joined Riverbed in October 2019, one of my first priorities was to present a clear vision and strategy for the company going into 2020. And after weeks of learning and listening to our customers, partners and employees, the value Riverbed brings to the market and our mission became evident—we exist to help our customers deliver exceptional visibility and performance for any network, any application, to all users, anywhere.

Now as most IT professionals will attest, that’s not an easy thing to do, especially in complex hybrid cloud environments. But that’s where Riverbed has always proven to be most valuable—in the world’s largest, most sophisticated networks—helping IT teams effectively manage diverse and distributed infrastructure, multiple clouds and third-party services.

Fast forward to March 2020 when the global pandemic caused many executives, including myself, to revisit well-planned strategies for the year. For Riverbed, this meant pivoting all our efforts to helping our customers quickly scale work-from-home models with application acceleration and network performance management solutions that kept remote workers productive and networks running and secure. For our customers, it meant accelerating digital initiatives like never before. Projects that would have taken years to execute were completed in weeks or months as IT organizations worked to support 1 billion employees suddenly working from home.

With urgent needs met, our customers are looking ahead to a future that is increasingly hybrid—both from an IT infrastructure and workplace perspective. As a result of the pandemic, 61% of CIOs are fast-tracking digital transformation efforts[1] and 59% of enterprises are accelerating adoption of cloud services[2] and MPLS alternatives. In addition, 74% of companies plan to expand the number of remote workers[3], creating hybrid workplaces where employees will split their time between the office and working remotely. And regardless of location, these employees will require the very best network and application performance to do their jobs.

In this hybrid network, hybrid workforce environment, the same IT professionals who navigated their organizations through the initial waves of COVID-19 disruption are being called upon again—this time to lead the most critical priorities that have emerged since the crisis. These priorities, which are all IT dependent, include accelerating digitization, enabling work-from-anywhere models, and strengthening operational resilience.

With all eyes on IT leaders and their teams to deliver against these priorities, it’s absolutely vital that they have the tools they need to succeed. First and foremost, they need end-to-end visibility—from the client, to the network, to the application, to the cloud—because it’s impossible to manage what isn’t measured or control what can’t be seen.

Visibility provides insight into where performance and security problems exist, what they are, when they occurred and why. Insight, in turn, informs action. As issues are uncovered, IT teams need to be able to quickly apply network changes, including optimization and acceleration, exactly where it’s needed to improve application performance, bolster security and ensure end-user satisfaction.

With these capabilities in place, IT teams are better equipped to deliver the quality of service, resiliency and innovation their organizations, customers and end users expect. And in doing so, technology leaders will take their rightful seat at the table, alongside other business leaders who are empowered to make strategic decisions and effect change for their organizations. Because never before has the technology strategy and execution of the IT organization been so closely linked to the productivity and performance of organizations as a whole.

These are unprecedented times. But, I believe the value of Riverbed and the mission we set forth prior to the pandemic remains true and ever more relevant to our customers as they enter 2021 and beyond. If you are interested in learning how we help organizations deliver exceptional visibility and performance for any network, any application, to all users, anywhere, you’ll find more than 30 sessions and keynote replays from our Riverbed User Conference here. I hope you take advantage of one or more of these sessions offered to position yourself and your organization for future success. It will be time well spent.

 

[1] IDG Research: CIO COVID-19 Impact Study, April 2020

[2] Flexera 2020 State of the Cloud Report

[3] Gartner: COVID-19 Bulletin, Executive Pulse, 3 April 2020

]]>
Network Visibility Proves Critical as Feds Grapple with Explosion of New Endpoints and Cloud-Based Apps https://www.riverbed.com/blogs/fed-network-visibility-new-endpoints-cloud-based-apps/ Tue, 27 Oct 2020 12:30:00 +0000 /?p=16101 Every federal IT manager wants to deliver the best end-user experience for their employees and agency, including a reliable network infrastructure that provides connectivity and functionality, useful solutions that enable seamless collaboration and strong security. This is especially vital as the federal government continues operations under mass telework, which studies have indicated is likely here to stay even beyond the coronavirus pandemic.

Riverbed recently commissioned a study of 200 federal government IT decision makers and influencers to assess the state and impact of network visibility on performance, productivity and security in the federal government’s increasingly complex network environments. While there were many interesting results, three trends emerged that really stood out.

Network complexity is high

Federal agencies were already making strides in their IT modernization efforts and transition to the cloud under the imperative of Cloud First and Cloud Smart policies, but—as it’s been widely acknowledged—the massive push to telework accelerated the move to cloud to allow greater access and flexibility for staff no longer on-prem.

I was happy to see that 20% of respondents have completed their priority projects for SaaS, cloud and SD-WAN adoption, and another 47% have their priority projects in progress. However, that leaves more than 30% still planning, or not even considering, these transitions. That said, of those surveyed:

  • 87% recognize that network visibility is an enabler of cloud infrastructure
  • 90% already consider their networks to be moderately-to-highly complex
  • 32% say that increasing network complexity is the greatest challenge an IT professional without visibility faces in their agency when managing the network

As cloud and as-a-service adoption continues to advance, federal networks will inevitably become more complex and require advanced network visibility capabilities to effectively manage, monitor and secure them. Network visibility can help expedite the evaluation process to determine what goes onto an agency’s cloud as well as what data and apps stay on-prem. It also allows clearer, ongoing management across the networks to enable smooth transitions to cloud, multi-cloud and hybrid infrastructures—something IT leaders clearly know.

Visibility is the key to security

Network visibility is the foundation of a strong cybersecurity posture. Attackers spend an average of 200 days inside a network before being detected and can do significant damage during this time, from stealing credentials and PII to exfiltration of other highly sensitive data.

Recent high-profile IG reports have pulled back the veil on several federal agencies’ lack of network visibility, but the truth is they’re not unique. My teams have had the experience of finding elements completely unknown, and oftentimes possible nefarious unknowns, on nearly every customer network we’ve put our Network Performance Monitoring (NPM) solutions on. There’s never been a time that our customers weren’t surprised by what we found.

The very first step to securing your network is identifying what is on it and what shouldn’t be there. Federal IT leaders recognize that network visibility is the foundation of a strong cybersecurity posture:

  • 93% say greater visibility facilitates greater network security
  • 96% consider network visibility to be valuable in assuring secure infrastructure
  • They consider cybersecurity to be the No. 1 priority that can be improved through better network visibility

What really stood out to me, though, is that respondents view automated threat detection as the most important aspect of a network visibility solution. Advanced reporting and automated alerting come in as the second and third most important visibility features. Federal IT teams need systems that reach out and alert them when something isn’t right. Riverbed’s NPM solutions allow IT to have a full view of their network landscape in real-time—we inform you of everything and everyone that’s on the network, who they’re talking to and what they’re saying—and we provide the highest-fidelity data to enable automated alerting and threat detection.

Telework cuts traffic and drives change

Forgive the IT pun there… the massive shift to work-from-home may have cut vehicle traffic, but it actually drove an explosion of new endpoints, cloud-based apps and traffic on federal networks—along with the adoption of visibility solutions as IT teams grappled with trying to get their arms around who, what, when and where their networks were being accessed. In fact, 81% of survey respondents noted that the increasing use of telework accelerated their agency’s use and deployment of network visibility. With 90% of federal employees currently teleworking and 86% indicating that they expect to continue to do so at least part-time after the pandemic ends, network visibility has never been more vital.

IT leaders agree that network visibility solutions play an important role in managing and securing their agencies’ increasingly complex and hybrid networks. Riverbed’s market-leading NPM solutions diagnose and troubleshoot network and application performance and security issues, delivering full visibility into all networked devices, fewer network and app blind spots, and an overall 67% reduction in lost user productivity.

Please check out our website for more information on solutions to help your federal IT team with its network visibility and performance needs, or feel free to reach out to me directly.

]]>
Strengthening Operational Resilience: A Crucial Goal for Surviving the Next Threat https://www.riverbed.com/blogs/strengthening-operational-resilience/ Fri, 23 Oct 2020 00:10:07 +0000 /?p=16110 It’s an interesting moment in time, to say the least. Across industries, every company is looking inward at their own operations to determine how they can weather this period in history. At the same time, they’re looking outward at how they can support their customers in doing the same.

The reality is that in a changed world, we all have to look at our businesses differently. That’s why so many organizational leaders are rethinking their priorities to focus on what’s critical to maintaining both short- and long-term relevance: accelerating digital transformation, enabling work-from-anywhere models, and strengthening operational resilience.

If you try to run your business with its pre-pandemic focus and cadence, you’ll miss big. At best, the results will be off the mark and at worst, they’ll prove disastrous for your company, your customers and your employees. This is the time to be incredibly proactive in analyzing and addressing the operational challenges that are unique to your business in a pandemic landscape.

But it’s not solely about weathering this particular storm. According to global consulting firm PwC, the definition of operational resilience is “an organization’s ability to protect and sustain the core business services that are key for its clients, both during business as usual and when experiencing operational stress or disruption.” So it’s clear that business and IT leaders need to look with a long eye to a horizon that may have other pandemic-level disruptions and ask, “Do we have what it takes to survive the next big hit?”

This is about seizing the opportunity now to build operational resilience in real time to address this current crisis—and then evolve that resilience to keep your organization strong and flexible enough to absorb external shocks and keep on going.

At Riverbed, we’ve seen the interest in operational resiliency firsthand. As companies went from workforces tightly clustered in physical offices to a far-flung, work-from-anywhere model, the sudden hit to IT visibility into application and network performance was unnerving and unproductive. How were critical apps and systems running? Could employees connect with business-critical apps when and where they needed them? How were these applications performing across the network? Could there be a better experience? Could IT departments understand security threats to the network—or network performance at all, especially with the workforce going remote?

We’re fortunate that our innovations help customers stay ready and able to deliver their own innovative products and services. When you’re trying to keep things moving in a crisis, it’s important that employees are able to work efficiently using applications in complex hybrid environments. For example, that’s where our ability to deliver ten times the acceleration for SaaS applications is essential.

Our real-time visibility tools make it possible to understand network and application performance and resource utilization across these complex hybrid cloud environments. The way we manage network performance blends telemetry from every packet, flow, and device in context with the machine learning, AI-powered analytics and visualization to ensure action can be taken. This is the way IT teams can get to the bottom of issues faster, detect security threats before they become catastrophes, and automate remediation.

Moving forward, operational resilience will increasingly become a differentiator for companies large and small. Customers want reassurance that when disaster strikes, the companies they choose to engage with can still deliver on their commitments—from delivering products that inspire to helping them troubleshoot and solve problems to developing new services that address emergent needs.

If the pandemic has taught us anything, it’s that operational resiliency is paramount. And, this was certainly validated at the Riverbed Global User Conference, where more than 1,000 attendees gathered virtually to discuss every angle of operational resilience and more. If you were unable to attend the event, we’ve compiled more than 30 sessions and keynote replays from our conference to give you the essential capabilities and how-to advice needed to maximize performance and visibility of any network for any application to all users, anywhere. Register to access the full library of content, here.

]]>
Enabling Work-From-Anywhere Models https://www.riverbed.com/blogs/enabling-work-from-anywhere-models/ Thu, 15 Oct 2020 19:39:49 +0000 /?p=16062 In Part 1 and Part 2 of this blog series, we established that forward-thinking organizations are prioritizing technology investments to ensure business growth and long-term relevance. This includes getting prepared—and fast—to enable work-from-anywhere models.

The concept of remote work is not new. Yet, according to Riverbed’s Future of Work survey, 69% of business leaders said they were not completely prepared to support extensive remote work at the start of the COVID-19 outbreak. And technology performance issues amongst their remote workers impacted both the individuals and business as a whole through reduced employee productivity (37%); increased anxiety (36%); and increased difficulty engaging with customers (34%). As a result, 61% of business leaders plan to make investments over the next year to enhance remote work performance.

Wise investments given the widely-held belief that the office of the future will be increasingly hybrid and distributed. As employees become more comfortable in the post-pandemic world and begin to move about, work-from-home will transition to a work-from-anywhere model. Employers also realize there are benefits to remote work—cost savings, employee retention, talent acquisition—to name a few. In fact, many leading brands, including Twitter, Square and Nationwide, are already paving the way by expanding their remote work policies and/or extending them “forever.”

CIOs and their teams are at the heart of helping their organizations enable work-from-anywhere models. But for remote employees, the unpredictability of network and application performance dramatically increases. They face unique issues—poor network stability and saturated local connections due to simultaneous access of bandwidth-intensive collaboration apps like video streaming and large file sharing—all of which negatively impact workforce productivity. Riverbed helps enterprises address these productivity challenges, maximizing application performance through massive data reduction and latency mitigation. Workforces can stay productive anywhere, anytime with fast, consistent, and available applications they need to get work done.

In a highly distributed world, cross-domain visibility of the expanded network is a must for security and resiliency. This requires a network performance management (NPM) solution that captures telemetry from every packet, flow, and device in context and then leverages machine learning, AI analytics and visualization to empower action. This gives organizations the control they need to enable work-from-anywhere models and to proactively identify and quickly troubleshoot network and application performance and security problems.

Anywhere can be an office, but only with the right technology. If you are interested in learning how we help organizations deliver exceptional visibility and performance for any network, any application, to all users, anywhere, you’ll find a wealth of information at our global user conference site. Don’t miss this opportunity to take advantage of more than 30 sessions and keynote replays offered to position yourself and your organization for future success.

]]>
8 Keys to Choosing an Ideal NPM Solution https://www.riverbed.com/blogs/8-keys-to-choosing-an-ideal-npm-solution/ Thu, 15 Oct 2020 14:15:00 +0000 /?p=15949 I’m sure you’ll agree that cloud environments and new application architectures have drastically evolved over the past five years. With this evolution, network performance management (NPM) and application performance management (APM) solutions are pushed to the limits. Application migration from on-premises to the cloud, the popularity of SaaS applications, and the transition from virtual environments to containers have all contributed to fundamental and profound changes. As a result, there are significant blind spots that make it extremely challenging for IT teams to effectively monitor and manage the holistic hybrid infrastructure.

Siloed NPM Solution creates blind spots in a hybrid environment

Complicating the lives of IT operations teams further, their responsibilities now reach far beyond the corporate network boundaries. We have all witnessed an unprecedented shift to remote work as a result of the pandemic, and IT operations responsibilities now extend well into home and work-from-anywhere environments. Not only do IT teams have to grapple with performance issues, they have to deal with increased security vulnerabilities as cyber attackers have stepped up their game against vulnerable home office safeguards. Ensuring remote workers remain productive while keeping corporate data and applications secure is imperative. Yet, Digital Enterprise Journal found that it takes 197 days on average just to identify that a breach has occurred. That is too long for operations to fly blind.

NetOps and SecOps need common NPM solution

8 keys to selecting the best NPM solution 

In this new norm, how do business and IT leaders ensure their organizations operate at peak performance? To start, they should consider these 8 keys to providing NetOps and SecOps teams with an ideal NPM solution:

  1. Monitor digital experiences beyond the network
  2. Integrate packets, flow and device metrics
  3. Alert NetOps proactively, before users notice
  4. Auto discover applications
  5. Map application and network dependencies
  6. Enable NetOps and SecOps with common datasets
  7. Provide insights into end-user experience
  8. Gain enterprise-wide visibility

As you develop your short list of potential NPM providers, download The Essential Network Monitoring Solution Checklist and be sure to evaluate Riverbed’s Unified Network Performance Management solution. With Riverbed, you can:

  • Understand how network performance and security threats impact business initiatives
  • Proactively detect and fix network performance and security problems
  • Remove cloud and hybrid infrastructure blind spots
  • Eliminate the finger pointing among operations teams

What criteria do you use to choose your NPM solution? Share your thoughts in the comments below.

_______________

References:

  1. Digital Enterprise Journal “19 key areas shaping IT performance markets in 2020” — Apr 22, 2020
  2. Digital Enterprise Journal


]]>
Enterprise SD-WAN Trade-Offs Part 4: User Experience vs. Security https://www.riverbed.com/blogs/enterprise-sdwan-tradeoffs-user-experience-versus-security/ Tue, 13 Oct 2020 12:30:00 +0000 /?p=15907 Is it possible to meet user expectations and maintain SD-WAN security?

One benefit of SD-WAN is that it makes it easy to steer certain traffic from remote sites toward your on-premises data centers and steer other traffic from remote sites directly to the Internet. Once selective traffic steering is made easy, there’s less of a reason to backhaul Internet-bound traffic from remote sites through your data center. Doing so only adds latency between users and their Internet-hosted apps and adds unnecessary traffic on your network. Instead, steer Internet-bound traffic directly from the branch to the Internet. Less latency. Less overall network traffic. Better performance. There’s a catch, however.


The problem is that steering traffic directly from the branch to the Internet comes at the cost of expanding your network’s threat perimeter. You’ve traded network security for app performance. In order to navigate this trade-off, let’s investigate the following:

  • What are the best ways to effectively protect the edges of my network without breaking the bank?
  • What if I have to continue backhauling Internet-bound traffic (e.g. due to regulatory compliance or corporate policy)?
  • Is there a way to overcome the negative effects of higher latency that may arise?

Protect the edges of your network without breaking the bank

A decision about which security solution(s) to use is a critical one for an IT department—and one which is rarely met with casual points of view. First of all, when considering network security services as part of an SD-WAN transformation, start by making sure your SD-WAN solution has you covered regardless of the path you choose. Namely…

  • Your SD-WAN solution should make it easy to service chain with 3rd party security services, AND
  • Your SD-WAN should offer a set of native security functions out of the box

Let’s double click on each of those statements to further explore why it’s important and what to look for in each.

Your SD-WAN security should make it easy to service chain with third-party security services


It’s important that your SD-WAN solution does not force you to abandon security services from vendors already in use and trusted within your organization. It’s typical (and recommended) that an SD-WAN transformation project be done in collaboration with the IT security team; they’re a critical stakeholder. You want to offload Internet-bound traffic at the source, near the user, but they may see that as upending their traditional approach to security, which seeks to limit the number of access points to the big bad Internet.

As a starting point, look for an SD-WAN solution that enables the network team to meet your security team. Be mindful of the following:

Does the SD-WAN solution integrate with ANY other third-party security vendor products?

With basic SD-WAN solutions, as well as those from vendors who began life as network security vendors, you’ll find there’s little choice about which security solutions integrate well with the SD-WAN functions. This is obviously the least desirable scenario.

Does the SD-WAN solution integrate with a specific but limited number of third-party security vendor products?

Obviously, this is better than nothing but only works well if the integration includes support for the security vendor required by your security team.

Does the SD-WAN solution provide third-party security service chaining in a one-box configuration?

As you evaluate different SD-WAN offerings this is what really separates the wheat from the chaff. Very few SD-WAN solutions provide one-box service chaining supporting the integration of virtual instances of third-party security services. This can make a big difference in both the capital and operational cost of managing the edge of your network. Multiply the number of boxes in each site by the total number of sites and the numbers can get really big, really fast.

Your SD-WAN security should offer native security functions out of the box

While it’s often wise and pragmatic to first focus on integration with third-party security functions (e.g. from a vendor your security team already knows and/or uses), there’s an opportunity to further reduce total costs by leveraging native security functions provided by your SD-WAN solution out of the box. Look for SD-WAN solutions that provide a complete set of capabilities to maximize your savings, including:

  • Next-Gen Firewall
  • Next-Gen IPS/IDS
  • Malware Protection
  • Antivirus Protection
  • Unified Threat Management

Deliver exceptional user experience for backhauled Internet traffic

While SD-WAN may unlock new opportunities to steer Internet-bound traffic from remote sites directly to the Internet, bypassing any backhaul to a centralized data center or hub, it’s unlikely this will happen all at once for all traffic types. It’s more likely that many sites will continue to backhaul for some time (e.g. those that haven’t yet migrated to SD-WAN). Even once a site has migrated to SD-WAN, it’s likely that certain Internet-bound traffic will continue to be backhauled. For example, a business application delivered via SaaS may be more trustworthy than recreational Internet traffic. In this case, it’s prudent to keep backhauling all Internet-bound traffic except for a specific whitelist of apps that are steered directly from the branch to the Internet.

Every site and/or app that leverages backhauling will continue to face higher latency vs. direct steering from the branch. And, if the backhauled traffic is traversing conventional circuits (e.g. MPLS), you may also be facing bandwidth constraints as well.

Your SD-WAN solution should overcome high latency and limited bandwidth for backhauled traffic

Most SD-WAN solutions use app-centric policies to determine when Internet-bound packets are steered directly from branch to the Internet or backhauled. But, once the packets are placed on the network, the user’s experience is entirely determined by circuit conditions of the chosen path.
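
The app-centric steering decision described above can be sketched as a simple whitelist lookup. Note that the app names and path labels below are purely illustrative, not any vendor's actual policy syntax:

```python
# Hypothetical whitelist of trusted SaaS apps allowed to break out locally;
# everything else keeps the safer backhauled path.
DIRECT_BREAKOUT = {"office365", "salesforce", "zoom"}

def steer(app_name: str) -> str:
    """Return the path for a branch site's Internet-bound flow."""
    if app_name.lower() in DIRECT_BREAKOUT:
        return "direct-to-internet"    # lowest latency, local breakout
    return "backhaul-to-hub"           # inspected centrally before egress

assert steer("Zoom") == "direct-to-internet"
assert steer("random-website") == "backhaul-to-hub"
```

Once the packet is placed on the chosen path, though, the policy's job is done; from there the user's experience is governed by the circuit conditions described next.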

Look for an SD-WAN solution that offers WAN optimization and app acceleration services, especially for SaaS and cloud-hosted apps.

SD-WAN security and user experience should not be a trade-off

As you modernize your WAN, you will face trade-offs between network security and user experience / app performance. There’s no question about that. However, you can break through these trade-offs so long as your SD-WAN solution provides the right set of capabilities. Ensure your solution supports: (i) extensible service chaining, (ii) advanced native security functions and (iii) app acceleration for SaaS/cloud-based apps.

SD-WAN security and user experience will not be a trade-off if you consider the capabilities carefully.

With those capabilities in hand, you’ll have the freedom to transform your WAN over time. You can maintain SD-WAN security requirements AND meet user expectations for fast and reliable app performance.

Resources:

  • You can find an SD-WAN solution that provides all of the functions described in this blog post here.
  • This blog is part of a broader series on breaking through important trade-offs you’ll encounter while modernizing your network with SD-WAN.
  • Learn more about the differences between SD-WAN and WAN optimization.
]]>
Accelerating Digital Transformation: The Race for Relevance https://www.riverbed.com/blogs/accelerating-digital-transformation/ Thu, 08 Oct 2020 14:01:01 +0000 /?p=16017 As established in Blog 1 of this series, three critical CEO priorities have emerged as a result of the pandemic. At the top of that list is accelerating digital transformation.

It’s not a secret that COVID-19 disrupted the very carefully planned digital transformation trajectory most companies were on. CIOs and internal technical organizations had mapped out a steady-state pace of investments in cloud services from IaaS to SaaS to PaaS while simultaneously adopting mobile capabilities and exploring technologies like artificial intelligence, machine learning, Internet of Things and Big Data to drive digital innovation. But the pandemic shifted dollars from longer-term priorities to address immediate needs: getting employees up and running, securely and productively, in their home environments. This became—and continues to be—about the race for relevance, the effort to remain competitive no matter the circumstances.

Today, across industries, businesses have adapted and the universal hope is that the pandemic environment will end quickly. But the reality is that the timing around the pandemic’s conclusion is unpredictable and the ripple effects will last much longer. In the meantime, there’s no path to a clean network transition that encompasses thousands of “sites” (employees’ homes) that are not on company-owned networks. Hence, the pressure most leaders feel to get at least some of their workforces back in the office when it’s safe to do so.

Think of the enterprise-owned premises as a castle; IT knows how to support and protect its inhabitants as long as they’re behind the moat and thick castle walls. But send them back to the village into their own places and IT’s typical mechanisms for support and protection no longer work. In a pandemic world, IT teams are blind. They can’t ensure consistency of experience and security also becomes much more difficult. Compounding that problem, there’s still a desire for digital transformation but that transition is not in the hands of a single group or person. Any major shifts that require buy-in from multiple stakeholders are inherently a slower proposition.

However, digital technologies can provide an immediate reprieve, solving the problems of today while company leadership sorts out the priorities and timelines for tomorrow. Many of our customers, for example, are turning to Client Accelerator with SaaS Accelerator to optimize the performance of critical productivity apps such as O365 for users anywhere, even when the traffic origination point is now controlled by third parties. We continue to see the value such solutions have in sustaining remote workforce productivity and quality of experience.

We also see continued value in foundational capabilities every IT organization must have to support digitization. This includes next-generation, software-defined networks and most importantly, unified visibility and real-time insights into IT infrastructure—every packet, flow and device—that comprise an experience for the end user. Knowing the good and the bad as they happen is critical for CIOs and IT organizations to either stay the course or course correct as need be.

It’s clear that in a world where the vast majority of interactions are now virtual, there is an acute and immediate need to fast-track digitization to not only survive the crisis but to ensure long-term relevance. Companies need to select for forward momentum in every technical decision, policy, and purchase that’s made. Even those actions taken for the short term should still be evaluated against one primary metric: How does this accelerate our longer-term digital transformation efforts?

Riverbed is laser-focused on delivering the innovations that help companies generate real, lasting momentum on their digital transformation journey. We’ve compiled more than 30 sessions and keynote replays from our Riverbed User Conference to give you the essential capabilities and how-to advice needed to maximize performance and visibility of any network for any application to all users, anywhere. Register to access the full library of content here.

]]>
Enterprise SD-WAN Trade-Offs Part 3: Cost vs. Performance https://www.riverbed.com/blogs/enterprise-sdwan-tradeoffs-cost-versus-performance/ Thu, 08 Oct 2020 12:30:00 +0000 /?p=15847 SD-WAN makes it easy to incorporate less-costly bandwidth options like Internet Broadband and LTE at remote locations. What are the performance-related SD-WAN trade-offs to consider? Here’s a question: What is the increase in capacity going to do to your app performance? In this third part of the Enterprise SD-WAN Trade-Offs blog series, we will examine the factors you should consider when incorporating inexpensive bandwidth options.

You might be thinking, “Wait! Doesn’t more capacity always equate to better app performance?” Well, like most things in life, it depends.

The reality is that more WAN capacity can lead to any range of possible effects concerning app performance:

  1. More WAN capacity could yield NO DIFFERENCE to app performance, or…
  2. More WAN capacity could make app performance BETTER, or…
  3. More WAN capacity could even make performance WORSE!

It all depends on the underlying bottleneck which is limiting app performance in the first place. If you don’t know the situation you’re in, you may be surprised to find your app performance is no better—or is even worse—with higher capacity bandwidth circuits in place.

SD-WAN trade-offs: performance factors to consider

There are three key bottlenecks to be aware of as well as how they map to the results mentioned above:

  • High Network Latency: more capacity will yield NO DIFFERENCE.
  • Low WAN Capacity: more capacity will make app performance BETTER.
  • Poor Link Quality: more capacity of lower quality can make performance WORSE.

Note that when it comes to maximizing your application performance, it’s an iterative process. You need to identify the current bottleneck, apply the appropriate remedy and then repeat the same process over again. As one bottleneck is alleviated, a different one may emerge. This means that you need to have a solution with a full complement of capabilities to overcome each bottleneck along the way.
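
That iterative identify-remedy-repeat loop can be sketched as a lookup from observed circuit metrics to the likely bottleneck. The thresholds below are illustrative placeholders, not recommendations:

```python
REMEDIES = {
    "poor_quality": "dynamic link conditioning (FEC / packet duplication)",
    "low_capacity": "compression / deduplication, or a larger circuit",
    "high_latency": "protocol acceleration (TCP and CIFS/SMB)",
    "none":         "no change needed",
}

def diagnose(utilization_pct: float, rtt_ms: float, loss_pct: float) -> str:
    """Pick the dominant bottleneck from basic circuit metrics
    (thresholds are illustrative, not recommendations)."""
    if loss_pct > 1.0:          # drops are throttling the transfer
        return "poor_quality"
    if utilization_pct > 90.0:  # the circuit itself is saturated
        return "low_capacity"
    if rtt_ms > 50.0:           # windows sit idle waiting on acknowledgments
        return "high_latency"
    return "none"

# One pass of the loop: measure, diagnose, apply the remedy, then re-measure.
assert REMEDIES[diagnose(40.0, 120.0, 0.2)].startswith("protocol acceleration")
```

As one bottleneck is remedied, re-running the same diagnosis often surfaces the next one, which is exactly why a full complement of capabilities matters.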

Here’s a very common example: Let’s say that you’re dealing with the performance of large file transfers across your WAN using a file-sharing protocol like Microsoft CIFS/SMB. Each of the bottlenecks above can emerge, and increasing bandwidth only addresses one of the problems.

Network latency

The first factor in this SD-WAN trade-off is network latency that inhibits the performance and throughput of the network protocols (TCP) and application protocols (CIFS/SMB). One indicator of this situation is that available WAN capacity remains unused even while the file transfer occurs.

How is latency having this impact? In the case of network protocols, the TCP stacks residing in the client and/or server operating systems are configured by default to send a maximum amount of data (in IP packets) onto the network before receiving a response that the data has been received. Only after the data is transmitted across the WAN, and an acknowledgment of its receipt is transmitted back across the WAN, will the operating system send more data onto the network. Similarly, the file-sharing application protocol (CIFS) will only transmit a maximum number of data “blocks” and wait for an application-level acknowledgment before sending more.
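
This send-and-wait behavior caps throughput at roughly one window of data per round trip, regardless of circuit capacity. A back-of-the-envelope sketch of that ceiling:

```python
def max_throughput_mbps(window_bytes: int, rtt_ms: float) -> float:
    """Latency-bound ceiling: at most one window in flight per round trip."""
    return (window_bytes * 8) / (rtt_ms / 1000) / 1_000_000

# A default 64 KB TCP window over an 80 ms round-trip WAN:
ceiling = max_throughput_mbps(65_536, 80)
assert round(ceiling, 1) == 6.6   # ~6.6 Mbps, no matter how big the circuit is
```

This is why unused WAN capacity during a slow file transfer is the telltale sign of a latency bottleneck: even a 1 Gbps circuit delivers only a few Mbps per connection in this scenario.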

To alleviate this bottleneck, use WAN optimization that can accelerate the performance of BOTH network AND application protocols. If only one or the other, but not both, is employed, latency will continue to limit the end-to-end throughput of the file transfer.


WAN capacity

Next, WAN capacity has become fully utilized and is thereby limiting end-to-end performance. To alleviate this bottleneck, use network data compression and/or deduplication to virtually expand circuit capacity. You could also upgrade to a higher-capacity WAN circuit; however, be mindful of the following common result.

Link quality

Finally, poor link quality causes end-to-end throughput to suffer. You’ve upgraded your MPLS circuit to a higher capacity Internet Broadband circuit, but surprisingly you see end-to-end performance degrade. The percent of network packets dropped during transmission has increased. (This is often due to internal congestion of the WAN itself. Unlike MPLS circuits, which come with higher SLAs and guaranteed performance, lower cost Broadband or LTE bandwidth may be oversubscribed. Essentially, you get what you pay for.)  Such dropped packets slow down the whole machinery of your data transfer. Each dropped packet must first be detected as “lost”. It then must be resent. And finally, its acknowledgment must be received. This entire process takes time and multiple roundtrips across the WAN. And all the while, it keeps the contiguous data stream from being delivered, in order, to its recipient. 

The solution is to employ link conditioning, or forward error correction (FEC), techniques such as packet duplication or multi-packet parity encoding. When these techniques are used, the sender transmits extra information (alongside the data) that the recipient can use to reconstruct one or more packets lost along the way. The use of these techniques comes with one important warning: if the underlying cause of the dropped packets was network congestion in the first place, then such techniques can further exacerbate the problem, causing more congestion, more packet drops and a further reduction in the experienced “quality” of the circuit. (TIP: Look for solutions that automatically and dynamically turn such techniques on and off only when required, based on real-time network conditions.)
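
The multi-packet parity idea can be illustrated with the simplest possible code, XOR parity: for every group of equal-length packets the sender adds one parity packet, and the receiver can rebuild any single lost packet in the group. This is a toy sketch of the principle only; production FEC uses stronger codes and, as noted above, should be toggled dynamically:

```python
from functools import reduce

def make_parity(packets: list) -> bytes:
    """XOR equal-length packets together to form one parity packet."""
    return reduce(lambda a, b: bytes(x ^ y for x, y in zip(a, b)), packets)

def recover(survivors: list, parity: bytes) -> bytes:
    """Reconstruct the single missing packet from survivors plus parity."""
    return make_parity(survivors + [parity])

group = [b"AAAA", b"BBBB", b"CCCC"]   # equal-length data packets
parity = make_parity(group)

# Packet 1 is lost in transit; the survivors plus parity rebuild it:
assert recover([group[0], group[2]], parity) == b"BBBB"
```

The cost is visible too: one extra parity packet per group consumes bandwidth, which is exactly how FEC can worsen congestion-induced loss if left on unconditionally.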

As this example illustrates, using SD-WAN to increase WAN capacity may do nothing to improve your app performance. And, if you adopt lower quality circuits, your performance can get worse.

In summary

To break through any SD-WAN trade-offs between cost and performance, make sure that your SD-WAN solution provides the following capabilities. Only then will you be able to overcome each and every bottleneck that will arise.

  • Network Protocol Acceleration (e.g. TCP/UDP)
  • Application Protocol Acceleration (e.g. CIFS/NFS/HTTP)
  • Network Data Compression
  • Network Data Deduplication
  • Dynamic Circuit Conditioning (e.g. Packet Duplication, FEC)

For more information on how to go about correctly diagnosing your current bottlenecks to app performance, also refer to the following:

And for more information about an SD-WAN solution that provides all of the necessary capabilities discussed in this blog entry, check out Riverbed SteelConnect EX.

]]>
Enterprise SD-WAN Trade-Offs Part 2: the Destination vs. the Journey https://www.riverbed.com/blogs/enterprise-sdwan-tradeoffs-destination-versus-journey/ Tue, 06 Oct 2020 12:30:00 +0000 /?p=15817 Preface: COVID-19 delays SD-WAN deployments in 2020

In between the first draft of this Enterprise SD-WAN Trade-Offs blog series and the present, the COVID-19 pandemic emerged, and with it a new crop of IT requirements and a shift in priorities to support work-from-home employees. In turn, many SD-WAN adoption projects have been put on hold, and analysts have forecasted that SD-WAN spending will be flat YoY in 2020¹. However, the same analysts have predicted a rebound of 40% YoY growth in 2021, as enterprises reimagine and reintroduce use of on-premises locations.

The pause-button we have experienced with SD-WAN is a perfect example of a common case for SD-WAN adoption. Namely, that SD-WAN adoption never happens all at once, which is the focus of this blog. Keep this in mind as you reanimate those SD-WAN projects that may have been temporarily put on hold. Now back to our regularly scheduled blog entry…

The journey toward successful SD-WAN adoption

We all want SD-WAN. But it’s impossible to transform the old into the new all at once. This means we have to traverse an intermediate phase—the brownfield—where some sites/circuits are managed via SD-WAN and others remain managed via conventional routers.

The difference between navigating this phase unscathed and bringing your network to a screeching halt has everything to do with the ability of your SD-WAN solution to effectively interface with your existing network and cope with its topological complexities, one-off hacks and special-case router configs that have built up over time. Those hidden network demons that have been lurking unnoticed will inevitably (thanks, Murphy!) rear their ugly heads once the transformation is underway.

This blog takes a close look at multiple phases you’ll likely encounter during SD-WAN deployment and what capabilities you’ll need in place.

The high-level and intuitive takeaway here is this: if you want to ease the migration from a legacy network to SD-WAN, it’s critical that your SD-WAN solution be as fluent in legacy routing technology (on the underlay) as it is with its own SD-WAN (in the overlay). During the transition, you’re going to have one foot in the old world and one foot in the new world. You need an SD-WAN solution that is fluent in both. From the old world, this includes capabilities such as the following:

  • Full routing stack
  • IPv6 support (overlay & underlay)
  • VRF segmentation
  • Multicast support (overlay & underlay)
  • Flexible topologies (full mesh, hub and spoke, spoke-hub-hub-spoke, hub-spoke-spoke-hub)

Here’s a closer look…

Transitionary (brownfield) phases and critical capabilities you’ll need

As you consider the phases in the table below, it’s notable that the hardest cases (on the right) are actually more common. They exist and persist as you phase in the adoption of SD-WAN at remote sites. Conversely, the easier cases (on the left) are the ones that are least common—only found at the tail end of a complete transition to SD-WAN.

Table comparison of various SD-WAN deployment approaches

In closing: is SD-WAN adoption more trouble than it’s worth? 

The answer to this question is simple (and hopefully now rather obvious).   

  • If your SD-WAN solution provides the capabilities needed to successfully get you from point A to point B, then YES. Go forth planning SD-WAN adoption with the confidence that your new network and your old network can co-exist seamlessly every step of the way. 
  • However, if your SD-WAN solution doesn’t provide these critical capabilities, then BEWARE. The cost, risk and effort associated with navigating the inevitable minefield of the brownfield could decimate the benefits you were seeking from SD-WAN in the first place. 

¹Gartner: Forecast Analysis: Enterprise Network Equipment, Worldwide (24 July 2020)

]]>
3 Critical CEO Priorities Driving Post-COVID Growth https://www.riverbed.com/blogs/priorities-driving-post-covid-growth/ Fri, 02 Oct 2020 16:17:44 +0000 /?p=15903 2020 was a year of great change and uncertainty for our customers and indeed, the entire world. Seemingly overnight, organizations have had to quickly pivot to deal with the challenges of the global pandemic and at the same time, every operating model—from supply chains and service delivery to go-to-market and crisis management—has been put to the test.

But with change and uncertainty comes opportunity. Forward-thinking business and IT leaders have already begun to reevaluate their strategies, carefully balancing the need to manage expenses during the crisis with making investments that will drive post-COVID growth and position their organizations for future advantage. According to a worldwide CIO survey, there are three critical CEO priorities that have emerged:

1. Accelerating digital transformation

Prior to the pandemic, most organizations were on a steady and carefully planned digital transformation journey. They were adopting cloud services (IaaS, SaaS, PaaS) and making investments in mobile capabilities and technologies such as AI, ML, IoT, and Big Data to spur digital innovation. But in a world where the vast majority of interactions are now virtual, there is an acute and immediate need to fast-track digitization to not only survive the crisis but to ensure long-term relevance.

2. Enabling work-from-anywhere models

While the requirement to work from home will eventually be lifted, many organizations are planning to continue and even expand remote working models. Employers have realized that there are many benefits to remote work—cost savings, employee retention, talent acquisition—and that with the right set of tools and technologies, remote workers can be just as productive as their in-office counterparts. As a result, the office of the future will be increasingly hybrid, enabling employees to work and collaborate both virtually and physically anytime, anywhere.

3. Strengthening operational resilience

Operational resilience is a new imperative for organizations that struggled to uphold acceptable service levels when the pandemic hit. Times now demand a sharp focus on ensuring critical systems, applications and infrastructure are secure, accessible and performant for all end users regardless of where they are located or how they choose to connect. Redesigning operations to be more intelligent, automated and adaptive is the only way organizations can truly prepare for future waves of disruption.

Preparing for what’s next, now

CIOs and their teams play a vital role in helping their organizations address these priorities. But to ensure success, they must overcome the challenges of insufficient visibility, unpredictable network and application performance, and expanded cybersecurity risks—all while improving their ability to be agile and resilient to ever-changing conditions.

Riverbed is on a mission to help IT teams conquer these challenges. We’ve compiled more than 30 sessions and keynote replays from our Riverbed User Conference to give you the essential capabilities and how-to advice needed to maximize performance and visibility of any network for any application to all users, anywhere. Register to access the full library of content here.


]]>
Riverbed AppResponse Adds SSL/TLS Analysis and PFS API https://www.riverbed.com/blogs/appresponse-tls-analysis-pfs-api/ Wed, 30 Sep 2020 12:30:00 +0000 /?p=15756 Keeping track of SSL and TLS security certificates is important. An expired certificate can erode trust in your organization: customers may no longer want to do business on your website. In fact, Google looks at SSL/TLS configurations as part of its search ranking algorithm. Having an invalid certificate can quickly lower your search ranking.

Even more importantly, according to Gartner, more than 70 percent of malware campaigns in 2020 used some type of encryption to conceal malware delivery, command-and-control activity or data exfiltration. Clearly, it is becoming essential to have visibility into encrypted traffic.

TLS stands for Transport Layer Security and it is responsible for encrypting data in transit over the network. TLS is an updated, more secure version of SSL or Secure Socket Layer. TLS performs data encryption, prevents eavesdropping by intermediaries by using symmetric cryptography, and allows the client to verify the identity of the server.

On the server side, there’s a private key and a public certificate that’s been signed by a trusted third party called a Certificate Authority (CA). Certificates are typically valid for up to two years, although some can be as short as 90 days. Because certificates are issued for a limited time, it is crucial to monitor their expiration date.
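
Tracking that expiration date can be as simple as parsing the certificate's notAfter field and alerting when the remaining validity drops below a threshold. A minimal sketch, assuming the date format that Python's ssl module reports for peer certificates:

```python
from datetime import datetime, timezone

def days_until_expiry(not_after: str, now=None) -> int:
    """Days remaining; `not_after` uses the format Python's ssl module
    reports for peer certificates, e.g. 'Jun  1 12:00:00 2030 GMT'."""
    expires = datetime.strptime(not_after, "%b %d %H:%M:%S %Y %Z")
    expires = expires.replace(tzinfo=timezone.utc)
    now = now or datetime.now(timezone.utc)
    return (expires - now).days

# Flag certificates well before even the short 90-day ones lapse:
remaining = days_until_expiry("Jun  1 12:00:00 2030 GMT")
if remaining < 30:
    print("renew certificate now")
```

A monitoring product runs the same comparison continuously across every certificate it observes on the wire, rather than per host on demand.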

AppResponse’s new PFS API

Earlier this summer, Riverbed created a Perfect Forward Secrecy (PFS) API, which allowed us to complete integrations with two partners—Nubeva and The Load Balancer Crew (LBC)—on symmetric key intercept SSL/TLS decryption technology. This technology allows Riverbed AppResponse users to gain visibility into TLS-encrypted application traffic for use in performance and security analysis.

LBC’s integration with AppResponse’s PFS API is an LBC-authored iRules LX script that runs on F5 load balancers and sends TLS 1.2 PFS crypto ephemeral keys to AppResponse. Nubeva offers cloud-hosted and software versions of their symmetric key intercept. It can discover symmetric keys from container and Kubernetes environments, intra-zone VPCs, cloud and pinned traffic. It requires three components:

  • A Key Sensor learns and extracts the symmetric keys inter-/intra-host;
  • Key Depots are an aggregation and key distributions buffer system, which enables scaling and multi-use;
  • A Controller that simplifies management and rule definition along with elastic and automatic deployment of sensors.

SSL/TLS Analysis

Just this week, the AppResponse team released version 11.10, which adds TLS analysis to the Application Stream Analysis (ASA) module among a slew of other great enhancements. AppResponse 11.10 keeps track of the TLS handshake metrics and certificate information. You can enable the new TLS Analysis on the configuration page just by checking the enable box. Once enabled, new rules filter the traffic for TLS handshakes and certificates. All traffic is filtered by default, so you may want to customize the rules to get just the traffic you need.

TLS Handshake Insight

First off, SSL/TLS handshake data is available as a new ASA Insight. It helps you answer important security questions like:

  • Which versions of SSL/TLS are being used on the network and in how many sessions?
  • Is anything in the network using an obsolete cipher suite?
  • Which clients and servers are using specific SSL/TLS versions and cipher suites?
  • Is anything in the network using expired X.509 certificates?
  • Is anything in the network using renegotiation?
  • How many sessions are experiencing SSL/TLS errors in the network?
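
Answering questions like these boils down to grouping observed session records by protocol version and flagging deprecated ones. A hypothetical sketch (AppResponse derives such records from captured traffic; the record shape here is invented for illustration):

```python
OBSOLETE = {"SSLv3", "TLSv1.0", "TLSv1.1"}   # deprecated protocol versions

def audit_sessions(sessions):
    """Count sessions per SSL/TLS version and flag deprecated ones.

    Each record is a dict such as {"version": "TLSv1.2", "server": "10.0.0.5"}
    (a shape invented for this sketch, not AppResponse's data model).
    """
    counts, flagged = {}, []
    for s in sessions:
        counts[s["version"]] = counts.get(s["version"], 0) + 1
        if s["version"] in OBSOLETE:
            flagged.append(s)
    return counts, flagged

counts, flagged = audit_sessions([
    {"version": "TLSv1.3", "server": "10.0.0.5"},
    {"version": "TLSv1.2", "server": "10.0.0.6"},
    {"version": "TLSv1.0", "server": "10.0.0.7"},   # should be retired
])
assert counts["TLSv1.3"] == 1 and flagged[0]["server"] == "10.0.0.7"
```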

TLS Handshake Insight makes it easy to determine which versions of SSL and TLS are being used and in which quantity.


Easily identify invalid, expired or unknown certificates to keep your network, employees and customers safe.

New Certificates Tab

You can find all the certificates listed with information pertaining to expiration, hosts and servers in a tabular format under SSL Decryption for easy viewing and maintenance. It has five sections to help you maintain cert compliance: Installed Keys; Certificates with Installed Keys; Certificates with Missing Keys; Ignored Certificates and PFS. The tab lists all server IPs for a given certificate, and there is more information about the certification in the certificate details section.

The Certificates Tab shows detailed information about current SSL or TLS certificates and their status.

In short, AppResponse’s new TLS Analysis Insight is essential for anyone who wants to understand their encryption situation. Enterprises are using a variety of encryption technologies and some of them are now obsolete and risky. Riverbed AppResponse can give you the visibility you need to keep your network secure.

For more information

To get TLS Analysis, you need to be running AppResponse version 11.10, which current customers on active maintenance can download free of charge at https://www.riverbed.com/support-overview/. Others should contact Riverbed Sales.

To learn more about our PFS integration partners and how to interact with them, go to the following Knowledge Base articles (login required to access this content):

]]>
Synthetic Monitoring: A Key Tool for Hybrid Enterprises https://www.riverbed.com/blogs/synthetic-monitoring-key-tool-hybrid-enterprises/ Fri, 25 Sep 2020 12:30:00 +0000 /?p=15735 While the benefits of cloud infrastructure and applications continue to drive enterprises to direct more of their IT investments in that direction, the cloud is certainly not a panacea—especially when it comes to maintaining visibility across an increasingly hybrid IT landscape.

Consider the experience of Jamie Halcomb, CIO of the U.S. Patent and Trademark Office. In a WSJ article (Dec 31, 2019) CIOs Share Their Priorities for 2020, Halcomb shares: “Part of my mission is to stabilize mission-critical systems and take our agile and DevSecOps practices to the next level while we move assets into the cloud.”

Halcomb seeks to increase agility while maintaining stability. But distributing applications across on-prem data center, cloud and SaaS adds new complexity. This fundamentally makes it harder to ensure the availability and performance of these apps. One reason for this is that blind spots increase as the IT landscape becomes more hybrid and complex: an increase in cloud services means a major increase in visibility gaps.

And so, while digital transformation has made technology a critical part of an organization’s success, increasing service disruptions can have a profound impact on user experience, brand value and financials of a company.

Statistics on how synthetic monitoring can help reduce service disruptions

In order to maintain a high-performing, reliable and secure network, you need a broad and complete view across IT domains—on-premises and in the cloud.  

Achieving a holistic view of your critical hybrid IT environment requires the integration of multiple approaches. There are two primary approaches to help you ensure availability and measure end-user experience:

  • Real user monitoring (RUM)
  • Synthetic testing/monitoring

What is real user monitoring?

Real user monitoring (RUM) measures one of the most critical metrics: actual user experience as, and when, users interact with their apps. RUM constantly observes the system in the background, tracking availability, functionality, responsiveness and other metrics. This approach leverages real user traffic to gauge performance.

What is synthetic testing / synthetic monitoring?

Synthetic monitoring and testing is a method used to monitor your applications or infrastructure running in the cloud or on-premises data center by simulating users. It is an active testing method and very useful for measuring availability and response time of critical web sites, system transactions and applications. It works whether you have user traffic or not.

How does synthetic monitoring/testing work?

Synthetic monitoring, or synthetic testing, uses distributed test engines to proactively evaluate the availability and performance of your applications and web properties—even when there is no real user traffic. With synthetic monitoring, scripts or agents are deployed across the globe at key user locations to simulate the path an end user takes when accessing on-prem or cloud applications. The applications can reside anywhere—in the data center, in the IaaS cloud or a SaaS application. As long as there is a path to the application from the testing location, synthetic testing can be used.
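
At its core, each synthetic test is just a scripted transaction timed from the agent's vantage point. A minimal sketch with the transaction injected as a callable (real agents add HTTP clients, scheduling and result upload, all omitted here):

```python
import time

def probe(transaction, target: str) -> dict:
    """Run one synthetic check: execute `transaction(target)` and time it.

    `transaction` is any callable that performs the simulated user step
    (an HTTP GET, a DNS lookup, a database login) and returns True on success.
    """
    start = time.monotonic()
    try:
        ok = bool(transaction(target))
    except Exception:
        ok = False   # any failure counts as unavailable
    return {"available": ok, "response_ms": (time.monotonic() - start) * 1000}

# Stand-in transaction for illustration (a real agent would issue an HTTP GET
# against a URL like the hypothetical one below):
result = probe(lambda t: t.startswith("https://"), "https://portal.example.com/login")
assert result["available"] is True
```

Running this loop on a schedule from each key user location yields availability and response-time baselines even when no real users are active, which is precisely what distinguishes synthetic testing from RUM.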

Benefits of synthetic monitoring for hybrid applications

  • Proactively identify issues before your users notice
  • Keep a pulse on availability and performance round the clock
  • Take monitoring where your applications go
  • Monitor complex interactions live or pre-release
  • Baseline and objectively measure application SLAs

Riverbed’s solution

Riverbed Unified NPM provides both synthetic and real user monitoring, giving you a complete view of performance from the end-user perspective. Riverbed’s synthetic testing can simulate searching (database), adding items to cart (web application), logging in (identity validation), etc. in order to measure the performance of holistic application interactions. Riverbed NetIM, part of the NPM suite, offers a variety of synthetic tests, including Ping, DNS, TCP, LDAP, databases, HTTP, and external scripts for creating your own tests. It uses SNMP, CLI, Traps, Syslogs and API polling as well as synthetic testing to capture availability and performance information of network devices, servers and applications.

Are you using synthetic monitoring in your environment? How are you using it? Share your experiences in the comments below.

  1. Digital Enterprise Journal “19 key areas shaping IT performance markets in 2020” — Apr 22, 2020
  2. Market Guide for Network Performance Monitoring and Diagnostics. Published 5 March 2020 — ID G00463582
  3. Digital Enterprise Journal, March 2020
  4. Annual outage analysis 2020 — Uptime Institute (March 2020)

]]>
Ensure Efficient, Secure, Easy-to-Use Digital Government Services https://www.riverbed.com/blogs/digital-government-services-smarter-network-management/ Fri, 25 Sep 2020 03:20:00 +0000 /?p=15717 COVID-19 has offered a glimpse into the future delivery of a broad range of digital government services. Healthcare particularly has been at the forefront of an accelerated shift to digital, with the Australian Government investing $669 million to expand Medicare-subsidised telehealth services to provide quality care for those in need at home.

As an example, let’s take a look at the zero-touch model now in place for COVID-19 testing. Anyone who’s been worried about a sniffly nose or scratchy throat during the past few months starts by calling their medical centre. They then have a telehealth consultation with a doctor who refers them to a drive-through centre.

Without leaving the car, the patient has their temperature and breathing checked, and swabs are taken. If all is well, they receive an automated SMS within 48 hours confirming that the test was negative.

Many elements of this zero-touch model, which could transform access for people in regional and remote Australia as well as vulnerable members of our community, would also be suitable for use in other areas of federal, state and local government. Digital government services could transform everything from welfare and community services to business and financial support.

With COVID-19 increasingly looking like it will be with us for years rather than months, the government has another important reason to prioritise the digitisation of services in these areas. However, reliable service delivery is critical. An accelerated transition to digital delivery demands application and network performance that is reliable and easy enough that people will actually want to use these digital government services.

The New South Wales government acknowledged this long before COVID-19 reared its ugly head when it made a commitment that its digital transformation will be guided by six key customer commitments, including ease of engagement. This is fundamental, given that citizen expectations are increasingly defined by consumer apps with highly functional user experiences.

But the reality is that new apps and infrastructure can drive increasing complexity in terms of integration, visibility and network performance that make it a challenge for government agencies to ensure a good digital experience, supported by reliable performance.

Here are a few focal points for government agencies looking to ensure efficient, secure and easy-to-use digital government services in what we can expect to become an increasingly zero-touch world:

Visibility for digital government services

Understanding what is happening, and where, in the network, is the first step to delivering reliable, high performing and secure digital government services. Agencies need to be able to continuously monitor dynamic networks and infrastructure to ensure application performance and availability.

Deep and broad visibility and analytics will enable IT teams to fully optimise hybrid IT resources, ensure service quality and network security in the zero-touch, digital government service delivery model. With this capability, IT teams can proactively identify and resolve performance issues before citizens and agency reputations are impacted.

Accelerate performance without accelerating cost

Efficiency in digital government service delivery will become more critical going forward in the pandemic recovery phase as agencies are called upon to do more with less. By increasing data performance on the network while reducing bandwidth utilisation, agencies can achieve faster application performance and reduce cost at the same time.

Some believe that SD-WAN is the way to do this. While it’s true that SD-WAN is transforming the way networks are deployed and managed, SD-WAN alone can’t address enterprise application performance. In fact, the overhead added by SD-WAN reduces the available payload of each packet. In most cases, additional performance is still required for geographically remote locations such as those in the Middle East, Europe, Africa, the Americas and the South Pacific, to name just a few.

WAN optimisation technology is able to make this additional improvement possible, allowing agencies to increase data transfer performance by up to 100 times while reducing bandwidth utilisation by up to 99 per cent across hybrid and software-defined networks. Users have been known to experience up to 33 times faster application performance for on-prem, SaaS and cloud-based apps while costs incurred by cloud egress are reduced by up to 99 per cent.

In other words, SD-WAN and WAN optimisation solve fundamentally different problems. They are complementary when deployed together.

Scalable and ready for anything

As Australia battles its way through a second wave and beyond, it’s expected that COVID-19 will have a lasting effect on citizen expectations of public sector service delivery. The key is to be able to ensure high performance of citizen service delivery applications today while building in the ability to scale to successfully meet the demands of the future—which may arrive at short notice.

To make this less complex and costly, IT teams must ensure that they have access to the right diagnostic and network performance management software to simplify and automate the provisioning and management of secure network resources while maintaining and optimising application performance.

Armed with these capabilities, government agencies can ensure investments in digital services pay off for government and citizens alike at a time when they’ve never been more needed.

]]>
Public Sector App Performance Doesn’t Have to Be Unpredictable https://www.riverbed.com/blogs/improve-public-sector-app-performance/ Fri, 25 Sep 2020 03:20:00 +0000 /?p=15719 It’s fair to say that 2020 has been an unpredictable year for many, including those in the public sector. With millions of people now living at work—my preferred version of “working from home”—they have found that their day-to-day technology experiences and app performance can often fail to live up to what they were used to having in the office.

While the unpredictability we face outside of our work lives at present is, for most of us, out of our control, the technological reality we face on a daily basis while working from home isn’t. The key issue behind much of the unpredictability experienced by those in the public sector is latency—the amount of time it takes to send a packet of data from one location to another. In other words, the amount of time between an action and a response.

Latency’s impact on application performance can make or break a user’s experience

There’s a common misconception that adding network bandwidth is a quick fix for what is often a latency-related issue. And, while SD-WAN can give IT teams the power to improve network performance, it does not necessarily reduce latency and therefore cannot guarantee an improved experience for staff members working from home.
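To see why latency, not bandwidth, often dominates, a back-of-the-envelope model (a simplification that ignores TCP dynamics) splits transfer time into a propagation term and a serialization term:

```python
def transfer_time(size_bytes: int, bandwidth_bps: float, rtt_s: float,
                  round_trips: int = 1) -> float:
    """Rough transfer-time model: each protocol round trip pays the full RTT,
    plus the time to push the bytes through the pipe."""
    return round_trips * rtt_s + (size_bytes * 8) / bandwidth_bps

# 1 MB over a 100 Mbps link with 200 ms RTT and 10 protocol round trips:
t = transfer_time(1_000_000, 100e6, 0.200, round_trips=10)
# propagation: 10 x 0.2 s = 2.0 s; serialization: only 0.08 s.
# Doubling bandwidth shaves 0.04 s; halving round trips saves a full second.
```

With illustrative numbers like these, the bandwidth term is a rounding error next to the latency term, which is why "more bandwidth" so often fails to fix a slow app.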

So, here we have another theme: the importance of knowing where the real issue lies. There are tools that can significantly support a smoother and more predictable IT experience for public sector workers dispersed across the country accessing work systems via the public internet.

The first is Network Performance Management (NPM). This allows government IT teams to see what is happening on the network across servers, data centres and clouds. Rather than jumping to the conclusion that more bandwidth is the answer, you might find that something else is causing the issue. NPM enables faster and more cost-effective resolution of IT issues in a dispersed environment.

Improving public sector app performance and reducing network costs

Once you’ve drilled down into where the real issue lies with poor application performance, you will likely find that latency is a critical issue. This is where Riverbed’s SaaS Accelerator can help. This fully cloud-based service allows organisations to measure, monitor and accelerate top enterprise collaboration applications including Microsoft O365 apps (SharePoint, Exchange, Teams & Stream, Office WebApps); Salesforce; ServiceNow; Box and Veeva, among others. With SaaS Accelerator, IT teams are able to ensure the fastest, most reliable delivery of applications to any user, regardless of location or network type—all while reducing cloud egress costs.

In independent tests by the Enterprise Strategy Group, the reductions in average file transfer time when users in Sydney uploaded and downloaded files using SaaS Accelerator were 85 per cent and 75 per cent respectively. By saving minutes per day per user, large organisations can reclaim thousands of hours of lost productivity each year.

And, it becomes more powerful with increased usage. That’s because information that has already been downloaded by someone within your network can be shared peer-to-peer among users rather than needing to be downloaded again. As more items are downloaded across the network, fewer requests ever need to travel back to the cloud in the first place.

This improves application performance and reduces network costs. Data reduction is an important component of Riverbed SaaS optimisation, particularly as enterprises move to more data-rich collaboration apps.

Ensuring that government IT teams have the right tools to efficiently support their staff working from home is critical, not only for productivity and staff morale today, but going forward. The next step is to ensure that work not only gets done but that it gets done well. While staff may be accepting of second-best app performance in a short-term crisis scenario, it’s clear that this pandemic isn’t going away anytime soon and neither are public sector app performance issues.

A full return to the office is not on the cards for many in the short to medium term. For some, not even in the long term. The future success of the public sector will require more flexible, hybrid ways of work, enabled by secure, high-performing IT. Fortunately, this falls into the category of things that we can control.

Get in touch if you’d like to learn more.

]]>
Securing Government’s Weakest Work-from-Home Links https://www.riverbed.com/blogs/securing-government-weakest-work-from-home-links/ Fri, 25 Sep 2020 03:16:01 +0000 /?p=15658 If asked who or what is driving your organisation’s digital transformation, how would you answer? Is it your CEO? Chief Technology Officer? For many, the answer is COVID-19. This certainly rings true for the public sector now that an estimated 70 per cent of the 1.2 million employees across federal, state and local governments work from home.

Of course, government IT professionals are used to managing complex and distributed IT systems and users, but COVID-19 really re-framed the IT risk management challenge.

The good news is that, as part of their business continuity plans, many government agencies had sophisticated teleworking systems and remote workplace collaboration software available when lockdowns struck. Staff could swiftly begin to work from home and continue with business-somewhat-as-usual. This was particularly true of the larger, more well-resourced agencies.

However, while the preparedness was commendable, it doesn’t necessarily mean that the new work-from-home environment is optimised for the performance and security that is so essential to government work.

In addition, this level of technological readiness and agility was more common among metro versus regional or local agencies. For smaller state and local agencies, enabling remote work was far more of a challenge—and in many cases, brought with it a notable increase in risk.

With workers spread across the country, and globally, connecting via the internet, remote work has brought with it many more connections that can fail. Support for thousands of new endpoints is now required. There are new challenges at an application, network performance management and security level. An evolved approach to IT risk management is more critical than ever.

Make working from home work better

Whether you’re a larger federal, or smaller local government agency, the next step is to optimise the remote work environment. Do staff have access to the technology they need to get their jobs done effectively? Are their home work environments fully protected?

Achieving this depends to a great degree on the functionality of your networks and applications—whether they are able to support the workload of your remote workforces, whether they are secure at a time when cyber-attacks are on the rise and whether they enable productivity. Poor network performance can create security and productivity risk, as it can make or break application performance.

There’s somewhat of a negative feedback loop here, given that staff who do not feel adequately resourced to do remote work productively may turn to third-party applications—shadow IT—to get the job done. In just one example, the government banned federal politicians from using Zoom on 8 April 2020, and the Department of Defence has banned the popular video-conferencing tool due to security flaws.

This can increase risks across government agencies at a time when high performance is particularly critical. On 19 June, Prime Minister Scott Morrison warned of a “sophisticated state-based” cyber-attack targeting all levels of government, industry and beyond. “It is vital that Australian organisations are alert to this threat and take steps to enhance the resilience of their networks,” said Morrison.

Seeing is optimising

Remote work is also creating new network visibility challenges that hinder the ability of IT teams to identify and resolve issues before they escalate. With additional complexity across the IT environment, it can be harder to detect exactly where issues are located—within external agencies’ infrastructure or somewhere else along the application deployment chain.

Understanding what is happening across your network, and where, is the first step to securely empowering remote employees and catching red flags before they turn into something that could land an agency in the news for all the wrong reasons.

By gaining a complete view of their agency, IT professionals can gauge performance everywhere, at all times, across a complex, hybrid web of legacy, mobile, cloud and shadow IT components. In this scenario, it becomes clearer exactly when, and where, there are improvements to be made or unusual activities to secure.

Riverbed’s unified Network Performance Management (NPM) makes it easy to monitor, troubleshoot and analyse what’s happening across your hybrid network environment. The integrated dashboard enables agencies to monitor, report and resolve operational issues throughout government operations. With end-to-end network visibility and actionable insights, any network-based performance issues can be proactively resolved.

A few examples of red flags that can indicate productivity or security issues lie ahead:

  • Users are spotting issues unknown to the IT team
  • Productivity is down
  • Apps are taking a long time to load or there are regular site outages

Given what is now being forecast about the future, remote work will become more common than ever. As such, the visibility and optimisation that network performance management platforms like Riverbed NPM provide become increasingly important components of IT risk management.

Conclusion

While it is unlikely the government will shift to permanent work-from-home models like some Silicon Valley tech giants have proposed, early analyses show strong support for making remote work an accepted practice, rather than the exception. It is increasingly likely that the ability to support flexible work arrangements will become foundational to retaining and attracting the best talent.

In other words, as McKinsey and Company put it, “Since the world is unlikely to ever return completely to its pre-pandemic ways, the public sector should seek to rapidly change how it works, including improving its agility and productivity, in lasting ways.”

The future of the public sector will require more flexible, hybrid ways of work, enabled by secure, high-performing IT—whether its staff are gathered in offices in a city centre, or spread out across homes throughout the country and across the globe. Additional vigilance is critical. As organisations, we’re only as strong as our weakest link.

]]>
5 Key Benefits of Synthetic Monitoring for Modern Apps https://www.riverbed.com/blogs/5-key-benefits-of-synthetic-monitoring/ Wed, 09 Sep 2020 20:40:00 +0000 /?p=15673 Before we get into the benefits of synthetic monitoring, let’s start by defining it. Synthetic monitoring is a method used to monitor applications by simulating users. It differs from real-time monitoring, which requires user traffic and measures the actual user experience. Real-time monitoring reactively identifies problems or issues after they occur, whereas synthetic monitoring proactively measures application health using synthetically generated traffic.

There are many benefits of synthetic monitoring, but here are the top five:

1. Monitor Proactively

Synthetic monitoring does not require real users in order to monitor the performance and communication health of an application. You can determine how packets flow between potential users and on-premises or cloud-hosted applications. EMA’s survey* found that 39% of all network problems are reported by end users before network operations is aware. Synthetic monitoring is the holy grail for NetOps, DevOps and SecOps—being proactive and identifying issues to fix before users notice.

2. Know Global User Satisfaction 24×7

Modern applications are spread across cloud data centers such as Azure, AWS, GCP and others. Add to this mix the unabated growth of SaaS applications such as Office 365, Workday, Zendesk, Zoom, SFDC, and the list goes on. How do you ensure your users get the performance you want to provide them? By having synthetic agents distributed across the globe, you can know whether your users will be satisfied 24×7. You can run continuous, simultaneous tests and always know the state of your user experience.

Synthetic monitoring can help deliver a great experience for remote workers by letting you proactively know the experience of your remote employees.

3. Supercharge Business Agility

Deploy your application infrastructure to meet seasonal peaks or unplanned demand, roll out an app as a competitive response, or respond to an event such as a pandemic. Roll out your apps at the pace your business demands and NetOps will be right there in lockstep. Synthetic testing gives tremendous flexibility with lightweight infrastructure that can be turned on instantly. It can go anywhere your application goes.

4. Monitor Complex Application Interactions

Synthetic monitoring allows you to emulate business processes and user transactions between different business applications. You can understand critical infrastructure performance. You can test business-to-business web services that use SOAP, REST or other web services technologies to validate and baseline interactions. Synthetic testing can simulate searching (database), adding items to cart (web application), logging in (identity validation) and so on, to measure the performance of holistic application interactions.
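A multi-step transaction test of this kind can be sketched as a timed pipeline. In this illustrative sketch the step callables are hypothetical placeholders; a real monitor would issue actual HTTP or SOAP requests at each step:

```python
import time

def run_transaction(steps):
    """Run named steps in order, timing each.
    `steps` is a list of (name, callable) pairs; the first failing step raises."""
    timings = {}
    for name, action in steps:
        start = time.monotonic()
        action()
        timings[name] = time.monotonic() - start
    return timings

# Hypothetical e-commerce flow; each lambda stands in for a real request.
timings = run_transaction([
    ("login",       lambda: None),   # e.g. POST /login
    ("search",      lambda: None),   # e.g. GET /search?q=...
    ("add_to_cart", lambda: None),   # e.g. POST /cart
])
```

Because each step is timed individually, a slow "add_to_cart" step points at the web tier while a slow "login" step points at identity validation, which is exactly the per-interaction visibility described above.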

5. Baseline and Objectively Measure Application SLAs

With synthetic testing, you can baseline around-the-clock network behavior. Baseline and benchmark data to analyze trends and the variance between peak and off-peak hours, and to plan for capacity. Managing SLAs is very important today because so many companies rely on third-party vendors to host all or parts of their applications. Synthetic testing lets you monitor the performance of any third-party application from the locations you choose, at the frequency you want, at any time. It can be used to ensure quality service delivery, accelerate problem identification, protect customer experiences and report on the compliance of internal or external providers.
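Turning probe results into an objective SLA report is mostly arithmetic. The sketch below uses a crude nearest-rank percentile and made-up sample numbers, purely for illustration:

```python
def sla_report(samples_ms, slo_ms):
    """Summarize probe response times against a latency SLO (in milliseconds)."""
    ordered = sorted(samples_ms)
    p95 = ordered[min(len(ordered) - 1, int(0.95 * len(ordered)))]
    breaches = sum(1 for s in samples_ms if s > slo_ms)
    return {
        "p50_ms": ordered[len(ordered) // 2],     # median (nearest rank)
        "p95_ms": p95,                            # tail latency
        "breach_pct": 100.0 * breaches / len(samples_ms),
    }

# Ten synthetic probes against a 200 ms SLO (illustrative numbers):
samples = [120, 130, 125, 128, 300, 122, 127, 126, 124, 129]
report = sla_report(samples, slo_ms=200)
# One sample (300 ms) breaches the SLO, so breach_pct is 10.0
```

Running this over each hour of probe data gives the peak vs. off-peak variance and the compliance percentage you would report to, or demand from, a third-party provider.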

If proactive monitoring is the direction you want to take your IT organization, synthetic monitoring is a key capability you cannot afford to overlook. Synthetic monitoring enables NetOps to move from a reactive firefighting mode to proactive and around-the-clock visibility without depending on actual users.

Riverbed NetIM is a comprehensive solution for mapping, monitoring and troubleshooting your infrastructure components. It leverages multiple approaches such as synthetic testing, SNMP, CLI, WMI and more. Learn how Riverbed can help you expand infrastructure monitoring and deliver the benefits of a Unified NPM approach across packets, flows and devices.

 

*Enterprise Management Associates: Network Management Mega Trends 2020

]]>
C-Level Perspectives: Preparing for the Office of the Future https://www.riverbed.com/blogs/office-of-the-future-cxo-panel-discussion/ Thu, 03 Sep 2020 20:20:58 +0000 /?p=15671 I think it’s safe to say that nearly every organization in the world is currently thinking about workplace transformation. And, it’s not just about redesigning office space. The COVID-19 crisis has and will fundamentally change how and where work gets done and it’s incumbent on business and IT leaders to prepare their organizations for what’s next.

But, what is next? What does the Office of the Future look like? If you’re like many executives right now, you’re actively seeking answers to these questions. And, that’s why I’m looking forward to moderating an upcoming C-level panel discussion on the Office of the Future and how organizations can ensure digital performance and productivity in an evolving workplace.

I hope you will join me on September 17 as I tap into the minds of prominent CXOs from Ellie Mae, Sophos, Kofax, and Conga to explore the lessons they’ve learned as their organizations shifted to large-scale remote work. We’ll talk about how their priorities and investments have changed as a result of COVID-19 and the technologies and cultural factors that will determine whether work-from-home, and eventually, work-from-anywhere models succeed or fail. And, with the spotlight shining bright on digital capabilities these days, it will be interesting to hear their perspectives on what the future holds for the IT profession.

As the Chief Digital Officer for Riverbed, I remember the early days of COVID-19 and the amount of pressure my organization faced as our entire company began working from home. Fortunately, we were already leveraging cloud-based collaboration tools like Zoom, Office 365 and Slack, as well as our own application acceleration and network optimization solutions to provide our employees with the same experience, if not better, as working in the office.

But, there’s planning and work to be done. The pandemic has set a course for long-term remote/hybrid working models, where employees will expect to be able to work when and where they choose and where teams can collaborate both physically and virtually. This means reexamining models of redundancy, resiliency and security based on new ways of working and it means a renewed focus on IT visibility and performance to drive the best employee experience and business outcomes.

I’m optimistic about what’s next and the elevated role IT will play in shaping the Office of the Future. You’ll have to register to attend the panel discussion to see if my fellow CXOs feel the same way.

]]>
SD-WAN or WAN Optimization? https://www.riverbed.com/blogs/sd-wan-or-wan-optimization/ Wed, 22 Jul 2020 21:56:00 +0000 https://live-riverbed-blog.pantheonsite.io?p=15405 SD-WAN or WAN Optimization? I love that question. And, in order to answer it correctly, let me first dispel a common misperception. The question assumes SD-WAN and WAN Optimization are different solutions for the same set of problems. They are not. There may be some overlap between the two, but a lot less than you might think.

As with most questions, the answer depends on the problem and situation. Here’s a quick “decoder ring” that works 100% of the time to give you the correct answer:

Problem #1: Conventional Routers

Situation: “My fleet of conventional branch routers is too hard to manage, especially now that I have more apps in the cloud and different types of WAN circuits at remote sites.”

Solution: This one is easy. Get rid of your old routers. Invest in SD-WAN. Just make sure it’s an SD-WAN solution that’s equipped with an enterprise-class routing stack.

Problem #2: Latency 

Situation: “My app is running too slow even though there’s unused WAN capacity.”

Solution: This one is also easy. SD-WAN won’t help. More WAN capacity won’t help. The poor app performance — response time and/or end-to-end throughput — is likely being dictated by latency’s effect on underlying network and application protocols. Use WAN Optimization. Specifically, use one that accelerates BOTH networking AND application protocols over long distance.

Like traffic on a highway, distance, capacity, and congestion impact how quickly and efficiently apps reach their destination.
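There is a hard ceiling behind the latency effect described above: a single TCP flow can never move data faster than its window size divided by the round-trip time (the bandwidth-delay product limit). A quick calculation with illustrative numbers:

```python
def max_tcp_throughput_mbps(window_bytes: int, rtt_s: float) -> float:
    """Single-flow TCP ceiling: window / RTT, regardless of link capacity."""
    return (window_bytes * 8) / rtt_s / 1e6

# A default 64 KB window over a 150 ms WAN path:
ceiling = max_tcp_throughput_mbps(64 * 1024, 0.150)
# ~3.5 Mbps, even on a 1 Gbps circuit. Adding bandwidth cannot raise this;
# only shrinking RTT or optimizing the protocol (WAN Optimization) can.
```

This is why the "unused WAN capacity" in the situation above is consistent with a slow app: the flow is window-limited, not bandwidth-limited.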

Problem #3: Bandwidth 

Situation: “My app is running too slow and I’ve run out of WAN capacity.”

Solution: There are actually three distinct scenarios you could be running into here. We’ll cover the scenario and solution for each one, and then show you a bulletproof way to know which scenario you’re actually in.

1) Scenario A: You’re out of bandwidth. But, it’s a red herring. The performance of the app in question is actually being dictated by latency, in which case adding more bandwidth will just add more cost.

Solution A: Use WAN Optimization. SD-WAN won’t help.

2) Scenario B: You’re out of bandwidth. And, the lack of bandwidth is the true bottleneck of app performance. However, there’s no good option to increase raw WAN capacity. No carrier provides a larger circuit for that location and/or it’ll take too long to procure and/or it’ll be too costly once it’s there.

Solution B: SD-WAN can’t help. Use WAN Optimization. Look for one that provides byte-level deduplication AND compression. With both techniques, you can virtually expand capacity by 4x, 5x, even 10x and more.

3) Scenario C: You’re out of bandwidth. And, the lack of bandwidth is the true bottleneck of app performance and procuring more WAN capacity is a cost-effective and timely option.

Solution C: Use SD-WAN. With one BIG caveat. MAKE SURE YOU’RE NOT IN SCENARIO A (i.e., make sure latency isn’t your real problem).
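Solution B’s combination of byte-level deduplication and compression can be illustrated with a toy model (nothing like Riverbed’s real data streamlining, and the 32-byte reference size is an arbitrary assumption):

```python
import zlib

def reduced_bytes(stream_chunks):
    """Toy WAN-optimization model: send each unique chunk once, compressed;
    replace repeated chunks with a small reference token (~32 bytes)."""
    seen = set()
    sent = 0
    for chunk in stream_chunks:
        if chunk in seen:
            sent += 32                         # dedup reference, not the data
        else:
            seen.add(chunk)
            sent += len(zlib.compress(chunk))  # first copy goes compressed
    return sent

# Four 4 KB chunks, two of them repeats of the first (illustrative data):
chunks = [b"A" * 4096, b"B" * 4096, b"A" * 4096, b"A" * 4096]
raw = sum(len(c) for c in chunks)   # 16384 bytes on the wire without optimization
opt = reduced_bytes(chunks)         # two small compressed chunks plus two references
```

Real traffic deduplicates far less neatly than repeated byte runs, but the mechanism is the same: the second and later appearances of data cost almost nothing, which is how capacity is "virtually expanded" several times over.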

Finding the Root Cause

There are tools that can analyze packet captures from your network and tell you if your bottleneck is bandwidth or latency. Riverbed Transaction Analyzer is one of them. It can even help you determine if the problem isn’t in the network at all (e.g., it’s a client-side problem or a server-side problem).

In a nutshell, make sure you know which problem you’re facing before you ask “Do I need WAN Optimization or SD-WAN?” Because what you really need is the flexibility to use either, or both in combination, whenever it makes sense.

The real problem you might be facing is that there aren’t many solutions out there that deliver both. Here’s an SD-WAN solution that combines enterprise-class routing, advanced SD-WAN, industry-leading WAN Optimization and Application Acceleration, and next-generation security. Like I said, I love that question.

 

]]>
Webinar: Microsoft and Riverbed on Work-From-Anywhere Challenges and Exciting New Cloud Innovations https://www.riverbed.com/blogs/webinar-microsoft-and-riverbed-on-work-from-anywhere-and-new-cloud-innovations/ Sat, 18 Jul 2020 01:51:09 +0000 https://live-riverbed-blog.pantheonsite.io?p=15413 Are you using collaboration apps, joining video events and streaming more video? Of course you are — we all are!

With the onset of COVID-19, businesses responded almost immediately with policies to protect their workforces and maintain business continuity. Remote workers, mobile workers and traditional office workers all became work-from-home employees — almost overnight. In the process, global enterprises quickly realized that their teams could remain productive provided they had the right tools and technology in place to connect their teams and business workflows. As a result, demand for collaboration and communication services exploded. The already popular Office 365 has grown to 258M monthly active users, and Microsoft Teams has ballooned to 2.7B (yes, billion) meeting minutes in a single day in March, up 200% from just the month prior.[i]

As regulations have begun to ease, organizations are coming to grips with what the new norm looks like. Most are still working through the details, but it’s clear that they won’t be returning to business as usual. The global crash-course in remote work has taught us all that we really can work from anywhere — and still be productive.

What happens when you can’t?

72% of companies report that network performance is a key concern[ii]. And, it’s no surprise. With billions of work-from-anywhere teammates sharing files and joining video meetings, the networks that support them continue to grow in complexity, and the sheer volume of data traversing them is taxing, too. All of these factors impact the performance of applications like Office 365, Teams and Stream, lowering the ROI enterprises expect from these modern application investments.

Teammates can have the best collaboration apps, but their work-from-anywhere networks are often less than up to the task. In fact, Riverbed’s recent Future of Work survey (July 2020) found that 37% of business leaders feel remote performance issues result in weaker employee performance and productivity. The most common network inhibitors impacting application end-user experience and productivity are:

  • Latency – which creates network bottlenecks, increases load time, and is multiplied significantly with chatty cloud and SaaS applications
  • Congestion – from the massive amounts of data from heavy file sharing and live and on-demand video causing delay, packet loss, and blocking
  • Unpredictable Last Mile Performance – which IT doesn’t control but must accommodate when delivering applications to remote employees
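The latency inhibitor above compounds with application chattiness: every protocol round trip pays the full RTT. A quick calculation with illustrative numbers shows why the same app feels fine on the office LAN and sluggish over a home last mile:

```python
def app_load_time_s(round_trips: int, rtt_ms: float) -> float:
    """Chatty apps multiply latency: each round trip pays the full RTT."""
    return round_trips * rtt_ms / 1000.0

# The same 50-round-trip SaaS page from the office LAN vs. a home last mile:
office = app_load_time_s(50, rtt_ms=5)    # 0.25 s
home   = app_load_time_s(50, rtt_ms=80)   # 4.0 s
```

Nothing about the app changed; only the per-round-trip latency did, which is why reducing round trips (acceleration) matters more than raw bandwidth for remote users.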

How can you ensure work-from-anywhere productivity?

The reality is that your remote teams don’t need to be impacted by these factors. Your work-from-anywhere teams can turn on 10X faster O365 experiences, 33X faster file sharing and up to 99% reduction in bandwidth — in a matter of minutes — with application acceleration solutions from Riverbed.

Want to know more? View this on-demand webinar with David Totten, CTO, US One Commercial Partner at Microsoft, and Dante Malagrinò, Chief Digital Officer at Riverbed, as they discuss:

  • Work-from-anywhere networking challenges
  • The rise of video and SaaS collaboration apps
  • Exciting application acceleration solutions — with live demos — for enhanced networking that help maximize the value of enterprise investments in Office 365, Microsoft Teams and Stream

It’s a virtual event you won’t want to miss!

 

___________________

[i] Microsoft Work Trend Index, April 2020

[ii] Tech Target, February 2020

 

 

 

 

]]>
The Next Norm: Improving Network Resiliency and Security to Support Work-From-Anywhere (Part 4) https://www.riverbed.com/blogs/next-norm-work-from-anywhere-network-resiliency-and-security-part-4/ Wed, 15 Jul 2020 19:57:09 +0000 https://live-riverbed-blog.pantheonsite.io?p=15394 In Part 1 of this blog series, The Next Norm: Prepare to Work-from-Anywhere, we looked at the recent, explosive growth in work from home and the transition to the new norm, work from anywhere. Part 2 of the series, Next Norm: Work-from-Anywhere Performance Management, reviewed the need for Network Performance Management (NPM) and key considerations for evaluating NPM solutions. In Part 3, The Next Norm: Work-from-Anywhere Application Delivery for Productivity we discussed the challenges and opportunities in ensuring fast, consistent application delivery to your work-from-anywhere teams. In this, the final blog of the series, we’ll offer a leader’s perspective and guidelines to improve network resiliency and security to support work-from-anywhere models.

The threat: work-from-anywhere means potential threats from everywhere

Just as quickly as the enterprise went home to work, so did the cybercriminals. Cybercriminals are constantly looking for new ways to beat your defenses. You build them; they find the chinks in the armor. Recently, security experts have reported an increase in phishing and compromised VPNs. In fact, per Google, phishing attacks increased 350% from Jan 2020 to Mar 2020.

And, it’s not just phishing. COVID-related DDoS attacks are up, leading to inaccessible apps for your end users. DDoS attacks take down websites and VPNs, which means your customers can’t do business with you and your users are unproductive. Time down equates to lost revenue, and without proper visibility, security breaches go undetected for longer periods and are more difficult to mitigate.

DDoS can also hide other more insidious attacks. While you are busy trying to recover from the DDoS attack, the cybercriminal may be launching a second more dangerous attack hidden in the noise. This attack may be designed to exfiltrate data, passwords, or just stay hidden until needed.

With a work-from-anywhere workforce, data breach concerns are also heightened – and rightfully so. With IBM reporting a mean time for breach detection of 197 days (and another 69 days to contain a breach), it’s no wonder that 75% of security experts are not satisfied with the speed and capabilities they have in responding to incidents.[i]

The challenge: securing the complex, work-from-anywhere network

As the workforce has become more mobile and applications have expanded to SaaS and cloud, enterprise networks have grown increasingly complex. Digital businesses need secure, reliable networks to support their distributed employees wherever they work while minimizing risk to the business. Organizations also need to manage a mix of legacy infrastructure and application models in conjunction with modern applications distributed across on-premises data centers as well as in multiple public clouds.

Add to this the increased dependency on unpredictable last-mile networks for remote workers, and the challenge becomes even more painfully obvious: detecting and responding to the rise in cyberattacks, across a broader attack surface of remote endpoints, is increasingly difficult.

As a result, most organizations rely on 3-6 tools to monitor their network, but the multiple, disjointed data streams often add their own analysis complexity instead of providing deeper insight and quicker mitigation. And looking to the cloud for the latest in protection only helps so much, as the point solutions provided by cloud vendors are insufficient: they provide insight only into the cloud elements of the network, not hybrid or multi-cloud environments.

So, what should enterprise security teams do?

The right approach: unified visibility to see everything and intelligence to take appropriate actions

At Riverbed, we agree with EMA on this: “Integrated platforms are more effective at performance monitoring than standalone, best-of-breed tools.”[ii] Why take our word for it? Well, for starters, we’ve been recognized by Gartner as a Leader in every Magic Quadrant for Network Performance Monitoring and Diagnostics (NPMD) since 2012 – and we deliver the only unified NPM solution in the market.

Securing your work-from-anywhere network is no place to cut corners. Make sure that your organization is fully prepared by confirming that your solution does the following before you buy:

  • Improve overall network performance by 59%.[iii] Provides comprehensive visibility across hybrid networks, applications and infrastructure in a single solution to support modern, work-from-anywhere teams.
  • Reduce MTTR by 65%.[iii] Leverages full-fidelity data: captures all packets, flows, and infrastructure metrics, 100% of the time to identify and respond to threats due to data exfiltration, password brute force attempts, blacklisted sites, DDoS attacks, etc.
  • Reduce network and application blind spots by 53%.[iii] Applies machine learning and AI to network flow, packet and device data to detect anomalies, respond to network security threats faster, mitigate risks, and avoid exposure by identifying unknown threats that lurk in your environment using network threat intelligence.
  • Improve IT collaboration by 41%.[iii] Delivers integrated end-user experience, application, network and infrastructure performance into a single dashboard as well as role-based views to improve visibility of hybrid environments.
  • Improve user experience by 59%.[iii] Provides insights into device and interface health, configuration monitoring, and path analysis to ensure high-performing apps.

Riverbed’s unified NPM solution does all of this and more. If you’re looking to improve your network resiliency and security to support your work-from-anywhere workforce, the safe bet is Riverbed!

________________

[i]  Forbes, The Speed Of Business: How Automation Improves Operations And Security, June 2019

[ii] Enterprise Management Associates, Network Performance Management for Today’s Digital Enterprise, Shamus McGillicuddy, May 2019

[iii] The Benefits of Riverbed Unified NPM, TechValidate, July 2020

]]>
Future of Work Survey: How Companies Are Planning for a ‘Work from Anywhere’ World https://www.riverbed.com/blogs/future-of-work-global-survey-2020/ Tue, 14 Jul 2020 16:59:00 +0000 https://live-riverbed-blog.pantheonsite.io?p=15384 As the “new normal” becomes just normal, companies are preparing for a large-scale, long-term shift to remote work, where increasingly employees will ‘work from anywhere.’

Although we all wish the impetus for widespread remote working was different, the new way of working—one that’s distributed, technology-enabled, and aligned with meaningful digital transformation goals—should have long-term positive effects for business and people. Today, we’re releasing the Riverbed Future of Work Global Survey 2020 and the results paint a clear picture of where companies are, and where they intend to go.

The abrupt shift to remote work caused some major initial challenges

It’s no surprise to anyone that at the very beginning of the pandemic, many companies were caught flat-footed. Although 95 percent of leaders were comfortable with the idea of remote work, 69 percent said they were not completely prepared for such a jarring transition. That sudden shift produced some substantial problems.

For instance, 40 percent flagged increased technical difficulties as a major disruptor while 37 percent cited weaker employee performance and productivity. Another 36 percent indicated stress and anxiety were big issues for employees. These are all predictable outcomes for a pandemic that upended both personal and professional norms. Fortunately, all these issues are surmountable with the right technology.

Business leaders have a better sense of performance barriers

The sudden shift to remote work gave business leaders a better sense of the biggest barriers to success for ensuring the performance of a remote workforce. According to the 700 global respondents, the biggest barriers are: technology to optimize or improve remote performance (39% globally, 50% in the U.S.), spotty or unreliable home Wi-Fi (38%), and the need for better visibility into network and application performance (37%).

Riverbed Future of Work Global Survey 2020 reveals current barriers to remote workforce performance

The office of the future will be different

Forward-thinking organizations are investing for performance in this remote work reality. Of those surveyed, 61 percent of leaders will be making additional technology investments in the next 12 months, with 31 percent describing this expansion as significant. Anecdotally, we’ve heard this same theme from our customers, who are deeply interested in taking a more proactive posture.

There’s no question that hybrid work environments are on the roadmap for many businesses across a wide variety of industries. In fact, the survey found that on average globally, businesses expect 25 percent of employees will work remotely after COVID-19, nearly a 50 percent increase versus prior to the pandemic. Employees will increasingly “work from anywhere” (#WFA)—and technology will be the enabler that breaks down barriers to performance and security.

Conclusion

Tools that maximize the performance and reliability of apps and remote workers or that drive enhanced network visibility regardless of location will be absolutely fundamental to high-functioning organizations in this new paradigm. This is an area Riverbed is very focused on with our customers—with solutions such as Client Accelerator, SaaS Accelerator, and unified Network Performance Management.

See what else business leaders globally had to say about the future of work and what they’re doing to help their people navigate this new working normal here. Learn more about our remote workforce productivity solutions and join us in the conversation around the #FutureofWork #WFA #remotework.

]]>
The Next Norm: Work-from-Anywhere Application Delivery for Productivity (Part 3) https://www.riverbed.com/blogs/next-norm-work-from-anywhere-application-delivery-part-3/ Thu, 09 Jul 2020 17:51:58 +0000 https://live-riverbed-blog.pantheonsite.io?p=15374 In Part 1 of this blog series, The Next Norm: Prepare to Work-from-Anywhere, we looked at the recent, explosive growth in work-from-home and the transition to the new norm, work-from-anywhere. In Part 2 of the series, Next Norm: Work-from-Anywhere Performance Management, we reviewed the critical need for Network Performance Management (NPM) and the top considerations for evaluating NPM solutions. In Part 3 of the series, we’re going to discuss the challenges and opportunities in ensuring fast, consistent application delivery to your work-from-anywhere teams.

Keep your work-from-anywhere teams engaged and productive with fast and consistent application delivery

The Goal: Ensuring fast, consistent application delivery to your work-from-anywhere teams

For years, enterprise IT buyers, and more recently LOB leaders, have been looking for the best communication and collaboration applications to keep their teams as productive as possible. The quest to find the best application available never seems to stop – or let up. The recent surge in work-from-home and the post-pandemic evolution to work-from-anywhere have only increased this demand and heightened its urgency. In fact, 74% of companies plan to permanently shift to more remote work even after the COVID-19 restrictions subside.[i] As a result, companies are increasingly looking to SaaS and cloud offerings to provide quick, cost-effective services to keep their teams productive, regardless of where they work.

Investing in collaboration tools that connect team members and business workflows is clearly a top priority, as can be seen in the growth of popular services like Microsoft Office 365, which has grown to over 258M daily users, and Microsoft Teams (for video collaboration), which has ballooned from 32M to 75M daily active users since just March of 2020.

But, selecting the right applications isn’t always enough.

The Problem: Application delivery for remote working is still a challenge

While IT (and some LOB leaders) continue to introduce new communication and collaboration tools to the enterprise, 54% of HR leaders say poor technology and/or infrastructure for remote working is the biggest barrier to effective outcomes.[ii] Despite other advantages, the shift to SaaS and cloud hasn’t proven to be a panacea, with 42% of enterprises reporting that at least half of their distributed/international workers suffer consistently poor experience of the SaaS apps they use to get their jobs done.[iii]

The Challenge: Unpredictable network performance

Although network ubiquity is largely a solved problem and bandwidth is generally plentiful (albeit not always cheap, depending on location), the quality and consistency of network performance are every bit as challenging as they have been in the past.

With the rise of hybrid networks, SaaS, cloud, and on-prem/off-prem deployments, the network has actually grown more complex, and in many cases, less reliable. Companies often experience network-related SaaS slowdowns on a regular basis – even for their most critical business applications. In fact, a full 72% of companies report that network performance is a key concern with Office 365,[iv] impacting end-user experience and productivity directly.

And, the increase in remote work only adds additional challenges for IT teams trying to meet SLAs and deliver applications with high productivity value and the desired end-user experience. Remote users too often experience unacceptable performance due to consumer-grade Wi-Fi, bandwidth saturation and contention (oversubscribed connections from heavy usage of collaboration and enterprise applications), and disruptive latency when connecting back to corporate networks, the cloud, and SaaS applications. ESG’s recent study during the COVID-19 crisis revealed that 40% of remote workers in North America still struggle with subpar internet connectivity.[v]

Unfortunately, IT has limited control of the remote network. Connectivity to the data center or cloud must be optimized to account for unreliable, last-mile access over public Wi-Fi, cellular data networks, and home DSL/cable modems.

The Opportunity: Innovations in application acceleration can make a world of difference

Despite all the challenges, IT must ensure that users can reliably and securely access high-performing applications and tools. They need to find efficient ways to rapidly connect those users back to the corporate network and their apps. Fortunately, there are recent innovations in the market that do just that!

Riverbed Acceleration Services address the unpredictability and poor performance of business-critical applications. Riverbed Client Accelerator and SaaS Accelerator optimize application traffic for work-from-anywhere models, which leads to productivity benefits of an additional 7 hours/year per employee, as shown in a recent ESG technical validation study. These solutions can be deployed quickly, delivering rapid time to value, and:

  • Drastically reduce traffic by up to 99% with byte-level de-duplication methods that work across all of your applications, and extend optimization to staff working outside the office (on their laptops) so they can be equally productive, regardless of location
  • Optimize networks and deliver best performance (10X faster) for the most popular enterprise SaaS applications (Office 365, Salesforce, ServiceNow, etc.) to users anywhere
  • Intelligently accelerate the TCP conversations across the WAN by prioritizing the way data is sent over distance
  • Reduce the number of application round trips across the WAN, which directly applies to minimizing the impact of latency on application performance

Conclusion

There’s no doubt that work-from-anywhere will be the next norm. And, in order to ensure business resiliency and growth in the months and years ahead, IT teams need to consistently deliver performance and visibility across networks and applications, regardless of how complex and distributed their IT environment is. Riverbed offers solutions that can help you optimize remote user connectivity, accelerate business-critical application performance, and improve network resiliency and security. Learn more about our work-from-anywhere solutions.

_____________________

[i] Gartner, COVID-19 Bulletin: Executive Pulse, 3 April 2020

[ii] Gartner, Coronavirus in Mind: Make Remote Work Successful, 5 March 2020

[iii] ESG, The Impact of Poor SaaS Performance on Globally Distributed Enterprises, March 2019

[iv] TechTarget, Office 365 Survey, February 2020

[v] ESG, The Impact of the COVID-19 Pandemic on Remote Work, ​2020 IT Spending and Future Tech Strategies, May 2020

 

]]>
New AppResponse Adaptive Thresholds Reduces False Positives https://www.riverbed.com/blogs/appresponse-adaptive-thresholds-reduces-false-positives/ Wed, 01 Jul 2020 12:30:00 +0000 https://live-riverbed-blog.pantheonsite.io?p=15328 Performance monitoring is typically based on comparing measurable values against a set of threshold values. In theory, the IT operations team determines what the thresholds for warnings and alerts should be and sets them. In practice, they usually have no idea what the appropriate values should be.

For example, “response time” usually varies based on the time of day and day of week. At night, when the network load is negligible, response times would likely be minimal, too. In the middle of the day, when the network loads increase, the thresholds should be a bit more tolerant. 

Adaptive Threshold Analytics

Riverbed AppResponse 11.9 has fixed this problem by using the machine learning technique known as “adaptive thresholds.” Adaptive thresholds help deal with the problem of setting thresholds when you don’t know what they should be.

Adaptive thresholds work by analyzing historical data to determine what normal should be. In AppResponse, you select a historical comparison interval (1 hour, 1 day, or 1 week) and a tolerance factor. The alerting engine compares current traffic to the historical baseline and creates alerts if necessary. The historical data updates constantly with the latest data, so it’s always current.
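
To make the idea concrete, here’s a minimal sketch of the adaptive-threshold concept in Python. This is an illustration of the general technique, not AppResponse’s actual algorithm: the baseline is a simple mean/standard-deviation band over samples taken at the same point in the comparison interval, and the sample values below are hypothetical.

```python
from statistics import mean, stdev

def adaptive_check(history, current, tolerance=3.0):
    """Compare a current metric sample against a historical baseline.

    history: past samples taken at the same point in the comparison
    interval (e.g. the same hour of day, over several days).
    tolerance: how many standard deviations of drift to allow
    before raising an alert.
    """
    baseline = mean(history)
    spread = stdev(history) if len(history) > 1 else 0.0
    lower = baseline - tolerance * spread
    upper = baseline + tolerance * spread
    return not (lower <= current <= upper)  # True means "alert"

# Midday response times (ms) at this hour over the last five weekdays:
midday_history = [180, 195, 210, 188, 202]
print(adaptive_check(midday_history, 205))   # within the band: no alert
print(adaptive_check(midday_history, 450))   # far outside the band: alert
```

Because the baseline is computed per time slot, a 205 ms response at midday is normal while the same value at 3 a.m. (against a much lower nighttime baseline) could trigger an alert, which is exactly the time-of-day sensitivity a fixed threshold lacks.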

AppResponse adaptive threshold analytics proactively alerts on problems while reducing false alerts.

AppResponse offers both user-defined and built-in adaptive thresholds. You can apply user-defined adaptive thresholds to any metric for a specific network entity (most commonly an individual host, a host group, or an app). Built-in policies apply an adaptive threshold to a set of network entities. There are two built-in adaptive policies:

  • Application Response Time
  • Host Group Traffic

The Application Response Time analytic builds a response time profile for every application defined on the system, while the Host Group Traffic analytic tracks total throughput for each defined host group. Configuration of these built-in policies is limited; for example, you can’t change the metric being measured, but you can change the deviation factors and comparison interval. You can also choose a subset of objects to monitor, rather than all of them (the default).

In summary, user-defined adaptive policies let you monitor a broad set of metrics for a specific network object, while the built-in policies monitor a specific metric for a class of network objects (apps and host groups).

Setup and configuration details

When you first set up an adaptive threshold policy, there’s a delay approximately equal to the chosen historical interval before alerting starts. For example, if you choose a comparison interval of one week, then a week must pass before the system collects enough historical data to compare against current data.

Another handy tidbit about configuring an adaptive policy is that administrators can do “what-if” analysis. This lets you see the approximate number of alerts that would be generated over a period of several hours, before the policy is actually configured. It also lets you adjust the tolerance parameters and see how the tolerance bands and detected anomalies adjust accordingly.

Benefits of Adaptive Thresholds

I think you’ll find that using AppResponse’s new adaptive threshold capabilities will reduce noise by reducing false positives. In addition, you won’t have to fiddle with live data anymore to find that perfect threshold. AppResponse does all the work for you, continuously. It’s always learning what’s normal, which means it proactively detects abnormal conditions, giving you early warning that something is amiss. Often you can detect impending trends before users feel the impact.

]]>
Top 4 Reasons to Optimize Your SD-WAN https://www.riverbed.com/blogs/top-4-reasons-to-optimize-your-sd-wan/ Mon, 29 Jun 2020 12:30:00 +0000 https://live-riverbed-blog.pantheonsite.io?p=15320 I often get the question “When should I enable WAN optimization with my SD-WAN?” It’s a good question, especially since it is a common mistake to either conflate the two or view them as mutually exclusive. They really address different challenges. And the best results come when you use the two together in the right way.

Here is a list of the top four situations when enabling WAN optimization/application acceleration with SD-WAN will help you achieve the best results:

1. SD-WAN assures operational agility and optimization assures app performance

No amount of bandwidth can address the negative effects of latency on app performance. Organizations are adding Internet broadband to the branches to meet capacity demands cost-effectively. Often branches have multiple paths across MPLS, Internet broadband and LTE. SD-WAN brings tremendous network agility with application intelligence to solve problems such as multi-link utilization, path selection, zero-touch provisioning and policy-based management. However, SD-WAN doesn’t help mitigate the negative effects of latency that often exist between users and their apps. Once the packets are on the wire, SD-WAN’s job is essentially done. WAN optimization is the necessary ingredient to dramatically reduce the number of round-trips required to transfer data or complete a transaction. Make sure, however, that the WAN optimization solution addresses the behavior of network and application protocols over long distances. Solving just half of the equation won’t assure end-user performance.
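
A quick back-of-envelope model shows why round trips, not bandwidth, dominate on high-latency links. The round-trip counts below are hypothetical and purely for illustration; the point is that once a workload is latency-bound, total time scales with round trips times RTT, no matter how much bandwidth you add.

```python
def transfer_time(round_trips, rtt_ms):
    """Latency-bound transfer time in seconds: each application
    round trip costs one full RTT, regardless of link bandwidth."""
    return round_trips * rtt_ms / 1000.0

rtt = 80  # ms, e.g. a branch user far from the data center
# A hypothetical chatty file operation needing 400 round trips
# unoptimized, vs. 20 after WAN optimization batches requests:
print(transfer_time(400, rtt))  # 32.0 seconds
print(transfer_time(20, rtt))   # 1.6 seconds
```

Doubling the bandwidth changes neither number; cutting round trips by 20x cuts the wait by 20x, which is why SD-WAN path selection and WAN optimization solve different halves of the problem.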

 

SD-WAN needs WAN Optimization
SD-WAN selects the best path and optimization makes the app perform better over that path

2. Migration to the cloud adds latency

The migration of applications to the cloud often increases the distance between users and their apps. It does not matter whether the traffic is backhauled or sent over direct internet access (DIA). This additional latency degrades the performance of the apps and negatively impacts user experience. Look for a WAN optimization solution that is capable of accelerating apps hosted in SaaS and cloud environments. A common misstep is to assume steering packets directly from a branch to the Internet will guarantee exceptional performance. Only when you layer in WAN optimization and SaaS/Cloud app acceleration will you see performance boosts of up to 3x, 5x, even 10x and more.

3. Data reduction can save big in the cloud

With increasing data in the cloud and traffic to and from multi-cloud infrastructure, the egress charges from the cloud providers can quickly add up. For example, egress charges for 25TB of cloud data can cost over $2,000. Classic WAN optimization data reduction techniques offer significant savings for organizations by lowering egress charges. Make sure your WAN optimization is capable of securely intercepting and optimizing SSL/TLS/HTTPS protocols as the vast majority of the traffic to and from the cloud is encrypted.
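
The arithmetic behind figures like these is straightforward. The sketch below assumes a representative egress rate of $0.08/GB and a hypothetical 60% data-reduction ratio; actual cloud pricing and reduction rates vary by provider and traffic mix.

```python
def egress_cost(tb, rate_per_gb=0.08):
    # Assumed representative cloud egress rate in $/GB; actual
    # provider pricing is tiered and varies by region.
    return tb * 1024 * rate_per_gb

raw = egress_cost(25)  # about $2,048 for 25 TB at the assumed rate
# With a hypothetical 60% data-reduction ratio from WAN optimization,
# only 40% of the bytes cross the metered egress link:
optimized = egress_cost(25 * 0.4)
print(f"raw: ${raw:,.0f}  optimized: ${optimized:,.0f}  saved: ${raw - optimized:,.0f}")
```

Even at modest data volumes, reduction ratios in this range turn a recurring four-figure monthly charge into a three-figure one.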

4. Many business-critical apps continue to be hosted in on-prem data centers (DCs)

Apps will continue to be served from the DC for the foreseeable future (read the blog “MPLS is obsolete”). These are applications like file sharing (CIFS, SMB, NFS, etc.), video streaming (live and on-demand), storage replication, on-prem web applications, and more. Organizations may be reducing MPLS bandwidth as they adopt DIA from the branches. This situation makes it even more critical to optimize traffic on constricted WAN links.

Networks need application acceleration technologies in today’s cloud-first world to address the impact of increased distance between users and applications. Therefore, it’s critical that organizations choose an SD-WAN solution that offers application acceleration capabilities. SD-WAN and WAN optimization are complementary solutions solving distinct problems. You get the best of both worlds—the best WAN path to route the traffic and the best app performance over the chosen path.

Learn how you can combine SD-WAN and application acceleration with Riverbed Software-Defined WAN.

]]>
The Next Norm: Work-from-Anywhere Performance Management (Part 2) https://www.riverbed.com/blogs/next-norm-work-from-anywhere-performance-management-part-2/ Fri, 26 Jun 2020 15:16:24 +0000 https://live-riverbed-blog.pantheonsite.io?p=15336 In Part 1 of this blog series, The Next Norm: Prepare to Work-from-Anywhere, we reviewed the recent, explosive growth in work-from-home and the transition to the new norm, work-from-anywhere. We discussed how enterprises are proactively addressing the productivity challenges of work-from-anywhere. In Part 2 of the series, we’re going to focus on one area that can positively impact all of the above, Network Performance Management (NPM).

NPM is a timely discussion. Per the recent Gartner survey, 50% of network operations teams feel that they will be required to rearchitect their network monitoring stack by 2024. This is a significant increase from just 20% in 2019. What’s driving the spike in demand? Well, it’s a number of things, but more than anything, it’s the complexity of hybrid networks.

As organizations continue to invest heavily in technologies and services that fuel their digital strategies, the supporting network has grown more complex. Adopting cloud services, supporting mobile workers, leveraging AI, IoT and Big Data have put tremendous strain on enterprise networks—and on the teams who manage them.

What can IT do? Get the upper hand on what’s happening across your network—and what’s going to happen! Three core areas where you should be engaging right now are: 1) ensuring that you have cross-domain visibility of the expanded network, 2) leveraging new technologies that can help you in the process, and 3) guarding your flank with integrated security.

1. Greater cross-domain visibility is a must!

As discussed, network demands have evolved. No longer do they simply serve to connect corporate-owned facilities and a limited number of road warriors accessing services via the VPN. They are hybrid and complex, combining on- and off-premises infrastructure, connected by private and public transport types. They connect a high percentage of the modern work-from-anywhere workforce and are accountable for ensuring high productivity across the full range of applications that are distributed, dynamic, increasingly delivered as a service, and run in data centers and clouds.

To be able to fully monitor what is happening and troubleshoot any anomalies on the network, you need cross-domain visibility. Your NPM solution should be collecting and analyzing all the data, whether its source is on-prem or in SaaS or cloud extensions. It’s not enough to have point products that are tapping into a few spots or just sampling data. There are better options. To ensure complete visibility, you should seek network performance management solutions that collect and analyze all the packets across your many applications, all flows across the complete hybrid network, and telemetry from all the devices in play. Choosing an integrated platform provides peace of mind from knowing there are no gaps in information or dropped handoffs between standalone components. In fact, research from Enterprise Management Associates’ May 2019 report, Network Performance Management for Today’s Digital Enterprise, shows that “integrated platforms are more effective at performance monitoring than standalone, best-of-breed tools.”

2. Leverage AI and machine learning technology

Once you have collected all the data, you have a treasure trove for mining in times of need. However, with the complexity of modern networks, the volume of data they produce is almost unmanageable. When your teams are reporting slow application response times or the inability to participate in critical video meetings, how quickly can you root-cause the issue and respond? There is just no way to analyze it all manually in any acceptable window of time.

As the demands on your network expand and user expectations rise, your modern NPM solution should be leveraging advanced technologies to deliver insights much faster than human analysis. Network performance management solutions should leverage AI and machine learning to track trends, surface anomalies and identify the root cause of potential problems before they are impacting your users.

A perfect, real-world example is OneMain Financial. By capturing and analyzing data down to the packet level, OneMain is able to quickly pin slow performance directly to the network or application, eliminate finger pointing, slash troubleshooting literally from days to just minutes, and fix problems before users across their 44-state network ever notice.

3. Integrate NPM and security to guard your flank

With cross-domain visibility and eyes on all the data, it’s no wonder that network performance management and network security solutions have become inextricably linked. In light of the latest increase in cyberattacks, the partnership has become even more important. With the recent surge in the number of endpoints tied to remote work due to the pandemic, cybercriminal activity has seen explosive growth with “phishing and counterfeit web pages increasing by more than 265% daily from January 2020 to March 2020,” per the Bolster analysis of over 1 billion websites.

Choosing a network performance management solution with advanced security capabilities – one that works in conjunction with your VPNs and leverages every network flow – is critical to performing the forensic investigation, cyber threat hunting, threat intelligence and DDoS detection needed to keep up your guard.

Riverbed’s unified NPM measures every packet, every flow and all device metrics, all the time. This gives organizations the control and the insight needed to enable work-from-anywhere models and to proactively identify and quickly troubleshoot network and application performance and security problems. 

]]>
The Next Norm: Prepare to Work-from-Anywhere (Part 1) https://www.riverbed.com/blogs/next-norm-work-from-anywhere-part-1/ Thu, 18 Jun 2020 20:13:55 +0000 https://live-riverbed-blog.pantheonsite.io?p=15308 With the onset of the recent pandemic, countries across the globe reacted in unprecedented fashion to ‘flatten the curve’ by implementing shelter-in-place guidelines. Businesses responded almost immediately with new or expanded policies to protect their workforces and society at large. Remote workers, mobile workers and traditional office workers all became work-from-home employees – effectively overnight. With nearly 30 million employees in just the Fortune 500 alone, the impact and scale of this movement are quickly evident.

While it was bumpy at first, many organizations quickly realized that their teams could remain highly productive provided they had the right tools and technology in place to connect their teams and business workflows. And along the way, many benefits were recognized for both employees and the business from having a larger remote workforce.

As regulations have begun to ease in certain countries and regions, organizations across the globe are coming to grips with what the new norm looks like for them. Most are still working through the details, but it’s clear that they won’t be returning to business as usual. In fact, 74% of companies plan to increase the number of remote workers and nearly a quarter will move 20% of their workforce to permanent remote work.[i] As individuals become more comfortable in the post-pandemic world and begin to move about, work-from-home will undoubtedly become work-from-anywhere (WFA). And, many leading brands, including Twitter, Facebook, Square and Nationwide, are already paving the way by expanding their remote work policies and/or extending them “forever.”

But unlike flipping the switch to work-from-home, the shift to WFA is being made with more time, planning and consideration regarding the technology and processes to empower the new norm workforce. Three focal areas you should be considering as you prepare your work-from-anywhere future are your technology, security and people.

What investments are needed to support the work-from-anywhere model?

Connecting people and business workflows is always a challenge, but even more so when teams are geographically dispersed. In fact, 54% of HR leaders say poor technology and/or infrastructure for remote working is the biggest barrier to effective remote working.

As a result, many of them are increasingly looking to SaaS and cloud offerings to provide quick, cost-effective services to keep their teams productive. Investing in collaboration tools that connect team members and business workflows is clearly a top priority, as can be seen in the growth of popular services like Microsoft Office 365, which has grown to over 258M daily users, and Microsoft Teams (for video collaboration), which has ballooned from 32M to 75M daily active users since March 2020.

To provide the best end-user experience and ensure high productivity despite the extended challenges of serving work-from-anywhere teams, IT leaders are investing in innovative acceleration technologies that are proven to overcome latency and increase network capacity. Ensuring these investments are paying off and that business-critical applications and networks perform as expected is also driving more organizations to deploy network performance management solutions that provide the visibility, analysis and insights needed across geographically-dispersed teams and hybrid networks. As the old adage goes, “You can’t manage what you don’t measure!”

Increased vigilance to manage increased security threats

As the surge to work-from-home took shape, IT teams were faced with massive overnight challenges: get teams the gear they need, get them onto the network with access to services from home, and get them secure.

Of course, bad actors didn’t wait while IT worked feverishly to put new systems in place. In fact, they went into overtime mode as well, resulting in a 667% increase in phishing attacks in just the first month of work-from-home. While 34% were brand impersonation attacks, thousands were financial scams and business email compromise (BEC). Organizations need to stay wary of this and put the right safeguards in place to protect customer data, corporate data and brand reputation.

Collaboration between network and security teams to reduce time from breach to detection and mitigate data exfiltration is critical to a speedy response. Investing in the right visibility solutions allows you to transform network data into cybersecurity intelligence, providing essential visibility and the forensics needed for broad threat detection, investigation, and mitigation.

New approaches to manage work-from-anywhere teams

Just as there are many technology concerns to address in support of the new norm, the changes that impact our work-from-anywhere team members and managers need to be considered as well.

Organizations should be identifying best practices, benchmarking and putting processes in place to measure and optimize work-from-anywhere engagement. To keep your best team members – and keep them engaged and productive – managers will need to be flexible and share their discretion for remote work with team members. Mutual trust is the foundation of distance relationships and a requirement for work-from-home success between employees and employers.

Policies must be developed regarding who is needed in the office (or specifically not), when and why, and who should work remotely. Similarly, there will likely be policy changes for compensation (often impacted by geography), work-related expenses, expected hours of operation, flexibility for external environmental situations, etc.

Shelter-in-place and work-from-home came in a flash and were empowered by best-effort heroics from IT. Hopefully these constraints will soon be gone. Work-from-anywhere is right on the horizon and it is expected to last. Riverbed can provide you with industry-leading application and network visibility and performance to ensure work-from-anywhere success. Learn more about our remote work solutions.

]]>
How to Solve Performance Issues with SSL Encrypted Traffic https://www.riverbed.com/blogs/solve-performance-issues-with-ssl-encrypted-traffic/ Thu, 11 Jun 2020 21:08:27 +0000 https://live-riverbed-blog.pantheonsite.io?p=15173 With the security concerns we face these days it’s ever so important for organizations to use encryption to secure their data in transit. And since the HTTP protocol is so widely used as a means to transfer various types of data, like MAPI over HTTP, a mechanism is needed to secure it. That mechanism is SSL or TLS. There are several reasons you might experience performance issues when using HTTPS sessions between two hosts. In this article, I’ll show you how to address these performance issues using Riverbed SteelHead technology and SSL optimization. Before getting into the nuts and bolts of SteelHead, let’s talk briefly about SSL. This will aid in understanding the configuration requirements once we get to that point.

SSL Overview

SSL, or really TLS these days, uses both symmetric and asymmetric encryption. Symmetric encryption is commonly used for real-time data transfer. The keys are smaller than those used in asymmetric encryption, and the same key is used for both encryption and decryption. Asymmetric encryption uses two keys, a public key and a private key. A sender uses the recipient’s public key to encode and send a message; the recipient uses its private key to decode the message, and within this exchange a symmetric session key is calculated. Asymmetric encryption isn’t often used for real-time data as the key size is much larger, often 2048 or 4096 bits. As mentioned, asymmetric encryption is used to send a message from which we then calculate the symmetric key. The symmetric key is random and is only used for the current conversation. This key is known as the session key. Once the session key is established, both parties encrypt and decrypt using the session key.
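To make the asymmetric-then-symmetric pattern concrete, here is a toy sketch using textbook RSA with tiny numbers (purely illustrative and not secure; real TLS uses far larger keys and a standardized key-derivation function):

```python
# Toy illustration of the asymmetric-then-symmetric pattern described above.
# Tiny textbook RSA numbers -- NOT secure, for intuition only.
import hashlib

# Server's RSA key pair: modulus n = p*q, public exponent e, private exponent d
p, q = 61, 53
n = p * q                            # 3233
e = 17
d = pow(e, -1, (p - 1) * (q - 1))    # modular inverse (Python 3.8+): d = 2753

# Client picks random "pre-master" material and encrypts it with the
# server's PUBLIC key (e, n)
pre_master = 1234
ciphertext = pow(pre_master, e, n)

# Only the server can recover it with its PRIVATE key (d, n)
recovered = pow(ciphertext, d, n)
assert recovered == pre_master

# Both sides now derive the same symmetric session key from the shared secret
session_key = hashlib.sha256(str(pre_master).encode()).hexdigest()[:16]
print(session_key)
```

The point to notice is that only the server's private key can recover the pre-master material, yet both sides end up with the same cheap-to-use symmetric session key.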

For a moment, let’s look at the SSL negotiation process.

SSL Process
SSL Process

As you can see in the figure, the process begins with a client sending a hello to the server. In response to this, the server sends its public key. The client then sends the random material that will be used to create the session key. The server’s public key is used for this and the data can only be decrypted by the server using its private key. The server generates keys and responds back to the client with the “Change Cipher Spec” message, switching further communication to the use of the generated session keys.

So, now that we’ve reviewed the SSL process, let’s talk about what we need to do to configure our SteelHead environment to optimize SSL traffic. I do want to note here that we can certainly optimize ALL SSL traffic since really it’s just a TCP session. But what we really want to get at is the different types of traffic inside there so we can perform additional optimization techniques as needed.

Optimization of SSL

So here’s how the overall process of SSL optimization works:

1. Server-side SSL Certificates and Private Keys are copied to the SteelHead appliances.
2. The SteelHead appliances use their own identity certificates to establish a secure connection between one another proactively or on-demand.
3. When the client sends the initial “hello,” it is intercepted by the server-side SteelHead appliance.
4. The server-side SteelHead establishes a connection with the server.
5. The server-side SteelHead then establishes an SSL connection with the client. This comes in the form of the server-hello.
6. A temporary session key is migrated from the server-side SteelHead to the client-side SteelHead. This moves the SSL session between the client and the client-side SteelHead.
7. Transfers over the WAN are now accelerated and optimized between the client-side SteelHead and the server-side SteelHead using all of the Riverbed RiOS mechanisms.

For all this to happen, there must be a trust between the two SteelHeads. The client must trust the server-side SteelHead and the server-side SteelHead must trust the certificate it receives from the server.

So let’s configure SSL optimization. I’ll take you through each step, but I also recommend you watch the video where I walk through each of these steps.

To begin, here is the topology I’ll be using in this configuration.

SSL Optimization Topology
SSL Optimization Topology

Our first step is to obtain and install SSL licenses on the client- and server-side SteelHeads. The license is free and should be included. You can verify that you have it by navigating to Maintenance>License. You can see what you’re looking for in the image below. You’re going to want to make sure that both the client- and server-side SteelHeads have the license. If not, you’ll need to contact Riverbed Support.

Verify License
Verify License

Your next step is to enable SSL optimization on both SteelHeads. You’ll find this checkbox in the SSL Main Settings. When you enable SSL optimization you must save and restart services on the SteelHead.

Enable SSL
Enable SSL

Now, recall that the server-side SteelHead intercepts the initial request from the client and it’s the server-side SteelHead that then creates its own SSL session to the server. The server-side SteelHead then uses the server’s private key and certificate to create a session with the client. In other words, the server-side SteelHead responds to the client’s SSL request to the server, as if it were the server. For this reason, you need to get the server’s certificate and private key onto the server-side SteelHead. You also need the CA certificate so that the SteelHead can validate and trust the imported server certificate. First import the CA certificate under SSL>Certificates.

CA Certificate Import
CA Certificate Import

Then, import the server’s key and certificate back in SSL>Main Settings. Import both the key and the certificate.

Import Server Cert and Key
Import Server Cert and Key

Now, on the client-side SteelHead create an in-path rule to allow optimization of the desired SSL servers. In the image below, I am looking for ANY IPv4 traffic headed specifically to the server. Also, make sure this rule is added above the default rules or it won’t be matched and the traffic will bypass optimization and be passed through.

In-Path Rule for SSL
In-Path Rule for SSL

At this point, you can send traffic. When you do this, you’re going to notice that traffic will be matched and it will have some of the RiOS techniques applied to it. But also notice the red triangle in the image below. What’s that all about?

SSL No Peer
SSL No Peer

By expanding to see the details, you will note that the inner channel is not secure. Why not? Well, the client-side and server-side SteelHeads don’t yet trust each other. By navigating to SSL>Secure Peering(SSL) you’ll find an entry on the gray list. Use the Actions drop-down to move the SteelHead to the white list by selecting Trust.

White List Peer
White List Peer

Once the peering is established, we can try a download again and we’ll see that everything is in order and that all of the RiOS optimization techniques can now be applied to SSL traffic.

SSL Optimized
SSL Optimized

Wrap Up

Well, in this short post we’ve covered the need for SSL optimization, an overview of how SSL works, and how to configure both the client-side and server-side SteelHeads to handle the optimization of this traffic. By giving attention to these types of technical aspects in an enterprise network, we can enhance the user’s experience by eliminating many of the common performance issues they encounter. SSL optimization is just one of many capabilities in the Riverbed WAN Optimization arsenal. Head on over to the WAN Optimization solutions page and learn more about what Riverbed can do for your organization.

]]>
How to Configure SD-WAN Security Zones with SteelConnect EX https://www.riverbed.com/blogs/configuring-sd-wan-security-zones/ Wed, 10 Jun 2020 10:45:30 +0000 https://live-riverbed-blog.pantheonsite.io?p=15201 Enterprises are leveraging both Intranet and Internet to bring remote offices, mobile workers, and business partners into their trusted network environments. Once an attacker gains access to this trusted internal network, it opens the door to serious threats. Network segmentation is therefore becoming a critical component of enterprise security solutions around the globe. Segmenting the network helps to control data traffic in an enterprise and limits attackers’ reach by preventing lateral movement between these networks. Each such segment of data traffic is termed a zone.

Zone in a branch can be defined as:

  1. A logical group of network access points provided to end users.
  2. A network for corporate applications.
  3. Guest Wi-Fi hotspots, etc.

In this blog, we are going to cover Zone Protection Profiles, which help to protect your network from attacks including flood attacks, reconnaissance attacks, and other packet-based attacks. Zone Protection Profiles provide a mechanism to prevent certain types of traffic from entering a zone.

SteelConnect EX Zone Protection

Interfaces are networking communication points. In a given network, interfaces can share the same or different security configurations for traffic flow. In SteelConnect EX, you can group all the interfaces with the same security configurations into a security zone.

Each security zone can be associated with a security profile, i.e., a Zone Protection Profile. So, whenever a Zone Protection Profile is defined for a security zone, it will automatically be mapped to all the interfaces in that security zone. Each interface can be associated with only a single security zone.

Zone Protection Schema
Zone Protection Schema

A Zone Protection Profile allows stateful inspection of TCP, UDP, and ICMP data traffic flows. Based on the Zone Protection Profile, traffic can either be passed or dropped between the zones.

SteelConnect EX Zone Protection Configuration

For Zone Protection configurations:

  1. Define a zone before assigning interfaces to a zone.
  2. An interface can be assigned to only one security zone.
  3. By default, traffic can flow among interfaces that belong to the same security zone.
  4. For traffic between zones, a policy must be configured.
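Rules 3 and 4 above can be sketched generically; this illustrative Python (not SteelConnect EX's actual policy engine, and with made-up interface and zone names) shows why intra-zone traffic flows by default while inter-zone traffic requires an explicit policy:

```python
# Generic sketch of zone-based forwarding: intra-zone traffic is allowed by
# default; inter-zone traffic needs an explicit policy entry.
# Illustrative only -- not SteelConnect EX's real data model.

interface_zone = {          # interface -> security zone (one zone per interface)
    "vni-0/0": "corporate",
    "vni-0/1": "corporate",
    "vni-0/2": "guest-wifi",
}

inter_zone_policies = {     # (from_zone, to_zone) pairs explicitly configured
    ("guest-wifi", "corporate"): False,   # configured deny
    ("corporate", "guest-wifi"): True,    # configured allow
}

def allow(ingress_if: str, egress_if: str) -> bool:
    src, dst = interface_zone[ingress_if], interface_zone[egress_if]
    if src == dst:                        # rule 3: same zone flows by default
        return True
    return inter_zone_policies.get((src, dst), False)  # rule 4: policy required

print(allow("vni-0/0", "vni-0/1"))   # True  (intra-zone)
print(allow("vni-0/2", "vni-0/0"))   # False (denied inter-zone)
```

Absent a matching policy entry, inter-zone traffic is dropped, which is the default-deny behaviour segmentation relies on.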

Please follow the steps below to configure SteelConnect EX Zone Protection:

Step 1: Create Zone Protection Profile

Zone Protection Profile provides a mechanism to detect and prevent malicious traffic from entering the network. To protect a zone, define a Zone Protection Profile and associate it with a security zone. To configure Zone Protection Profile, follow the below steps:

  1. Select Administration > Appliance and select the appliance to navigate to the appliance context.
  2. Select Configuration > Networking > Zones Protection Profiles from the left panel.

Zone Protection Configuration Tab
Zone Protection Configuration Tab

  3. Click ‘+’ to add a Zone Protection Profile.

Zone Protection Profile General Tab
Zone Protection Profile General Tab

All the tabs have been explained in detail below:

The General tab has all these mentioned fields:

  • Name: Name of the Zone Protection Profile.
  • Description: Brief description of the interface and its purpose.
  • Tag: A keyword to filter the Zone Protection Profile.

You can use the Flood tab to configure flood thresholds for this protection profile.

Zone Protection Profile Flood Tab
Zone Protection Profile Flood Tab

  • Protocol: The supported protocols have been shown in the above snapshot. One can enable flood monitoring for this protection profile by clicking on the checkbox against the protocol.
  • Alarm Rate Packets/sec: An alarm will be generated when the number of packets received per second exceeds the value defined in the field.
  • Active Rate Packets/sec: Packets will be randomly dropped when the number of packets received per second exceeds the value defined in the field.
  • Maximum Rate Packets/sec: All packets will be dropped when the number of packets received per second exceeds the value defined in the field.
  • Drop Period Seconds: Duration of the packet dropping.
  • Actions: Action to take when a flood is detected. Options are:
    • Random early drop
    • Cookies
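The three flood thresholds work together as an escalating response. A minimal sketch (the threshold values are made-up defaults, and the real appliance measures per-protocol packet rates in the data path):

```python
# Minimal sketch of the three flood thresholds described above (simplified;
# threshold defaults here are invented for illustration).

def flood_action(pps: int, alarm: int = 10000, active: int = 15000,
                 maximum: int = 20000) -> str:
    """Classify the current packet rate against the profile thresholds."""
    if pps >= maximum:
        return "drop-all"            # Maximum Rate: every packet dropped
    if pps >= active:
        return "random-early-drop"   # Active Rate: packets randomly dropped
    if pps >= alarm:
        return "alarm"               # Alarm Rate: alert only, traffic passes
    return "pass"

for rate in (5000, 12000, 17000, 25000):
    print(rate, flood_action(rate))
```

As the observed packets-per-second climbs, the response escalates from alerting, to random early drop, to dropping everything for the configured drop period.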

You can use the Scan tab to configure scan protection for this protection profile.

Zone Protection Profile Scan Tab
Zone Protection Profile Scan Tab

  • Scan: Different types of scans have been shown in the above screenshot. One can enable a scan for this protection profile by clicking on the checkbox against the scan profile.
  • Actions: Action when an abnormal scan is detected. Options are:
    • Allow – Allows the scan to run.
    • Alert – Generates an alert.
  • Interval: The time interval at which the scan occurs.
  • Threshold: The threshold value above which an alarm will be generated.

You can use the Packet Based Attack Protection tab to protect the network from invalid packets.

Zone Protection Profile Packet Based Attack Protection Tab
Zone Protection Profile Packet Based Attack Protection Tab

  • UDP/TCP/IP/Discard:
    • IP Frag – Drop fragmented IP packets.
    • IP Spoof – Drop spoofed packets.
    • Reject Non-SYN TCP – Drop packets if the first packet in a session has a Non-SYN flag.
    • UDP Malformed – Drop packets in case of a checksum error.
    • One can select from Different IP options for this protection profile.
  • ICMP:
    • Ping Zero ID – Drop packets with zero ID.
    • Fragment – Drop fragmented packets.
    • Large Packet (length > 1024 bytes) – Drop packets if the size is greater than 1024 bytes.
    • Error Message – Drop packets if error messages are generated on a ping request.
    • Malformed Packet – Drop malformed packets.
  4. Click OK to create and configure the Zone Protection Profile.

Step 2: Define Security Zone

A security zone can be configured on a per-tenant basis. Each security zone in a given tenant is identified by a unique name; the same name can be reused in different tenants.

To configure the Security Zone, follow the below steps:

  1. Select Administration > Appliance and select the appliance to navigate to the appliance context.
  2. Select Configuration > Networking > Zones from the left panel.

Security Zone Configuration
Security Zone Configuration

  3. Click ‘+’ to add a Security Zone.

Add Zone
Add Zone

  • Name: Name of a security zone.
  • Description: Brief description of interfaces.
  • Tags: A keyword to filter the Security Zone.
  • Zone Protection Profile: Profile to protect the zone. Please refer to the previous step for more details.
  • Log Profile: Log Profile to be used with this Zone.
  • Interfaces & Networks: Select this to add interfaces and networks with this security zone.
    • For interface, click and select an interface from the list.
    • For networks, click and select a network from the list.
  • Routing Instance: Select this to add a routing instance with the security zone.
  • Organization: Select an organization with the security zone.
  4. Click OK to create and configure a new zone for an appliance.

Verification

Alarms or events will be generated when there is a policy match in the Zone Protection Profile. Run the command below to verify statistics via the SteelConnect EX CLI.

versa@SC-EX-BRD-1-cli> show orgs org-services <org-name> security profiles zone-protection zone-protection-statistics <zone-protection-profile-name>

CLI Verification
CLI Verification

Summary

In this article, we went through a common security use case for SD-WAN branches. We secured a network zone by using the Zone Protection Profile. You can create one or more Zone Protection Profiles and associate any of them with a security zone. Traffic will be dropped based on configurations in the Zone Protection Profile. With the use of templates, these security policies can be deployed at scale and with consistency on the whole SD-WAN network.

]]>
MPLS is Obsolete https://www.riverbed.com/blogs/mpls-is-obsolete/ Fri, 05 Jun 2020 18:40:00 +0000 https://live-riverbed-blog.pantheonsite.io?p=15220 Is it? Is MPLS fast approaching its demise as it is portrayed in many industry articles and blogs? I beg to differ. For the foreseeable future, I do not anticipate the end of MPLS in enterprises. Jokingly I say, at least not until I retire. As networks go through modernization with SD-WAN, MPLS will be an integral part of that transition. The managed MPLS market is not shrinking. Instead, it is growing at a CAGR of 6.5% between 2020 and 2025 according to a report from Research and Markets.1

MPLS continues to be the predominantly used WAN technology today and into the foreseeable future
MPLS continues to be the predominantly used WAN technology today and into the foreseeable future

4 Reasons Enterprises Will Continue To Utilize MPLS

1. Decreasing price differential between MPLS and broadband

Often, the price differential between MPLS circuits and Internet broadband has been proposed as the catalyst for MPLS decline. A few years back, the differential of MPLS vs broadband was considerable, on the order of 100x or more. However, within the last few years MPLS prices have come down substantially, and the average differential between Internet broadband and MPLS is now 20-30x. Widespread availability of Internet broadband has given enterprises considerable leverage in negotiating MPLS prices during contract renewals.

2. Many applications are deeply intertwined with business processes

There is tremendous momentum to move applications to cloud infrastructure and SaaS applications. A wide gamut of applications is moving to the cloud: productivity apps, collaboration apps, HR apps, monitoring tools, security services, etc. Yet there remains a myriad of business-critical applications hosted in on-premises data centers. Think of IT/OT applications used in manufacturing plants or assembly lines. Redesigning these applications, migrating their data, and establishing new business processes takes multiple years, all while the business continues to drive revenue.

3. Businesses are highly risk averse

With mobile phones we have traded quality for convenience. How often did we ask “Can you hear me now?” when using a landline? Fixed-line phones operated as a utility – dependable and always available when needed. MPLS circuits provide the same level of connectivity, with guaranteed application services across different tiers of QoS. Businesses, especially large global corporations, are inherently too risk averse to depend on the best-effort connectivity of the Internet for mission-critical applications.

4. Performance of legacy applications and latency

Home-grown applications weren’t designed for the Internet age. These applications were written years ago for platforms like DB2 and mainframes using legacy programming languages. The architecture of these applications, the protocols used, and the chatty handshakes all assumed a highly reliable underlying network with low latency. No amount of bandwidth can overcome the inherent latency introduced over Internet connectivity.
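Simple arithmetic shows why: a chatty protocol's completion time has a floor of round trips times RTT, which no amount of bandwidth removes. A quick illustrative calculation (all figures below are made up):

```python
# Why bandwidth can't fix chatty protocols: completion time is bounded below
# by round_trips * RTT regardless of link speed. Figures are illustrative.

def transfer_time_s(round_trips: int, rtt_ms: float,
                    payload_mb: float, bandwidth_mbps: float) -> float:
    latency_cost = round_trips * rtt_ms / 1000.0          # handshake serialization
    transmission_cost = payload_mb * 8 / bandwidth_mbps   # actual data transfer
    return latency_cost + transmission_cost

# A legacy app doing 2,000 request/response exchanges for a 10 MB workload:
lan = transfer_time_s(2000, rtt_ms=1, payload_mb=10, bandwidth_mbps=100)
wan = transfer_time_s(2000, rtt_ms=80, payload_mb=10, bandwidth_mbps=100)
fat_wan = transfer_time_s(2000, rtt_ms=80, payload_mb=10, bandwidth_mbps=1000)

print(f"LAN: {lan:.1f}s, WAN: {wan:.1f}s, 10x bandwidth: {fat_wan:.1f}s")
# → LAN: 2.8s, WAN: 160.8s, 10x bandwidth: 160.1s
```

Multiplying bandwidth by ten barely moves the WAN number, because the round-trip term dominates; this is the latency problem WAN optimization attacks directly.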


Hybrid WANs are the Future

Internet broadband, cloud technologies, and SaaS applications deliver benefits too tremendous for enterprises to ignore. Corporations will invest in cloud infrastructure and Internet connectivity. However, MPLS is not finished. It is not going away anytime soon. By 2023, 30% of enterprise locations will use Internet-only WAN connectivity, up from less than 10% in 2019, to reduce bandwidth cost.2 Conversely, 70% of enterprise locations will continue to rely on other WAN technologies, of which MPLS has the lion’s share.

Corporations with a complete dependency on Internet-only connectivity across all locations will be exceptions. Hybrid WANs will be the norm. Although enterprises have been slower to adopt it than their mid-market brethren, SD-WAN will take them through the next wave of network modernization. MPLS vs. SD-WAN, which is it? It is both. You will see MPLS alongside Internet broadband to implement SD-WAN overlay networks. Enterprise SD-WAN with WAN Optimization and Application Acceleration technologies will catapult enterprises as they continue on their cloud journey.


[1] https://www.researchandmarkets.com/reports/4557775/managed-mpls-market-growth-trends-and

[2] Source: Gartner Report, Forecast Analysis: Enterprise Networking Connectivity Growth Trends, Worldwide, 2019. By Gaspar Valdivia, Lisa Unden-Farboud, To Chee Eng, Grigory Betskov, Susanna Silvennoinen, 20 September 2019


]]>
NetProfiler Users Are More Than A Number With AD Connector 3.0 https://www.riverbed.com/blogs/netprofiler-with-ad-connector-identifies-users/ Mon, 01 Jun 2020 20:30:00 +0000 https://live-riverbed-blog.pantheonsite.io?p=15140 Imagine you’re at an elaborate costume party. You talk to people but you don’t really know who they are because they’re behind masks. You just refer to them as “superhero girl” or “rabbit guy.”

Most NPM tools treat end users just like masked guests or, even worse, as numbers! IP addresses are known, but the actual user names are not.

AD Connector helps make the personal connection

Riverbed’s AD Connector extracts user identity information from an Active Directory source, pulls it into NetProfiler (Riverbed’s enterprise flow monitoring and reporting solution), and makes it available for use within reports. Being able to resolve to the user name is useful from multiple perspectives including security, performance, and troubleshooting.

Case in point: troubleshooting

When viewing a traffic report, you notice a spike in utilization that is attributable to BitTorrent traffic coming from a specific IP address. You’ll want to know which users are logged in at this particular time as well as which computer originated the traffic. With data and name in hand, you can talk with the individual user, stop the offending activity, and take immediate corrective action. The integration with Active Directory makes this quick and easy.
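Conceptually, the enrichment AD Connector provides is a join between flow records and Active Directory login events, keyed by IP address and time. A simplified illustration (not the actual NetProfiler implementation; the names and timestamps are invented):

```python
# Illustrative sketch of user-identity enrichment: join flow records with
# Active Directory login events by IP address and time window.
# Not the actual AD Connector / NetProfiler implementation.

logins = [  # (ip, user, login_ts, logout_ts) from the AD source
    ("10.1.1.5", "alice", 100, 500),
    ("10.1.1.9", "bob",   200, 900),
]

def user_for(ip: str, ts: int) -> str:
    """Return the user logged in at `ip` at time `ts`, if any."""
    for lip, user, t_in, t_out in logins:
        if lip == ip and t_in <= ts <= t_out:
            return user
    return "unknown"

# A suspicious flow record resolves to a name instead of a bare IP address
flow = {"src_ip": "10.1.1.9", "app": "BitTorrent", "ts": 400}
print(user_for(flow["src_ip"], flow["ts"]))   # → bob
```

The time window matters: on shared or DHCP-assigned addresses, the same IP can map to different users at different times, which is exactly why the report needs both the address and the login interval.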

This Top Apps screenshot shows BitTorrent is consuming nearly 29% of the bandwidth.
This Top Apps screenshot shows BitTorrent is consuming nearly 29% of the bandwidth.


Here are a couple of other reports in NetProfiler that help you understand your user data:

  • The Users List shows users by log-in time, log-out time, or log-in failures. You can also filter by host, time duration, and other criteria to help you quickly understand the impact of specific individuals on the network.
  • Host information reports with added user information show which host is talking to another and provide a clearer picture by revealing which user was logged in at that time.

The User List report shows exactly who is logged in by leveraging the user information obtained from the AD Connector. Note the IPv6 addresses.
The User List report shows exactly who is logged in by leveraging the user information obtained from the AD Connector. Note the IPv6 addresses.

Additional information

AD Connector 3.0 is now available on Windows Server 2016, Windows Server 2019, and Windows Server 2012. It supports IPv6 and encrypted communication between AD Connector and NetProfiler.

You can download a copy of the Riverbed AD Connector 3.0 for use with your NetProfiler at no charge from the Riverbed Support site.

]]>
What’s Your Zoom Performance? https://www.riverbed.com/blogs/maximize-zoom-performance/ Fri, 29 May 2020 13:15:00 +0000 https://live-riverbed-blog.pantheonsite.io?p=15133 Remote workforce productivity is critical to successful business continuity. Is the performance of your collaboration tools, such as Zoom, keeping pace?

Collaboration applications have evolved over the past few months from being a nice-to-have tool to becoming the go-to means for connection and communication between remote teams. Organizations across the globe are using video collaboration apps, like Zoom, for a wide variety of interactivity – including everything from video chats and one-on-ones to team meetings, webinars, and even virtual conferences.

We've all been in Zoom meetings where you could barely understand someone. Wouldn't you like to know if you could fix it?
We’ve all been in Zoom meetings where you could barely understand someone. Wouldn’t you like to know if you could fix it?

As a result, it should come as no surprise that Zoom usage has exploded from 10 million daily users in December 2019 to more than 300 million daily participants (paid and free) in April 2020.1  

With 74% of companies planning to permanently shift to more remote work post COVID2, it means collaboration apps like Zoom are here to stay.

Zoom is a cloud platform that combines video meetings, voice, webinars, and chat across mobile and fixed environments. Like traditional VoIP applications, Zoom performance is highly sensitive to network latency, and despite using modern compression algorithms, it consumes massive bandwidth when compared to most apps.

Zoom performance is critical to the success of virtual teams

Consistently providing a high-quality end-user experience is critical to the success of virtual teams and is core to driving the productivity enterprises need now more than ever. Here are several must-do’s to maximize your Zoom performance:

  1. Understand how Zoom is being used and how much bandwidth it is consuming on your Internet links. You do not want to oversubscribe your Zoom links. At the same time, if you limit Zoom’s bandwidth, your users will likely experience jitter, which causes choppy audio and blotchy, pixelated video. You’ll want full-stack real-time and historical analysis that gives you visibility into the H.323 and SIP protocols that Zoom uses.
  2. Monitor quality of service (QoS) and ensure your Zoom traffic is appropriately classified so that it receives appropriate bandwidth and prioritization. Flow monitoring gives you visibility into all your DSCP markings and is your choice for QoS analysis.
  3. Monitor and understand the network performance for interrelated components like the session border controller, the routers handling the video traffic, and the external connection to the Zoom cloud service. Make sure they don’t get overwhelmed. Infrastructure management can help here by monitoring the availability of devices and interfaces.
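On the jitter point in step 1, monitoring tools typically quantify it with the RFC 3550 smoothed interarrival-jitter estimator. A minimal sketch (the packet timestamps below are illustrative):

```python
# Minimal sketch of the RFC 3550 interarrival jitter estimator that VoIP/video
# monitors use to quantify the "choppy audio" symptom described above.
# Timestamps below are illustrative.

def interarrival_jitter(send_times, recv_times):
    """Smoothed jitter J(i) = J(i-1) + (|D| - J(i-1))/16, per RFC 3550."""
    jitter = 0.0
    for i in range(1, len(send_times)):
        # D = difference in transit time between consecutive packets
        d = (recv_times[i] - recv_times[i - 1]) - (send_times[i] - send_times[i - 1])
        jitter += (abs(d) - jitter) / 16.0
    return jitter

# Packets sent every 20 ms; the network delivers them with variable delay
send = [0, 20, 40, 60, 80]
recv = [50, 72, 89, 111, 130]        # transit times: 50, 52, 49, 51, 50 ms
print(round(interarrival_jitter(send, recv), 3))   # → 0.447
```

A rising jitter value means packet spacing is getting less regular, which users perceive as choppy audio and pixelated video well before outright packet loss appears.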

How can you be sure? Well, we’re already helping many of our existing customers with their on-prem and VPN-based Zoom performance, including a global financial services firm. We’re helping them work through these exact steps to ensure their Zoom environment is optimized for end-user experience and productivity while also managing its impact on their broader network. We accomplished this using a collection of flows and packets, which provides integrated and seamless monitoring and troubleshooting.


1 https://blog.zoom.us/wordpress/2020/04/01/a-message-to-our-users/

2 https://www.gartner.com/en/documents/3982949/covid-19-bulletin-executive-pulse-3-april-2020

]]>
Top 5 Traps That Can Ruin Any SD-WAN ROI Analysis https://www.riverbed.com/blogs/5-traps-that-ruin-an-sd-wan-roi-analysis/ Fri, 22 May 2020 17:00:00 +0000 https://live-riverbed-blog.pantheonsite.io?p=15109 The dynamic application workloads of today’s organizations are aggressively moving from on-premises data centers to “cloud data centers.” This migration demands highly agile underlying infrastructure, and SD-WAN is becoming crucial to support these web-scale hybrid applications. Network organizations can fall into key traps when performing SD-WAN ROI analysis that may steer them away from the right solution. Beware of these five traps that ruin SD-WAN ROI analysis:

1. Desiring to justify with MPLS cost reduction

The global average cost of 1 Mbps of MPLS can range from 20-30x the cost of 1 Mbps of Internet broadband. This high cost differential can lead IT organizations to justify SD-WAN projects with the cost savings. But to support IaaS and SaaS migration, organizations need higher capacity, and they can be disappointed when the cost of increased Internet broadband capacity offsets the savings from the MPLS circuits. A better approach is to justify the increased capacity as an enabler of the cloud migration while offsetting the cost of circuits.

SD-WAN ROI of WAN Circuits
Cost reduction of MPLS vs cost management with increased Internet capacity

2. Failing to quantify uptime benefits

A key benefit of SD-WAN is its level of automation and its policy-based access management, which takes a network-centric rather than device-centric approach. When managing hundreds of locations and controlling optimal application traffic flow, this automation can eliminate a significant amount of downtime caused by manual operational workflows. Various industry reports estimate that downtime can cost a company anywhere from $300,000 to about $4,000,000 an hour. Do not overlook the benefits of uptime improvements in your SD-WAN ROI analysis.
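A rough way to put a number on this benefit, using the industry-reported hourly cost range above and an assumed number of outage hours avoided (your own incident data would supply the real figure):

```python
# Rough, hypothetical model for quantifying uptime benefits.
# The hourly cost range comes from the industry estimates cited above;
# hours_avoided_per_year is an assumption for illustration.
downtime_cost_per_hour = (300_000, 4_000_000)  # industry-reported range, $/hour
hours_avoided_per_year = 4  # assumption: automation removes 4 outage hours/year

low = hours_avoided_per_year * downtime_cost_per_hour[0]
high = hours_avoided_per_year * downtime_cost_per_hour[1]
print(f"Annual uptime benefit: ${low:,} to ${high:,}")
```

Even with conservative assumptions, the uptime line item can dwarf circuit savings in the ROI model.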

3. Ignoring benefits of high-impact IT initiatives

The operational efficiency gains of software-defined WANs over traditional router-based networks free high-value engineers from mundane operational tasks to drive high-impact corporate initiatives. The quantifiable benefit will be at least equal to the run rate of these senior IT staff and can have a multiplying effect on the overall benefits to the company.

SD-WAN ROI must account value of critical projects
Benefits of freeing up staff to work on high-value projects should not be overlooked

4. Expecting 1-to-1 cost replacement

IT organizations often fall into the trap of taking a simplistic approach and comparing hardware costs alone. Replacing a traditional router with an SD-WAN appliance can give a false sense of a 1-to-1 hardware cost comparison. SD-WAN brings with it a slew of benefits – automation, operational gains, hybrid infrastructure enablement, application-level intelligence, integrated application acceleration, the flexibility of integrated or service-chained security, and branch IT sprawl reduction, to name a few. A hardware-only cost comparison fails to capture the value of these benefits in an ROI analysis.

5. Overlooking user productivity gains

One of the core advantages of SD-WAN is the ability to steer application traffic intelligently across multiple underlying technologies. With workloads spread across on-premises data centers, private cloud, public cloud, and SaaS applications, this policy-based application access is paramount to the digital journey. SD-WAN is key enabling infrastructure for successfully adopting applications such as Office 365, Salesforce, Workday, and others. As a result, the pace of user adoption of cloud and the associated productivity gains should not be overlooked in the ROI analysis.

Download the ROI guide by Enterprise Management Associates on Riverbed SteelConnect EX.

Are there other traps that you have come across in an SD-WAN ROI analysis? Share your thoughts in the comments below.

]]>
Enterprise SD-WAN Trade-Offs Part 1: Is SD-WAN a Piece of Cake? https://www.riverbed.com/blogs/is-sd-wan-a-piece-of-cake/ Fri, 15 May 2020 16:32:00 +0000 https://live-riverbed-blog.pantheonsite.io?p=14984 This blog is the first in a 4-part series that takes a detailed look at the SD-WAN trade-offs that commonly emerge during a network transformation project–and more importantly, how to avoid pitfalls.

To encourage you to read on, or take a detour for important background information, here are two things we won’t be covering (with quick links for more information):

  • This is not a What is SD-WAN? technology primer.
  • It’s also not an enumeration of SD-WAN benefits.

Indeed, at this point, it’s a foregone conclusion that the branch router we’ve known and loved (or loved/hated, perhaps) has outlived its primacy. It’s also generally understood that a Software-Defined WAN (SD-WAN) is much more apt to take you, your network, and your company where you need to go in the next decade.

Is SD-WAN all unicorns and rainbows?

SD-WAN sounds great in theory. But is there a catch? According to Gartner, only 20% of enterprises have successfully adopted SD-WAN in at least some of their remote sites to date. Why not more enterprises? Why not more sites? Doesn’t SD-WAN equate to network nirvana? And isn’t it supposed to be easy?

To cut the suspense, the answer to this question is an emphatic, “No!” SD-WAN alone won’t take you to network nirvana. There are major pitfalls, the most common of which come in the form of unfortunate trade-offs that all too often emerge and can reduce – or even wipe out – the benefits you were seeking to gain with SD-WAN in the first place.

Here are the three most common trade-offs that you will undoubtedly face:

Enterprise SD-WAN trade-off #1: destination vs. journey

Is transitioning to SD-WAN more trouble than it’s worth?

The minefield of brownfield SD-WAN integration

We all want SD-WAN. But it’s impossible to transform the old into the new all at once. And so, you have to traverse an intermediate phase–the brownfield–where some sites are connected via SD-WAN and others remain connected via conventional routers. The difference between navigating this phase unscathed and bringing your network to a screeching halt has everything to do with the ability of the SD-WAN solution to interface with your existing network and cope with its topological complexities, one-off hacks, and special-case router configs that have built up over time. Those hidden network demons that have been lurking unnoticed will inevitably (thanks, Murphy!) rear their ugly heads once the transformation is underway.

Part 2 in this blog series will share important information about best practices and critical SD-WAN features that will increase your chances of success as you navigate the minefield of the brownfield.

Enterprise SD-WAN trade-off #2: cost vs. performance

Is it possible to maintain WAN capacity and increase app performance?

Some of you might be thinking, “Wait! I thought more network capacity equated to better app performance.” Well, like most things in life–it depends. Sometimes more capacity absolutely leads to better application performance. Sometimes more capacity does absolutely nothing to improve application performance. And sometimes, adding capacity actually reduces application performance!  Woah, not good.

Adding bandwidth doesn't always equate to better app performance

Part 3 in this blog series takes this topic head-on and will offer fresh insights into the following:

  • How can I tell if and when app performance will improve by adding more bandwidth?
  • Why on earth could adding more bandwidth actually reduce application performance?
  • If bandwidth isn’t bottlenecking app performance, what is? Latency? Link quality? How can I tell?
  • Is app performance being dictated by the behavior of networking protocols, or application protocols, or both?
  • And, most importantly, once I understand the true causes and conditions of insufficient app performance, what are the best tools, techniques and technologies available that can improve the situation?


Enterprise SD-WAN trade-off #3: user experience vs. security

Is it possible to meet user expectations and maintain network security?

One benefit of SD-WAN is that it makes it easy to steer certain traffic from remote sites toward your on-premises data centers and steer other traffic from remote sites directly to the Internet. Once selective traffic steering is made easy, there’s less of a reason to backhaul Internet-bound traffic from remote sites through your data center. Doing so only adds latency between users and their Internet-hosted apps and adds unnecessary traffic on your network. Instead, steer Internet-bound traffic directly from the branch to the Internet. Less latency. Less overall network traffic. Better performance.
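Conceptually, selective traffic steering is a policy lookup. The sketch below is purely illustrative – it is not the configuration syntax of any real SD-WAN product – mapping hypothetical application categories to a forwarding decision:

```python
# Illustrative only: a toy policy table mapping application categories
# to a forwarding decision (names are hypothetical, not product syntax).
steering_policy = {
    "saas":         "direct-internet",  # e.g. Office 365 -> local breakout
    "internal":     "mpls-datacenter",  # business apps stay on the private WAN
    "recreational": "direct-internet",  # keep this traffic off the backhaul
}

def next_hop(app_category: str) -> str:
    # Default to backhaul when a category is unknown (the conservative choice).
    return steering_policy.get(app_category, "mpls-datacenter")

print(next_hop("saas"))     # direct-internet
print(next_hop("unknown"))  # mpls-datacenter
```

The design point is that the decision happens per application category, not per circuit – which is exactly what makes local Internet breakout easy to adopt incrementally.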

Avoid trading network security for user experience

The problem, of course, is that steering traffic directly from the branch to the Internet comes at the cost of an expanded threat perimeter for your network. You’ve traded off network security for app performance.

Part 4 in this blog series will investigate remedies for this situation, including some nuances that might not be so obvious:

  • What are the best ways to effectively protect the edges of my network without breaking the bank?
  • And what if I have to continue backhauling Internet-bound traffic due to regulatory compliance or corporate policy? Is there a way to overcome the negative effects of higher latency?

Summary

Let’s close out by returning to the title of this blog, “Is SD-WAN a piece of cake?” The answer, as you might expect, is yes … and no … and yes!

  • Yes – relative to managing conventional routers, SD-WAN is a quantum leap in the direction of simplicity and agility. However…
  • No – the benefits of SD-WAN do not appear magically on their own. Without careful planning and attention to the pitfalls that can arise during this transformation of your network, your project will not feel anything like “a piece of cake.”  And so…
  • Yes! – if you are mindful of the trade-offs, you can have your cake and eat it too. This is when you’re on the true path of wisdom that will ultimately lead to SD-WAN success.

Have your cake and eat it too!

We hope you enjoy this series and that it helps you tackle your SD-WAN project with greater confidence, even ease. For my part, I’m going to find a delicious piece of cake. And I’m going to eat it!

Nothing could be simpler.

]]>
Sales Leadership During a Pandemic: What Does It Look Like? https://www.riverbed.com/blogs/sales-leadership-during-pandemic/ Thu, 14 May 2020 00:58:02 +0000 https://live-riverbed-blog.pantheonsite.io?p=15030 Whether it’s with your team, family, company or friends, it feels like there’s just one conversation in the world right now, with good reason. COVID-19 has taken all of our plans–both personal and professional–and chucked them right out the window.

Call it pivoting, regrouping, recalibrating or whatever you like. The fact is, we are all in the same boat: rethinking our once well-designed plans against a fluid landscape that changes not by the day but seemingly by the hour. At times it feels chaotic, but it’s also true that the challenges posed by this pandemic are not insurmountable. The reality is that many companies will emerge on the other side of this, perhaps not unscathed but definitely unbroken.

For companies with sales staff now working at home–and customers that it’s no longer possible to visit–there’s one major question: What does sales leadership look like in a pandemic environment?

Here’s what I’m telling my staff.

1. Focus on what you can control

In a crisis, people seek order and stability. With so much that’s not remotely in your power to change, it’s reassuring–and productive!–to focus on the elements within your sphere of influence. For sales teams, that should really center on developing their pipeline, positioning for the future, and driving real-time results right now. All are doable.

2. Solve for the problems of today

There is no business as usual now. Your salespeople should never waste their customers’ time – or their own – on conversations about things that will have no impact right now or in the foreseeable short term. They should already understand that big projects requiring substantial investment will get back-burnered as CapEx and OpEx thin. The nice-to-haves that might once have enticed customers are now out of the question.

Instead, turn to identifying and solving customers’ immediate needs. For us, that’s helping companies ensure their work at home workforces have the network visibility and app acceleration they need to be successful. For you, it will be something else unique to your company and offering. Have candid conversations with your customers now. Where’s their pain? And how can you help them stop hurting?

3. Remember your ability to connect is paramount

Truly excellent salespeople can influence a customer no matter the medium. For these individuals, doing their jobs remotely is a complete non-event. They know how to leverage their ecosystem for support. They’re excellent writers, able to get in touch and connect with customers over email. They know how to ask the right questions and listen to what’s said (and what’s unsaid) on a call. They can pull together a useful webinar, proof of concept or trial solution. They’ve got virtual demonstrations down cold.

But even more critically, a great salesperson can deftly cultivate trust to forge genuine connection. They are credible because they know how to connect customer pain to their company’s unique value proposition. Doing that well is more important than ever because customers are, frankly, facing quite a lot of challenges.

4. Enable your people in this new environment

Regardless of function, everyone is being asked to be more flexible. But we need to equip our sales teams to pivot more quickly than most because our usual go-tos could be off the table. There are no more lunches and golf outings or happy hours, onsite customer visits, and networking events. That’s the reality for right now.

The victorious teams will be the ones who quickly adjust to this new normal and move to help their sales executives with enablement designed for this virtual era. Are you quickly ramping up to provide them the tools to demo or present at a distance? How about teaching them how to have a productive customer conversation when you never actually sit face to face with them in real life? As a sales leader, have you mastered having the executive conversation in our new environment? You should.

5. Watch your email messages for tone and relevance

Open your email inbox and there are probably more earnest emails on “Our response to the COVID-19 crisis” than you can count, all from companies you maybe did business with one time (if ever).

If you’re relying on email as one tool in your inside sales arsenal, that’s fine. But make sure you’re crafting a message that is sticky, specific and solves the problems of today. I do open inbound emails, sometimes from genuine interest and occasionally from morbid curiosity. Marketing messages with generic, tone deaf subject lines like, “CAN WE HELP YOU MAKE BETTER CONNECTIONS WITH CUSTOMERS?” have a one-way ticket to the trash bin.

It’s clear to me that, as with so many things, this crisis should change how we measure the sales organization. If your team can’t sell a technology that’s clearly hyper-relevant for this time, it means you don’t have the right sales talent on your bench and your messaging isn’t hitting the mark. But if your organization excels at selling in this new remote paradigm, just imagine how powerful they’ll be once the crisis diminishes. Because whether at home or in the office, you’ll know they’re capable of creating authentic relationships and delivering messaging that works.

That is a gamechanger.

]]>
Riverbed and Gigamon: A Great Network Visibility & Analytics Partnership https://www.riverbed.com/blogs/riverbed-and-gigamon-partnership/ https://www.riverbed.com/blogs/riverbed-and-gigamon-partnership/#comments Mon, 11 May 2020 20:28:00 +0000 https://live-riverbed-blog.pantheonsite.io?p=14961 Great partnerships make a difference! Just think of Michael Jordan and Scottie Pippen, Elton John and Bernie Taupin, Bill Gates and Paul Allen. These relationships are known for their amazing success in sports, music, and business. Each partner brought a different skill set to the relationship. Without Pippen’s defense, Michael Jordan would not have six NBA championships. Without Bernie Taupin, Elton John wouldn’t have sold 300 million records. If Paul Allen hadn’t negotiated the deal to purchase the QDOS operating system, Microsoft would not have changed the PC industry forever.

Just as Bill Gates and Paul Allen make a great partnership, so do Riverbed and Gigamon. The basis of a good partner is that they complement each other.

A great partnership consists of many things; top of mind are shared vision, mutual contribution, and solid relationships. As the leader of a sales organization, I am always looking for a great partner that will help my sellers – and more importantly, our customers – be successful in their digital journey. Gigamon is a partner that brings these characteristics to Riverbed, our partners, and our customers. Together we are able to meet the needs of our customers. Together we empower our customers to maximize their investments in their digital infrastructure. We do this by assembling data across the hybrid infrastructure to ensure the critical services the business depends upon are performing to their maximum potential.

Here’s how it works: The Gigamon Visibility and Analytics Fabric captures all network data, processes it and sends it to Riverbed Network Performance Management solutions. Digital teams can leverage advanced capabilities for optimizing network loads, analyzing applications, and detecting and responding to threats. Together our solutions can scale to the needs of the largest networks in the world and quickly pinpoint gaps in IT performance that could disrupt business performance.

Additionally, we share a common partner ecosystem where our customers can engage industry experts who share the Riverbed and Gigamon joint vision to maximize digital performance and business impact.

We would welcome the opportunity to come together to discuss your goals and determine how our partnership can help ensure that you achieve them.

We also invite you to our upcoming joint webinar, “Network Resiliency and Security Tips for a Remote Workforce,” on May 21 at 2:00 PM ET to learn more about this great partnership and how we can help you ensure remote workforce productivity.

]]>
https://www.riverbed.com/blogs/riverbed-and-gigamon-partnership/feed/ 1
Riverbed Application Acceleration for AWS FSx https://www.riverbed.com/blogs/riverbed-application-acceleration-for-aws-fsx/ Fri, 08 May 2020 18:29:00 +0000 https://live-riverbed-blog.pantheonsite.io?p=14799 Amazon Web Services (AWS) continuously adds new services and features to enhance the cloud experience. Amazon FSx delivers that experience for Windows file shares so it’s critical that applications accessing FSx perform well. In this post, I will cover both the features and benefits of using Riverbed’s Application Acceleration solutions to enhance the user experience for AWS FSx. 

What is Amazon FSx?

Amazon offers a fully managed, native Microsoft Windows file system called Amazon FSx for Windows File Server. Built on Windows Server, FSx provides administrative features such as Microsoft Active Directory (AD) integration, user quotas, and end-user file restore, and is accessible via SMB3. Windows-based applications that require file storage in AWS can access this file server, which is cost-optimized for short-term workloads.

Accessing Windows files via SMB3 on Amazon FSx can be challenging when branch offices are spread across continents. Because SMB3 is a chatty protocol, transferring data over an Internet link may take a long time. For example, copying a 2.6 MB AutoCAD folder with design files takes 1 minute and 33 seconds from Mumbai to AWS in California. Typical AutoCAD files run to a few GBs, which may take hours and sometimes even days to copy, resulting in lost productivity. My measurements show average speeds at work ranging from 5 Mbps to 10 Mbps; at home, average speeds are 700 Kbps to 900 Kbps.

Mumbai to California (AWS) measurements
  • Latency: 236 ms
  • Bandwidth: 121.9 Mbps (uplink), 29.3 Mbps (downlink)
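A back-of-the-envelope sketch shows why a chatty protocol suffers at this latency. The per-file operation count and file count below are assumptions for illustration; only the 236 ms RTT comes from the measurements above:

```python
# Why chatty protocols suffer over high latency: serialized round trips
# dominate small-file transfers. The RTT is the measured Mumbai-to-California
# figure above; ops_per_file and files are illustrative assumptions.
rtt_s = 0.236      # measured round-trip time, seconds
ops_per_file = 6   # assumed SMB round trips per small file (open, reads, close, ...)
files = 60         # assumed number of files in the 2.6 MB folder

serialized_roundtrip_time = rtt_s * ops_per_file * files
print(f"Latency cost alone: ~{serialized_roundtrip_time:.0f} s")
```

Under these assumptions, latency alone accounts for roughly 85 seconds before a single byte of payload is considered – which is why the 2.6 MB copy takes on the order of a minute and a half, not a fraction of a second.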

With many employees working from home due to the Coronavirus–and potentially staying at home as remote work becomes more popular–enterprises need to ensure consistent performance of SaaS, cloud, and on-premises applications to any user, regardless of location or network type.

Riverbed delivers remote work solutions built for today’s dynamic and distributed workforce. Through a combination of the following WAN optimization and application acceleration offerings, Riverbed ensures end-to-end acceleration:

Riverbed WAN Optimization and Application Acceleration

Application acceleration for Amazon FSx

Riverbed accelerates Amazon FSx for remote/mobile users, branch office users, and data center applications using a combination of Riverbed products such as SteelHead, Client Accelerator, and Cloud Accelerator. Client Accelerator offers SteelHead benefits for mobile/remote workers using laptops to optimize applications across branches, data centers, and cloud services. Client Accelerator is configured by SteelCentral Controller for SteelHead Mobile (SCSM) using centralized policies deployed by IT administrators.

Cloud Accelerator runs in infrastructure-as-a-service (IaaS) environments on leading platforms such as Microsoft Azure, AWS, and Oracle Cloud. User productivity is enhanced because Cloud Accelerator optimizes and accelerates applications to deliver maximum cloud value to the business.

To accelerate Amazon FSx, deploy Cloud Accelerator for AWS in the same VPC that hosts the FSx server. To deploy FSx, please refer to the AWS deployment guide at https://docs.aws.amazon.com/fsx/latest/WindowsGuide/getting-started.html.

The FSx server connects to the enterprise’s Active Directory domain, so users and applications can access the FSx file shares with their domain credentials.

How to install Riverbed Cloud Accelerator

There are three ways to install Cloud Accelerator (Cloud SteelHead virtual appliance), as described below.

1) Riverbed Community Cookbook

You can use the Riverbed Community Cookbook to install Cloud Accelerator on AWS; it offers a single-click launch with minimal configuration and is easy to set up. It can be configured in the two modes described below.

  • To deploy into an existing VPC, input details such as the VPC ID, security group, subnet details, and more.

    Deploying Cloud Accelerator in an Existing VPC
  • To deploy into a new VPC, input VPC details such as the zone, CIDR blocks, an EC2 key pair to enable SSH, an IAM role, and more. The Cloud Accelerator is created in the new VPC.

Create VPC and deploy Cloud Accelerator

2) Manual deployment (requires a Riverbed support login account)

Here are the steps required to create a Cloud Accelerator for AWS:

Launch the AMI that Riverbed Support shared with you.

Under Configure Instance Details, expand Advanced Details and enter the following user data:

Configure Instance Details

ds=/dev/xvdq
passwd=<Your preferred password>
appname=<your org Name ManuallyDeployedSteelHead>
lshost=cloudportal.riverbed.com
rvbd_dshost=cloudportal.riverbed.com
lott=XXXXXXXX-XXXX-XXXX-XXXX-XXXXXXXXXXXX

where:

  • ds – The device node where the Cloud Accelerator expects the data store EBS volume to appear. Due to changes in EC2 architecture, set this to /dev/xvdq.
  • passwd – The password hash for the admin user.
  • appname – The name of the Cloud Accelerator.
  • lshost – The fully qualified domain name of the licensing server, which is generally the Riverbed Cloud Portal.
  • rvbd_dshost – The fully qualified domain name of the discovery server, which is generally the Riverbed Cloud Portal.
  • lott – The one-time token used to redeem the license; you can obtain it from the Cloud SteelHead license on the Riverbed Cloud Portal.
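If you script your deployments, the user-data block above can be generated rather than typed. This is a hypothetical helper, not a Riverbed tool; the passwd, appname, and lott values are placeholders you must replace with your own:

```python
# Hypothetical helper that renders the key=value user-data block shown above.
# All sensitive values below are placeholders, not working credentials.
def render_user_data(params: dict) -> str:
    # One key=value pair per line, in insertion order.
    return "\n".join(f"{key}={value}" for key, value in params.items())

user_data = render_user_data({
    "ds": "/dev/xvdq",
    "passwd": "<your-password-hash>",                # placeholder
    "appname": "example-cloud-accelerator",          # placeholder name
    "lshost": "cloudportal.riverbed.com",
    "rvbd_dshost": "cloudportal.riverbed.com",
    "lott": "XXXXXXXX-XXXX-XXXX-XXXX-XXXXXXXXXXXX",  # placeholder token
})
print(user_data)
```

The rendered string can then be pasted into the console’s user-data field or passed to whatever launch tooling you use.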

Add storage

  • Add and configure two volumes in addition to the root volume: one serves as the configuration and management services disk (it stores the Cloud Accelerator software), and the other serves as the data storage disk.
  • Click Add a New Volume
  • Under the Device column, select /dev/sdk for the configuration and management services disk, and select /dev/sdm for the datastore disk.
  • Under the Size (GiB) column for each drive, specify a size based on the Cloud Accelerator model. See Cloud Accelerator models and required virtual machine resources.
  • Under Volume Type, you can choose Magnetic unless the Cloud Accelerator model you are deploying requires a solid-state drive (SSD).

    Add a New Volume

Configure security group

  • Choose a security group for the virtual appliance.
    • To connect the Cloud Accelerator, the Discovery Agent, and the client-side SteelHead, configure the security group to allow:
      • UDP port 7801 inbound, for connections from the Discovery Agent.
      • TCP ports 7800 and 7810-7850 inbound, for connections from the client-side SteelHead.
      • TCP ports 22, 80, and 443 inbound, for CLI and UI connections.
    • Click Review and Launch.

        Select security group
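If you automate the security group instead of using the console, the same rules can be expressed as the IpPermissions structure that boto3’s EC2 client accepts in authorize_security_group_ingress. The sketch below only builds the rule data (no AWS call is made), and the 0.0.0.0/0 source is a placeholder you should narrow to your own address ranges in practice:

```python
# Build the ingress rules listed above in the IpPermissions shape used by
# boto3's ec2.authorize_security_group_ingress(). Data only -- no AWS call.
def tcp_rule(from_port: int, to_port: int, cidr: str = "0.0.0.0/0") -> dict:
    # NOTE: 0.0.0.0/0 is a placeholder; restrict the source CIDR in practice.
    return {"IpProtocol": "tcp", "FromPort": from_port, "ToPort": to_port,
            "IpRanges": [{"CidrIp": cidr}]}

ip_permissions = [
    {"IpProtocol": "udp", "FromPort": 7801, "ToPort": 7801,
     "IpRanges": [{"CidrIp": "0.0.0.0/0"}]},                 # Discovery Agent
    tcp_rule(7800, 7800),                                    # client-side SteelHead
    tcp_rule(7810, 7850),                                    # client-side SteelHead range
    tcp_rule(22, 22), tcp_rule(80, 80), tcp_rule(443, 443),  # CLI and UI
]
print(len(ip_permissions))  # 6 rules
```

This list would be passed as the IpPermissions argument, along with your security group ID, if you chose to wire it up with boto3.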


3)  Riverbed Cloud Portal deployment (requires a Riverbed support login account)

Cloud Accelerator needs to be configured with Active Directory domain services so that it joins the same domain as FSx; the Active Directory can be an external AD or an AWS-managed AD. Client Accelerator is managed and configured by SteelCentral Controller for SteelHead Mobile, and it automatically connects to the cloud service so that connections to the Amazon FSx server are accelerated. See Riverbed Cloud Portal deployment (requires a Riverbed support login account).

Typical FSx use case setup

Testing methodology

Performance tests focused on transaction response time, compared under three different conditions (when possible):

  • Baseline transaction – without the application acceleration setup
  • Cold transaction – with the application acceleration setup (the first transaction, with an empty cache)
  • Warm transaction – with the application acceleration setup and a non-empty SteelHead cache (second and subsequent transactions)

For our test, we set up a standard set of reference MS Office files (Word and PowerPoint), PDF files, and AutoCAD design files of different sizes for the Windows file sharing test. The test ran in a setup similar to the graphic above (from Mumbai to AWS, California).

We observed significant benefits with Riverbed application acceleration of FSx. The measurements below are given in seconds, with improvement factors expressed as multiples (X).

The optimization ratio highlights the benefit of Riverbed SteelHead on user experience: it is the factor by which application acceleration divides application response time.

Each transaction was run twice under each of the three conditions to avoid artifact effects. We took the best case of the baseline values (lowest transaction time) and the worst case of the cold and warm transactions (highest transaction time). The optimization ratios were computed using the formulas below:

  • Cold Transaction Improvement over baseline = Baseline value/Cold Transaction value
  • Warm Transaction Improvement over baseline = Baseline value/Warm Transaction value
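Applying the two formulas to the 100 MB PDF copy measurements from the test results that follow reproduces the published ratios:

```python
# The two improvement formulas above, applied to the 100 MB PDF copy
# measurements reported in the test results section.
def improvement(baseline_s: float, transaction_s: float) -> float:
    # Ratio > 1 means acceleration made the transaction faster.
    return baseline_s / transaction_s

baseline, cold, warm = 37.43, 30.14, 7.98  # seconds, from the PDF copy test

print(f"Cold improvement over baseline: {improvement(baseline, cold):.2f}X")  # 1.24X
print(f"Warm improvement over baseline: {improvement(baseline, warm):.2f}X")  # 4.69X
```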

Test results

Windows File Sharing

Copy PDF file: 100 MB
  • Baseline value: 37.43 seconds
  • Cold Transaction value: 30.14 seconds
  • Cold Transaction improvement over baseline: 1.241X
  • Warm Transaction value: 7.98 seconds
  • Warm Transaction improvement over baseline: 4.69X


Copy AutoCAD folder structure: 1.95 GB (1992 files)
  • Baseline value: 11340.12 seconds
  • Cold Transaction value: 2411.47 seconds
  • Cold Transaction improvement over baseline: 4.70X
  • Warm Transaction value: 1583.78 seconds
  • Warm Transaction improvement over baseline: 7.16X


Copy of Word file: 99.5 MB
  • Baseline value: 54.62 seconds
  • Cold Transaction value: 30.31 seconds
  • Cold Transaction improvement over baseline: 1.80X
  • Warm Transaction value: 7.76 seconds
  • Warm Transaction improvement over baseline: 7.038X

For the following transaction, a cold cache measurement was not taken, since the file had already been transferred and was being worked on.

Save of Word file: 99.5 MB
  • Baseline value: 20.69 seconds
  • Warm Transaction value: 16.69 seconds
  • Warm Transaction improvement over baseline: 1.239X

For the following transaction, a cold cache measurement was not taken, since the file had already been transferred and was being worked on.

Open Word file: 99.5 MB
  • Baseline value: 19.64 seconds
  • Warm Transaction value: 13.59 seconds
  • Warm Transaction improvement over baseline: 1.445X

LAN vs. WAN peak rate ratio of ~15X (218 Mbps vs. 14.6 Mbps), and an excellent average ratio of ~6X (8.7 Mbps vs. 1.4 Mbps), on encrypted SMB3 connections:

LAN vs. WAN peak rate ratio (218 Mbps vs. 14.6 Mbps)

66% data reduction on encrypted SMB3 connection for the above operation on the cold transaction:

66% data reduction on Warm Transaction

93% data reduction on the encrypted SMB3 connection for a warm transaction on FSx:

93% Data Reduction

106.7X capacity increase (LAN throughput of 981.5 MB translated to 9.2 MB of WAN throughput):

106.7X capacity increase
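For the curious, the capacity-increase figure is simply the LAN-to-WAN byte ratio, and data reduction compares the bytes that actually crossed the WAN against the bytes delivered on the LAN side. Note that for this particular run the byte-level reduction works out to roughly 99%; the 66% and 93% figures above come from separate cold and warm operations:

```python
# How the capacity and reduction figures are derived from LAN/WAN volumes.
# The 981.5 MB / 9.2 MB pair is the capacity-increase measurement above.
def data_reduction_pct(lan_mb: float, wan_mb: float) -> float:
    # Fraction of LAN-side bytes that never had to cross the WAN.
    return (1 - wan_mb / lan_mb) * 100

lan_mb, wan_mb = 981.5, 9.2
capacity_increase = lan_mb / wan_mb

print(f"Capacity increase: {capacity_increase:.1f}X")               # 106.7X
print(f"Data reduction for this run: {data_reduction_pct(lan_mb, wan_mb):.1f}%")
```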

Conclusion

Riverbed application acceleration provides tremendous benefits to the workforce, phenomenally improving user productivity. It cuts costs by lowering bandwidth requirements and reduces AWS egress charges by eliminating several gigabytes of traffic. The user experience is dramatically enhanced.

]]>
SaaS Accelerator Configuration Walkthrough https://www.riverbed.com/blogs/saas-accelerator-configuration-walkthrough/ Thu, 07 May 2020 00:40:06 +0000 https://live-riverbed-blog.pantheonsite.io?p=14839 Today’s work environment is certainly not what anyone had anticipated at the beginning of this year. Most of the world is now forced to work from home (WFH) unless deemed an essential employee, which adds several challenges for an organization. While organizations have scrambled to implement company-wide VPN and issue laptops to employees who normally sit in an office, one aspect that’s proving to be a challenge is the latency of home broadband networks. Even if your organization has migrated to cloud apps such as Microsoft Office 365, one thing rings true: from the time your application traffic leaves the laptop to the time it reaches your cloud provider, IT organizations have little control over the latency that users will experience. So in this article, I’m going to walk you through the process of configuring Riverbed SaaS Accelerator, Cloud Controller, and Client Accelerator so that end users can immediately benefit from our application acceleration and optimization techniques. Let’s get started.

Solution components

There are three components to the solution I’m covering in this article.

  1. Riverbed SaaS Accelerator
  2. SteelCentral Controller for SteelHead Mobile
  3. Client Accelerator

I’m going to cover the configuration of each of these, but first let me give you a brief overview of all three, beginning with SaaS Accelerator. SaaS Accelerator is a Riverbed-hosted SaaS offering: you’re given access to SaaS Accelerator Manager, which acts as your cloud-hosted management interface to configure and monitor SaaS acceleration. SteelCentral Controller for SteelHead Mobile is a virtual appliance you can deploy in your data center or in the cloud, for example in Azure.

Azure Marketplace

Finally, Client Accelerator, formerly known as SteelHead Mobile, is a client application for Windows or macOS that performs client-side application optimization between the user and a cluster of Riverbed Cloud SteelHeads. Each of these three components is required to accelerate application traffic. With that said, let’s get into the configuration.

Configuration walkthrough

If you prefer to watch the configuration, I invite you to view the following video where I walk and talk you through each of the following steps. If you’re the reading type, you can skip it and continue on.

Now, if you’re still with me, let’s start by logging into SaaS Accelerator Manager. You can see this in the image below; what you’re looking at is the first step in the configuration. A root certificate needs to be generated, and the only required field is the Common Name, which I’ve set to TE-Lab.

Create Cert

The next step is to enable acceleration for an application. In the image below, no applications are being accelerated yet. Clicking the “Accelerate Application” button brings up a configuration page that requires your attention.

Enable Service

To enable the service, we need to select which application we want to accelerate. We support Office 365, Salesforce, and Box to name a few. Next, we select the Region. Finally, we select the number of users that we will be accelerating for at any given time. These steps are important because of what needs to happen in the background after you click submit. So what’s that? Well, in short, a cluster of SteelHeads along with load balancers and additional network plumbing is brought up in the cloud, right in front of the Office 365 tenant in the United States (based on the region we selected). This lets us accelerate traffic from end-to-end, dropping your accelerated traffic off at the front door of the service.

Selection Options

You can see that process happening in the background in the image below. The service status will provide updates until the service is up and ready. This process usually takes less than 10 minutes.

Automation Runs

While we wait for the service to be ready, we now jump over to the SteelCentral Controller for SteelHead Mobile. Our first action is to tie the SaaS Accelerator Manager (SAM) to the controller and vice-versa. To do this, we need to provide the FQDN of SAM and the Registration token that you retrieve from SAM under the Client Appliances page.

Enter FQDN

This immediately puts us on the Gray List.

On Gray List

To get us off the gray list, we need to head back to SAM. We now see the serial number entry for the controller in SAM and we can click it to move it to the white list.

Change Gray List

In the following image, you can see that we have moved the controller to the white list. At this point, the two will communicate with one another.

Set White List

Moving back to the controller, we enable acceleration. When we apply the change, you will see a list of applications and the service endpoint for acceleration. This comes from SAM, and the information is based on the cluster that was created when we enabled the service.

Enable Service

Now we need to create a policy on the controller to tell our client to accelerate Office 365 traffic. We’re going to apply this policy to an MSI package later on.

Create Policy

Once the policy is created, we need to create the rules. Here I’m creating one of the four rules required for Office 365 traffic. Most of the rule is left at its default values; however, we set the Destination Subnet or Host Label to SaaS Application and then select the app from the dropdown.

Add Rules

After this has been done for all four of the Office 365 traffic types, we can see each of the rules.

Review Rules

In addition to the In-Path rules, we need to enable MAPI over HTTP optimization.

Enable MAPI

Then we need to enable SSL Optimization.

Enable SSL

And finally, we enable SaaS Acceleration. Clicking “Update Policy” finalizes the policy creation.

Enable SaaS

Next, we create a package that ties the policy to the client. Here I am creating a package called TE-LAB. The group is TE-LAB and the endpoint settings will come from the TE-LAB policy where we created the in-path rules. You can also grab the MSI from here. On a side note, the controller requires you to save the configuration. You can see the save button in the menu bar in the image below. Make sure you click it!

Create Package

At this point, the service is ready to rock, but we need to throw some hosts at it. We’ve downloaded the client installer to the machine you see in the following image. Let’s run that file and work through the installation wizard. This is pretty basic stuff, so I’ll spare you the multiple screenshots. One thing I should mention, though: you probably wouldn’t install this manually on all your clients. Use your software management tool of choice to push the installer to multiple clients at once.

Start Install

After the install completes and the client is running, it’s time to send some traffic. In the image below, I’ve logged into my SharePoint site and I’m downloading a 500MB file. The first time I did this without Client Accelerator installed, it took me about two minutes to download.

Begin Download

After Client Accelerator was installed, it took me about 30 seconds.

Download Speed

Why such a difference? Well, have a look at the image below. Here we are looking at Client Accelerator, and you’ll notice the Performance Statistics showing 98% data reduction. What’s happening here? This is just one of the techniques we use to accelerate traffic. A local cache is created on the client; the default cache size is 10GB. As files are transferred, their data is cached, so when a file is retrieved again we don’t have far to go. And when a file changes, only the changed data needs to be transferred, not the entire file.
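
The caching behavior described above can be sketched as chunk-level deduplication. This is a simplified model for intuition only, not Riverbed’s actual algorithm (which, among other things, would use smarter segmentation and send compact references for cached chunks):

```python
import hashlib
import os

CHUNK = 4096  # bytes; fixed-size chunking here, purely for illustration

def bytes_to_send(data: bytes, cache: set) -> int:
    """Count bytes that must cross the WAN; cached chunks travel as references."""
    sent = 0
    for i in range(0, len(data), CHUNK):
        piece = data[i:i + CHUNK]
        digest = hashlib.sha256(piece).digest()
        if digest not in cache:      # cache miss: ship the whole chunk
            cache.add(digest)
            sent += len(piece)
    return sent

cache = set()
original = os.urandom(512 * 1024)          # a 512 KB file of random data
cold = bytes_to_send(original, cache)      # cold cache: full transfer
warm = bytes_to_send(original, cache)      # warm cache: nothing to send
# Edit 10 bytes inside the second chunk; only that one chunk is re-sent:
edited = original[:CHUNK] + os.urandom(10) + original[CHUNK + 10:]
delta = bytes_to_send(edited, cache)
```

Fixed-size chunking breaks down when inserted data shifts every subsequent boundary, which is one reason production deduplication segments data by content rather than by offset.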

Review Client

There are, of course, other techniques at work. For example, we also reduce the number of application turns by handling that chatter locally between the client and the agent (Client Accelerator), rather than sending all of that protocol data over the network and waiting for a response from the service side when most of it is unnecessary and inefficient.
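
A back-of-envelope calculation shows why cutting application turns matters so much on high-latency home broadband. The numbers below are illustrative assumptions, not measurements:

```python
def transfer_seconds(turns: int, rtt_ms: float, payload_mb: float, mbps: float) -> float:
    """Completion time: serialization delay plus one round trip per application turn."""
    serialization = payload_mb * 8 / mbps          # seconds to push the bits
    return serialization + turns * rtt_ms / 1000   # plus latency paid per turn

# A chatty protocol making 200 round trips over a 60 ms home-broadband path:
slow = transfer_seconds(turns=200, rtt_ms=60, payload_mb=50, mbps=100)  # 16.0 s
# Handling most turns locally between client and agent leaves, say, 10:
fast = transfer_seconds(turns=10, rtt_ms=60, payload_mb=50, mbps=100)   # 4.6 s
```

With the same bandwidth and the same payload, almost all of the difference comes from round trips avoided, which is exactly what local turn handling targets.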

Wrap up

As you can tell by the walkthrough here, the setup is not too complex. There are a few areas you need to interact with: the SAM, the controller, and the Client Accelerator agent. Still, the benefits are immediate and quantifiable. I hope you found this article interesting and look forward to your comments.

 

]]>
Does Your SD-WAN Pass the Enterprise-Grade Litmus Test? https://www.riverbed.com/blogs/does-your-sd-wan-pass-the-enterprise-grade-litmus-test/ Mon, 04 May 2020 13:00:00 +0000 https://live-riverbed-blog.pantheonsite.io?p=14862 Are all enterprise SD-WAN solutions created equal? Although a rhetorical question, the answer is a definitive, “No.”  That begs the question … “What makes one SD-WAN different from another?” And in particular, since SD-WAN adoption to date has occurred predominantly amongst smaller- and medium-sized businesses … “What makes an SD-WAN solution truly fit for larger organizations and global enterprises?”

Analyst firm IDC surveyed enterprises to understand what they demand from an SD-WAN solution to scale from pilots to full-scale rollouts.

You can read some of their findings in the IDC Technology Spotlight sponsored by Riverbed: Crossing the Chasm: What Makes SD-WAN Enterprise Grade.

 

IDC paper on key components of an enterprise SD-WAN

Here are 3 key messages about enterprise SD-WAN from IDC’s Technology Spotlight:

 

1. Look beyond connectivity

“Some advancements that would make SD-WAN solutions more appealing to a broader swath of large enterprises are as follows:

Core network services, such as enterprise-class routing, quality of service (QoS), dual-stack IPv4/IPv6, and segmentation, that seamlessly integrate with existing networks (Such services are critical during brownfield phases of SD-WAN deployment.)”

2. Importance of cloud-based application acceleration

“IDC survey respondents ranked the ability to connect to SaaS and IaaS providers and the ability to improve the performance of those connections as the top 2 use case criteria for adopting SD-WAN solutions.”

Use cases for enterprise sd wan

3. Value of network visibility and analytics

“It’s one thing to have optimized connections across the network; it’s another thing to have the tools in place to monitor those connections and ensure they’re operating the way they’ve been programmed to.

SD-WAN solutions that have integrated performance visibility and analytics not only can help ensure performance but also can be a security benefit. Solutions that retain a rich history of packet-, flow-, and device-centric telemetry can help identify the root cause of attacks.”

Addressing the growing demand for enterprise SD-WAN

Application workloads are increasingly distributed across on-premises data centers and cloud data centers. The complexity and cost of the WAN, coupled with insatiable demand for bandwidth, are driving enterprises to consider SD-WANs.

Emergence of hybrid WAN to meet the dynamic application workloads
Complex connectivity needs require a new approach to networks

It’s one thing to act as an SD-WAN overlay network served by an underlay of conventional physical routers.  It’s another thing altogether to communicate with (and even replace!) conventional routers to bridge between the old and the new. Such capabilities are essential when navigating the complexities of brownfield network integration.

Simply steering packets over Internet broadband circuits to reach cloud applications does nothing to overcome the fundamental laws of physics. Network latency will ultimately dictate user experience. To truly boost performance and user experience across high-latency hybrid WANs, application acceleration is key.
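
The physics argument can be made concrete: a single TCP flow can never exceed its window divided by the round-trip time, regardless of circuit bandwidth. A rough sketch, using an illustrative 64 KB window:

```python
def tcp_throughput_cap_mbps(window_bytes: int, rtt_ms: float) -> float:
    """Upper bound on one TCP flow's throughput: window / RTT, whatever the link speed."""
    return window_bytes * 8 / (rtt_ms / 1000) / 1e6

# A 64 KB receive window (no window scaling) over growing latency:
tcp_throughput_cap_mbps(65535, 10)    # ~52.4 Mbps
tcp_throughput_cap_mbps(65535, 100)   # ~5.2 Mbps: latency, not bandwidth, rules
```

This is why acceleration techniques that attack latency (protocol optimization, caching, larger effective windows) pay off where simply buying a faster circuit does not.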

In addition to reading IDC’s Technology Spotlight, read this blog about Riverbed’s SteelConnect EX SD-WAN architecture, which has raised the bar and set a new standard for what an Enterprise SD-WAN should be.

Does your SD-WAN pass the enterprise-grade litmus test?

]]>
Onboarding SteelConnect EX Branches with a Staging Script https://www.riverbed.com/blogs/branch-onboarding-using-a-staging-script/ Wed, 29 Apr 2020 13:00:00 +0000 https://live-riverbed-blog.pantheonsite.io?p=14710 [embedyt] https://www.youtube.com/watch?v=9zGajxUImvM[/embedyt]

After you’ve deployed a SteelConnect EX SD-WAN headend, common next steps are to onboard branches to the headend, place them into the SD-WAN fabric, and allow them to be managed via overlay by the Director. In this article, we are going to walk through the process of using a staging script to onboard an Internet-only SD-WAN site. To do this, we have to complete a bit of pre-work first. We need to update the topology to add Branch D (I’m using the same environment that I used in the above-referenced article and video), configure some templates, and then use the script to onboard the device. To begin, let’s have a look at the topology. I’m using GNS3 because it makes it simple to add and delete sites, links, etc.

Topology Start

In the above topology you can see that the headend is already deployed and that I have three sites that are MPLS only. We will migrate those in other articles but in this one, we are going to add a brand new branch called Branch D and connect it to the Internet cloud. You can see this in the image below.

Topology Complete

Now that the topology is ready, I want to start the VM so that it’s ready when I get to the point of onboarding it. In the meantime, I need to provision things in the Director interface. Let’s start there.

Creating a workflow template for branch onboarding

When you onboard a branch, you do so using a template. There are several types of templates available in the SteelConnect EX SD-WAN solution; however, what we want to start with is a Workflow Template.

  1. First, navigate to Workflows>Templates>Templates
  2. Then click the + button to add a template.

Adding Template

We need to populate a few values here. These include providing a name, selecting the type of template we are using, the organization, controllers, subscription information, and analytics cluster. Some of these values are required to move on. You’ll note that this is a multi-tabbed template so we have a few pages that will require us to provide configuration data. You can see the first page below.

Basic Config

After clicking continue, you now provide the Interfaces configuration. Take note of how the device is physically wired and also that there are no device-specific values here. This is a template after all. Multiple devices can be deployed using this template. Below is what my interface configuration looks like for this deployment.

Interface Config

One thing to note here is that the interfaces I’m using are port 1 and 2. This is because port 0 is reserved for management. Therefore, port 1 is mapped to vni-0/0 (which is the WAN interface) and port 2 is mapped to vni-0/1 (which is our LAN side interface).

Interface Definitions

The next tab is the Split Tunnels page, where you map your VRF to a WAN VR and define that we will use DIA (Direct Internet Access) for devices onboarded with this template. DIA ensures that Internet-bound traffic is sent directly to the underlay and not backhauled to the data center. There’s actually a lot that goes on behind the scenes here. Not only is NAT configured, but BGP connectivity is established between the WAN VR and the LAN VRF so that routing between the two can take place.

Split Tunnels

Be sure to click the + button or the DIA will not be applied.

On the next page, we need to configure our DHCP values for the LAN side. This will allow onboarded branches to allocate DHCP addressing to devices in that branch.

Services

On the last page, you can click the Create button and your template will be committed to the Director. Once this is done, it’s time to add a device. Before adding the device, ensure that the template was committed successfully. You can check this by clicking the refresh icon to see if the template shows up in the list.

Verify Deploy1

Next, we add the device and attach it to a device group. The device group references the template that was created earlier. You can add the Device Group as part of the Add Device workflow. We do this under Workflows, by clicking Devices and then the + symbol.

In the form fields, you need to provide all the required values. This includes providing a name, selecting an organization, providing a serial number that you create (I use the device name here for simplicity), and then clicking the +Device Group Link.

Basic Device Info

When the Create Device Group page appears we can provide a name, select the organization, select the template you previously created as the Post Staging Template and then click OK. You can see this in the image below.

Create Device Group

We then proceed to the Location Information page. On this page, the only mandatory field is the Country field. However, make sure you click the Get Coordinates button and that the latitude and longitude populate or you will need to manually enter these as well.

Latitude and Longitude

The next page we need to work with is the Bind Data Page. This is where we tie the variables in the template to the actual device.

Bind Data

We can click on the Serial Number link to see the variables on one page. From here, we can enter the required values and then deploy the device.

Populating Bind Data

As we did with the template, we should verify that it shows up in the device list now.

Device Verify

Now comes the part we’ve all been waiting for: staging the device using ZTP. We do this from the CLI of the SteelConnect EX that we put in our topology at the onset of this article.

Stage the device using ZTP

Once on the CLI of the SteelConnect EX appliance, we need to navigate to the scripts directory.

cd /opt/versa/scripts/

Once there we run the deployment script. Here is an example of the staging script that I ran in the topology I’m working in. Essentially we set the local and remote IKE identities, define the serial number, set the controller IP address and then use DHCP on Wan 0. Now, a few things to clear up here. First, the controller IP is a “Public” address. In my lab that’s the “192.168.122.x” address space. Because this is an Internet-only branch, I need to onboard through the firewall at the edge of the data center. I’ve already configured static NAT and access rules on the firewall to allow this to happen. The second thing to clear up is that I said to use Wan 0 in the script, but I’m really plugged into eth1. That’s because eth0 is dedicated to management so the first physical port is port 0 according to the SteelConnect EX software. This maps to vni-0/0.

sudo ./staging.py -l SDWAN-Branch@Riverbed.com -r Controller-1-staging@Riverbed.com -n SC-EX-BRD-1 -c 192.168.122.25 -d -w 0

Once the command is executed, the FlexVNF instance will initiate an IPSec connection to the controller. You will also see the following output on the command line.

=> Setting up staging config
=> Checking if all required services are up
=> Checking if there is any existing config
=> Generating staging config
=> Config file saved staging.cfg
=> Saving serial number
=> Loading generated config into CDB

After you run the script, things sort of happen in the background. We can go to the CLI of the SteelConnect EX and use the show interfaces brief command to confirm:

[admin@versa-flexvnf: scripts] $ cli

admin connected from 127.0.0.1 using console on versa-flexvnf
admin@versa-flexvnf-cli> show interfaces brief | tab
NAME       MAC                OPER  ADMIN  TENANT  VRF     IP                  
-------------------------------------------------------------------------------
eth-0/0    0c:cb:e5:f9:c1:00  up    up     0       global                      
tvi-0/1    n/a                up    up     -       -                           
tvi-0/1.0  n/a                up    up     1       mgmt    10.254.33.3/24      
vni-0/0    0c:cb:e5:f9:c1:01  up    up     -       -                           
vni-0/0.0  0c:cb:e5:f9:c1:01  up    up     1       grt     192.168.122.157/24  
vni-0/1    0c:cb:e5:f9:c1:02  down  down   -       -                           
vni-0/2    0c:cb:e5:f9:c1:03  down  down   -       -                           
vni-0/3    0c:cb:e5:f9:c1:04  down  down   -       -                           
vni-0/4    0c:cb:e5:f9:c1:05  down  down   -       -

What happens next is expected. During the process of checking the tunnel, the device is being provisioned so we see messages that a commit was performed via ssh using netconf. This is the Director provisioning the device with the values we defined when we deployed the device in the Director GUI. Once provisioned, the device will reboot. You should see the following:

admin@versa-flexvnf-cli> 
System message at 2019-08-04 22:24:34...
Commit performed by admin via ssh using netconf.
admin@versa-flexvnf-cli> 
System message at 2019-08-04 22:24:40...
Commit performed by admin via ssh using netconf.
admin@versa-flexvnf-cli> 
Broadcast message from root@versa-flexvnf
        (unknown) at 22:24 ...

The system is going down for reboot NOW!

System message at 2019-08-04 22:24:50...
    Subsystem stopped: eventd

System message at 2019-08-04 22:24:50...
    Subsystem stopped: acctmgrd
admin@versa-flexvnf-cli> admin@versa-flexvnf-cli> 
System message at 2019-08-04 22:24:50...
    Subsystem stopped: versa-vmod
admin@versa-flexvnf-cli> 
versa-flexvnf login:

Digging a bit deeper into the provisioning output, note the following:

Tunnel Interfaces

Once successfully connected, the FlexVNF appliance will automatically reboot and load the correct config. If you have already seen the reboot messages above, you can skip the next step.

After the reboot of the FlexVNF appliance, you should see the different virtual routers:

We need to go back into the CLI of the SteelConnect EX at Branch D and issue the command to view the interfaces.

admin@SC-EX-BRD-1-cli> show interfaces brief | tab
NAME          MAC                OPER   ADMIN  TENANT  VRF                    IP
--------------------------------------------------------------------------------------------------
eth-0/0       0c:cb:e5:32:9c:00  up     up     0       global
ptvi1         n/a                up     up     2       Riverbed-Control-VR    10.254.24.1/32
tvi-0/2       n/a                up     up     -       -
tvi-0/2.0     n/a                up     up     2       Riverbed-Control-VR    10.254.17.44/32
tvi-0/2602    n/a                up     up     -       - 
tvi-0/2602.0  n/a                up     up     2       Internet-Transport-VR  169.254.7.210/31
tvi-0/2603    n/a                up     up     -       -
tvi-0/2603.0  n/a                up     up     2       global                 169.254.7.211/31
tvi-0/3       n/a                up     up     -       -
tvi-0/3.0     n/a                up     up     2       Riverbed-Control-VR    10.254.25.44/32
vni-0/0       0c:cb:e5:32:9c:01  up     up     -       -
vni-0/0.0     0c:cb:e5:32:9c:01  up     up     2       Internet-Transport-VR  192.168.122.149/24
vni-0/1       0c:cb:e5:32:9c:02  up     up     -       -
vni-0/1.0     0c:cb:e5:32:9c:02  up     up     2       Riverbed-LAN-VR        10.0.13.254/24
vni-0/2       0c:cb:e5:32:9c:03  down   down   -       -
vni-0/3       0c:cb:e5:32:9c:04  down   down   -       -
vni-0/4       0c:cb:e5:32:9c:05  down   down   -       -

We should also verify that we can reach addresses on the Internet from the CLI:

admin@SC-EX-BRD-1-cli> ping 1.1.1.1 routing-instance Internet-Transport-VR
Bind /etc/netns/Internet-Transport-VR/resolv.conf.augsave -> /etc/resolv.conf.augsave failed: No such file or directory
PING 1.1.1.1 (1.1.1.1) 56(84) bytes of data.
64 bytes from 1.1.1.1: icmp_seq=1 ttl=50 time=5.35 ms
64 bytes from 1.1.1.1: icmp_seq=2 ttl=50 time=2.23 ms

And finally from the Ubuntu client that we placed in the branch we want to see if it received a DHCP address.

Check DHCP

Since we have an IP address, we should try to ping and when we do we can see that DIA is working as expected.

Check DIA

Wrap up

Well, this is just one example of how to onboard branches in the SteelConnect EX SD-WAN solution. There’s also the URL-ZTP method, but we can save that for another article. Whichever way you choose, the result should be the same: the device becomes part of the SD-WAN fabric, establishes an overlay to the controller, and then forms overlays to other sites as they are onboarded.

]]>
Enterprise-Grade SD-WAN: SteelConnect EX Advanced Routing Capabilities https://www.riverbed.com/blogs/steelconnect-ex-advanced-routing/ Mon, 27 Apr 2020 08:00:00 +0000 https://live-riverbed-blog.pantheonsite.io?p=14736 Advanced network routing is one of the most powerful features of Riverbed’s enterprise-grade SD-WAN solution SteelConnect EX – definitely one of my favorites. While other vendors took a different path offering the minimum feature set, SteelConnect EX implements all the advanced routing capabilities Enterprise Network Architects need to get full control of their infrastructure, at scale.

In previous posts, I gave an architecture overview of SteelConnect EX as well as provided general principles to integrate SteelConnect EX in a data center. In this blog, I will provide a deep dive into the routing and SD-WAN mechanisms of SteelConnect EX. I will not detail how to configure static routing, BGP, or OSPF, but will focus on the internal mechanisms of Riverbed’s SD-WAN solution.

So buckle up and let’s proceed.

Virtual Routers

When you consider a SteelConnect EX branch appliance, it’s not simply an SD-WAN router; it’s a system that runs multiple virtual routers (VRs). Why multiple routers? That’s what we are going to address right now. Trust me, it makes our solution one of the most elegant and powerful SD-WAN solutions for attaining maximum control.

So what is a virtual router in the first place?

By virtual router, I don’t mean a virtual appliance that you would deploy on a hypervisor. The architecture we are going to review is the same on any type of SteelConnect EX appliance: hardware, virtual and cloud images.

Virtual routing instances allow administrators to divide a device into multiple independent virtual routers, each with its own routing table. Splitting a device into many virtual routing instances isolates traffic traveling across the network without requiring multiple devices to segment it.

Virtual routing and forwarding (VRF) is often used in conjunction with Layer 3 sub-interfaces, allowing traffic on a single physical interface to be differentiated and associated with multiple virtual routers. Each logical Layer 3 sub-interface can belong to only one routing instance.

Besides the global routing instance, which is the main one and used for management, there are three types of instances:

  • Transport VR: each circuit has a separate VR with its routing table and routing protocols. You can create a Transport VR for MPLS, one for Internet, another one for 4G/LTE. The Transport VR is part of the underlay network; it interacts with the rest of the network and it owns a network interface (or sub-interface if you use VLANs). The system allows up to 16 uplinks.
  • The Control VR is tied to an organization (tenant). It has no physical interface attached to it. It is the entry point to the SD-WAN overlay. It forms tunnels with remote sites and with the Controller. It forwards “user” traffic through the overlay to other SD-WAN equipped sites. Several LAN VRF can be attached to one Control VR.
  • The LAN VRF is also tied to an organization because it is paired with a Control VR (and only one). Multiple LAN VRF can be created to segment the traffic.

SteelConnect EX – Virtual Routers

What is the benefit of having three types of instances? Let’s have a look at how we are using those VRs for SD-WAN.

Roles of the Routing Instances for SD-WAN

A simple way to summarize the role of each instance would be the following:

  • Transport VR is the underlay
  • Control VR is the overlay
  • LAN VRF is the LAN traffic

Routing instances roles

Let’s consider connecting to a server hosted in another site across the WAN. This site is also equipped with a SteelConnect EX gateway.

Our workstation sends traffic to its default gateway, and it eventually hits the LAN VRF. The first thing the appliance does is a route lookup. Since the other site is also part of the SD-WAN overlay, the Control VR will have advertised the server subnet to the LAN VRF. The packets are therefore routed to the Control VR, which encapsulates them in the overlay tunnel.

The tunnel runs over the Transport circuits. Depending on the SD-WAN policies, the uplinks will be bonded (by default), or App-SLA-based path selection rules will kick in and steer the traffic onto a particular uplink.

The overlay is a tunnel built on several layers of encapsulation:

  • On top of each transport domain (Internet, 4G/LTE, MPLS, etc.), a stateless VXLAN tunnel will be created between gateways.
  • Between the Control VRs of two gateways, one (and only one) stateful IPsec (over GRE) tunnel is formed; it is transported over the VXLAN tunnels built on the underlay (remember, the Control VR has no physical interfaces).

SteelConnect EX overlay tunnels

Wait! Why do we have so many layers of encapsulation? What is the impact on performance? I know these questions popped up in your head as you were reading the previous section.

Overlay Efficiency

Let’s rewind a bit and discuss the VXLAN piece first. Within a transport domain, by default and unless specified otherwise (for example, when creating hub-and-spoke topologies), all gateways will automatically form VXLAN tunnels with each other. As a result, two sites that each have an MPLS-A uplink will have a VXLAN tunnel between them. If one site is Internet-only and the other MPLS-only, they won’t form a tunnel; the only way for those two sites to communicate with each other is through a hub connected to both transport domains.
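
The tunnel-formation rule above (one VXLAN tunnel per pair of sites, per shared transport domain) can be sketched as a set intersection; the site names and domains are hypothetical:

```python
from itertools import combinations

def vxlan_tunnels(sites: dict) -> list:
    """Pairs of sites form one VXLAN tunnel per transport domain they share."""
    tunnels = []
    for (a, ua), (b, ub) in combinations(sites.items(), 2):
        for domain in sorted(ua & ub):   # shared transport domains only
            tunnels.append((a, b, domain))
    return tunnels

sites = {
    "hub":      {"mpls", "internet"},
    "branch-1": {"mpls"},
    "branch-2": {"internet"},
}
# branch-1 and branch-2 share no transport domain, so they get no direct
# tunnel and must communicate via the hub:
vxlan_tunnels(sites)   # [('hub', 'branch-1', 'mpls'), ('hub', 'branch-2', 'internet')]
```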

VXLAN is a well-known data center technology for building Layer 2 networks on top of Layer 3. It uses flow-based forwarding and is known to be more efficient than traditional Layer 3 routing, which handles packets separately. Furthermore, VXLAN scales much better than other tunneling technologies such as IPsec, with an address space of more than 16 million identifiers.

On top of VXLAN, various IP transport tunnels can be implemented. In the case of SteelConnect EX, the Control VR will build IPsec over GRE for untrusted networks (by default) or simply GRE for the trusted ones.

Other SD-WAN solutions on the market form IPsec tunnels on each uplink; most are always-on and rarely on-demand, otherwise performance is penalized during switchovers. In a full-mesh network, the complexity is O(n^2), or more precisely O(n^2 x L^2), where n is the number of sites and L the number of uplinks per site, which very quickly becomes resource-hungry on a system.

Since Control VRs create only one IPsec tunnel per remote site, no matter how many uplinks there are, we have a much more efficient system that can fail over very quickly in case of a WAN outage while consuming fewer resources.
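
The scaling difference is easy to quantify. A rough sketch comparing per-uplink full-mesh tunnel counts with one Control-VR tunnel per site pair (the site and uplink numbers are illustrative):

```python
def per_uplink_full_mesh(n_sites: int, uplinks: int) -> int:
    """Tunnel count when every uplink pair between every site pair gets its
    own always-on IPsec tunnel: O(n^2 * L^2)."""
    site_pairs = n_sites * (n_sites - 1) // 2
    return site_pairs * uplinks * uplinks

def control_vr_mesh(n_sites: int) -> int:
    """One IPsec tunnel per site pair, regardless of the uplink count."""
    return n_sites * (n_sites - 1) // 2

per_uplink_full_mesh(100, 3)   # 44550 tunnels for 100 sites with 3 uplinks
control_vr_mesh(100)           # 4950 tunnels: a 9x reduction in this case
```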

Overlay encapsulation

All the encapsulation happens in the Control VR.

As you can see, an MPLS (VPN) label is attached to each LAN VRF. MPLS? Yes! We are leveraging MPLS technologies, too: Control VRs are forming a Virtual MPLS Core network.

In total, the overhead is 58 bytes for encrypted traffic; hence, assuming a standard 1500-byte underlay, the tunnel MTU would be 1442 bytes by default.

To be exact, each path resiliency feature that is enabled (FEC, packet replication, or packet striping) adds a further 12 bytes of overhead.
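
The overhead arithmetic, assuming a standard 1500-byte Ethernet underlay MTU:

```python
UNDERLAY_MTU = 1500          # assumed standard Ethernet underlay
OVERLAY_OVERHEAD = 58        # bytes for encrypted traffic, per the text above
RESILIENCY_OVERHEAD = 12     # per enabled feature: FEC, replication, striping

def payload_mtu(resiliency_features: int = 0) -> int:
    """Bytes left for the inner packet after overlay encapsulation."""
    return UNDERLAY_MTU - OVERLAY_OVERHEAD - resiliency_features * RESILIENCY_OVERHEAD

payload_mtu()    # 1442 by default
payload_mtu(2)   # 1418 with, e.g., FEC and packet replication both enabled
```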

Split Tunnels

Now that we have a better understanding of the system architecture and the overlay mechanism, let’s have a look at the routing between VRs. Split Tunnels is the menu used to pre-configure inter-VR routing using Workflows on the Director.

When I teach a class on SteelConnect EX, I usually ask engineers in the room what they would need to do to have a packet routed between LAN and WAN with the following diagram:

Routing – Primer

The first thing we need to do is interconnect the routers with a cable. We also need to set an IP address on each of the routers’ interfaces. Finally, we need some sort of routing: static routes or a dynamic protocol like BGP.

It may sound obvious, but bear with me, this approach is super helpful to picture how the system works. On SteelConnect EX, the creation of all of those items is automatic and the configuration is pushed from the Director:

  • IP addresses will be automatically set on the VRs for internal use (LAN and WAN interfaces will need to be configured though)
  • The “virtual wire” is a tunnel to interconnect the routers that the system builds for us
  • BGP peering is configured to exchange routes

By default, a tunnel is created between the LAN VRF and the Control VR. BGP peering is established on the routing instances. The LAN-VRF advertises its direct connected subnets to the Control VR so they are visible on the SD-WAN overlay. The Control VR advertises all subnets from the SD-WAN fabric to the LAN-VRF. When you leave the split tunnel configuration empty, this is what happens.
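
The default split-tunnel route exchange can be sketched with toy routing tables; the prefixes and next-hop labels are hypothetical, and real BGP carries far more state than this:

```python
# Toy RIBs; prefixes are made up for illustration:
lan_vrf = {"10.0.13.0/24": "direct"}              # the branch LAN subnet
control_vr = {"10.0.11.0/24": "overlay",          # subnets learned from the
              "10.0.12.0/24": "overlay"}          # rest of the SD-WAN fabric

def advertise(source: dict, target: dict, via: str) -> None:
    """One-way BGP-style advertisement: copy unknown prefixes, rewrite next-hop."""
    for prefix in source:
        target.setdefault(prefix, via)

# Default behavior: advertisements in both directions between the two VRs
advertise(lan_vrf, control_vr, via="lan-vrf")     # LAN subnets join the overlay
advertise(control_vr, lan_vrf, via="control-vr")  # fabric subnets reach the LAN
```

After both calls, the Control VR can reach the branch LAN and the LAN VRF knows every fabric subnet, which is exactly the "leave it empty" behavior described above.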

“Passthrough”

During the template creation using Workflows, when the split tunnel is configured between the LAN-VRF and the Transport VR (say MPLS) with no options ticked, this is what we call the passthrough mode.

Split tunnel configuration – Passthrough

What happens when we implement that?

A tunnel is created between the LAN VRF and the Transport VR (here MPLS) to directly interconnect them. BGP peering is established between the two routing instances, which allows the LAN VRF to be aware of underlay subnets as well as the LAN VRF subnets to be advertised on the MPLS network. This is helpful in a hybrid deployment where SD-WAN and traditional routers will coexist.

Routing – Passthrough

DIA: Direct Internet Access

Again, leveraging the power of automation, when we select the option DIA in the split tunnel configuration, many things happen in the background to achieve your goal, which is to put in place direct Internet breakout.

In addition to the routes exchanged between the LAN VRF and the Control VR, a tunnel is created between the LAN VRF and the Transport VR (here Internet) to directly interconnect them. BGP peering is established between the two routing instances, which allows the LAN VRF to advertise its direct connected subnets to the Internet Transport VR. The latter will advertise a default route to the LAN VRF. Finally, CG-NAT is configured for all outbound traffic on the Internet.

Routing – Direct Internet Access
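The net effect on forwarding can be sketched as a longest-prefix-match lookup over a table that now holds the overlay routes plus the default route learned from the Internet Transport VR (a minimal sketch; addresses and instance names are illustrative):

```python
# Toy route lookup for the DIA case: overlay subnets stay on the fabric,
# everything else matches the default route learned from the Internet
# Transport VR and is source-NATed on the way out. Addresses are illustrative.
import ipaddress

lan_vrf_routes = {
    "10.2.0.0/24": "Control-VR",   # learned from the SD-WAN overlay
    "0.0.0.0/0":   "Internet-VR",  # default route from the Transport VR
}

def next_hop(dst):
    # Longest-prefix match over the toy table
    best = max((p for p in lan_vrf_routes
                if ipaddress.ip_address(dst) in ipaddress.ip_network(p)),
               key=lambda p: ipaddress.ip_network(p).prefixlen)
    return lan_vrf_routes[best]

print(next_hop("10.2.0.7"))       # another branch: stays on the overlay
print(next_hop("93.184.216.34"))  # Internet: direct breakout (then CG-NAT)
```

Anything without a more specific overlay route falls through to the default and breaks out locally, where CG-NAT rewrites the source address.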

Gateway

Finally, the last option is to select “Gateway.”

Split tunnel configuration – Gateway

In this case, the subnets from the overlay will leak into the underlay (here MPLS) and vice versa: subnets learned from the underlay will be advertised into the SD-WAN.

Routing – Gateway

This feature allows you to implement transit use cases between the SD-WAN fabric and underlay networks, as well as disjoint networks.

Conclusion

Today, we have learned that SteelConnect EX grants full control and flexibility to build the SD-WAN fabric on top of the traditional network.

There are three types of routing instances with different roles:

  • Transport VR is the underlay
  • Control VR is the overlay
  • LAN VRF is the LAN traffic

What we did not cover here is the multi-tenancy capability of the solution; that will be addressed in the next blog.

A question, a remark, some concerns? Please don’t hesitate to engage us directly on Riverbed Community.


Riverbed Introduces LTE/Wi-Fi Enabled Enterprise SD-WAN https://www.riverbed.com/blogs/riverbed-introduces-lte-wi-fi-enabled-enterprise-sd-wan/ Fri, 24 Apr 2020 15:30:00 +0000 https://live-riverbed-blog.pantheonsite.io?p=14770 The SteelConnect EX portfolio has expanded its power and flexibility with the addition of three new enterprise SD-WAN appliances: EX-385, EX-485, and EX-685. Ideal for Internet-only or hybrid WAN connectivity to small- and medium-sized branches, these xx85 platforms have LTE for cellular backup, the latest Wi-Fi 6 for branch user WLAN access, and Power over Ethernet (PoE) for video cameras and VoIP phones. These platforms can support a multitude of WAN transport technologies such as MPLS, private and public Internet broadband, and LTE.

Salient Capabilities:

  1. Enterprise-class SD-WAN
  2. Industry-leading Application Acceleration
  3. Advanced network and SD-WAN security


SD-WAN for branches
SteelConnect EX-685 SD-WAN Device for Branches: Front View


SD-WAN for Branches Port View
SteelConnect EX-685 SD-WAN Device for Branches: Connectivity Ports

Platform Details

Enterprise-class SD-WAN:

The SteelConnect EX xx85 platforms support the complete and power-packed enterprise SD-WAN feature set. These features include enterprise routing for overlay and underlay network communication, dynamic traffic conditioning for Internet links, advanced path resiliency, policy-based routing, and much more. Riverbed delivers IT agility with next-generation SD-WAN network architecture, moving from traditional packet-based networks to application-centric networks. Visit the What is SD-WAN? FAQ page to learn more about SD-WAN.

Industry-Leading Application Acceleration:

An industry leader in application performance over WANs, Riverbed extends its superior WAN optimization and application acceleration capabilities to enterprise SD-WAN. All three platforms seamlessly interface with SteelHead appliances for on-premises, cloud, and SaaS application acceleration. And SteelConnect EX-685 supports Universal Customer Premise Equipment (uCPE) with virtual SteelHead to accelerate applications in a 1-box configuration.

Advanced Network and SD-WAN Security:

All SteelConnect EX xx85 models deliver Next-Generation Firewall (NGFW) services. The SteelConnect EX-485 and EX-685 support additional advanced security functions including Next-Gen IPS, Malware Protection, Antivirus and Unified Threat Management (UTM) functionality. All models also provide flexible service-chaining to interface with any third-party security solution of the customer’s choosing.

Building the right Enterprise SD-WAN for your needs:

Read our latest blog on how to design the SD-WAN headend to connect these LTE/Wi-Fi enabled SD-WAN branch appliances to the datacenter or regional hubs. Download the SteelConnect EX Specification Sheet for SD-WAN models and technical details. 

Availability:

The xx85 platforms are currently orderable and generally available. Contact Riverbed for more information.

15 Surprising Stats on the Shift to Remote Work due to COVID-19 https://www.riverbed.com/blogs/15-surprising-stats-on-remote-work-due-to-covid-19/ Thu, 23 Apr 2020 12:30:00 +0000 https://live-riverbed-blog.pantheonsite.io?p=14757 As a result of the COVID-19 pandemic, “work from home” has rapidly escalated from one of many remote work options to “the remote work option.” For IT professionals, this means enabling employees with the basics (laptops and connectivity) and optimizing application delivery despite unpredictable network performance due to bandwidth contention and latency. Here are 15 stats to help you as you prepare for the new normal of working from home.

15 surprising stats on the shift to remote work

  1. There has been a massive shift to work from home. 88% of organizations have encouraged or required their employees to work from home and 91% of teams in Asia Pacific have implemented ‘work from home’ arrangements since the outbreak.[i]
  2. Coronavirus has been a catalyst for remote work. 31% of people said that Coronavirus (COVID-19) was the trigger to begin allowing remote work at their company.[ii]
  3. Organizations are mobilizing, using crisis response teams to coordinate their response. 81% of companies now have a crisis response team in place. [iii]
  4. Business continuity tops C-level concerns. 71% of executives are worried about continuity and remote work productivity during the pandemic.[iv]
  5. Cloud investment will continue. Software is expected to post positive growth of just under 2% overall this year, largely due to cloud/SaaS investments.[v]
  6. SaaS usage soars to meet collaboration needs. Use of video conferencing is exploding as Zoom reached 200 million daily participants (paid and free) up from just 10 million in December.[vi]
  7. Microsoft 365 video usage jumps. People are turning on video in Teams meetings two times more than before and total video calls in Teams grew by over 1,000 percent in the month of March.[vii]
  8. Cybercriminals are taking advantage of the crisis. Over a 24-hour period, Microsoft detected a massive phishing campaign using 2,300 different web pages attached to messages and disguised as COVID-19 financial compensation information that actually led to a fake Office 365 sign-in page.[viii]
  9. Technology and infrastructure are some of the biggest barriers to connectivity and remote workforce productivity. 54% of HR leaders indicated that poor technology and/or infrastructure for remote working is the biggest barrier to effective remote working in their organization.[ix]
  10. Poor SaaS performance hampers remote worker productivity. 42% of enterprises report that at least half of their distributed/international workers suffer consistently poor experience with the SaaS apps they use to get their jobs done.[x]
  11. New tools can improve SaaS performance. 81% report that it is important to adopt tools and techniques to address network latency issues that impact Microsoft 365 performance.[xi]
  12. Remote work has a positive impact on workforce retention. To retain staff while recovering from the COVID-19 pandemic response later in 2020, organizations should expect that 75% of their staff will ask to expand their remote work hours by 35%.[xii]
  13. Remote work boosts productivity. Remote workers are 35% to 40% more productive than people who work in corporate offices.[xiii]
  14. Lower OpEx is an important benefit of work-from-home. 77% of executives say allowing employees to work remotely may lead to lower operating costs.[xiv]
  15. Remote work is here to stay. 74% of companies plan to permanently shift to more remote work post COVID.[xv]

Learn new short- and long-term strategies to enable your remote workforce and improve remote work productivity

The Gartner Report, Coronavirus (COVID-19) Outbreak: Short- and Long-Term Strategies for CIOs can provide you with insightful recommendations on how to respond, recover, and thrive. For more resources on how to optimize performance for your work-from-home employees, check out our solutions page www.riverbed.com/remote-workforce-productivity.


[i] Gartner, Coronavirus in Mind: Make Remote Work Successful!, 5 March 2020

[ii] https://www.owllabs.com/blog/coronavirus-work-from-home-statistics

[iii] Gartner, COVID-19 Bulletin: Executive Pulse, 3 April 2020

[iv] Gartner, COVID-19 Bulletin: Executive Pulse, 3 April 2020

[v] https://www.idc.com/getdoc.jsp?containerId=prUS46186120

[vi] https://blog.zoom.us/wordpress/2020/04/01/a-message-to-our-users/

[vii] https://www.microsoft.com/en-us/microsoft-365/blog/2020/04/09/remote-work-trend-report-meetings/

[viii] https://www.darkreading.com/threat-intelligence/after-adopting-covid-19-lures-sophisticated-groups-target-remote-workers/d/d-id/1337523

[ix] Gartner, Coronavirus in Mind: Make Remote Work Successful, 5 March 2020

[x] ESG, March 2019

[xi] TechTarget, Feb 2020

[xii] Gartner, What CIOs Need to Know About Managing Remote and Virtual Teams through the COVID-19 Crisis

[xiii] Gartner, How to Cultivate Effective “Remote Work” Programs, 2019

[xiv] https://www.flexjobs.com/blog/post/remote-work-statistics/

[xv] Gartner, COVID-19 Bulletin: Executive Pulse, 3 April 2020

The Golden Age of Spear Phishing https://www.riverbed.com/blogs/the-golden-age-of-spear-phishing/ Thu, 16 Apr 2020 14:30:00 +0000 https://live-riverbed-blog.pantheonsite.io?p=14701 I get it, everybody is working from home, and it is changing things on the network. The limits of VPNs have been pushed, stretched, and exceeded. Video conferencing systems have shown some “growing pains.” And, online SaaS applications have seen a lot of “resource unavailable” errors. These are examples of some of the effects we can easily see. What is less easy to see, however, has me much more worried.

Expect more Spear Phishing attacks

With face-to-face interactions removed, spear phishing has become a bit easier. Follow me on my thought exercise: you forget to lock your video conferencing room, a malicious actor joins (without video this time) and learns a detail or two about the goings-on in the business. Next, this hacker crafts a spear phishing email: “Attached is a link to the document I promised you during our 3:00 PM call. Ping me if you have further questions.” The link leads to malware, which now installs on the worker’s computer.

This malware has a signature that the corporate firewall might have blocked. The command & control (C2) communications perhaps go to a well-known C2 server, which the IDS (intrusion detection system) could have spotted. But because the VPN is struggling to keep up with demand, most workers have enabled split-tunneling1 so requests for resources outside the corporate network go directly to the Internet. The firewall and IDS never see the malware. Even if this particular scenario does not apply to your network, it does not stretch the imagination much to see how the current WFH environment has ushered in the Golden Age of Spear Phishing.

Data theft now easier than ever

In a similar vein, performance degradations and access to a company’s sensitive resources have become much harder to understand. It is as if we have all picked up and started working from the coffee shop. To enable access to resources, IT security teams are punching holes faster than a prize fighter. Which ones will get closed when people return to their offices?

Data theft is also much harder to control with so many employees working from home.

Which data accesses are benign and which ones are malicious? What does data theft look like in these WFH times? Time will tell, but one thing is certain: what once appeared to be highly abnormal is now the new norm. It is going to take time to figure out what changed, how it changed, and how to tell right from wrong.

So, the new reality is that we do not know today what we will need to be looking at tomorrow, especially if we work under the assumption that attack vectors have now moved outside of most corporate security visibility and that more system compromises are taking place where we are unable to directly detect them. Our best hope may be to detect the knock-on behaviors that result from these compromises: brute-force attempts at corporate resources, large data movements, scanning and reconnaissance behavior, etc. These “Network Behavioral Anomaly Detection” techniques have at times been accused of inconclusive alerting, yet a notification of an odd or changing behavior may be the only indicator the cyber defender is going to get these days.

Full fidelity visibility is the last line of defense

In fact, the best preparation we have is to simply record network data such as packets, flows, and logs – and store it for future forensic analysis. This, incidentally, separates the field of available visibility solutions. There are those that record everything they see vs. those that only record graphs and derived metrics. Full fidelity, or “forensically accurate” visibility, may seem like a last line of defense in normal times. But in changing times, it certainly shines at the front line.

In conclusion, even during the best of times threats are evolving. Investing in telemetry collection and storage can help any organization prepare for an unexpected reality, whatever that may be. Just remember: packets don’t lie!

 

1 “Split-tunneling” is a VPN trick where only the traffic destined for the corporate network goes into the tunnel and all other traffic goes out the normal path to the Internet to reduce VPN congestion and delays.

The Key to Telework Productivity: Accelerated Network and Application Performance https://www.riverbed.com/blogs/the-key-to-telework-productivity-accelerated-network-and-app-performance/ Tue, 14 Apr 2020 15:00:00 +0000 https://live-riverbed-blog.pantheonsite.io?p=14597 As the world adjusts to our new normal, we find ourselves at a crossroads with complications and difficulties never faced before. As government agencies pivot their workforces to maximum telework per OMB guidance, IT departments are working overtime to enable networks to handle increased traffic and bridge latency issues to meet 24/7 uptime and productivity expectations. Telework productivity is a critical component of continuity of operations–mission operations and citizen services depend on it.

Telework introduces many complexities for IT departments. They experience a significant loss of visibility and control of networks and applications as the number of users with remote connections rapidly expands, thus creating unpredictable challenges and outcomes that can negatively impact employee productivity and ultimately the mission. To provide organizations guidance, Riverbed recently hosted a webinar (now available on-demand) focused on empowering remote employees, and I wanted to share some key takeaways as they relate to the federal workforce.

When teleworking, users struggle to remain productive when their connections and applications aren’t up to the task of the demands placed on them and thus, fail to perform. Users compete with spouses, children, roommates, and neighbors for bandwidth as network traffic explodes to support video streaming and gaming in addition to the collaboration and office applications that allow them to get their jobs done. Add latency to this–the time and speed of the traffic as it traverses from the user to the server and back–and productivity takes a hit. There is a misperception that bandwidth equates to max speed. In reality, latency can be the performance killer.
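A quick back-of-the-envelope sketch makes the point (numbers are illustrative): with a fixed TCP window, throughput is capped at window size divided by round-trip time, no matter how fast the link is.

```python
# Why latency, not bandwidth, is often the bottleneck: with a fixed TCP
# window, throughput cannot exceed window / RTT regardless of link speed.
# Window size and RTT values below are illustrative.

def tcp_throughput_cap_mbps(window_bytes, rtt_ms):
    # bits per window, divided by the time one round trip takes
    return (window_bytes * 8) / (rtt_ms / 1000) / 1e6

window = 64 * 1024           # 64 KB receive window
for rtt in (10, 50, 100):    # ms: office LAN vs. home VPN vs. cross-country
    print(rtt, round(tcp_throughput_cap_mbps(window, rtt), 1))
```

With a 64 KB window, a 100 ms round trip caps a single transfer at roughly 5 Mbps even on a 100 Mbps link, which is exactly why reducing round trips matters so much for remote users.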

Agencies need a few key things to have optimal application performance:

  • Access to connect to all needed applications and services
  • Speed to provide agility and the ability to work efficiently
  • Availability to ensure dependability and to minimize risk

Agencies can achieve these through deployment of network optimization, application acceleration, and network visibility solutions, from each app location through to each remote user’s computer. Such solutions can enhance user experience by:

  • Avoiding duplication of data to reduce the amount of data being sent across the network
  • Modifying and prioritizing the transport of traffic over the network
  • Reducing application round trips across the network
  • Providing detailed visibility into bottlenecks that affect user experience

The right solutions can deliver up to 75-times faster downloads, 40-times faster collaboration, 10-times faster SaaS (including Office 365, Salesforce, ServiceNow and other applications), and nearly 99 percent data reduction on any network. All of these things work together to make applications fast, consistent and available at home or anywhere.

We’re more vulnerable when we telework–typically relying on one connection and no backup, along with a standardized ISP that is already struggling to keep up with increased traffic and network connectivity. You can improve the telework experience through deployment of application acceleration and network optimization solutions that provide accelerated access to on-prem, IaaS, or SaaS-based applications, even in less than ideal conditions. While other things feel uncertain right now, our at-home office experience doesn’t have to.

Riverbed understands the challenges private- and public-sector organizations are facing right now. We are in this with you and are ready to help you maximize application and network performance to keep your workforce productive. We are offering FREE 90-day licenses to Riverbed Client Accelerator (formerly SteelHead Mobile), as well as FREE webinars to help you improve telework productivity during these challenging times. Please check back often for updates and new training materials.


Your Workforce is Working at Home. What Now? https://www.riverbed.com/blogs/serving-your-at-home-workforce/ Wed, 08 Apr 2020 17:28:39 +0000 https://live-riverbed-blog.pantheonsite.io?p=14604 Organizations around the world are grappling with exactly how to slow the spread of COVID-19. As they implement stringent measures designed to combat the virus and protect citizens, organizations are likewise taking decisive steps to safeguard employees and their communities, while prioritizing remote workforce productivity to support business continuity. Here’s how we think about it.

People come first. The word “unprecedented” has been used over and over again to define a situation that strains our collective understanding. Companies, governments, and people are united in that we are all carefully navigating new and uncertain terrain. No one alive has been through this. At a company level, asking “How does this put people first?” can serve as a good guidepost on doing right by employees, customers and partners.

Focus on serving your mobile and remote workforce. For organizations that were already adept at serving a mobile and modern workforce, this is the chance to outline best practices for your teams and serve as an example for others. Share what business applications your teams rely on – and how you weather variable network connectivity to keep them up and running. For organizations that haven’t yet fully embraced a distributed remote model, this is the time to step up and enable your workforce with the technology, tips and tools they need to be productive. Make sure to listen to the challenges they’re encountering and be as responsive as you can.

Performance is paramount. Collaboration tools have gone from useful to non-negotiable in short order. Everyone has to use them. IT organizations in companies large and small universally recognize that at-home and business networks will be under tremendous strain at a time when performance of apps like these and other tools is more important than ever. Companies will have to keep employees well connected to corporate networks, the cloud, and business-critical SaaS applications.

Beyond the focus on IT performance to ensure productivity, every company has to identify fresh markers of performance. In this new paradigm, plans and projects are being reshaped, reimagined, or sometimes scrapped entirely. It will take time but we all must laser in on what’s critical and communicate those priorities effectively to employees.

Visibility is vital. Companies need to be able to take real-time stock of network and application performance. This is more than just anecdotal evidence – although calls and emails to internal customer service channels are important signals worth elevating. This means reviewing the technology tools in place that provide that performance visibility.

Are you able to quickly diagnose and fix network issues? That’s important when surges arising from sudden demand are common. Can you easily optimize experience and workforce productivity regardless of location? That is also essential as companies attempt to deliver consistent, reliable uptime to workers.

For more insight, check out our work from home webinar with tips and demos on boosting at-home network performance.

What’s on your mind as you consider how best to support your workers at home?

Using SD-WAN Templates for Simplicity, Scale, and Cost Effectiveness https://www.riverbed.com/blogs/using-sd-wan-templates-for-simplicity-scale-and-cost-effectiveness/ https://www.riverbed.com/blogs/using-sd-wan-templates-for-simplicity-scale-and-cost-effectiveness/#comments Wed, 08 Apr 2020 06:00:00 +0000 https://live-riverbed-blog.pantheonsite.io?p=14573 Changing market dynamics require businesses to embrace digital transformation and to adopt new technologies that improve productivity and customer experience and reduce costs. Enterprises are rapidly adopting cloud services such as Software as a Service (SaaS), Infrastructure as a Service (IaaS), and Platform as a Service (PaaS) across multiple clouds. As a result, network administrators are struggling with never-ending network changes, and with constant mergers and acquisitions it is difficult to integrate new networks into a single network.

When implementing complex network changes, it is always useful to rely on a set of guided templates. An SD-WAN template is a framework to create or modify a specific device’s configuration for global and local deployments. Using templates, network administrators can group branches with similar business roles together. And, they can avoid the need to repeat common configurations across multiple branch offices and data centers.

SD-WAN templates also help create standardization, thereby avoiding mistakes in network deployments. Templates solve problems of scale, cost, and agility, and also provide role-based access control for different administrators. For example, a highly skilled IT administrator can design templates for complex deployments that a commissioning engineer can then deploy at a branch office. SD-WAN templates can help IT teams:

  • Build in scale
  • Reduce network deployment and management costs
  • Avoid configuration errors
  • Reduce complexity

SteelConnect EX Templates

Riverbed’s enterprise-grade SD-WAN solution, SteelConnect EX, offers both device and service templates.

Device Templates

Using device templates, network administrators can automate most of the device-specific configurations for branch devices. This feature helps to configure WAN and LAN interfaces (Static or DHCP), Routing, NAT, DHCP, and other device-specific parameters. Each branch type can have multiple device templates such as:

  • MPLS and Internet WAN uplinks
  • Dual Internet WAN
  • DHCP LAN
  • Cloud services, such as AWS or Azure

There are two types of device templates: staging and post staging. Staging templates contain the minimum setup needed for the branch to reach the SD-WAN controller. When staging is done at a different location (DC or NOC), the device is shipped with this pre-configured information.

Select type SDWAN Staging, give the template a name, and select parent organization

Create a new WAN Network

Name the WAN Network and select a transport domain

Select Interface Addressing type

Post staging templates are typically used to create final branch configurations. Organization details, bandwidth subscription, routing, NAT (Network Address Translation), DIA (Direct Internet Access), DHCP, NTP, and other management details are entered.

Create template, select controllers, organization, bandwidth


Assign LAN and WAN ports

Configure BGP, OSPF and static routes

DIA (Direct Internet Access) configurations

NAT, DHCP, Relay configuration and management details

Network administrators can then add a Device Group and associate a staging or post staging template.

Select Devices/Device Groups
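The split between a shared template and per-branch bind data can be sketched in a few lines; the field names and branch values below are illustrative, not the actual SteelConnect EX schema:

```python
# One shared template, per-branch variables filled in at deployment time.
# Field names and values are hypothetical, for illustration only.
from string import Template

post_staging = Template(
    "hostname $hostname\n"
    "wan0 uplink $wan_type bandwidth $bw_mbps\n"
    "lan0 subnet $lan_subnet\n"
)

branches = [
    {"hostname": "branch-nyc", "wan_type": "mpls",
     "bw_mbps": 100, "lan_subnet": "10.1.0.0/24"},
    {"hostname": "branch-sfo", "wan_type": "internet",
     "bw_mbps": 50, "lan_subnet": "10.2.0.0/24"},
]

# Bind each branch's data against the shared template
for bind_data in branches:
    print(post_staging.substitute(bind_data))
```

Binding per-device data against one template is what lets a skilled administrator design the configuration once while a commissioning engineer only supplies the branch-specific values.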


Service Templates

Service templates help configure services such as:

  • Stateful Firewall
  • NextGen Firewall
  • Quality of Service (QOS)
  • General
  • Application
  • Service Chain

Service Template Types

Let’s use the NextGen Firewall service template as an example. It defines various policies and profiles that enforce rules with appropriate actions for:

  • DDOS
  • Authentication
  • Decryption
  • Security

A DDoS attack floods the target with a huge volume of traffic, rendering the machine or network inaccessible. With service templates, network administrators can configure profiles and set thresholds for various events as described in the graphic below:

Configure DDOS profile
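The core idea behind such a profile can be sketched as a per-source event counter with configured thresholds; the event types and limits below are hypothetical, not actual profile settings.

```python
# Minimal sketch of a threshold-based DDoS profile: count events per source
# over a window and flag sources that exceed a configured limit.
# Event types and thresholds are hypothetical.
from collections import Counter

THRESHOLDS = {"syn": 100, "icmp": 50}   # max events per window

def flag_sources(events, thresholds=THRESHOLDS):
    counts = Counter(events)            # events are (source, kind) pairs
    return {src for (src, kind), n in counts.items()
            if n > thresholds.get(kind, float("inf"))}

events = [("198.51.100.7", "syn")] * 150 + [("203.0.113.9", "syn")] * 20
print(flag_sources(events))  # only the flooding source trips the profile
```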

A Kerberos, LDAP, or SAML authentication profile can be used. Authentication timeout based on IP or Cache modes can also be configured, as shown in the graphic below:

Authentication profile

SSL decryption profiles can be defined based on the configuration of each server certificate, as shown below. Network administrators can set the minimum key length supported for decryption. For expired or untrusted certificates, the action can be set to allow the packet, drop the packet, drop the session, reject, or alert. Similar actions can be configured for unsupported ciphers and key lengths.

SSL profile setting for the branch

The following graphic shows the configurations of various security aspects such as URL filtering, IP Filtering, Anti-Virus, and predefined vulnerabilities profiles.

Security profile

SteelConnect EX Workflows

The configuration of Controllers, Organizations, Templates, and Devices can be simplified by using workflows. To create a branch device, a workflow creates the templates (staging/post staging) and device groups, and binds the device data.

To onboard Branch/DC devices using a workflow, enter branch-specific information for the templates used by that branch. An existing Device Group is selected or created; device groups contain information about which templates to use for the branch. Hence, automating the deployment of sites or groups of sites is easier, enabling scale at lower cost.

Add a device

What Have We Learned?

Overall, SteelConnect EX templates offer an advantage to managing complex network deployments so network administrators can adapt networks to changing business dynamics with minimal costs.

11 Ways to Ensure Network Performance, Visibility and Security for Work-From-Home Users https://www.riverbed.com/blogs/ensure-network-visibility-security-for-work-from-home-users/ Tue, 07 Apr 2020 08:30:00 +0000 https://live-riverbed-blog.pantheonsite.io?p=14560 According to Gartner, 43% of U.S. workers spend some of their time working remotely and there’s a high proportion of work being done outside the traditional corporate office in Finland, Japan, Netherlands, and Sweden. However, nothing has prepared enterprise IT for the surge of work-from-home (WFH) users that has occurred as a result of ‘shelter in place’ mandates associated with the COVID-19 pandemic.

This rapid workplace shift is increasing pressure on existing IT systems (many of which were not fully prepared to handle this scenario at scale) while economic uncertainty is driving increased focus on cost containment across every industry. Ensuring remote workers remain productive and the data they share secure are two significant ways IT can contribute.

Network visibility for work-from-home users

Typically organizations have 59% more east-west traffic than north-south (Gigamon) and the expanded WFH policies are essentially driving that entire LAN traffic base over to VPN. During this transition, having accurate visibility into this new traffic profile is critical. Riverbed provides visibility into exactly what’s happening across your hybrid network.

  1. Application Intelligence automatically identifies over 2,000 applications on the network, allowing IT to prioritize business-critical and collaboration applications, and de-prioritize others. For example, most networks prioritize the burgeoning VoIP, WebEx and Zoom traffic. You can ensure they remain in a prioritized QoS category and troubleshoot outliers.
  2. Resolve remote access and VPN issues. This is a big one. A lot of NetOps teams are worried about how they’re going to support a huge increase in the number of people that connect to the enterprise via VPN. Two of the questions they look for Riverbed to answer are: “Does our VPN setup have the capacity to handle the additional work-from-home triggered workload?” and “How well is our VPN setup holding up under this additional load?” Riverbed AppResponse provides this visibility and more. Know what level of network performance your WFH remote users are actually experiencing.
  3. Measure real user experience of web applications and easily troubleshoot performance problems for remote workers.
  4. Another form of user experience monitoring is synthetic testing. Use synthetic testing to monitor network, infrastructure, or application performance 24 x 7. Create test scenarios to monitor essential applications like Microsoft Exchange or database transactions.

    Riverbed can distinguish business apps from recreation. Here we see BitTorrent is hogging the bandwidth.
    Riverbed can distinguish between business and recreational applications
  5. Re-plan for capacity changes, identify critical traffic, and optimize bandwidth usage for new traffic flows.
  6. If you use SteelHead WAN optimization, gain a centralized view into application performance, bandwidth reduction, QoS categories, responsiveness, and more.

Network security is a top concern with a distributed workforce

Unfortunately, as the employee base moves to work-from-home and other remote locations, cyber bad guys will try to take advantage of any lapses in security that are created by this shift. Now more than ever, your organization needs to be prepared. Riverbed helps with threat detection and mitigation on all network traffic. Riverbed’s NetProfiler Advanced Security Module helps detect and respond to threats by monitoring flow data from across your hybrid enterprise.

  1. Know when workers communicate with blacklisted systems, such as known malware download sites or command & control sites, so you can investigate and mitigate before additional systems in the network are infected.
  2. DDoS detection quickly identifies a broad range of DDoS attacks so you can make informed mitigation decisions to end interruptions to business sooner. The VPN has become a target for DDoS attacks and phishing for VPN account credentials – don’t let it become your weak link.

Example of an exfiltration alert in NetProfiler Advanced Security Module.

  3. Network security analytics baselines traffic and automatically identifies threats that generate unusual patterns, such as unexpected new services, hosts, or connections. These patterns could indicate data exfiltration, password brute-force attempts, etc.
  4. App Intelligence automatically identifies more than 2,000 applications, helping you identify and shut down “shadow IT” usage that could leave you vulnerable.
  5. Cyber threat hunting lets you explore for hidden but suspected threats before they become business-impacting events.
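The traffic-baselining idea above can be illustrated with a toy sketch (NetProfiler's actual analytics are far more sophisticated; the 3-sigma rule and connection-count metric here are illustrative assumptions, not the product's algorithm):

```python
"""Toy baseline sketch: flag a host when its current connection count
sits well above the mean of its history. The 3-sigma threshold and the
metric choice are assumptions for illustration only."""
from statistics import mean, stdev

def is_anomalous(history: list[int], current: int, sigmas: float = 3.0) -> bool:
    """True when `current` exceeds the baseline by `sigmas` std deviations."""
    mu = mean(history)
    sd = max(stdev(history), 1.0)  # floor to avoid a zero-variance baseline
    return current > mu + sigmas * sd
```

A host that normally opens around 10 connections per minute but suddenly opens 500 would be flagged, while day-to-day variation within the baseline would not.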

Riverbed has been helping enterprises across the globe with these exact challenges. Our customers are relying on our network visibility and security solutions, now more than ever, to help them handle the work-from-home surge. We can help your IT team, too!

As you work to re-architect your infrastructure for this work-from-home shift, remember that Riverbed Unified NPM is there to support your visibility needs. Riverbed delivers a Unified NPM platform that offers enterprise-scale visibility and analytics. It monitors all packets, all flows, and all infrastructure metrics enterprise-wide to quickly detect and remediate performance issues and security threats. Riverbed Unified NPM combines the breadth, depth and scale of information across on-premises, hybrid and multi-cloud architectures so you get end-to-end visibility with no blind spots.

For more information, go to www.riverbed.com/npm.

]]>
5 Ways to Reduce Hybrid Cloud Complexity https://www.riverbed.com/blogs/5-ways-to-reduce-hybrid-cloud-complexity/ Tue, 07 Apr 2020 05:30:00 +0000 https://live-riverbed-blog.pantheonsite.io?p=14523 How can you manage multi-cloud complexity in today’s distributed environments? A common approach, using disparate cloud vendor tools, is often complex in itself. Each vendor tool was designed for a specific public cloud. If you use multiple clouds (and 84% of organizations do, according to the 2019 RightScale State of the Cloud Report), you’ll have multiple tools to manage. Pulling together information across all of them is time-consuming and highly manual, which means it will take longer to isolate problems. Worst of all, you’ll have less time to focus on the really important stuff, like supporting your mobile workforce.

With so much to manage, how can you get ahead in a hybrid and multi-cloud world? Here are five ways that you as an IT professional can reduce hybrid and multi-cloud complexity:

5 ways to manage multi-cloud complexity

  1. Provide full transparency across private clouds, multiple public clouds, and the networks that support them. 91% of enterprises have adopted public cloud and 72% private cloud, according to RightScale, so it’s critical to provide visibility across both as well as the networks that underpin them. And, with today’s work-from-home focus, the investment in cloud computing and cloud performance is only increasing.
  2. Deliver visibility into legacy and emerging apps. This requirement will be more important for more established organizations as they transition to the modern apps but still have a large percentage of legacy application environments (physical and virtual).
  3. Automate the discovery of application environments. Automation is a highly effective way to manage the complexity of distributed environments. One way to leverage automation is to auto-discover application environments to show where applications, services, and workloads are located and how they are connected. This capability is critical to isolating issues quickly, resolving intermittent problems and brownouts, and ensuring your digital workforce can get the job done.
  4. Expand your data set. You should have the ability to collect data from each cloud service and integrate it with your existing visibility solution to provide holistic insights. For example, you should be able to use AWS metrics with your APM and NPM tools to connect the dots between apps, workloads, networks, and locations.
  5. Get granular. Since modern application services spin up and down in seconds, your data needs to be highly granular, ideally with 1-second monitoring intervals. It’s also a best practice to combine detailed data from network packets with flow data and device telemetry to provide a complete picture.
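To see why 1-second granularity matters, here is a small Python sketch (function names and window sizes are assumptions, not a product API): fine-grained samples rolled into coarser windows while keeping the per-window maximum, so short bursts are not averaged away.

```python
"""Roll 1-second (timestamp, value) samples into coarser windows,
keeping the per-window max alongside the mean so microbursts survive
aggregation. Names and window size are illustrative assumptions."""
from collections import defaultdict

def rollup(samples: list[tuple[int, float]], window_s: int = 60) -> dict:
    """samples: (epoch_second, value) pairs -> {window_start: (mean, max)}."""
    buckets: dict[int, list[float]] = defaultdict(list)
    for ts, val in samples:
        buckets[ts - ts % window_s].append(val)
    return {w: (sum(v) / len(v), max(v)) for w, v in sorted(buckets.items())}

# A 1-second burst to 900 Mbps inside an otherwise quiet minute stays
# visible in the max column even though the window mean stays low.
```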

Learn more about reducing multi-cloud complexity

To learn more about how to reduce hybrid and multi-cloud complexity and how you can ensure performance for your digital workforce everywhere and anywhere, see the ESG Report: Reducing Hybrid and Multi-cloud Complexity: The Importance of Visibility in Multi-cloud Environments.


]]>
Add Visibility to Your SteelHead to Optimize Network Performance https://www.riverbed.com/blogs/add-visibility-steelhead-optimize-network-performance/ Wed, 01 Apr 2020 12:30:00 +0000 https://live-riverbed-blog.pantheonsite.io?p=14385 For some reason, whenever I think about how Riverbed NPM and SteelHead WAN Optimization can work together to provide better visibility into optimized network performance, the Johnny Nash song “I Can See Clearly Now” comes to mind:

I can see clearly now the rain is gone
I can see all obstacles in my way
Gone are the dark clouds that had me blind
It’s gonna be a bright (bright)
Bright (bright) sunshiny day…

Whether you are an existing SteelHead user or just thinking about adding WAN optimization to your portfolio, adding Riverbed NetProfiler to your SteelHead environment makes a lot of sense. Check out the solution brief for more info.

You see, SteelHeads utilize SteelFlow, Riverbed’s proprietary version of network flow data. SteelFlow allows SteelHeads to send unique and rich optimization metrics to NetProfiler, our enterprise flow monitoring and analysis solution. This flow information includes application mapping, bandwidth reduction, optimized traffic latency, QoS, and retransmission metrics.

I can see all obstacles

When your organization uses WAN optimization to improve application performance, it can complicate your network visibility story. WAN optimization, by its very nature, can mask many of the details necessary for end-to-end monitoring. NetProfiler overcomes the visibility blind spots that WAN optimization sometimes introduces into the network. When you add NetProfiler to your SteelHead deployment, it becomes a bright sunny day in the NetOps Center again. You gain:

  • The ability to see all applications (2000+ auto-defined apps, plus custom-defined apps) everywhere they run.
  • Centralized quality of service (QoS) policy configuration and visibility. NetProfiler aligns both inbound and outbound QoS results with business objectives using NetProfiler QoS rules.

    Get WAN bandwidth utilization reduction, including percentage of reduction on utilized traffic. Note: Web bandwidth utilization was reduced 99% across the WAN.
  • Accurate response time analysis of optimized applications.
  • The ability to understand bandwidth reduction benefits. Report on all SteelHead optimization results simultaneously, and uncover additional optimization opportunities.
  • WAN visibility (optimized and non-optimized traffic) into utilization for every location.
  • Centralized troubleshooting of remote LANs.
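For reference, the bandwidth-reduction percentage quoted in the note above is simple arithmetic: (1 - optimized/original) * 100. A hedged Python sketch (this is not a NetProfiler API, just the math made explicit):

```python
"""Compute percent bandwidth reduction from optimization.
Illustrative helper only, not part of any product API."""
def reduction_pct(original_bytes: float, optimized_bytes: float) -> float:
    """Percent of WAN traffic eliminated by optimization."""
    if original_bytes <= 0:
        raise ValueError("original_bytes must be positive")
    return (1.0 - optimized_bytes / original_bytes) * 100.0

# 100 GB of web traffic shrunk to 1 GB across the WAN is a 99% reduction.
```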

Gone are the dark clouds that had me blind

The results are widespread and instantaneous. By using NetProfiler to optimize network performance management with your SteelHead, you gain tremendous benefits:

  • Understand the end-to-end picture of your optimized network and application performance for faster troubleshooting.
  • Keep critical applications running at peak performance—all the time, in all places—not just across the WAN.
  • Identify performance issues earlier, as soon as they start, to avoid business-impacting issues.
  • Troubleshoot performance problems quickly and efficiently, no matter where they occur.

To learn more about how NetProfiler can provide comprehensive visibility into SteelHead WAN optimization, download the solution brief.

]]>
Building an SD-WAN Headend https://www.riverbed.com/blogs/building-an-sd-wan-headend/ https://www.riverbed.com/blogs/building-an-sd-wan-headend/#comments Tue, 24 Mar 2020 12:30:07 +0000 https://live-riverbed-blog.pantheonsite.io?p=14177 When working with SteelConnect EX, an important concept to understand is the SD-WAN headend. You simply cannot operate a SteelConnect EX SD-WAN network without one. We covered a high-level overview of the headend in our Lightboard video, which you can view on the Riverbed YouTube Channel. For the sake of being thorough, though, let’s review what an SD-WAN headend is and the components involved.

Components of a Headend

There are three main components of a headend. These include:

  • SteelConnect Analytics
  • SteelConnect Director
  • Controller

These three entities are responsible for the management and control plane of the SteelConnect EX solution.

SteelConnect EX Director (Director)

SteelConnect Director is the management interface that you work in. As you configure templates and settings here, the Director uses NETCONF over SSH to provision the SteelConnect EX devices via the Controller.

SteelConnect EX Controller (Controller)

The Controller establishes secure management tunnels to each SteelConnect EX device. It also acts as a BGP route reflector, reflecting overlay prefixes to each site to establish reachability between sites over the SD-WAN overlay.

SteelConnect EX Analytics (Analytics)

SteelConnect Analytics receives all telemetry information from the SteelConnect EX sites and provides you with that data by means of dashboards and log files.

Installing the SD-WAN Headend

There are several steps that must be followed to install a headend. One item to note is that the headend may live in the data center, but it is not part of the data plane. To establish connectivity from the SD-WAN overlay to the data center, a SteelConnect EX FlexVNF must be installed in the data center. Let’s walk through the configuration of the headend.

Step 1: Add headend components to topology

For the purpose of this article, I’m going to make the assumption that the network infrastructure is already configured to support the addition of our three new devices: Director, Controller, and Analytics. We will add them according to the following diagram.

Riverbed SD-WAN Headend
Riverbed SD-WAN Headend

To elaborate a bit further on the diagram: all devices define ethernet0 as the management interface, so the Director, Controller, and Analytics are connected to the management network on ethernet0. From the Director’s point of view, GUI management performed by an admin happens via the northbound interface; this is also where API calls land. We use the Director’s southbound interface, in this case ethernet1, as the control network.

Step 2: Perform the initial setup of the SteelConnect Director

To implement the Director, we begin by following these steps:

  1. Open up the Director CLI
  2. Log in to the Director using the default credentials
  3. Run the initial setup script.

Below is a CLI output of the Director. As you’ll note, upon initial login with the Administrator account we are automatically prompted to enter the setup. Answering yes to this prompt begins the setup script.

Ubuntu 14.04.6 LTS Director-1 ttyS0

Director-1 login: Administrator
Password: 
===============================================================================
WARNING!
This is a Proprietary System
You have accessed a Proprietary System

If you are not authorized to use this computer system, you MUST log off now.
Unauthorized use of this computer system, including unauthorized attempts or
acts to deny service on, upload information to, download information from,
change information on, or access a non-public site from, this computer system,
are strictly prohibited and may be punishable under federal and state criminal
and civil laws. All data contained on this computer systems may be monitored,
intercepted, recorded, read, copied, or captured in any manner by authorized
System personnel. System personnel may use or transfer this data as required or
permitted under applicable law. Without limiting the previous sentence, system
personnel may give to law enforcement officials any potential evidence of crime
found on this computer system. Use of this system by any user, authorized or
unauthorized, constitutes EXPRESS CONSENT to this monitoring, interception,
recording, reading, copying, or capturing, and use and transfer. Please verify
if this is the current version of the banner when deploying to the system.
===============================================================================

  ____  _                _              _                                
 |  _ \(_)_   _____ _ __| |__   ___  __| |                               
 | |_) | \ \ / / _ \ '__| '_ \ / _ \/ _` |                               
 |  _ <| |\ V /  __/ |  | |_) |  __/ (_| |                               
 |_| \_\_| \_/ \___|_|  |_.__/ \___|\__,_|                               
  ____  _            _  ____                            _     _______  __
 / ___|| |_ ___  ___| |/ ___|___  _ __  _ __   ___  ___| |_  | ____\ \/ /
 \___ \| __/ _ \/ _ \ | |   / _ \| '_ \| '_ \ / _ \/ __| __| |  _|  \  / 
  ___) | ||  __/  __/ | |__| (_) | | | | | | |  __/ (__| |_  | |___ /  \ 
 |____/ \__\___|\___|_|\____\___/|_| |_|_| |_|\___|\___|\__| |_____/_/\_\
 Release     : 16.1R2
Release date: 20191101
Package ID  : 117dde1
------------------------------------
VERSA DIRECTOR SETUP
-bash: /var/log/vnms/setup.log: Permission denied
------------------------------------
Do you want to enter setup? (y/n)? y
[sudo] password for Administrator: 
------------------------------------
Running /opt/versa/vnms/scripts/vnms-startup.sh ...
------------------------------------
Do you want to setup hostname for system? (y/n)? y
Enter hostname: Director-1
Added new hostname entry to /etc/hosts
Added new hostname entry to /etc/hostname
Restarting network service ...
Do you want to setup network interface configuration? (y/n)? y
------------------------------------
Setup Network Interfaces
------------------------------------
Enter interface name [eg. eth0]: eth0
Existing IP for eth0 is 192.168.122.174
Configuration present for eth0, do you want to re-configure? (y/n)? 192.168.122.174
Answer not understood
Configuration present for eth0, do you want to re-configure? (y/n)? y
Re-configuring interface eth0
Enter IP Address: 192.168.122.174
Enter Netmask Address: 255.255.255.0
Configure Gateway Address? (y/n)? y
Enter Gateway Address: 192.168.122.1
------------------------------------
Adding default route - route add default gw 192.168.122.1
Added interface eth0
Configure another interface? (y/n)? y
Enter interface name [eg. eth0]: eth1
Existing IP for eth1 is 10.100.3.10
Configuration present for eth1, do you want to re-configure? (y/n)? y
Re-configuring interface eth1
Enter IP Address: 10.100.3.10
Enter Netmask Address: 255.255.255.0
------------------------------------
Added interface eth1
Configure another interface? (y/n)? n
Configure North-Bound interface (If not configured, default 0.0.0.0 will be accepted) (y/n)? y
------------------------------------
Select North-Bound Interface 
------------------------------------
Enter interface name [eg. eth0]: eth0
------------------------------------
Select South-Bound Interface(s) 
------------------------------------
Enter interface name [eg. eth0]: eth1
Configure another South-Bound interface? (y/n)? n
Restarting network service ...
Enable secure mode for Director HA ports? (y/n)? n
 => Clearing VNMSHA iptables rules
 => Persist iptable rules and reload..
 => Done.
Secure Director HA communication? (y/n)? n
 => Clearing strongSwan ipsec configuration..
 => Restarting ipsec service..
 => Done.
Prompt to set new password at first time UI login? (y/n)? n
Restarting versa director services, please standby ...
------------------------------------
Stopping VNMS service
------------------------------------
Stopping VNMS:TOMCAT.............[Stopped]
Stopping VNMS:KARAF..............[Stopped]
Stopping VNMS:REDIS..............[Stopped]
Stopping VNMS:POSTGRE............[Stopped]
Stopping VNMS:SPRING-BOOT........[Stopped]
Stopping VNMS:SPACKMGR...........[Stopped]
Stopping VNMS:NCS................[Stopped]
 * Stopping daemon monitor monit
   ...done.
  ____  _                _              _                                
 |  _ \(_)_   _____ _ __| |__   ___  __| |                               
 | |_) | \ \ / / _ \ '__| '_ \ / _ \/ _` |                               
 |  _ <| |\ V /  __/ |  | |_) |  __/ (_| |                               
 |_| \_\_| \_/ \___|_|  |_.__/ \___|\__,_|                               
  ____  _            _  ____                            _     _______  __
 / ___|| |_ ___  ___| |/ ___|___  _ __  _ __   ___  ___| |_  | ____\ \/ /
 \___ \| __/ _ \/ _ \ | |   / _ \| '_ \| '_ \ / _ \/ __| __| |  _|  \  / 
  ___) | ||  __/  __/ | |__| (_) | | | | | | |  __/ (__| |_  | |___ /  \ 
 |____/ \__\___|\___|_|\____\___/|_| |_|_| |_|\___|\___|\__| |_____/_/\_\
 Starting VNMS service
------------------------------------
Starting VNMS:NCS................[Started]
Starting VNMS:POSTGRE............[Started]
Starting VNMS:SPRING-BOOT........[Started]
Starting VNMS:REDIS..............[Started]
Starting VNMS:KARAF..............[Started]
Starting VNMS:TOMCAT.............[Started]
------------------------------------
Completed Setup
------------------------------------
Press ENTER to continue
------------------------------------
To run setup manually: /opt/versa/vnms/scripts/vnms-startup.sh
------------------------------------

Once you’ve finished the script, you’ll need to reboot the server. I’ll do that in the following output.

Ubuntu 14.04.6 LTS Director-1 ttyS0

Director-1 login: Administrator
Password: 
Last login: Mon Mar 16 22:23:09 UTC 2020 on ttyS0
===============================================================================
WARNING!
This is a Proprietary System
You have accessed a Proprietary System

If you are not authorized to use this computer system, you MUST log off now.
Unauthorized use of this computer system, including unauthorized attempts or
acts to deny service on, upload information to, download information from,
change information on, or access a non-public site from, this computer system,
are strictly prohibited and may be punishable under federal and state criminal
and civil laws. All data contained on this computer systems may be monitored,
intercepted, recorded, read, copied, or captured in any manner by authorized
System personnel. System personnel may use or transfer this data as required or
permitted under applicable law. Without limiting the previous sentence, system
personnel may give to law enforcement officials any potential evidence of crime
found on this computer system. Use of this system by any user, authorized or
unauthorized, constitutes EXPRESS CONSENT to this monitoring, interception,
recording, reading, copying, or capturing, and use and transfer. Please verify
if this is the current version of the banner when deploying to the system.
===============================================================================

  ____  _                _              _                                
 |  _ \(_)_   _____ _ __| |__   ___  __| |                               
 | |_) | \ \ / / _ \ '__| '_ \ / _ \/ _` |                               
 |  _ <| |\ V /  __/ |  | |_) |  __/ (_| |                               
 |_| \_\_| \_/ \___|_|  |_.__/ \___|\__,_|                               
  ____  _            _  ____                            _     _______  __
 / ___|| |_ ___  ___| |/ ___|___  _ __  _ __   ___  ___| |_  | ____\ \/ /
 \___ \| __/ _ \/ _ \ | |   / _ \| '_ \| '_ \ / _ \/ __| __| |  _|  \  / 
  ___) | ||  __/  __/ | |__| (_) | | | | | | |  __/ (__| |_  | |___ /  \ 
 |____/ \__\___|\___|_|\____\___/|_| |_|_| |_|\___|\___|\__| |_____/_/\_\
 Release     : 16.1R2
Release date: 20191101
Package ID  : 117dde1
[Administrator@Director-1: ~] $ sudo reboot
[sudo] password for Administrator: 

Broadcast message from Administrator@Director-1
        (/dev/ttyS0) at 22:29 ...

The system is going down for reboot NOW!
[Administrator@Director-1: ~] $ 
Ubuntu 14.04.6 LTS Director-1 ttyS0

Director-1 login:

Step 3: Perform the initial setup of Analytics

The next step in bringing up a headend is to configure the Analytics server. Analytics and Director need to communicate securely, so we are going to set up the network configuration first and then sync certificates between the two. Perform the following tasks to implement the Analytics server.

  1. Double click on the Analytics icon to open up the CLI
  2. Log in to Analytics with the credentials “versa/versa123”
  3. Edit the /etc/network/interfaces file with static IP addressing.

Use sudo nano /etc/network/interfaces for task 3 above.

 GNU nano 2.2.6         File: /etc/network/interfaces                Modified  

# This file describes the network interfaces available on your system
# and how to activate them. For more information, see interfaces(5).

# The loopback network interface
auto lo
iface lo inet loopback

# The primary network interface
auto eth0
iface eth0 inet static 
address 192.168.122.175
netmask 255.255.255.0
gateway 192.168.122.1

auto eth1
iface eth1 inet static
address 10.100.3.11
netmask 255.255.255.0

Next, bounce each interface.

[versa@versa-analytics: ~] $ sudo ifdown eth0
[versa@versa-analytics: ~] $ sudo ifdown eth1                
ifdown: interface eth1 not configured
[versa@versa-analytics: ~] $ sudo ifup eth0
[versa@versa-analytics: ~] $ sudo ifup eth1                  
[versa@versa-analytics: ~] $

Once the interfaces have been bounced we need to confirm the IP addressing and ping the Director. I’ll do that in the following output.

[versa@versa-analytics: ~] $ ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    link/ether 0c:5d:40:dd:78:00 brd ff:ff:ff:ff:ff:ff
    inet 192.168.122.175/24 brd 192.168.122.255 scope global eth0
       valid_lft forever preferred_lft forever
    inet6 fe80::e5d:40ff:fedd:7800/64 scope link 
       valid_lft forever preferred_lft forever
3: eth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    link/ether 0c:5d:40:dd:78:01 brd ff:ff:ff:ff:ff:ff
    inet 10.100.3.11/24 brd 10.100.3.255 scope global eth1
       valid_lft forever preferred_lft forever
    inet6 fe80::e5d:40ff:fedd:7801/64 scope link 
       valid_lft forever preferred_lft forever
4: eth2: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN group default qlen 1000
    link/ether 0c:5d:40:dd:78:02 brd ff:ff:ff:ff:ff:ff
[versa@versa-analytics: ~] $ ping 192.168.122.174
PING 192.168.122.174 (192.168.122.174) 56(84) bytes of data.
64 bytes from 192.168.122.174: icmp_seq=1 ttl=64 time=1.38 ms
64 bytes from 192.168.122.174: icmp_seq=2 ttl=64 time=0.895 ms
^C
--- 192.168.122.174 ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 1001ms
rtt min/avg/max/mdev = 0.895/1.140/1.385/0.245 ms
[versa@versa-analytics: ~] $

Now that I have basic connectivity from Analytics, I need to add name resolution for Director-1. This step is important because later on I need to register the Director with the Analytics server in the GUI, and this is done by name. That name must be resolvable.

[versa@versa-analytics: ~] $ sudo nano /etc/hosts
  1 127.0.0.1   localhost
  2 127.0.1.1   versa-analytics
  3 192.168.122.174 Director-1
  4 
  5 # The following lines are desirable for IPv6 capable hosts
  6 ::1     localhost ip6-localhost ip6-loopback
  7 ff02::1 ip6-allnodes
  8 ff02::2 ip6-allrouters

Now we need to navigate to the scripts directory so we can run the vansetup script.

[versa@versa-analytics: ~] $ cd /opt/versa/scripts/van-scripts

Now that I’m in the van-scripts directory, I can execute the vansetup.py script.

[versa@versa-analytics: van-scripts] $ sudo ./vansetup.py 
[sudo] password for versa: 
/usr/local/lib/python2.7/dist-packages/cassandra_driver-2.1.3.post-py2.7-linux-x86_64.egg/cassandra/util.py:360: UserWarning: The blist library is not available, so a pure python list-based set will be used in place of blist.sortedset for set collection values. You can find the blist library here: https://pypi.python.org/pypi/blist/
VAN Setup configuration start
<-- output omitted -->

Update config files

As the script runs you will be asked to delete the database. We want to do this so that it’s rebuilt from scratch with no existing data. Basically, we want a fresh start.

Delete the database? (y/N) y

Proceeding to delete the database in 5 seconds

Next, we will reboot when prompted to do so.

Reboot the node(recommended)? (y/N) y

After the reboot, we want to verify that the database restarted successfully after we deleted it. To do so, scroll up through the output text and find the statement that identifies a successful restart of the Cassandra database. You can see an example of the output you’re looking for below.

DSE daemon starting with Solr enabled (edit /etc/default/dse to disable)
   ...done.
Waiting for host 127.0.0.1 to come up 
0
UN  127.0.0.1  53.6 KB    ?       fa7139b0-77c1-4b0f-a967-6d754ea7aa28  -3572760821973264000                     RAC1


We can also check the state of the database after the reboot by logging back in and using the nodetool status command. Specifically, look for the UN that indicates the database is Up and Normal. This is the same output that you would have found by scrolling back up through the script output.

[versa@versa-analytics: ~] $ nodetool status
Datacenter: Search-Analytics
============================
Status=Up/Down
|/ State=Normal/Leaving/Joining/Moving
--  Address    Load       Owns    Host ID                               Token                                    Rack
UN  127.0.0.1  287.27 KB  ?       fa7139b0-77c1-4b0f-a967-6d754ea7aa28  -3572760821973264000                     RAC1

Note: Non-system keyspaces don't have the same replication settings, effective ownership information is meaningless
[versa@versa-analytics: ~] $

Now we are going to enter the CLI of Analytics. You access the CLI by entering the command cli. You can see this below.

[versa@versa-analytics: ~] $ cli

versa connected from 127.0.0.1 using console on versa-analytics
versa@versa-analytics>

Next, we will enter the configuration mode using the configure command.

versa@versa-analytics> configure
Entering configuration mode private
[ok][2019-07-14 15:42:01]

[edit]
versa@versa-analytics%

Now that we are in configuration mode, we want to point the Analytics log collector at the southbound interface IP address. This also includes defining the port to use for communication, the storage directory, and the format. Here’s the information we need:

  • Use Port 1234
  • Set the storage directory to /var/tmp/log
  • Use the syslog format
versa@versa-analytics% set log-collector-exporter local collectors VAN address 10.100.3.11 port 1234 storage directory /var/tmp/log format syslog
[ok][2019-07-14 15:48:24]

Now we need to commit the changes and exit. You can see that in the following output.

versa@versa-analytics% commit
Commit complete.
[ok][2019-07-14 15:49:24]

[edit]
versa@versa-analytics% exit
[ok][2019-07-14 15:49:26]
versa@versa-analytics> exit
[versa@versa-analytics: ~] $

Step 4: Connect to the Director Web Interface

My next step will be to connect to the Director GUI. We browse to the northbound interface IP address, which is the address we set on eth0 earlier. The screenshot below is not using the same IP address that we configured, but hopefully, you get the point. It’s an HTTPS connection and we are going to be warned about the self-signed certificate. Once you accept the certificate you can log in with the Administrator credentials.

Accept the certificate
Accept the certificate

Next, log in to the Director with the default credentials.

Director Login
SteelConnect Director Login

You’ll be asked to reset the password for the GUI. Follow those instructions and click Change.

Director Password Reset
SteelConnect Director Password Reset

Now we need to log in a second time with the new credentials.

The new password is only used for the GUI.

Second Login
Second Login

Step 5: Define the Analytics Cluster

After we’ve logged into the Director GUI, we need to define our Analytics cluster. To do so, navigate to Administration > Connectors > Analytics Cluster and click the + button to add a new Analytics Cluster. The northbound IP of our Analytics cluster is 192.168.122.175 and the southbound IP is 10.100.3.11 (yes, there is a typo in my screenshot).

Add Analytics Cluster
Add Analytics Cluster

You’ll also notice that we give the cluster a name, in this case, Analytics, and we also name the northbound IP Analytics-1. Also, the connector port is left at the default value of 8080. We will use this port to connect to the Analytics GUI later on.

Analytics Cluster details
Analytics Cluster Details

Step 6: Generate and Sync certificates between Director and Analytics

Now that the Analytics cluster has been defined in the Director GUI, we need to sync certificates between the two. To do so, we will generate the certificate from the Director CLI, as seen in the following output.

Director-1 login: Administrator 
Password: 
Last login: Mon Mar 16 22:28:56 UTC 2020 on ttyS0
===============================================================================
WARNING!
This is a Proprietary System
You have accessed a Proprietary System

If you are not authorized to use this computer system, you MUST log off now.
Unauthorized use of this computer system, including unauthorized attempts or
acts to deny service on, upload information to, download information from,
change information on, or access a non-public site from, this computer system,
are strictly prohibited and may be punishable under federal and state criminal
and civil laws. All data contained on this computer systems may be monitored,
intercepted, recorded, read, copied, or captured in any manner by authorized
System personnel. System personnel may use or transfer this data as required or
permitted under applicable law. Without limiting the previous sentence, system
personnel may give to law enforcement officials any potential evidence of crime
found on this computer system. Use of this system by any user, authorized or
unauthorized, constitutes EXPRESS CONSENT to this monitoring, interception,
recording, reading, copying, or capturing, and use and transfer. Please verify
if this is the current version of the banner when deploying to the system.
===============================================================================

  ____  _                _              _                                
 |  _ \(_)_   _____ _ __| |__   ___  __| |                               
 | |_) | \ \ / / _ \ '__| '_ \ / _ \/ _` |                               
 |  _ <| |\ V /  __/ |  | |_) |  __/ (_| |                               
 |_| \_\_| \_/ \___|_|  |_.__/ \___|\__,_|                               
  ____  _            _  ____                            _     _______  __
 / ___|| |_ ___  ___| |/ ___|___  _ __  _ __   ___  ___| |_  | ____\ \/ /
 \___ \| __/ _ \/ _ \ | |   / _ \| '_ \| '_ \ / _ \/ __| __| |  _|  \  / 
  ___) | ||  __/  __/ | |__| (_) | | | | | | |  __/ (__| |_  | |___ /  \ 
 |____/ \__\___|\___|_|\____\___/|_| |_|_| |_|\___|\___|\__| |_____/_/\_\
 Release     : 16.1R2
Release date: 20191101
Package ID  : 117dde1
[Administrator@Director-1: ~] $ cd /opt/versa/vnms/scripts/
[Administrator@Director-1: scripts] $ sudo su versa
[sudo] password for Administrator: 
versa@Director-1:/opt/versa/vnms/scripts$ ./vnms-certgen.sh --cn Director-1 --storepass versa123 --overwrite
 => Generating certificate for domain: Director-1
 => Generating ca_config.cnf
 => Generated CA key and CA cert files
 => Generating SSO certificates
 => Generating websockify certificates
 => Saving storepass and keypass

This must be done from the user account versa. After generating the certificate be sure to exit this user and return to Administrator.

Next, we will sync the certificate with Analytics. This is done using the vnms-cert-sync.sh script, which SCPs the certificate to the correct location on Analytics.

versa@Director-1:/opt/versa/vnms/scripts$ exit
exit
[Administrator@Director-1: scripts] $ ./vnms-cert-sync.sh --sync
Syncing Director certificates to VAN CLuster
Enter VAN Cluster Name:
Analytics
VAN Clusters IPs: 192.168.122.175 
Attempting Key Based Auth..
Can we pick Private Key from ~/.ssh/id_rsa[y/n]y    
Enter password for Versa User for sudo:
Password: 
[Errno 2] No such file or directory: '/home/Administrator/.ssh/id_rsa'
Looks like SSH Key exchange not setup, falling back to password
Please Enter password for User - versa: 
Password: 
/usr/lib/python2.7/dist-packages/Crypto/Cipher/blockalgo.py:141: FutureWarning: CTR mode needs counter parameter, not IV
  self._cipher = factory.new(key, *args, **kwargs)
Connected to 192.168.122.175
[sudo] password for versa: rm: cannot remove '/opt/versa/var/van-app/certificates/versa_director_client.cer': No such file or directory
[sudo] password for versa: rm: cannot remove '/opt/versa/var/van-app/certificates/versa_director_truststore.ts': No such file or directory
DEleted Existing Certificate
SFTPed certificate File
Locate keytool utility:
/usr/lib/jvm/jre1.8.0_231/bin/keytool

Copy certificate:
Certificate: /opt/versa/var/van-app/certificates/versa_director_client.cer

 * Stopping versa-confd
 * Stopping versa-lced
 * -n  ... waiting for versa-lced to exit
 * Stopping versa-analytics-app
 * -n  ... waiting for versa-analytics-app to exit
 * Stopping daemon monitor monit
   ...done.
 * Versa Analytics Stopped
   ...done.
   ...done.
 * Restarting daemon monitor monit
   ...done.
 * Starting versa-analytics-app
 * Versa Analytics Started



             .---.,
            (      ``.
       _     \        )    __      ________ _____   _____
      (  `.   \      /     \ \    / /  ____|  __ \ / ____|  /\
       \    `. )    /       \ \  / /| |__  | |__) | (___   /  \
        \     |    /         \ \/ / |  __| |  _  / \___ \ / /\ \
         \    |   /           \  /  | |____| | \ \ ____) / ____ \
          \   |  /             \/   |______|_|  \_\_____/_/    \_\
           \  | /
            \_|/                   _   _  _   _   _ __   _______ ___ ___ ___
                                  /_\ | \| | /_\ | |\ \ / /_   _|_ _/ __/ __|
                                 / _ \| .` |/ _ \| |_\ V /  | |  | | (__\__ \
                                /_/ \_\_|\_/_/ \_\____|_|   |_| |___\___|___/

[sudo] password for versa: cp: '/opt/versa/var/van-app/certificates/versa_director_client.cer' and '/opt/versa/var/van-app/certificates/versa_director_client.cer' are the same file
Certificate was added to keystore
Certificate Installed

Next, we need to reboot the server.

[Administrator@Director-1: scripts] $ sudo reboot

Broadcast message from Administrator@Director-1
        (/dev/ttyS0) at 22:50 ...

The system is going down for reboot NOW!

Ubuntu 14.04.6 LTS Director-1 ttyS0

Director-1 login: 

Step 7: Log in to the Analytics GUI

Now we log into the Analytics GUI using the northbound interface and port 8080.

Analytics GUI login
SteelConnect Analytics GUI Login

Step 8: Add the Director hostname

After logging into the Analytics GUI we need to add the Director hostname. Recall that earlier, when we set up Analytics from the CLI, we created a hosts entry resolving the Director hostname. To complete this step, navigate to Admin>Authentication and add the Director hostname.

This will match the entry placed in /etc/hosts.
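If you want to sanity-check the entry from a shell on the Analytics node before registering, a minimal sketch (assuming the hostname is Director-1, as in this lab):

```shell
# Confirm the Director hostname resolves via /etc/hosts on the Analytics node.
# "Director-1" is this lab's hostname; substitute your own value.
grep -w 'Director-1' /etc/hosts || echo "Director-1 is not in /etc/hosts yet"
```

If the grep prints nothing and the fallback message appears, add the entry before attempting to register.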

Register the Director
Register the Director

To finish this step, don’t forget to click Register.

Step 9: Add the first organization

Now we are going to return to the Director GUI and add our first organization. We need a top-level “Parent” organization before we can add any controllers.

  1. Return to the Director GUI.
  2. Navigate to Administrator>Organization and click the + button.
  3. Provide the following values:
Name      Subscription Profile
Riverbed  Default-All-Services-Plan
  4. Click on the Analytics Cluster tab.
  5. Add the Analytics Cluster as seen below.

Add an Organization
Add an Organization

After the analytics cluster has been added we need to navigate to the Supported User Roles tab and add all roles for the parent organization.

  1. Click on the Supported User Roles tab.
  2. Click Add All.

Update User Roles
Update User Roles

Finish up by clicking OK.

 

Step 10: Configure the Controller IP

Well, we’re getting close to having a functional headend. If you’re still following along you may be thinking that this is a lot of work. In reality, what we’ve done here is not significant. We’ve brought two of the three devices in our headend up and the process has taken us less than an hour. To add to that, this is something you will only do once. After the headend is up and running you’ll mostly work with templates to apply configurations to onboarded branches. We’ll cover that in another article. However, I digress. Let’s return to the process.

The next step is to deploy the controller. To do so, we need to enable the eth0 interface on the controller itself. Remember that the controller runs SteelConnect EX software, which is the same software as what you will run in the branch. The difference is that it’s defined as a controller in the initial setup. So, let’s follow these steps to bring the controller into the headend deployment.

  1. Connect to the console of the Controller.
  2. Login to the controller using the username and password admin/versa123.
  3. Edit the /etc/network/interfaces file.
[admin@versa-flexvnf: ~] $ sudo nano /etc/network/interfaces
[sudo] password for admin:

In the interfaces file, set the IP address for the controller based on the table below.

IP address       Netmask        Gateway
192.168.122.176  255.255.255.0  192.168.122.1

You can see an example of the configuration file below.

# This file describes the network interfaces available on your system
# and how to activate them. For more information, see interfaces(5).

# The loopback network interface
auto lo
iface lo inet loopback

# The primary network interface
auto eth0
iface eth0 inet static
address 192.168.122.176
netmask 255.255.255.0
gateway 192.168.122.1

Now we need to bounce the interface.

[admin@versa-flexvnf: ~] $ sudo ifdown eth0 
RTNETLINK answers: No such process
[admin@versa-flexvnf: ~] $ sudo ifup eth0   
[admin@versa-flexvnf: ~] $

And of course, we want to ping the Director to make sure we have connectivity. Once this is done we can move on to deploy the controller in the Director GUI.

[admin@versa-flexvnf: ~] $ ping 192.168.122.174
PING 192.168.122.174 (192.168.122.174) 56(84) bytes of data.
64 bytes from 192.168.122.174: icmp_seq=1 ttl=64 time=1.28 ms
64 bytes from 192.168.122.174: icmp_seq=2 ttl=64 time=0.782 ms
^C
--- 192.168.122.174 ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 1000ms
rtt min/avg/max/mdev = 0.782/1.034/1.286/0.252 ms
[admin@versa-flexvnf: ~] $ 

Step 11: Deploy the Controller in the Director GUI

The next step is to deploy the controller in the Director GUI. We’re going to deploy the controller in the Riverbed organization. Remember that this was our parent organization. We can use this organization as our only organization or we can deploy multiple tenants with SteelConnect EX. For our examples in this blog series, we will use a single-tenant, Riverbed. Follow these steps to deploy the controller.

  1. Return to the Director GUI.
  2. Navigate to Workflows>Infrastructure>Controllers
  3. Click the + button to add a workflow.
  4. Provide the following on the General page:
    • Name
    • Provider Organization
    • Check Staging Controller
    • The IP address that you applied in the previous step
    • The Analytics cluster
  5. Click Continue.

Controller General Settings
Controller General Settings

When you enter the IP address of the controller it will test connectivity. You will see this in the window in the form of a spinning image, although it may be brief.

Next, enter the location information. This requires City, State, and Country; clicking Get Coordinates then populates the Latitude and Longitude. Then you can click Continue.

Controller Location Settings
Controller Location Settings

On the next tab, you need to enter the Control Network information, which includes the Network Name, interface, and IP address, as seen in the image below. Click Continue when the values have been entered.

Controller Control Network Settings
Controller Control Network Settings

On the controller, eth0 is connected to the management network. eth1 is connected to the control network, but within the CLI of the controller it is identified as vni-0/0. This means eth2 will be identified in the controller CLI as vni-0/1 and is connected to the MPLS network via the MPLS_SWITCH.
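The offset is easy to remember: eth0 is reserved for management and gets no VNI, so each data port ethN becomes vni-0/(N-1). The mapping for this lab can be sketched as:

```shell
# Print the eth-to-VNI mapping used in this lab's controller layout.
# eth0 is reserved for management, so numbering shifts down by one.
for eth in 1 2 3; do
  echo "eth${eth} -> vni-0/$((eth - 1))"
done
```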

Next, configure the WAN interfaces. This task has multiple substeps as seen below. You need to repeat the following process for MPLS.

  1. Click on the +WAN Interface link on the top right side of the interface.
  2. Create an interface named Internet and select Internet as the Transport Domain.
  3. Click OK.

Create a WAN interface
Create a WAN Interface

Now select the VNI interfaces that connect to Internet and MPLS.

Selecting VNI Interfaces
Selecting VNI Interfaces

In our topology, vni-0/1 is eth2 and vni-0/2 is eth3. This is important because eth0 is connected to the management network, the northbound side of Director, and eth1 is connected to the control network, the southbound side of Director.

Select the appropriate network names, and provide the IP address and gateway for each. There’s a table below to show you the values I used.

Addressing VNI interfaces
Addressing VNI interfaces

 

Network Name  IP address   Mask  Gateway      Public IP
MPLS          10.100.21.3  /24   10.100.21.1  -
Internet      10.100.19.2  /30   10.100.19.1  192.168.122.25

Also, an important step for us here: we need to advertise a public IP address for Internet-only branches to reach the controller. If we fail to add the public IP address here, Internet-only branches will not be able to reach the controller when they are onboarded. We also need to make sure that static NAT and access rules are configured on the perimeter firewall (not shown in this article).
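For reference (since the firewall side is not shown in this article), the 1:1 static NAT might look like the following on a Linux-based perimeter, expressed in iptables-restore format. The addresses are the lab values from the table above; a hardware firewall would express the same mapping in its own syntax.

```
*nat
# Static NAT sketch: public 192.168.122.25 <-> controller Internet uplink 10.100.19.2
-A PREROUTING  -d 192.168.122.25 -j DNAT --to-destination 10.100.19.2
-A POSTROUTING -s 10.100.19.2    -j SNAT --to-source 192.168.122.25
COMMIT
```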

To finish things up, click Deploy. You should then see a popup asking you to create the overlay addressing scheme. Be careful here to allocate this addressing based on the sizing of your organization: using a /24 would limit you to around 256 branch sites, as this space is used to address each site in the SD-WAN fabric.

In the following output, I have entered the IPv4 prefix for the overlay addressing pool, as well as the maximum number of organizations, as seen below. A /16 gives the fabric roughly 65K site addresses to draw from, far more than we will ever need.

IPv4-Prefix    Maximum Organizations
10.254.0.0/16  16
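The arithmetic behind this sizing is quick to check. A minimal sketch, where the even per-organization split is my assumption rather than something the popup states:

```shell
# Host-address count for the overlay prefix, plus the per-organization share
# if the pool were split evenly (assumption) across the 16 organizations.
prefix=16; orgs=16
total=$(( 1 << (32 - prefix) ))
echo "addresses=$total per_org=$(( total / orgs ))"
```

The same arithmetic with a /24 yields 256 addresses, which is where the 256-site limit mentioned above comes from.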

Create Overlay Addressing Scheme
Create Overlay Addressing Scheme

I’ll wrap this up by clicking Update.

Note at the bottom of the Director GUI that the controller workflow is immediately deployed.

View progress
View Progress

Step 12: View the progress of the controller deployment

There is a Tasks view that we can open to see the progress. You can access it in the Director GUI by clicking the Tasks icon, the checklist-shaped icon on the top right-hand side of the interface. This opens a list of tasks that you can expand and view, as seen below. In the following output, you can see that the controller was deployed, and the running messages show what happened at each step of the deployment behind the scenes.

Progress details
Progress Details

Step 13: Log in to the Controller CLI and confirm the deployment

Next, we are going to connect to the command line of the controller and have a look at how to verify the deployment there.

In the following output, you can see that I have accessed the CLI.

[admin@Controller-1: ~] $ cli


             .---.,
            (      ``.
       _     \        )    __      ________ _____   _____
      (  `.   \      /     \ \    / /  ____|  __ \ / ____|  /\
       \    `. )    /       \ \  / /| |__  | |__) | (___   /  \
        \     |    /         \ \/ / |  __| |  _  / \___ \ / /\ \
         \    |   /           \  /  | |____| | \ \ ____) / ____ \
          \   |  /             \/   |______|_|  \_\_____/_/    \_\
           \  | /
            \_|/                   _  _ ___ _______      _____  ___ _  _____
                                  | \| | __|_   _\ \    / / _ \| _ \ |/ / __|
                                  | .` | _|  | |  \ \/\/ / (_) |   / ' <\__ \
                                  |_|\_|___| |_|   \_/\_/ \___/|_|_\_|\_\___/



admin connected from 127.0.0.1 using console on Controller-1
admin@Controller-1-cli>

Once I’m in the CLI, I can use the show interfaces brief | tab command to view the interfaces that have been configured. You can see a sample of that output below. Let’s dig into what we’re seeing here.

admin@Controller-1-cli> show interfaces brief | tab
NAME         MAC                OPER  ADMIN  TENANT  VRF                    IP                  
------------------------------------------------------------------------------------------------
eth-0/0      0c:5d:40:be:eb:00  up    up     0       global                 192.168.122.176/24  
tvi-0/2      n/a                up    up     -       -                                          
tvi-0/2.0    n/a                up    up     1       Riverbed-Control-VR    10.254.16.1/32      
tvi-0/3      n/a                up    up     -       -                                          
tvi-0/3.0    n/a                up    up     1       Riverbed-Control-VR    10.254.24.1/32      
tvi-0/602    n/a                up    up     -       -                                          
tvi-0/602.0  n/a                up    up     1       Riverbed-Control-VR    169.254.0.2/31      
tvi-0/603    n/a                up    up     -       -                                          
tvi-0/603.0  n/a                up    up     1       Analytics-VR           169.254.0.3/31      
vni-0/0      0c:5d:40:be:eb:01  up    up     -       -                                          
vni-0/0.0    0c:5d:40:be:eb:01  up    up     1       Riverbed-Control-VR    10.100.3.12/24      
vni-0/1      0c:5d:40:be:eb:02  up    up     -       -                                          
vni-0/1.0    0c:5d:40:be:eb:02  up    up     1       MPLS-Transport-VR      10.100.21.3/24      
vni-0/2      0c:5d:40:be:eb:03  up    up     -       -                                          
vni-0/2.0    0c:5d:40:be:eb:03  up    up     1       Internet-Transport-VR  10.100.19.2/30      
vni-0/3      0c:5d:40:be:eb:04  down  down   -       -                                          
vni-0/4      0c:5d:40:be:eb:05  down  down   -       -                                          

[ok][2020-03-16 17:06:45]
admin@Controller-1-cli>

In the above output, the IP address assigned to the Riverbed-Control-VR on tvi-0/2.0 is 10.254.16.1. This is from the subnet that we defined as the overlay network when we deployed the controller (remember the popup?). The IP address applied to the MPLS-Transport-VR is 10.100.21.3. This is the IP address that you applied to the MPLS interface vni-0/1. The IP address applied to the Internet-Transport-VR is 10.100.19.2. This is the IP address that you assigned to vni-0/2 when you deployed the controller in the Director interface.

Now, this output brings up a very good question. We know what the VNIs are; we assigned IP addresses to them when we onboarded the controller. VNI stands for Virtual Network Interface, and they are virtual in the sense that the controller software maps them to a physical interface on the hardware. For example, since eth0 is used for management, the SteelConnect EX software maps eth1 to vni-0/0, which is the control network; eth2 gets mapped to vni-0/1, and eth3 to vni-0/2. But what are these TVIs? We will save the full discussion of that topic for another article, but so that we understand what we are looking at here: a TVI is a Tunnel Virtual Interface, and a TVI is not mapped to a physical interface. There are two of each TVI because SteelConnect EX sets up an unencrypted channel as well as an encrypted channel.

tvi-0/2      n/a                up    up     -       -                                      
tvi-0/2.0    n/a                up    up     1       Riverbed-Control-VR    10.254.16.1/32  
tvi-0/3      n/a                up    up     -       -                                      
tvi-0/3.0    n/a                up    up     1       Riverbed-Control-VR    10.254.24.1/32  
tvi-0/602    n/a                up    up     -       -                                      
tvi-0/602.0  n/a                up    up     1       Riverbed-Control-VR    169.254.0.2/31  
tvi-0/603    n/a                up    up     -       -                                      
tvi-0/603.0  n/a                up    up     1       Analytics-VR           169.254.0.3/31

Step 14: Configure a static route for Director

We are so close! This is the final step of the headend deployment, and it is important! We now have to tell the Director how to reach the SteelConnect EX Control-VRs, or we will not be able to onboard our branches. Recall that the Director has two interfaces: Management and Control. The default route points to the management interface, but the 10.254.0.0/16 overlay network is reachable on the control, or southbound, side. This is how the Director connects to the branches via SSH and delivers NETCONF commands. If you miss this step, it just doesn't work. So, let's wrap this up. Follow these steps:

  1. From the director command line edit the /etc/network/interfaces file.
  2. Add the following line under eth1, the Southbound/Control network.

SSH to Director
SSH to Director

Enter the following command.  You can see an example in the image below.

post-up route add -net 10.254.0.0 netmask 255.255.0.0 gw 10.100.3.12
Add route to overlay
After you save the interfaces file with the route added, you need to bounce the eth1 interface.

 

sudo ifdown eth1
sudo ifup eth1

Next, we make sure that the route has been applied.

admin@Director-1:~$ netstat -rn
Kernel IP routing table
Destination     Gateway            Genmask         Flags   MSS Window  irtt Iface
0.0.0.0         192.168.122.1      0.0.0.0         UG        0 0          0 eth0
10.100.2.0      0.0.0.0            255.255.255.0   U         0 0          0 eth0
10.100.3.0      0.0.0.0            255.255.255.0   U         0 0          0 eth1
10.254.0.0      10.100.3.12        255.255.0.0     UG        0 0          0 eth1
admin@Director-1:~$
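If you want to script this verification, the same check can be run against a captured copy of the routing table. A small sketch using this lab's values:

```shell
# Confirm the overlay route exists and egresses via eth1 toward the controller.
# "table" holds a captured line from `netstat -rn`; in practice, pipe the live output.
table='10.254.0.0      10.100.3.12        255.255.0.0     UG        0 0          0 eth1'
echo "$table" | awk '$1 == "10.254.0.0" && $NF == "eth1" { print "overlay route via " $2 }'
```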

And just like that, we have a headend ready to onboard. Let’s take a minute to review what we’ve done here.

Wrap up

We’ve covered a lot of ground in this article. The good news is that this is the most difficult part of the deployment (and it wasn’t even that difficult). But here is what we’re left with at the end of this article:

  • The Director has been configured.
  • Analytics has been configured.
  • We have GUI access to the Director and Analytics.
  • The Controller has been configured.
  • VNI’s and TVI’s are up on the controller.

Please stay tuned for more articles in this series as we onboard branches, configure routing and traffic steering, and explore the many, many technical features of SteelConnect EX.

SD-WAN Data Center Integration https://www.riverbed.com/blogs/sd-wan-data-center-integration/ Wed, 18 Mar 2020 12:30:00 +0000 https://live-riverbed-blog.pantheonsite.io?p=14448
We continue our learning journey on SteelConnect EX, Riverbed’s Enterprise SD-WAN offering. This time, we are going to address one of the hottest and most complex topics when leading a major transformation like SD-WAN: the integration of the solution in your data center.
Unfortunately, a blog post would not be long enough to detail all the possible options, and it would be foolish of me to try to address this topic exhaustively: there are as many data center designs as there are enterprise customers. As a result, I am going to focus on the main principles an architect should follow when integrating SteelConnect EX into their network, and some good questions to ask yourself.

Data center = Head-End

In a previous post, we reviewed the components of the solution:

  • Director is the component responsible for the management plane of the SD-WAN fabric;
  • Analytics is offering visibility on the network by collecting metrics and events via IPFIX and Syslog from branch gateways;
  • Controller is in charge of the control plane for the SD-WAN Fabric;
  • Branch Gateways – also known as SteelConnect EX appliances

The Director, Analytics, and Controller form what we call the Head-End. Although they can be hosted in a traditional data center, they (and specifically the Controller) are not part of the data plane; therefore, a "branch" gateway will be required in the data center to join this particular site to the SD-WAN fabric.

Starter or dessert, that is the question

In any case, the first brick to deploy should always be the Head-End: whether it is hosted in your data center, in the Cloud, in a dedicated site or a managed service/hosted service.

Then, shall we start the rollout of the SD-WAN infrastructure with the data center or keep it for the end? This is a question that pops up all the time, and the best answer is: it depends.

Data centers are traditionally more complex networks so my preference is to start here then the rest of the rollout will be easier and incremental. Additionally, since the data center is terminating most of the connections from branch offices that are consuming apps, you can quickly benefit from offloading traffic from MPLS to Internet uplink and leverage path resiliency features (FEC, Packet Racing, load balancing…) along with Application SLAs to enhance the user experience. Furthermore, as we deploy SD-WAN gateways in remote sites, we can track the performance of the data center appliances and validate initial assumptions made for the sizing.

Nevertheless, there are cases where it can make sense to conclude the rollout with the data center. It really depends on your drivers for adopting SD-WAN, your constraints (say a network freeze for a given period in the data center) and how you will be able to get immediate value. For example, should Direct Internet Breakout be a requirement for you to offload your MPLS and enhance the performance for SaaS or Cloud based applications, deploying gateways in the remote sites first will certainly deliver value. There is no need for the data center to be ready in that case. Another example could be routers’ end of life. Should you need to replace your routers in the branches, a SteelConnect EX appliance can be installed as a plain router first. SD-WAN features can be enabled at a later stage.

There are no good or bad answers here. Review your drivers for adopting SD-WAN and plan accordingly.

The golden rules

Deploying SteelConnect EX in your data center should be hassle free as long as you follow these few rules:

  • It is a router! As long as you are using standard routing protocols like BGP and OSPF, you can deploy the gateway the way you want. As opposed to most of the other solutions on the market, with SteelConnect EX you will benefit from all the bells and whistles of the routing protocols so you have full control and a lot of flexibility.
  • The controller must be on the WAN side of the gateway. Should you deploy the Head-End in the data center, you need to make sure that the only way for the appliance to form overlay tunnels with the controller is from its WAN interfaces.
  • The data center gateway can't sit between the controller and remote-site gateways. This is the corollary of the previous rule. Should you deploy the Head-End in the data center and, for example, replace the MPLS CE router with the SD-WAN gateway, you need to make sure that the controller has a different connection to MPLS or, if that's not possible, the controller should only be reachable via the Internet.
  • The data center gateway can’t get access to the control network. It is a best practice to keep the control network that interconnects the Head-End components together (see our previous post about the architecture) isolated. As a result, should you deploy the Head-End in the data center, make sure the Control network subnet does not leak into the LAN. Use firewalls, Access Lists or routing redistribution policies to avoid that behavior.
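To illustrate the last rule, leaking of the control subnet can be blocked with ordinary routing policy. The sketch below uses FRR/Cisco-style syntax and assumes the control network is 10.100.3.0/24 (the value used in the headend lab earlier) and that routes are redistributed from BGP into an OSPF-based LAN; the names and protocols are illustrative, so adapt them to your environment.

```
! Keep the Head-End control subnet out of the LAN IGP (illustrative values)
ip prefix-list CONTROL-NET seq 5 permit 10.100.3.0/24
!
route-map NO-CONTROL-LEAK deny 10
 match ip address prefix-list CONTROL-NET
route-map NO-CONTROL-LEAK permit 20
!
router ospf
 redistribute bgp route-map NO-CONTROL-LEAK
```

A firewall rule or access list at the Head-End boundary achieves the same isolation if you prefer not to touch redistribution.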

Examples and counter-examples

In the following example, the Head-End is hosted in a different site or Cloud hosted or a managed service. The data center appliances are inserted between the aggregation layers and the CE routers.

Note that it is not always possible to grant the controllers direct access to all WANs, in particular for a Cloud-hosted setup. As long as there is network connectivity between all the SD-WAN gateways and the controllers, this is fine. This will be the topic of a future blog post.

We could easily replace the CE routers as well. At the moment, though, our appliances only offer Ethernet (copper or fiber) NICs.

For risk-averse organizations that want to adopt SD-WAN with minimal disruption, it is also completely fine to deploy the following architecture. Here, the data center gateways are out-of-path and rely on route attraction (and, conversely, route retraction if a route disappears from the SD-WAN overlay network).

In the above example, there is only one connection depicted between the SteelConnect EX gateway and the WAN distribution router. In reality, we would need one per uplink (in this example, three connections: MPLS A, MPLS B and Internet) plus one for the LAN side. However, we could also rely on VLANs and have trunk connection(s) to transport LAN and WAN traffic.

We can achieve high-scalability and high-throughput by horizontally scaling the number of gateways. This deployment is called Hub Cluster and can be seen in the following topology example.

In the previous examples, the Head-End was not hosted in the data center. For organizations requiring all components to be deployed on-premises, solutions exist; simply follow the golden rules. The following setup is not supported because the controllers sit on the LAN side of the gateways.

 

A potential solution to comply with that rule is depicted as follows:

Note that in order for the gateways to communicate with the controllers via the Internet uplink, they will need to use the controllers' public IP addresses. Indeed, when the Director pushes the configuration down to the appliance, if a public IP address is set up on the controller's Internet uplink, that public IP address will be part of the configuration, not the private one. Therefore, the firewall should be configured to allow that communication.

It may happen that there are no LAN interfaces left on the CE routers; in this case, you could have the controllers connected only to the Internet. However, you would need to make sure that all SD-WAN sites have network reachability to the controllers, either with a direct Internet connection or an Internet gateway within the MPLS cloud.

Should you keep your WAN distribution routers, having data center gateways and controllers at the same level will work too.

Checklist for a successful implementation

All data center networks are different. There are questions to ask yourself when you are approaching a design. Here is a list which does not pretend to be exhaustive:

What are our goals and drivers? The answer to that question should remain at the center of all decisions and answers to the following questions.

  • How many remote sites?
  • What are our throughput requirements today and in the coming months?
  • What are your requirements in terms of service resiliency, SLAs?
  • What are the routing protocols in use? Can we use BGP or OSPF?
  • Are we replacing the CE routers with the SD-WAN gateways or not?
  • Can we integrate with WAN distribution routers?
  • Do we need hardware appliances or will we go virtual?
  • What are the interface type and speed requirements?
  • Is there a WAN optimization solution in place?
  • Can we allocate public IP addresses to the controller?
  • How will we deploy the controller?
  • Are we using the data center as a hub or transit site?
  • Are there firewalls in the network path?

What have we learned today?

A data center is just “another branch” that requires its own SD-WAN gateway appliance—even if you host the Head-End here.

Please note that in the upcoming version 20.2, it will be possible to use a gateway appliance as a controller too; it will assume both roles at the same time. However, we will always need at least one dedicated primary controller. More details to come in a future post.

The SteelConnect EX is a router. Leverage all your routing knowledge to deploy it in your data center.

A question, a remark, some concerns? Please don’t hesitate to engage us directly on Riverbed Community.


Five Must-Haves for Unified Network Performance Management (NPM) https://www.riverbed.com/blogs/5-must-haves-for-unified-npm/ Tue, 17 Mar 2020 12:30:00 +0000 https://live-riverbed-blog.pantheonsite.io?p=14352 We’re all used to the back and forth: integrated platform or standalone, best-of-breed? “Tools that can correlate multiple classes of network data are more effective in all NPM (network performance management) use cases,” according to industry analyst Enterprise Management Associates (EMA). To help in your search for an integrated platform, EMA lists five must-haves for unified NPM below:

#1: Diverse data collection and analysis

NPM tools that correlate multiple data sources provide better insight into application performance, security events, anomaly detection, and ultimately end-user experience. They gather more than packet data: device metrics, flow records, test results (ping and traceroute), logs, synthetic traffic, and even events and data pulled from other systems.

#2 Workflows for key unified NPM use cases

Unified NPM solutions should support workflows and functionality for each of the key use cases listed:

  • Performance monitoring
  • Troubleshooting
  • Security monitoring and response
  • Capacity management
  • Cloud application migration assessment

#3 Platform scalability

NPM tools must be able to do everything at scale: collect, process, store, and analyze. The amount of data is always expanding and includes data from all your physical and virtual networks, whether they are in the cloud or on-prem. Even if you are not supporting IoT (Internet of Things) edge devices, you will probably need to do so at some point in the future.

#4 Data granularity

With so much data to collect, process, store, and analyze, the temptation is to aggregate the data. But then what do you do when you need to drill into the details? Do you have the raw data captured at high frequency that allows you to drill down and gain critical insights?
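To make the point concrete, here is a quick sketch (with made-up utilization numbers) of what aggregation costs you: a one-minute roll-up completely hides a three-second spike that raw per-second samples would reveal.

```python
# Illustrative only: per-second link utilization samples with one
# brief spike. A 1-minute average makes the window look healthy;
# the raw samples expose the 98% burst.
samples = [20.0] * 60          # one sample per second, steady 20%
samples[30:33] = [98.0] * 3    # a 3-second spike to 98%

minute_avg = sum(samples) / len(samples)
peak = max(samples)
print(f"1-min average: {minute_avg:.1f}%  peak in raw data: {peak:.1f}%")
# → 1-min average: 23.9%  peak in raw data: 98.0%
```

If the one-minute average is all you stored, the spike, and whatever user complaint it caused, is gone for good.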

#5 AIOps-driven NPM

Artificial intelligence for IT operations, or AIOps, is used to very quickly surface anomalies and detect patterns in large volumes of data. With AIOps, enterprises can better automate key network management processes, including network traffic analysis, root cause analysis, capacity management, and security remediation. It’s no wonder that 92% of enterprises are using or want to use AIOps-driven NPM, according to EMA.
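As a toy illustration of the principle (this is a generic statistical baseline, not Riverbed’s or EMA’s algorithm), flagging samples that sit far from the mean is the simplest form of automated anomaly surfacing; real AIOps engines layer on seasonality, forecasting, and cross-source correlation.

```python
import statistics

def anomalies(samples, threshold=2.5):
    """Return indices of samples more than `threshold` standard
    deviations away from the mean of the series."""
    mean = statistics.fmean(samples)
    stdev = statistics.pstdev(samples)
    return [i for i, v in enumerate(samples)
            if stdev and abs(v - mean) > threshold * stdev]

traffic_mbps = [100, 102, 98, 101, 99, 100, 400, 97, 103, 100]
print(anomalies(traffic_mbps))  # → [6], the 400 Mbps burst
```

The value of AIOps is doing exactly this kind of scan continuously, across millions of samples, without a human staring at dashboards.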

Do you have others to add to the list? Or, if you’d like to learn more about the big five must-haves for unified NPM, check out the infographic.

 

]]>
Six Blind Men and the Elephant https://www.riverbed.com/blogs/six-blind-men-and-the-elephant/ Tue, 03 Mar 2020 13:30:00 +0000 https://live-riverbed-blog.pantheonsite.io?p=14356 I was recently reminded of the story of the blind men and the elephant when I read the statistic that 67% of enterprises have 3-6 network performance management (NPM) tools installed. These teams struggle with problem detection and spend more time on reactive troubleshooting than their counterparts who use more integrated NPM tools.

Back to the elephant

For those unfamiliar with the story, six blind men come into contact with an elephant for the first time. The first man touches the elephant’s solid side and says that the elephant is exactly like a wall. Then the second touches the tusk and says he is round and sharp like a spear. He is followed by a third man who feels the trunk and says snake. The fourth wraps his arms around the leg and says that an elephant is like a tree. The fifth the ear and says it is like a fan. And the sixth grabs the elephant by the tail and says he is exactly like a rope. Each man comes to his own conclusion based on his own data points and his own previous knowledge. At last, the elephant moves on, yet the blind men continue arguing, each one believing that he was absolutely right.

You’re probably nodding your head at this point, especially if you work in NetOps. IT environments are becoming more complex, more distributed, and more dynamic. And infinitely harder to manage. Yet, without a well-performing and secure network, your digital transformation initiatives and workforce productivity are put at risk.

Patchwork doesn’t work

Like with the elephant, getting information on only part of the picture in isolation makes it difficult for you to resolve the complex problems that can impact your business-critical applications. Many teams are like the blind men, arguing based on their own data and unable to collaborate and form a complete, unified picture. Only by bringing multiple sources of data together can you see the whole of the elephant.

“Patching together legacy tools and disparate solutions doesn’t work. Instead, it reduces agility and efficiency, diminishes the user experience, and drives up costs,” according to ESG’s new report.

Pulling the sources of data together

Here is a short list of sources of data that you will need to integrate and analyze to troubleshoot and proactively resolve network issues effectively and efficiently:

  • Device metrics
  • Flow records
  • Packets
  • Tests, like ping and traceroute
  • Logs
  • Synthetic traffic
  • Events/data collected from other IT systems

Conquer fragmentation with integrated NPM tools

Best-in-class integrated NPM tools collect raw data in tight intervals, store as much data as possible, and facilitate drill-downs into the data that provide critical insights. Learn how to conquer fragmentation and integrate your approach to NPM with Riverbed in this new report.

]]>
The Unpredictability of Office 365 Performance in a Work-from-Home Culture https://www.riverbed.com/blogs/unpredictability-office-365-performance-in-work-from-home-culture/ Tue, 25 Feb 2020 13:30:00 +0000 https://live-riverbed-blog.pantheonsite.io?p=14265 I often talk about modern workforces and how we have evolved from the 9 to 5 culture of going to work at some office or branch, and 8-hours later we come home. Sort of funny to even think that those days ever existed when today we’re expected to be able to respond 24 x 7, no matter where we are. We work from airports, coffee shops, planes and trains… I work from a ferry… we work from client sites far away from home and we work, of course… from home. And we work from home a lot. The expectation is that we are responsive so that we are never the bottleneck between a happy customer, a growing pipeline, a new design coming to market or a social media campaign being launched.

And to be responsive, we need our collaboration and file sharing apps to respond too—these days, often apps that sit in the cloud like Office 365 (O365). In other words, we need these apps to perform on demand.

Technically speaking, latency plays a significant role in whether or not an app performs as we expect, when we need it. Often the misconception is that network bandwidth does the trick—but in reality, while adding bandwidth to a network can help streamline traffic, and ultimately allow us to get the most bang for our network spend, it doesn’t much assist with the experience a user has of an application. That requires a change in latency, and perhaps a boost that comes from tools purpose-built to accelerate apps regardless of latency.

An easy way to think about this: a drive from San Francisco to New York takes 44 hours. Even if I add more lanes to the freeway to make space for more cars, the drive is still going to take roughly 44 hours. Apps behave similarly. Unless the latency changes between the starting point and end point, my app response time will remain as is.
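The driving analogy can be put into a back-of-the-envelope model (the numbers below are illustrative, not measurements): for a chatty transfer, total time is round trips multiplied by RTT plus the raw time on the wire, so multiplying bandwidth barely moves the needle, while cutting round trips, which is what application acceleration does, changes everything.

```python
# Rough model of a chatty file transfer: time = waiting on latency
# + time the bytes spend on the wire. All numbers are illustrative.
def transfer_time(size_mb, bandwidth_mbps, rtt_ms, round_trips):
    wire = size_mb * 8 / bandwidth_mbps       # seconds moving bits
    waiting = round_trips * rtt_ms / 1000.0   # seconds stalled on RTTs
    return wire + waiting

base = transfer_time(129, 50, rtt_ms=196, round_trips=2000)
more_bw = transfer_time(129, 500, rtt_ms=196, round_trips=2000)   # 10x bandwidth
fewer_rt = transfer_time(129, 50, rtt_ms=196, round_trips=100)    # fewer round trips
print(f"baseline: {base:.0f}s, 10x bandwidth: {more_bw:.0f}s, "
      f"fewer round trips: {fewer_rt:.0f}s")
# → baseline: 413s, 10x bandwidth: 394s, fewer round trips: 40s
```

With latency dominating, ten times the bandwidth saves a few seconds; reducing the number of round trips saves minutes.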

SF to NYC
Courtesy of Google Maps

Interestingly, I was running a test while working from my home office yesterday. Pretty straightforward stuff. Sitting in Marin County just north of San Francisco, one would think that I am relatively close to an O365 cloud PoP (point of presence), and my latency low.

At Riverbed these days we often talk about the incredible IT complexity of navigating today’s hybrid enterprise networks and apps. We talk about how unpredictable application performance can be as a result. But it wasn’t until yesterday that I realized just how significant that statement is. We absolutely NEED applications to perform to get our jobs done and done well for our company. But as network conditions seem to change like the wind, the performance of our business apps follows suit.

So I was in my home office uploading a very large file with embedded video and graphics yesterday to SharePoint as a part of my recent endeavor to better understand the impact of application performance on business outcomes. In this case I was looking at the difference between a cold upload with the Riverbed SaaS Accelerator cloud service, versus an upload that was NOT enabled with Riverbed SaaS Accelerator.

In this particular case, while the cold file upload to O365 using SaaS Accelerator performed 45% faster than the upload of the same file without SaaS Accelerator, it took a little longer than I expected. Mind you—I was working from a home network (for me it was Comcast Xfinity), but we all do that, so it’s a reasonable test.

After the upload, I decided to ping my system to see what my average latency was telling me. FROM THE SF BAY AREA, where you would expect latencies to always be low for O365, my latency average at this particular time was 196ms. You would think I was on the other side of the world! Comcast is getting a call from me!

Ping!
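If you want to repeat this check, the average comes straight out of ping’s summary line. Here a captured sample line (mirroring the 196ms reading above) stands in for a live run:

```shell
# Extract the average RTT from ping's summary (Linux iputils format;
# macOS labels the line "round-trip" instead of "rtt").
summary='rtt min/avg/max/mdev = 12.310/196.042/310.221/40.118 ms'
avg=$(printf '%s\n' "$summary" | awk -F'/' '{print $5}')
echo "average latency: ${avg} ms"
# → average latency: 196.042 ms
```

Live, you would pipe something like `ping -c 10 <your O365 endpoint>` through the same awk; which hostname to ping depends on your tenant.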

Later in the day, I did a warm upload of the same large file, also using SaaS Accelerator. First of all, the warm upload performed over 4,000% faster than the original cold upload (4413.51% to be exact, going from several minutes to just a few seconds), in large part a testament to Riverbed application acceleration. I also checked the latency at my home office. Now it was 91ms. Not as low as I would expect, but improved since earlier for whatever reason.

Ping!

So again, we talk about our modern workforces accessing varying networks, and the unpredictability of application performance because of always changing conditions. As we work from home and other places such as airports, client sites, coffee shops and so on, IT teams may have no control over the conditions employees encounter as they need applications to help them execute their jobs.

So the moral of this story for enterprises with many employees who are making things happen at any hour of the day:

  • Application performance is incredibly unpredictable in today’s digital climate
  • The biggest impact on app performance—latency—will change for reasons that are out of IT control
  • Riverbed SaaS Accelerator can make sure business apps like O365 perform—no matter what the conditions may be.

Incidentally, I recently noted on LinkedIn that a common misconception in a global enterprise is that SD-WAN alone will eliminate application performance concerns. But as we have discussed in this blog, ensuring the applications we invest in always perform at their best requires us to take on latency—and latency, perhaps even harder to tame than the network itself, is incredibly unpredictable in today’s digital world! Here’s a short video from my partner Brandon Carroll, CCIE #23837, introducing a way to address both.

]]>
24×7 Enterprise Apps: Office 365 on Planes, Trains, Automobiles and Home Offices Part 2 https://www.riverbed.com/blogs/24x7-enterprise-apps-o365-part-2/ Tue, 18 Feb 2020 13:30:00 +0000 https://live-riverbed-blog.pantheonsite.io?p=14243 How would you feel about 150% faster time to revenue?

Revenue growth

If you read Part 1 of this story, you know that my personal experience and observations in the first few weeks of having Riverbed SaaS Accelerator running to boost the performance of my Office 365 (O365) has been noticeably better.

But just how much would you believe me if I didn’t CLOCK IT?

So that’s what I set out to do. Mind you—I’m really not a naturally technical person, but I’ve been around the networking space for a little more than 20 years, so I suppose I have learned a thing or two…

These days when I speak with sellers, partners and enterprise customers, we often find ourselves considering the many ways users access the apps they use to do their jobs.  It’s become what I call ‘the Planes, Trains and Automobiles’ talk. I mean—in reality, in these digital times we try our best to be available wherever we happen to be. And more often than not, that can also mean a lot of logging in from home and local coffee shops! So that’s what I decided to test first: my home and my local Equator Coffee in Larkspur, California.

THE GOAL: To prove that the experience I shared on stage and in the previous blog wasn’t just a fluke—and so that I could share proof with YOU!

Coffee shop

Working at Equator Coffee, Larkspur
Equator Coffee, Larkspur, CA

In this initial test, I decided to VPN in from my local coffee shop to simulate a real-world backhaul scenario. First, I would clock an upload of a large file to OneDrive, enabled WITH Riverbed SaaS Accelerator. Then I would clock the same file upload to Dropbox, NOT enabled with SaaS Accelerator. I used the stopwatch on my phone for this test and hit START at the same time as I hit the UPLOAD button. In a future blog I’ll show you what happens when I do this without the VPN too. All of the tests here are cold uploads. In another blog I’ll get into the distinctions between first-effort cold and subsequent warm uploads. Here’s what happened:

OneDrive (with SaaS Accelerator enabled for O365 and Client Accelerator enabled on my laptop)

  • 129MB ppt upload
  • Avg latency 91.614ms
  • 1 min 55 sec upload
  • VPN active

 

DropBox (no SaaS Accelerator)

  • 129MB ppt upload
  • Avg latency 31.675ms
  • 5:04+++

It’s important to note what I mean by adding the ‘+++’ after the 5:04. That just means that the file was not finished uploading and I got frustrated and shut it down before the upload completed. I mean—has this happened to you? You’re working outside the office and doing some sort of file share to an enterprise SaaS platform, and the upload or download takes so long that you get distracted and walk away, putting off what you were focused on for another time? I wonder how much work time we all waste on this sort of frustration?

Anyway, the conclusion here was that the OneDrive upload through the VPN with SaaS Accelerator was more than 225% faster—and since the upload done without SaaS Accelerator never finished, who knows how long it would ultimately have taken.

Incidentally, I looked at the latency from the coffee shop, and as you can see from the averages noted above, the latency in this case was manageable. Imagine if my latencies were even higher—as they can often be for employees who are often on the road and mobile.

Now let’s take a look at my home office.

Home office

And so I went home and ventured to test from there. After all, many of us work from home regularly—whether it’s logging in at night to get an urgent something out to a customer, to meet a deadline, or working some days of the week from a home office, working from home is hardly unusual behavior in 2020. In fact, this morning I was reviewing a survey of 104 executives at large enterprises done by one of our teams. This was focused on the use of enterprise SaaS applications, and 78% of those surveyed noted ‘home’ as a place where they regularly access O365.

For this test, I decided to use a slightly larger file, and also go direct to the Internet. It was a relatively arbitrary choice, but in this case, I looked at this without the VPN. Here’s what resulted:

OneDrive (SaaS Accelerator enabled)

  • 173MB ppt upload
  • Avg latency 73ms
  • 39-second upload

 

Stopwatch on my cell phone

DropBox (no SaaS Accelerator)

  • 173MB ppt upload
  • Avg latency 21ms
  • 2:37.47 minutes
  • What I noticed: a lot of hanging, wondering when the file was going to finish uploading; risk of losing patience as I did in the coffee shop

With Riverbed SaaS Accelerator, the upload was 75% faster
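For the curious, here is the arithmetic behind a claim like “75% faster.” Note that there are two common conventions for “X% faster,” and posts in this series draw on both; the home-office numbers above give 75% under the time-saved convention:

```python
def pct_time_saved(baseline_s, accelerated_s):
    """(1 - new/old) * 100: 'the upload took 75% less time'."""
    return (1 - accelerated_s / baseline_s) * 100

def pct_speedup(baseline_s, accelerated_s):
    """(old/new - 1) * 100: the convention behind figures like 4413%."""
    return (baseline_s / accelerated_s - 1) * 100

baseline = 2 * 60 + 37.47    # Dropbox upload: 2:37.47 → 157.47 s
accelerated = 39.0           # OneDrive with SaaS Accelerator: 39 s
print(f"time saved: {pct_time_saved(baseline, accelerated):.0f}%")  # → 75%
print(f"speedup:    {pct_speedup(baseline, accelerated):.0f}%")     # → 304%
```

It is always worth checking which convention a vendor (or a blogger with a stopwatch) is using before comparing numbers.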

Now by no means is this meant as a negative on either SaaS application. Whether it’s O365 or Dropbox or Box or Salesforce or otherwise, these modern tools have given us new roads into collaboration and sharing on a global scale that we really were unable to achieve by way of old-school data center-based application approaches.

However, the question now becomes this: are we getting the most out of these applications into which we are investing hundreds of thousands and millions of budget dollars on behalf of our companies?

What happens when we apply the concept of file sharing to and from a SaaS cloud such as O365 to 100, 200, 1,000 or more revenue-generating employees uploading and downloading files every business day in order to get a new product to market, to execute a time-sensitive mission, to collaborate on a big R&D project or automotive design, to process orders, connect with a customer, or any other business-critical transaction?

And if we take just the average acceleration of the two examples I have noted here (225% faster and 75% faster, respectively), how would your business be impacted if every SharePoint and OneDrive action performed 150% faster?

]]>
SteelConnect EX SD-WAN Architecture Overview https://www.riverbed.com/blogs/steelconnect-ex-sdwan-architecture-overview/ Thu, 13 Feb 2020 13:30:00 +0000 https://live-riverbed-blog.pantheonsite.io?p=14296 The holiday season is just over, and while I was watching my kids take apart their brand new toys—and telling them they probably should not—I remembered that I was exactly the same years ago. I wanted to understand how things were built and how that cool new 1/18 racing car was able to reproduce the sound of an actual engine and light up.

The truth is, even as a grown-up, I still enjoy drilling down and getting my hands dirty. I like to understand how things work under the hood. That helps me anticipate the capabilities and limitations of a product, beyond shiny marketing announcements.

If you are like me and interested in SD-WAN, you are in the right spot: we are going to explore the world of SteelConnect EX, Riverbed’s Enterprise SD-WAN offering.

In this first episode of the series, we are going to discuss the overall architecture of the Riverbed SD-WAN solution.

Components

Following SDN’s disaggregation principles, the SteelConnect EX enterprise SD-WAN solution comprises several components:

  • Director is the component responsible for the management plane of the SD-WAN fabric;
  • Analytics offers visibility into the network by collecting metrics and events via IPFIX and Syslog from branch gateways;
  • Controller is in charge of the control plane for the SD-WAN fabric;
  • Branch Gateways—also known as SteelConnect-EX appliances—are the SD-WAN appliances deployed in the various sites. They are available in various form factors including hardware, virtual and cloud (for IaaS platforms like AWS, Azure…). Gateways implement the data plane and will be deployed in all SD-WAN sites: data centers, hubs, cloud (IaaS) and offices.

SteelConnect EX architecture
SteelConnect EX architecture

Each of the components can be deployed in High-Availability mode.

Each of those components is multi-tenant. All of them. Even the Branch Gateways! This will be the topic of a dedicated upcoming blog post.

Head-Ends

Director, Analytics and Controller are the three components that we call Head-Ends. They can be deployed in a data center, in the Cloud (Azure, AWS…) or hosted and operated by a Telco Service Provider on their network.

SteelConnect-EX Head-Ends
SteelConnect-EX Head-Ends

Director

The Director is a management system for the provisioning, management and monitoring of the SD-WAN infrastructure. It means that we can:

  • Create configuration templates for networking, SD-WAN policies (overlays, path selection, path resiliency features…), security and so on.
  • Manage the gateways’ full lifecycle (onboarding, configuration, firmware upgrades, RMA…)
  • Monitor and get alerts

Director can be configured via a web GUI, RESTful APIs or even CLI.
Director pushes the configuration to the Branch Gateways via NETCONF. The NETCONF commands are routed via the Controller.

Director
Director

Director offers Role-Based Access Control (RBAC), which means that one can delegate the management of a portion of the network to different individuals or teams.

Director can integrate with third-party solutions as well and orchestrate the deployment of virtual SteelConnect-EX gateways on private and public clouds.

Visibility and monitoring with analytics

SteelConnect Analytics is a big data solution that provides real-time and historical visibility, baselining, correlation, prediction and closed-loop feedback for SteelConnect EX software-defined solutions.
The key features include:

  • Policy-driven data logging framework
  • Reporting for multiple networks and security services
  • Real-time and historical traffic usage and anomaly detection
  • Multi-organizational reporting

Analytics collects IPFIX and Syslog from the gateways via the Controller.

SteelConnect Analytics

Analytics is an optional component of the solution but highly recommended to get visibility into the SD-WAN fabric.

Controller

From a software point of view, a Controller runs the exact same code (i.e., the same firmware) as a Branch Gateway. When onboarded on the Director, that particular appliance is given a role, the controller role, and will be in charge of the control plane.

The Controller is in charge of onboarding SD-WAN gateways into the network. It uses IKE and PKI certificates to authenticate branch SteelConnect-EX appliances.

From a routing point of view, a Controller acts as a route reflector for SD-WAN branches. When one branch gateway advertises a route to the Controller, it will be “reflected” to all other SD-WAN gateways (within a specific Transport Domain; we will discuss this in a following article). In fact, in addition to route information, the Controller also reflects the Security Association (SA) information so that the destination branches in the same VPN can establish secure data channels with each other.

The Controller enables IPSEC connectivity between SD-WAN sites without the overhead of maintaining a full mesh of IKE keys among all branches. This optimization reduces the complexity and overhead of maintaining N² links and keys. The control plane between the Controller and the SteelConnect-EX appliances distributes IPSEC keys to the branch nodes.

The Controller will never route user traffic (data plane). The tunnels formed with branch appliances are only used for the control plane: routing (MP-BGP), security key information, NETCONF via SSH, IPFIX, probing… It means that, should you deploy the Head-Ends in your data center, you will need an SD-WAN gateway there too to send traffic across the SD-WAN fabric.

The controller will route control traffic between the Head-Ends and the SD-WAN gateways via the overlay network.
A Controller can handle up to 2,500 sites. Should we need to scale to higher numbers, we can scale horizontally and add more Controllers.
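The scaling claims above are easy to sanity-check (2,500 sites per Controller is the figure quoted in this post; the functions below are just arithmetic, not product logic):

```python
import math

def full_mesh_tunnels(branches):
    """A full IPsec mesh needs n*(n-1)/2 tunnels (and IKE exchanges);
    controller-distributed keys grow only linearly with branch count."""
    return branches * (branches - 1) // 2

def controllers_needed(sites, sites_per_controller=2500):
    """Horizontal scaling: one Controller per block of 2,500 sites."""
    return math.ceil(sites / sites_per_controller)

print(full_mesh_tunnels(500))    # → 124750 tunnels without a route reflector
print(controllers_needed(6000))  # → 3 Controllers for 6,000 sites
```

At 500 branches the full mesh is already over a hundred thousand tunnels, which is why the route-reflector design matters.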

Network topology

Director, Analytics and Controller are colocated and connected to one another by a Control Network (Southbound for Analytics and Director). All communications between Analytics, Director and the SD-WAN gateways are done via the Control Network and routed by the Controller.

This Control Network is not routed or advertised on the rest of the network.

Head-End Network Topology
Head-End Network Topology

A Management Network is also configured to expose GUI and APIs to the administrators as well as third-party tools.

To conclude

In this first episode of the series about SteelConnect-EX, we highlighted the roles of the four main components of the solution: the three Head-End devices (Director, Analytics and Controller) and the SteelConnect-EX gateways that are deployed in all SD-WAN sites.

In the following post, we are going to have a look at the routing principles of the SteelConnect-EX gateways.

If you enjoyed it or if you have questions, feel free to leave a comment or engage us on the Riverbed Community web site and Twitter.

Watch video 

 

]]>
Portal: Central Management Now “Free” to AppResponse Customers https://www.riverbed.com/blogs/portal-central-management-now-free-to-appresponse-customers/ Wed, 12 Feb 2020 13:30:00 +0000 https://live-riverbed-blog.pantheonsite.io?p=14235 With the latest release of Portal version 3.3, central management workflows such as managing Host Groups and application definitions are now available to all AppResponse customers without having to purchase a Portal license. However, using Portal’s “dashboard” capabilities to show performance data from any data source(s) still requires a valid Portal license.

Let’s dig into this in a little more detail. There are currently two main parts of Portal:

  1. Dashboards, which allow you to bring data from AppResponse, NetProfiler, NetIM, Aternity End User Experience Monitoring, Aternity Application Performance Monitoring and/or third parties into a single, curated view that can be shared with executives, line of business, app owners, etc.
  2. Central management, which makes managing large numbers of AppResponse appliances easier

So what specifically does Central Management do? It streamlines the management of distributed AppResponse appliances with features such as:

  • Software upgrade orchestration: Allows you to remotely update connected AppResponse appliances (virtual, physical or cloud appliances). Just upload a valid AppResponse update ISO obtained from the Riverbed support site and Portal pushes it out to the selected AppResponse appliances during the remote update procedure.
  • System & traffic health status: Lets you monitor system and traffic health metrics for connected AppResponse appliances. These metrics include disk health, chassis health, monitoring interface drops, time sync, and power supply. Typical red-yellow-green LED status values come directly from the AppResponse System Health info while tool tips provide the status reason. Clicking any LED indicator takes you to the corresponding page on the appliance for more info.
  • Users & roles management: Starting from an AppResponse appliance centrally managed by Portal, you can clone that system’s roles and users to others. You can add, edit, or remove a role or user, just like you would on a local AppResponse, and these changes will be pushed to any remote systems. The distribution column tells you how many AppResponse systems the role or user is distributed to.

Portal can manage AppResponse User Accounts and Administer Roles
Portal can manage AppResponse User Accounts and Administer Roles

  • App & host group definitions: You can also centrally manage your host group and app definitions. App definitions include general, URL, and Web apps. You can create, edit and delete apps within Central Manager; push, remove and import apps on remote appliances; and apply tags to the apps and host groups.

Portal Central Management simplifies the process of managing remote AppResponse environments. If you don’t already have it, and you have several AppResponse appliances, you need to get it! It will take the hassle out of upgrading, monitoring, and managing your AppResponse appliances.

To get started, simply download Portal 3.3 from the Riverbed support site.

 

]]>
24×7 Enterprise Apps: Office 365 Performance on Planes, Trains, Automobiles and Home Offices, Part 1 https://www.riverbed.com/blogs/o365-performance-acceleration-saas-acceleration-part-1/ https://www.riverbed.com/blogs/o365-performance-acceleration-saas-acceleration-part-1/#comments Tue, 11 Feb 2020 13:30:00 +0000 https://live-riverbed-blog.pantheonsite.io?p=14221 Alison Conigliaro-Hubbard presenting at Riverbed SKO

I was standing on a stage in front of an audience of sellers last week sharing a personal experience I recently had using some of Riverbed’s modern application performance technology. I was fortunate to be one of the early internal users of Riverbed SaaS Accelerator for Office 365 (O365) and Client Accelerator as IT rolls these out companywide—sometimes when you know people in the right places it works out quite nicely. Before the holidays I started using these on my system—my goal was to experience them for a few weeks myself before our annual Sales Kickoff (SKO) with the hope of being able to share my excitement!

In my role I do a lot of file sharing on OneDrive and SharePoint, and leading up to this very important annual event for our global sales teams, as someone responsible for the content, I share some really big files all day every day as we aim to hit deadlines. I also do not work from the office in San Francisco every day. Some days I work from my home office in Marin County. Sometimes I work from a coffee shop. Sometimes an airport or a client site. Like many of us these days, work doesn’t go away when I leave the office. In order to stay customer-focused, the reality is that I am generally available wherever I might be.

App experience can vary dramatically

Unfortunately, depending on where I am, the experience I have of the apps I need to collaborate and get my work done—and in my case O365 apps such as SharePoint and OneDrive—can vary dramatically. Technically speaking, networks change all the time depending on where I log in, and so like most of us, I end up with fairly unpredictable performance—sometimes slow, sometimes fast, sometimes not at all. Often inconsistent. Not exactly reliable.  And in the enterprise when time equals money for my company, consistent, reliable, and fast apps are a difference maker!

So leading up to Riverbed SKO, I am working with some very heavy files—ones you might equate to mega design files as a manufacturing company or AEC firm. These can be 900MB+ files! And I am uploading and downloading to and from OneDrive several times a day.

For a few weeks I had been working with Riverbed SaaS Accelerator in the background as I spent the holidays in a hotel in Southern California, and worked in a variety of locations. Riverbed SaaS Accelerator is cloud-based software that maximizes performance for enterprise SaaS apps, and in my case, specifically assigned by my IT organization as an insurance plan to make sure O365 apps perform as expected. I also have Riverbed Client Accelerator installed on my laptop. (If you’re reading this and have used Riverbed for WAN Optimization over the years, what you may not know is that today Riverbed makes it super easy to accelerate performance of critical SaaS applications like O365 and others, so that no matter where we may be working at any given moment, we are always set up to make things happen!)

Is this thing going to upload?

Anyway, it’s a couple days before SKO and I had to upload this 940MB file to OneDrive to share with my colleagues for final review. I’m working from home on this day and things are minute to minute, deadline-driven as we are only days ahead of the most important internal event of the year. I was a little nervous before pressing the upload button—almost wishing I could transport myself to the office by snapping my fingers just to access the network there! Is this thing going to upload???

Riverbed SaaS Accelerator for O365 - Uploading a Large File

Not only did it upload—but it was FAST! Now I didn’t clock it because I just did it, and from my personal user experience of it—it took no time at all—not even close to my low expectations. And this is when I checked in with myself and noticed something really interesting… it’s like I had this AH HA MOMENT!

I had SaaS Accelerator running in the background for a few weeks, and out of nowhere I felt like my entire experience of O365 had changed. I trusted O365 to just do what it was supposed to do—I was getting things done as soon as I wanted them to be done. It was just WORKING! It was fast and reliable, and it was consistent no matter where I was or how big a file I threw at it.

But wait, there’s more!

After the 940MB file upload to OneDrive, I ALSO had to upload this same file to an external Dropbox folder, because I needed to get the file to show organizers who did not have access to our O365. Unfortunately, Dropbox was NOT enabled with a SaaS Accelerator license. So for this file upload I needed to walk away and do other things because the upload just hung there. And hung there. Ultimately it took well over an hour.

And so this is the anecdotal story I shared on stage with the Riverbed sellers in my session last week. And if you like that… just wait until you read what happened when I got home from SKO and decided to CLOCK IT! 

]]>
https://www.riverbed.com/blogs/o365-performance-acceleration-saas-acceleration-part-1/feed/ 1
Increasing Visibility Into Network and Application Performance is Key to Driving Business Innovation https://www.riverbed.com/blogs/increasing-visibility-network-application-performance-is-key/ Tue, 11 Feb 2020 00:33:05 +0000 https://live-riverbed-blog.pantheonsite.io?p=14306 Rethink Possible: Visibility and Network Performance — The Pillars of Business Success

There is an indisputable correlation between having effective technology in place and company health. So much so, that seven in ten C-Suite decision makers (70%) believe business innovation is driven by improved visibility into network and application performance. In addition, 86% of C-Suite and IT decision makers (ITDMs), and 87% of business decision makers (BDMs), believe digital performance is increasingly critical to business growth.

This is according to Riverbed’s latest report ‘Rethink Possible: Visibility and Network Performance – The Pillars of Business Success’, which surveyed more than 1,700 technology executives across six countries to discover their attitudes to innovation, productivity, human behaviour and IT capabilities.

Slow running systems and a lack of visibility directly impact growth

A key aspect of digital performance is a seamlessly working network infrastructure. Disappointingly, this infrastructure is often poorly implemented and maintained. Three-quarters of respondents to Riverbed’s survey reported feeling frustrated by their current network performance, with IT infrastructure given as the key reason for the poor performance. This is believed to have a direct impact on productivity, creativity and innovation, with almost half of the C-Suite (49%) believing that slow-running and outdated technology is directly impacting the growth of their businesses.

It’s clear that businesses can’t shy away from implementing new technology if they want their company to succeed. And this technology must go beyond improving performance: it needs to deliver full and consistent visibility into the whole digital journey, so issues can be identified and resolved without requiring input from users. At present, one in three ITDMs don’t have full visibility over their network and applications. This must change for user experience to be maximised, networks to be optimised, and businesses to be future-proofed so they remain competitive.

Business priorities and challenges are evolving, technology must too

An overwhelming 95% of all respondents recognise that innovation and breaking boundaries are crucial to business success. As a consequence, 80% of BDMs and 77% of the C-Suite believe that investing in next-generation technology is vital, while over three quarters (76%) of ITDMs acknowledge that their IT infrastructure will have to change dramatically in the next five years to support new ways of doing business. Given their understanding of the importance of embracing innovative technology for future proofing their business, all leaders must take action to ensure they have the infrastructure in place to support their company through the changing business landscape.

It’s time to rethink what’s possible and evolve the digital experience

In this vein, 80% of all leaders (82% C-Suite, 84% BDMs and ITDMs) agree that businesses must rethink what’s possible to survive in today’s unpredictable world. Technology is the enabler in this process, so all leaders must come together to invest in the right solutions that embrace visibility and optimised network infrastructure as the next frontier in business success. This is key not only to driving innovation and creativity but to attracting new talent. As business priorities and challenges evolve, it will be the companies that are willing to embrace technology that flourish, while their competitors fall by the wayside. Ensure your business is one of those primed for success.

To find out more, download the full report Rethink Possible: Visibility and Network Performance – The Pillars of Business Success, view the infographic, and join in the conversation.

]]>
Protecting End Users in an SD-WAN World https://www.riverbed.com/blogs/protecting-end-users-in-an-sdwan-world/ Mon, 03 Feb 2020 13:30:19 +0000 https://live-riverbed-blog.pantheonsite.io?p=14157 When it comes to an SD-WAN deployment, we tend to spend a lot of time thinking about connectivity, reachability, protocols, traffic steering and so on. One area that we sometimes overlook is SD-WAN security. It’s easy to do. Take, for example, a network deployment with several MPLS branches. All traffic is backhauled to the data center and then pushed through our high-end firewalls. The security group handles the firewalls. The infrastructure group handles the WAN and routing. Everyone has their own lane to stay in. Life is okay. But now the infrastructure group is talking about SD-WAN and how it’s going to help save money. The plan is to replace our WAN-edge routers with Riverbed’s SteelConnect EX SD-WAN solution. From that replacement, we gain the ability to move to multiple lower-cost Internet circuits, perform application identification, and select paths based on path quality. Our routing protocols are compatible. All the bases seem to be covered, or are they?

Does SD-WAN deployment require backhaul?

Once an SD-WAN deployment is in place and Internet circuits are in use we look at how we can improve performance for our end-users. Backhauling user data over an Internet-based VPN can add latency and cause the end-user to experience delays.

Backhaul

This obviously impacts the human experience, and we need to avoid that. This happens to be one of the benefits of an SD-WAN deployment. With Internet circuits deployed at each branch, we can shave off some of that latency by sending select traffic directly to the Internet. Examples of the type of traffic that is normally sent “direct-to-net,” as it’s referred to, are Microsoft Office 365, Salesforce, and Workday traffic.

Direct-to-Net

What this translates to is the WAN-edge device now being required to perform Network Address Translation (NAT) and, at a minimum, a stateful firewall service. This allows outbound sessions to be tracked in a state table. Inbound traffic is checked against that table to determine whether it is a valid reply to an existing outbound connection. If it is, the traffic can pass; if not, it is discarded. The good news is that the Riverbed SteelConnect EX SD-WAN solution provides this capability, and a whole lot more.
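
To make the NAT-plus-state-table idea concrete, here is a minimal Python sketch of the bookkeeping a direct-to-net edge performs for outbound flows and their replies. The class, names, and addresses are illustrative assumptions for this sketch only, not the SteelConnect EX implementation:

```python
# Toy sketch of NAT at a direct-to-net branch edge: outbound flows get a
# public source port, replies are translated back, and anything with no
# mapping is treated as unsolicited and dropped.
import itertools

class SimpleNat:
    def __init__(self, public_ip):
        self.public_ip = public_ip
        self._ports = itertools.count(40000)   # next free public port
        self.table = {}                        # pub_port -> (priv_ip, priv_port)
        self.reverse = {}                      # (priv_ip, priv_port) -> pub_port

    def translate_out(self, priv_ip, priv_port):
        # Create (or reuse) a mapping for an outbound flow.
        if (priv_ip, priv_port) not in self.reverse:
            pub_port = next(self._ports)
            self.table[pub_port] = (priv_ip, priv_port)
            self.reverse[(priv_ip, priv_port)] = pub_port
        return self.public_ip, self.reverse[(priv_ip, priv_port)]

    def translate_in(self, pub_port):
        # Only replies to an existing mapping are deliverable; a miss
        # returns None, i.e. the unsolicited packet is discarded.
        return self.table.get(pub_port)

nat = SimpleNat("198.51.100.1")
print(nat.translate_out("10.0.0.5", 51000))   # ('198.51.100.1', 40000)
print(nat.translate_in(40000))                # ('10.0.0.5', 51000)
print(nat.translate_in(40001))                # None -> dropped
```

The key point the sketch captures is that the drop decision comes for free: an inbound packet either matches state created by an earlier outbound packet, or it has nowhere to go.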

SteelConnect EX SD-WAN security capabilities

The SteelConnect EX offers a rich security feature set that is license-based. There are three license levels:

  1. Secure SD-WAN Essentials, which includes stateful and next-generation firewall (NGFW) capabilities
  2. Secure SD-WAN Standard, which also includes stateful and NGFW capabilities
  3. Secure SD-WAN Advanced, which includes stateful, NGFW, and Unified Threat Management features

We will discuss these capabilities in the following sections.

Stateful Firewall

The stateful firewall provides full visibility into the traffic that traverses the firewall and enforces fine-grained access control on that traffic. To begin making use of this capability you must classify traffic: the process of identifying and separating traffic in a manner that makes it identifiable to the firewall service. To classify the traffic, the stateful firewall verifies its destination port and then tracks the state of the traffic. SteelConnect EX monitors every interaction of each connection until the session is closed.

stateful firewall

The stateful firewall grants or rejects access based not only on port and protocol but also on the history of the packet in the state table. When the SteelConnect EX stateful firewall receives a packet, it first checks the state table for an established connection or for a request for the incoming packet from an internal host. For example, when an internal host establishes an HTTP session to an external server, it begins by establishing a TCP session via the SYN, SYN-ACK, ACK exchange. Until that three-way handshake is completed, the flow of packets is not considered a “session.” Therefore, when a TCP SYN is sent outbound from an internal host, it is entered into the state table. The returning SYN-ACK is verified against the information in the state table. If nothing is found, the packet’s access is subject to the access policy rules.

An access policy rule gives us a way to decide if traffic can pass even if it does not match an entry in the state table. An example of this would be ICMP traffic. ICMP is not a stateful protocol, so therefore we could say in an access policy rule that all ICMP traffic inbound is allowed, regardless of a state entry or not. Most of the time an access policy is used to allow inbound access to services such as Web and FTP servers. This is not very common for the branch office.
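
The decision order described above (state table first, access policy rules second) can be sketched as a small Python function. The function and field names are hypothetical illustrations, not vendor code:

```python
# Toy model of an inbound decision: a packet that belongs to a tracked
# session passes; otherwise it falls through to the access policy rules
# (e.g. "allow all inbound ICMP" for a stateless protocol).

def evaluate(packet, state_table, policy_rules):
    """Return 'pass' or 'drop' for an inbound packet."""
    key = (packet["src"], packet["sport"],
           packet["dst"], packet["dport"], packet["proto"])
    # 1. Match against state created when the outbound SYN was seen.
    if key in state_table:
        return "pass"
    # 2. No state entry: the packet is subject to the access policy rules.
    for rule in policy_rules:
        if rule["proto"] == packet["proto"] and rule["action"] == "allow":
            return "pass"
    return "drop"

# Entry created when an internal host (10.0.0.5) sent a SYN to port 443;
# it describes the expected return traffic.
state_table = {("203.0.113.9", 443, "10.0.0.5", 51000, "tcp")}
policy_rules = [{"proto": "icmp", "action": "allow"}]

# Returning SYN-ACK matches the state entry.
print(evaluate({"src": "203.0.113.9", "sport": 443, "dst": "10.0.0.5",
                "dport": 51000, "proto": "tcp"}, state_table, policy_rules))
# Inbound ICMP has no state entry but matches an access policy rule.
print(evaluate({"src": "8.8.8.8", "sport": 0, "dst": "10.0.0.5",
                "dport": 0, "proto": "icmp"}, state_table, policy_rules))
```

Both packets print `pass`; an unsolicited inbound TCP packet would match neither the table nor a rule and would be dropped.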

NGFW

The next-generation firewall (NGFW) is a robust security module that has the intelligence to distinguish different types of traffic. Recall that the stateful firewall used ports, protocols, and IP addresses to identify traffic and create an entry in the state table. The NGFW provides network protection beyond what ports, protocols, and IP addresses alone can offer. In addition to traditional firewall capabilities, the NGFW includes filtering functions such as an application firewall, an intrusion prevention system (IPS), TLS/SSL encrypted traffic inspection, website filtering, and QoS/bandwidth management.

next generation firewall

These features can all be enabled, based on your license, and applied to a group of devices. Expect some performance impact when implementing these features; it should be nominal, but weigh the need for each feature against its impact before rolling it out to a large number of sites. I like to look at these features as a toolbox filled with specialty tools: not every situation requires a hammer, so figure out which tool you need for your situation and implement it accordingly.

Unified Threat Management

SteelConnect EX includes Unified Threat Management (UTM) capabilities, which can be turned on by configuring the threat profiles in the NGFW policy rules. This means that UTM requires the use of the NGFW first.

The following threat profiles are supported:

  • Antivirus
  • Vulnerability (IDS/IPS)

SteelConnect EX has a built-in antivirus engine that scans live traffic for threats. To accomplish this, the antivirus engine waits until the last byte of a file is received before processing the entire file at runtime. You will need to configure at least one antivirus profile to enable the scanning of files for viruses.

built-in antivirus engine

To enable and enforce an antivirus profile, an NGFW policy rule must be configured. When configured, the antivirus profile applies to all traffic that matches the security policy rule. Taking things a step further, the antivirus profile extracts files from certain types of traffic, such as HTTP, FTP, and common email protocols. As you might have guessed, the protocols the antivirus engine extracts files from are the ones commonly used to transmit these types of threats.

When a file is extracted from one of these protocols it is buffered, forwarded to the destination (with the exception of the last packet), and scanned. If a virus is found the profile action is applied, otherwise the last packet is sent.

An antivirus profile supports the following enforcement actions:

  • Alert—Alerts the user when a virus is found. Virus information is stored in a log file.
  • Allow—Allows the file through without scanning it.
  • Deny—Aborts the flow on which the virus file is received.
  • Reject—Resets both the client and server connections.
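
The buffer-forward-and-hold behaviour plus these enforcement actions can be sketched roughly as follows. This is a simplified illustration under assumed semantics (for instance, that an alert-only profile still delivers the file), not Riverbed’s engine:

```python
# Toy model of the hold-the-last-packet scan flow: every packet of an
# extracted file except the last is forwarded immediately; the last packet
# is held while the reassembled file is scanned, and the profile action
# decides whether it is ever released.

def deliver_file(packets, is_infected, action="deny"):
    """Return the list of packets the destination actually receives."""
    delivered = list(packets[:-1])      # all but the last packet go through
    if not is_infected(b"".join(packets)):
        delivered.append(packets[-1])   # clean file: release the held packet
    elif action == "alert":
        delivered.append(packets[-1])   # assumed: log the virus, deliver anyway
    # 'deny' / 'reject': the held last packet is never sent, so the
    # receiver cannot reassemble a complete, usable file.
    return delivered

packets = [b"chunk1", b"chunk2", b"chunk3"]
clean = deliver_file(packets, lambda data: False)
print(len(clean))                       # 3 -- complete file delivered
blocked = deliver_file(packets, lambda data: b"chunk2" in data, action="deny")
print(len(blocked))                     # 2 -- last packet withheld
```

The design point worth noting is the latency trade-off: forwarding all but the last packet keeps the transfer flowing, while withholding that final packet is enough to neutralize an infected file.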

Final thoughts

In this article, we’ve discussed three levels of SD-WAN security capability featured in the Riverbed SteelConnect EX SD-WAN solution. Knowing that these features are available can help determine how branch traffic is handled. If the decision is to backhaul all Internet-bound traffic to the data center, there won’t be much need for these advanced security features beyond basic protection for the device itself. If the decision is to enhance the user experience by sending specific traffic “direct-to-net,” then these features should certainly be discussed, and the degree to which they are implemented will need to be determined. All in all, the SteelConnect EX solution provides a proper degree of protection for branch traffic when Internet uplinks are made available.

But what about performance? A decision to backhaul will provide some benefits but more is involved in ensuring the user experience is the best it can be. For example, Microsoft services are regional and users may still experience less than ideal performance. The same is true for other SaaS offerings, largely due to the location of the services. For this, I urge you to have a look at the Riverbed SaaS Accelerator service. SaaS Accelerator combined with SteelConnect EX provides the highest level of WAN connectivity, branch security, and end-user performance, focused on enhancing user productivity.

]]>
Are You Digitally Competent? https://www.riverbed.com/blogs/are-you-digitally-competent/ Tue, 28 Jan 2020 13:30:41 +0000 https://live-riverbed-blog.pantheonsite.io?p=14144 The role of digital competence in digital transformation

Everybody talks about digital transformation, but how can you be sure it’s working for your company? In other words, how do you ensure you’re getting the business performance you expect from your digital investments?

This is where digital competency comes into play. This is how you translate the vague promises of digital transformation into on-the-ground, bottom-line digital performance, which in turn drives business outcomes that can make a real difference to your enterprise.

What are digital competencies? They are a whole spectrum of technology skills and processes you need to master to compete in the new economy. Digital competence includes everything from IT infrastructure automation and modernization to digital product and service innovation to digital talent management and much more.

From digital competence to business performance

Hundreds of respondents to a recent Economist Intelligence Unit survey said that 80% of digital competencies matter for the business, and two-thirds said that they’re producing positive business outcomes—such as faster speed to market, greater agility and innovation, more revenue, bigger margins, and perhaps most important, a better customer experience.

In fact, delivering a great experience for users—including both customers and employees—has become a reliable predictor of great business performance. For example, one study showed that improving UX by as little as 1% can lead to a 100X boost in business growth. A poor UX, on the other hand, can have the opposite effect. For example, the Aberdeen Group found that just a one-second delay in page load times leads to 11% fewer page views, 16% lower customer satisfaction, and a 7% loss in customer conversion.

Across cultures, there is a common desire for a simple and streamlined user experience. That’s why many companies are setting up internal app stores where employees can go and get what they need to be productive at work. I believe that more enterprises should start asking their employees, in effect, “how would you like to work?” and then try to deliver that experience.

Closing the gap between IT and other teams to improve digital competence

Still, building a great user experience—and developing other digital competencies—is harder than you think. The Economist survey, for example, revealed that misunderstandings between the IT department—which often plays a leading role in developing digital competencies—and other parts of the organization remain a stumbling block. In many cases, IT tends to overestimate the readiness of non-IT folks, while business leaders tend to assume that IT understands the business perspective. In the survey, nearly two-thirds of respondents said that poor communication between IT and other departments limits their organizations’ digital competencies. About 61% of IT people said their non-IT leaders do not understand the technical complexity of digital systems.

I believe IT leaders should take the lead in closing the communications gap. CIOs can start by forging a closer partnership with the CEO and helping to define their company’s business and technology strategies. In my opinion, the CIO should think and act more like the CEO. This will require tech leaders to learn another competency: translating the technology aspects of digital transformation into the business language that CEOs and board members can relate to.

That may explain the trend towards appointing chief digital officers, or CDOs, who are not only responsible for overseeing back-office IT tasks, but who also set the vision for and lead the company’s digital transformation. That’s one of the most important digital competencies you can have.

]]>