Key Benefits of Aternity’s Intelligent Service Desk for Digital Employee Experience
https://www.riverbed.com/blogs/key-benefits-of-intelligent-service-desk-for-dex/ | Thu, 29 Feb 2024

In today’s rapidly evolving digital landscape, enterprises face the constant challenge of enhancing digital employee experiences while optimizing their IT operations. The surge in workspace applications and heightened service expectations among digital nomads have led to a critical rise in incident volume and complexity. Overwhelmed Service Desks struggle with ticket loads, causing inefficient resource allocation, inconsistent IT service, and increased costs. Short-staffed IT teams often focus on non-impactful monitoring events, prolonging issue resolution and increasing error rates. Traditional automated solutions often fall short, offering limited capabilities and narrowly focused remediation scripts. Furthermore, without streamlined user feedback, employee frustration goes unnoticed, preventing IT from gaining a complete understanding of the situation.

Enter Aternity’s Intelligent Service Desk, a game-changer in the realm of IT automation. Aternity’s AI-powered Intelligent Service Desk proactively addresses recurring device issues before they become tickets. Using its LogiQ Engine and customizable runbooks, Aternity replicates advanced investigations by correlating end-user impact and real-time performance data to pinpoint incident root causes. Aternity dynamically models expert decision-making and integrates sentiment surveys into its remediation workflows, resolving issues before human intervention is required.

Here are eight compelling reasons why your enterprise should embrace this transformative technology:

Prevent incidents

With its AI-enabled issue detection and correlation, Aternity’s Intelligent Service Desk proactively identifies application and device issues before they escalate into full-blown incidents. Aternity employs the right combination of remediation actions, decision-making and user feedback to effectively resolve an issue before a ticket is raised. This prevents service disruptions, keeping your workforce productive while eliminating costs associated with raising a ticket.

Improve AI outcomes

With Aternity’s full-fidelity telemetry, embedded AI and intelligent automation, IT can expect superior outcomes in incident resolution. Many DEX tools often lack the granularity required to pinpoint underlying issues accurately. To make the most of AI models, companies need DEX platforms that can ingest and correlate large amounts of data across devices, applications, and the network. Furthermore, effective AI/ML models require data that is centralized, complete, granular, and stable to map dependencies and build contextual models. With its ability to process high fidelity data, Aternity delivers intelligence and precision for remediation.

Intelligently ticket with your ITSM tools

Aternity seamlessly integrates with existing IT Service Management (ITSM) tools, such as ServiceNow. For any unresolved issues that are more complex or nuanced, Aternity will create, escalate and route a ticket with the right priority to the right team. By feeding user-centric and dynamic insights directly into tickets, Aternity streamlines the ticketing process, significantly reducing time associated with manual diagnostics while ensuring swift resolution.
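
For illustration, here is a minimal Python sketch of what programmatic ticket creation can look like against ServiceNow’s standard Table API. This is not Aternity’s actual integration, just a hand-rolled example; the instance URL, credentials, and field values are placeholder assumptions.

```python
import requests

# Hypothetical instance and credentials -- placeholders, not real values.
INSTANCE = "https://example.service-now.com"
AUTH = ("svc_account", "secret")  # basic auth for brevity

def create_incident(short_description: str, description: str, urgency: int) -> str:
    """File an incident via the ServiceNow Table API and return its number."""
    resp = requests.post(
        f"{INSTANCE}/api/now/table/incident",
        auth=AUTH,
        headers={"Accept": "application/json"},
        json={
            "short_description": short_description,
            "description": description,  # correlated telemetry and probable root cause
            "urgency": str(urgency),      # 1 = high, 2 = medium, 3 = low
        },
        timeout=10,
    )
    resp.raise_for_status()
    return resp.json()["result"]["number"]

ticket = create_incident(
    "Outlook crash loop on LAPTOP-4211",
    "Add-in faulting since 09:12; automated remediation did not resolve.",
    urgency=2,
)
print(f"Escalated as {ticket}")
```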

Empower human ingenuity

Human decision fatigue is a major challenge due to the overwhelming volume of tasks and information. AI offers a solution by enhancing decision-making through intelligent automation and insights. By automating repetitive, low-value tasks, organizations can reduce decision fatigue, empowering Digital Workplace teams to proactively address digital experience issues and expedite decision-making. With its Intelligent Service Desk capabilities, Aternity frees up time for employees so they can focus on innovation and creativity.

Improve the voice of the user

By integrating user feedback into its Intelligent Service Desk workflows, Aternity ensures that the voice of the user is heard. Effective response to user feedback is paramount in driving positive DEX outcomes. Traditional feedback mechanisms often suffer from inefficiencies, with critical insights getting lost in the noise of irrelevant data. Sentiment surveys enable organizations to correlate and streamline user feedback processes. By prioritizing and resolving issues based on user feedback and impact, Aternity improves user happiness.

Improve energy efficiency

As part of its Sustainable IT capabilities, Aternity offers automation and actionable insights for managing energy consumption and carbon emissions at both the individual and organization level. By proactively addressing device issues and optimizing performance, Aternity helps improve energy efficiency across your enterprise. With Aternity Intelligent Service Desk, enterprises can automate power settings on devices based on consumption patterns or the user’s profile.
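
As a rough sketch of the underlying mechanism on Windows (not Aternity’s runbook engine), the snippet below switches a device to the built-in Power Saver scheme when telemetry shows long daily idle time; the idle threshold is an invented parameter.

```python
import subprocess

# Built-in Windows power-scheme GUIDs.
BALANCED = "381b4222-f694-41f0-9685-ff5bb260df2e"
POWER_SAVER = "a1841308-3541-4fab-bc81-f71556f20b4a"

def apply_power_plan(avg_idle_hours_per_day: float, threshold: float = 4.0) -> None:
    """Pick a power scheme from observed idle time and activate it via powercfg."""
    scheme = POWER_SAVER if avg_idle_hours_per_day >= threshold else BALANCED
    subprocess.run(["powercfg", "/setactive", scheme], check=True)

# Example: telemetry shows this device idles about six hours per working day.
apply_power_plan(avg_idle_hours_per_day=6.0)
```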

Reduce IT costs

By automating and resolving recurring issues, Aternity helps reduce IT costs significantly through incident prevention. Aternity’s Intelligent Service Desk capabilities have helped enterprises save more than $10 million annually through reduced ticket volume, proactive outreach and fewer manual tasks.

Implement a VIP Service Desk

With Aternity’s customizable runbooks and self-service remediation capabilities, enterprises can implement a VIP Service Desk tailored to the needs of valuable users. By delivering higher service levels and personalized support based on user status or location, Aternity helps enhance the digital experience for VIP users, driving increased loyalty and satisfaction.

By leveraging intelligent automation, AI-driven insights, and seamless integration with ITSM tools, Aternity Intelligent Service Desk empowers organizations and Service Desk teams. This technology sets a new standard for proactive, user-centered IT service, driving efficiency, reducing costs, and ultimately fostering a more productive, satisfied workforce. In a world where digital agility and resilience are paramount, Aternity’s Intelligent Service Desk is essential for enterprises aiming to thrive in the competitive digital landscape.

Creating a Sustainable Device Lifecycle Management Practice
https://www.riverbed.com/blogs/sustainable-device-lifecycle-management/ | Tue, 02 Jan 2024

E-waste is quickly piling up. According to the United Nations University Global E-waste Monitor, e-waste is the fastest-growing waste stream in the world—with mobile phones and PCs making up nearly 10% of that total stream.

End-user devices, in particular, require organizations’ attention when it comes to mitigating environmental impact: Gartner reports these devices constitute a majority of IT’s carbon footprint.

Poor device lifecycle management is a major contributor to that footprint. There are several preventative, proactive practices organizations can take to extend the current lifecycle of their devices—which optimizes their use and curbs their environmental impact. In fact, 83% of business leaders report that successful sustainability initiatives create significant short- and long-term value for their organization.

IT teams may worry that making changes to their device lifecycle management processes will result in downtime, inefficiencies, and performance issues. However, robust device lifecycle management actually enables higher productivity and better performance by mitigating the waste of time, resources, and actual physical devices. Here’s how organizations can ramp up their device lifecycle management to improve their environmental, social, and governance (ESG) outcomes.

Reduce intake: conduct a comprehensive device inventory audit

Organizations can use existing inventory to minimize their contribution to e-waste and optimize their current resources. By conducting a comprehensive device inventory audit, IT leaders can gain the visibility they need into the devices they have, which helps prevent unnecessary device additions or performance-affecting device reductions. More specifically, an inventory audit can help leaders:

  • Track lifecycle stages. Audits can catalog devices by their lifecycle stage, informed by purchase date, warranty status, and maintenance history. This gives IT leaders a more accurate understanding of how usable or up-to-task each device is, which helps teams maximize its lifetime use. It can make the difference between throwing away a perfectly good device because it’s “old” and extending an “old” device’s lifetime value, preserving money and resources.
  • Optimize device usage. Understanding device inventory allows organizations to assess device usage. IT leaders can easily reallocate underutilized devices, prevent unnecessary new purchases, extend the lifespan of existing assets, and reduce e-waste.
  • Streamline device budgeting. When IT leaders know all the devices in their inventory, what they’re capable of, and where they are in their lifecycle, they can forecast future device needs with greater accuracy. This allows for better budget allocation and prevents overspending on unnecessary resources, which also reduces an organization’s carbon footprint.

Inventory audits also reframe corporate attitudes around older devices, and therefore, mitigate waste by extending their lifetime use. For starters, older devices are not necessarily useless devices. Based on Gartner’s research, while most organizations still set three to four-year refresh cycles for employee laptops, organizations have found that only a small fraction of those devices have performance metrics that would justify replacement within that time frame. Extending their life span represents millions of dollars in potential cost savings.
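
To make the contrast with calendar-based refresh concrete, here is a hypothetical sketch of a performance-based replacement check; the telemetry fields and thresholds are illustrative assumptions, not a recommended policy.

```python
from dataclasses import dataclass

@dataclass
class Device:
    hostname: str
    age_years: float
    avg_boot_seconds: float  # from endpoint telemetry
    crash_count_90d: int

def needs_refresh(d: Device) -> bool:
    """Flag a device on measured performance, not on the calendar alone."""
    return d.avg_boot_seconds > 120 or d.crash_count_90d > 5

fleet = [
    Device("LT-001", 4.5, 45.0, 0),   # "old" but healthy -- retain it
    Device("LT-002", 2.0, 180.0, 9),  # young but failing -- replace it
]
for d in fleet:
    print(d.hostname, "replace" if needs_refresh(d) else "retain")
```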

Repair and retain: Fix devices when you can

Organizations should err on the side of repair instead of throwing devices away. Indiscriminately getting rid of devices when they get “too old,” even when they still have considerable lifetime use left in them, wastes resources, IT support time (because IT teams need to replace them), and money.

In fact, an overwhelming majority of older devices could have their lifetime use extended with simple repairs and maintenance. As such, IT leaders should institute priorities around:

  • Hardware performance insights. IT leaders can utilize digital experience management (DEM) to optimize device lifecycle management. DEM can help organizations focus on actual device performance as opposed to a static calendar timeline, which helps extend device lifespan.
  • Flexible life span policies. Treat each device as a unique circumstance. Organizations should avoid adopting one-size-fits-all life span policies that risk throwing away usable existing devices. Instead, IT leaders can manage life span policies using DEM data on actual device performance.
  • Employee-focused energy reduction. How an employee treats and maintains their devices is a major factor in device reliability. Organizations can mitigate device wear and tear by instituting policies around employee use, including battery preservation, power management settings, and sleep settings.

Restructure and recycle: Bake sustainability into device selection and procurement

Incorporating sustainability goals into procurement strategies can build a stronger foundation of device lifecycle management by uplifting environmental priorities from the beginning. IT leaders can incorporate sustainability into their procurement processes by:

  • Seeking out vendors that ship devices in responsible packaging.
  • Ensuring devices have specific ecolabel certifications, such as 80 PLUS, Energy Star, and EPEAT.
  • Initiating tests to compare the energy efficiency of different device models.
  • Sourcing devices from responsible providers with commitments to sustainability.

Organizations can also leverage third-party assistance to collect and evaluate data needed to assess their vendors’ ESG performance to ensure they’re meeting ESG goals. They should also ensure that their vendors are engaging supply chains with similar priorities around sustainability to enable multi-level reduction in e-waste.

Improving device lifecycle management from the ground up

Incorporating sustainability initiatives throughout device lifecycle management can be helpful in reducing e-waste and optimizing performance. 

Few strategies can match the impact of implementing an eco-conscious mindset from the very beginning of every device lifecycle. IT leaders with greater sustainability ambitions can take device lifecycle management to the next level by keeping sustainability top of mind during procurement, defining employee responsibilities for energy conservation, and, of course, gaining and maintaining visibility of all devices for better insights into their utilization. Those three strategies combined shrink carbon footprints and maximize ESG outcomes.

Find out how to make sustainable IT a reality by checking out our white paper, The Role of Unified Observability in Sustainable IT.

What Does SEC T+1 Rule Mean for IT Teams in Financial Services?
https://www.riverbed.com/blogs/sec-t1-rule-for-it-teams-financial-services/ | Wed, 20 Dec 2023

The financial services industry is no stranger to change, and the SEC T+1 rule is no exception. Effective from May 28, 2024, the new regulation will reduce the settlement time for U.S. securities transactions from two business days to just one.

In this blog, we delve into the SEC T+1 rule, explore how Riverbed’s Network Observability solutions can help IT network teams in meeting the associated challenges, and provide guidance for current Riverbed customers to prepare for the SEC T+1 changes.

A quick summary of the SEC T+1 rule

SEC T+1 is a rule amendment that will shorten the settlement cycle for broker-dealer transactions in securities from two business days after the trade date to one. The SEC believes this will benefit investors and reduce risk in securities transactions.

There are other nuances of this rule amendment that address processes and recordkeeping requirements, but for the network teams at financial services institutions, cutting the allotted time to settle a transaction in half will have the most impact.

Challenges for IT network teams

The new T+1 rule puts a lot of additional pressure on IT and network teams at financial services organizations to ensure their networks can handle the increased network demands and data processing that will come along with the shortened transaction processing window.

This means it’s critical that financial services organizations have broad and deep visibility into their network so they can proactively identify and quickly resolve network performance issues. This visibility is also crucial for adhering to T+1 requirements, answering questions like “how much traffic is being consumed?” and “how is traffic being prioritized?”

Riverbed NetProfiler and AppResponse can help address those challenges. Riverbed NetProfiler provides network flow analytics that can quickly diagnose network issues before they impact performance. Meanwhile, Riverbed AppResponse offers the robust network and application analytics needed to shorten the mean time to repair network issues.

NetProfiler and AppResponse customers

For existing Riverbed customers using Riverbed NetProfiler and AppResponse, it’s important to note that adapting to the new SEC T+1 rule may lead to increased data generation, which has the potential to stretch the limits of your NetProfiler and AppResponse capacity. To ensure your network observability and data retention, now is a good time to double-check your existing licensed capacity and system storage.

Determining if NetProfiler is oversubscribed

You can determine your flow status by going to the NetProfiler or Flow Gateway ADMINISTRATION > System Information link at the top of the screen, and then clicking on SYSTEM. The video below provides an overview and you can read this blog, Determining If NetProfiler Is Oversubscribed, for more detail.

Checking AppResponse capacity

To understand how much additional packet and analysis horsepower remains in your appliance, head over to Administration > Traffic Diagnostics in the appliance web UI. This built-in insight is packed with critical charts of the hardware and software components that power the packet capture and analysis capabilities. On the “General” tab, the bottom four charts will indicate whether you have reached your AppResponse capacity.

If you are seeing packet drops in any of these charts, you should investigate how much traffic is being fed to the appliance by visiting the “ADMINISTRATION” > “Capture Jobs/Interface” page. This page lists all the network capture hardware cards (or software/virtual interfaces) installed on the appliance along with their rated link speed.

Once you are familiar with all the capture cards installed and their link speeds, head back to the “Traffic Diagnostics” insight, where the top two charts, titled “Throughput” and “Packet Rate,” show how much traffic is going through the installed interfaces. Each interface must only be fed traffic below its line rate at all times. If these charts (which go back seven days) show traffic spikes that surpass the line rate for an interface, work with the infrastructure feeding traffic to AppResponse to spread the packet load across the other interfaces. In some cases, you may need to add another AppResponse to handle the peak rate of traffic.
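
The authoritative view is the Traffic Diagnostics insight itself, but as a back-of-the-envelope illustration, the sketch below compares hypothetical seven-day peak throughput per capture interface against its line rate:

```python
# Hypothetical seven-day peak-throughput samples per capture interface, in Gbps.
peak_throughput = {"mon0": 9.8, "mon1": 4.1}
line_rate = {"mon0": 10.0, "mon1": 10.0}

HEADROOM = 0.8  # flag anything peaking above 80% of line rate

for iface, peak in peak_throughput.items():
    utilization = peak / line_rate[iface]
    if utilization >= HEADROOM:
        print(f"{iface}: peak at {utilization:.0%} of line rate -- rebalance or add capacity")
    else:
        print(f"{iface}: peak at {utilization:.0%} of line rate -- OK")
```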

The video below walks through the process for both NetProfiler and AppResponse customers in detail:


While the SEC T+1 rule amendment will benefit investors and reduce risk in securities transactions, it comes with some challenges for network IT teams at financial services organizations. Network Observability solutions can help provide these teams with the in-depth network visibility needed to address these challenges by providing proactive identification of network performance issues and faster mean time to resolution. It’s critical that existing Riverbed customers evaluate their current usage levels to ensure they are prepared to handle the increased network demands from SEC T+1.

Learn more about how Riverbed can help financial services organizations.

The Next Generation Workforce Demands Sustainable IT
https://www.riverbed.com/blogs/next-generation-workforce-demands-sustainable-it/ | Wed, 06 Dec 2023

PwC reports that Millennials and Gen Z currently comprise 38% of the workforce—a number predicted to jump to 58% by 2030. However, Millennial and Gen Z workers have some critical differences from their Baby Boomer counterparts that require organizations to make major shifts in their tech status quo. Namely, both generations list environmental friendliness and sustainability as top priorities—especially when choosing where they work.

51% of Gen Z U.S. business students stated they’d accept less pay to work at an environmentally responsible company—which means companies would do well to invest in sustainable IT for the new workforce. Implementing workflow optimization and automation is the best way for organizations to achieve the workplace that the next generation demands by improving digital employee experience (DEX) and integrating sustainable IT practices into daily operations.

How sustainability provides workers with the digital employee experiences they crave

Even if they’re not outright saying it, the Millennial and Gen Z workforces crave a positive digital employee experience (DEX). Meeting sustainability expectations is critical to providing that positive digital employee experience for Millennials and Gen Z. According to Deloitte, 50% of Gen Z workers and 46% of Millennials are currently pushing their employer to drive change on environmental issues. 

Here are a few ways sustainable IT can enhance DEX:

  • Improve engagement and morale. Gen Z and Millennial employees are values-driven. When they feel that their values align with their organization’s, it can significantly increase overall engagement and morale.
  • Enable better workflows with efficient and reliable technology. Sustainable IT inherently aligns with adopting more modern, efficient technologies and processes. When workflows are optimized and automated to mitigate unnecessary repetitive tasks, it can significantly reduce the chances of frustration or burnout. 
  • Enhance communication and collaboration. Sustainable IT practices promote the adoption of tools and platforms that help mitigate unnecessary and redundant human intervention, streamlining communication and fostering better teamwork.

The challenges of integrating sustainable IT practices

While integrating sustainable IT practices is necessary for organizations that hope to retain the next generation of talent, moving toward sustainability presents challenges that significantly impact workflow productivity. These challenges include:

Data volume and velocity

Matillion and IDG found that organizations experience data growth of 63% per month on average. More IT systems are connected to a single company than ever, creating a massive volume of data at high velocities. Managing and processing this data manually and in real time without overwhelming teams and creating performance bottlenecks is virtually impossible.

Data consistency and quality

Organizations must often process data from diverse and disparate sources, which makes it difficult to standardize company-wide data aggregation, collection, and analysis. More likely than not, there will be inconsistent data formats, missing values, and errors seriously impacting the accuracy and integrity of any data collected—which then forces teams to sink time and energy into identifying and remediating mistakes.

Resource and expertise constraints

Data moves fast in modern digital ecosystems, so workers must move even faster. However, it can be challenging to build the infrastructure and expertise needed to aggregate data (which isn’t always accurate or high-quality) from disparate sources—and then mine that data for insights. While investing in certain tools, personnel, and training can help mitigate resource strain, companies with tighter budgets may still experience significant bottlenecks here. Additionally, more is not always better. Simply adding tools or solutions to a tech stack doesn’t necessarily guarantee greater productivity.

These issues all negatively impact a major value of Millennial and Gen Z workers: convenience. Our 2023 Global Digital Employee Experience Survey Report found that 68% of Millennial and Gen Z employees are likely to go elsewhere if their employer’s DEX, which includes convenience and ease of use, does not meet their standards.

How workflow optimization and automation can help sustainability and DEX

While balancing IT device performance and reducing environmental impact might seem like unrelated ideas, they are in fact deeply intertwined. These goals can functionally support each other when approached with this critical mindset: Sustainable IT is better IT. 

Here are a few places where workflow optimization and automation can help organizations meet sustainability goals and ideals:

Energy efficiency

IT workflow automation can help significantly optimize energy usage (i.e., reduce it when it’s not necessary). With the proper automation, teams can schedule maintenance tasks during off-peak hours or even turn off devices, lights, and other energy sources when they’re not in use or needed. Organizations can leverage AI to enable energy-efficient algorithms and other automated processes that reduce overall power consumption, shrinking their carbon footprint and lowering their operational costs.

Resource optimization

The right workflow optimization and automation strategies solve experience and resource gaps. Workflow optimization inherently involves streamlining processes by reducing unnecessary steps and mitigating the need for human intervention—which can also help eliminate bottlenecks. Optimization and automation are most efficiently applied here when they remove the need for humans to carry out repetitive tasks that machines could do. When organizations standardize these automation procedures, IT systems can utilize their existing resources more efficiently, thereby reducing waste and excess energy expenditure.

Automated data validation and real-time data cleansing

When organizations automate data validation, they can identify and rectify data discrepancies or anomalies with rigor and timeliness. This ensures higher data accuracy, significantly reducing unnecessary human intervention when vetting data for quality and relevance. Additionally, automated workflows can address real-time data cleansing and enrichment. This helps identify and rectify data inconsistencies with greater accuracy and speed, reducing the environmental impact of operational inefficiencies.
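
As a toy example of what rule-based validation of incoming telemetry records can look like (the field names and rules are invented for illustration):

```python
from datetime import datetime

def validate(record: dict) -> list[str]:
    """Return a list of problems; an empty list means the record is clean."""
    problems = []
    if not record.get("device_id"):
        problems.append("missing device_id")
    try:
        datetime.fromisoformat(record.get("timestamp", ""))
    except ValueError:
        problems.append("bad timestamp format")
    if not isinstance(record.get("watts"), (int, float)) or record["watts"] < 0:
        problems.append("watts missing or negative")
    return problems

stream = [
    {"device_id": "LT-001", "timestamp": "2023-11-01T09:00:00", "watts": 32.5},
    {"device_id": "", "timestamp": "01/11/2023", "watts": -1},
]
clean = [r for r in stream if not validate(r)]
print(f"{len(clean)}/{len(stream)} records passed validation")
```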

New generations, new expectations, new technology

Millennials are about to enter their prime earning years, while Gen Z prospects flood the workforce in droves. As both generations actively seek out organizations that meet their values (and are willing to leave those that don’t), companies will need to live up to expectations of sustainability.

Implementing workflow optimization and automation in the right instances can significantly reduce environmental impact by streamlining inefficient processes. With automation enabling sustainable IT practices, companies can eliminate unnecessary human intervention, creating a more positive, productive, and environmentally friendly environment for all generations of workers.

Want to learn more about the key to implementing sustainable IT? Check out our white paper, The Role of Unified Observability in Sustainable IT, to take a deeper dive.

Overcome Data Collection Hurdles to Empower Sustainable IT
https://www.riverbed.com/blogs/overcome-data-collection-hurdles-sustainable-it/ | Mon, 13 Nov 2023

In the quest to reach sustainability goals, organizations are discovering a powerful ally in their IT departments. IT can play a pivotal role in curbing resource and energy consumption, thereby reducing carbon emissions, minimizing e-waste, and shrinking an organization’s environmental footprint. These sustainability efforts not only benefit the planet but also contribute to a healthier bottom line.

However, the path to implementing sustainable IT is fraught with challenges, and one of the most pressing is the issue of data collection. In this blog, let’s delve into the complexities of data collection in the context of sustainable IT and introduce a compelling solution: Unified observability, bolstered by Aternity Digital Experience Management (DEM).

The data collection dilemma

Effective data collection is the linchpin of sustainable IT solutions. Data empowers informed decision-making by providing insights into resource consumption and environmental impact. Also, it facilitates benchmarking, identifies optimization opportunities, and ensures the efficient allocation of resources for impactful sustainability initiatives. Additionally, organizations can leverage data to promote transparency, behavioral change, and compliance while enabling continuous improvement in the pursuit of greener IT practices.

Without robust data collection, the path to sustainable IT would lack direction and the means to measure and enhance environmental impact. However, collecting data is challenging for several reasons, including:

  • Data fragmentation: Data spread out across an array of platforms complicates the process of consolidating data into a unified and coherent format.
  • Compatibility issues: Cloud-based and on-premises systems often use different technologies and standards, making collection hard.
  • Data security and privacy concerns: Different data sources may have varying levels of security measures and privacy regulations, complicating collection.
  • Data volume and velocity: Managing and processing large amounts of data in real time can overwhelm infrastructure and lead to performance bottlenecks.
  • Data consistency and quality: Data from diverse sources may not always adhere to the same standards of consistency and quality.
  • Resource and expertise constraints: Building the infrastructure and expertise needed to aggregate data from various sources can be resource-intensive.
  • Scalability: Scalability challenges emerge when trying to accommodate the growing number of data sources and the increasing volume of data they generate.
  • Vendor lock-in: Vendor-specific data formats and APIs can make it difficult to extract data for aggregation or to switch to alternative solutions, limiting flexibility.

Unified observability to the rescue

Unified observability is a game-changing solution to the challenge of data collection. It offers a comprehensive and real-time perspective on IT systems, facilitating informed decision-making regarding environmental impact. Here’s how it works:

  • Comprehensive data foundation: Unified observability platforms meticulously collect granular, timestamped, and complete records of every event across the IT infrastructure. This data forms the bedrock for accurate decision models tied to sustainable IT initiatives.
  • Actionable insights: These platforms deliver user-centric, actionable insights with relevant context to the right stakeholders, enabling organizations to identify areas with the most significant impact potential.
  • Intelligent automation: Unified observability platforms leverage AIOps to provide expert decision-making and automation, resolving issues proactively before they escalate into incidents. This streamlines sustainable IT initiatives, enhancing operational efficiency and reducing the carbon footprint.

Practical applications of unified observability for sustainable IT

Unified observability isn’t just a theoretical concept; it yields tangible benefits for sustainable IT initiatives. The solution promotes energy efficiency by delivering granular insights into how applications and infrastructure interact. This empowers businesses to pinpoint inefficiencies, redundancies, and areas of over-provisioning. Also, real-time data analysis informs decisions about workload consolidation and virtualization, leading to reduced energy consumption.

The key to leveraging unified observability to drive sustainable IT lies within DEM platforms. Such solutions examine performance data and user feedback so organizations can gauge the environmental impact of routine tasks, establish sustainability benchmarks, and inspire employees to participate in sustainability initiatives.

Enhancing sustainable IT with Aternity DEM and prebuilt energy efficiency dashboards

Riverbed Unified Observability includes a mighty sidekick in the form of Aternity Digital Experience Management (DEM)—a solution that aggregates insights based on application and device performance data, human reactions, and benchmarking across industry peers.

Aternity DEM now features an energy efficiency dashboard that offers valuable insights by gathering and correlating detailed telemetry data from various devices. This dashboard provides a clear view of device uptime and energy-related metrics, enabling IT teams to pinpoint areas where avoidable energy consumption can be reduced. Additionally, it allows for the measurement of carbon footprint at both individual and organizational levels. By measuring uptime, IT organizations can identify opportunities to educate employees about conserving energy during idle device times.

Key features include:

  • Computation of essential environmental metrics such as device uptime, electricity usage, carbon emissions, and electricity expenses (the basic arithmetic is sketched in the example after this list).
  • Granular breakdown of metrics by device usage duration, geographical location, power plan, business unit, and more.
  • The flexibility to customize calculation parameters to align with specific objectives and operational requirements. This customization empowers organizations to leverage Aternity as a robust tool for embracing and advancing sustainable IT practices.
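
As a rough illustration of that computation, the sketch below estimates energy, emissions, and cost from device uptime; the wattage, grid carbon intensity, and electricity tariff are placeholder values, exactly the kind of parameters you would customize to your fleet and region.

```python
# Hypothetical parameters -- tune these to your fleet and region.
AVG_DEVICE_WATTS = 35.0          # average draw of a laptop while powered on
GRID_KG_CO2_PER_KWH = 0.4        # regional grid carbon intensity (varies widely)
ELECTRICITY_COST_PER_KWH = 0.15  # USD

def device_footprint(uptime_hours: float) -> dict:
    """Estimate energy, emissions, and cost from measured device uptime."""
    kwh = AVG_DEVICE_WATTS * uptime_hours / 1000.0
    return {
        "kwh": round(kwh, 2),
        "kg_co2": round(kwh * GRID_KG_CO2_PER_KWH, 2),
        "cost_usd": round(kwh * ELECTRICITY_COST_PER_KWH, 2),
    }

# A device left on 24x7 for a month vs. powered down outside working hours.
print("always on:", device_footprint(uptime_hours=720))
print("work hours only:", device_footprint(uptime_hours=176))
```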

Drive positive environmental impact

Aternity DEM is at the forefront of driving positive environmental change. With its real-time insights powered by unified observability, Aternity helps organizations overcome data collection challenges to promote more energy-efficient operations. As a result, we see that sustainable IT isn’t just good for the environment, it’s good for business. Check out our white paper, The Role of Unified Observability in Sustainable IT, to take a deeper dive.

How Network Analytics Boost Performance and Security
https://www.riverbed.com/blogs/network-analytics-performance-and-security/ | Thu, 14 Sep 2023

At Riverbed, we often say “you can’t protect what you can’t see.” Having the ability to monitor everything happening in your network is the first step in improving the security, performance and reliability of your environment. But how you capture, interpret and respond to that sea of data from your network is what allows you to truly take control of your operational environment. This is where real-time network analytics comes into play.

This holds especially true for complex, overtaxed or high-security networks. Additionally, being able to capture and store network data allows historical network performance reports to be generated–a vital tool in maintaining system health, data security and optimized I/O transfer speeds between connected devices. IT teams can also quickly identify, isolate and quarantine incoming malware, viruses or worms by using real-time packet scanning to identify threats.

Network analytics helps IT teams manage and secure data networks: it can improve security, fine-tune performance, troubleshoot network problems, predict traffic trends, support forensic investigations of incidents and, in some cases, open new business opportunities.

Real-world network analytics applications

Though every enterprise network can benefit from analytics, for some industries the benefits can be manifold. For example, telcos can use network analytics to manage high volumes of user traffic in mobile communications and broadband connections. The same technology can assist mining and oil and gas companies to monitor remote IoT devices that regulate pipelines, drilling and reservoir facilities. The automotive and high-tech industries can extensively use real-time data analytics to develop self-driving vehicle networks and implement Artificial Intelligence (AI) and Machine Learning (ML) guidance for autonomous vehicle navigation.

Streaming real-time data analytics opens new innovation opportunities across all industries based on Big Data applications, AI and ML.

How does it work?

Network analytics works by providing insights into various aspects of network performance:

  • Latencies for traffic through its entire path with hop-by-hop analysis.
  • Bit rates through a particular network port, broken down by application.
  • Collision and packet drop rates at a port.
  • Number of packets or flows from any location, device, application, or identity.
  • Number of packets or flows affected by specific security policies.
  • Infrastructure monitoring for SNMP, WMI, and increasingly streaming telemetry.

The visibility and insights presented by network analytics can be used for several tasks, such as spotting bottlenecks, evaluating the health of devices, root-cause analysis, issue remediation, identifying connected endpoints, and probing for potential security lapses.
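
At its core, much of this is aggregation over flow records. A minimal sketch with hypothetical flow tuples:

```python
from collections import defaultdict

# Hypothetical flow records: (source, destination, application, packets).
flows = [
    ("10.1.1.5", "10.2.0.9", "https", 1200),
    ("10.1.1.5", "10.2.0.9", "https", 640),
    ("10.1.1.7", "10.3.0.2", "dns", 15),
]

packets_by_app = defaultdict(int)
packets_by_source = defaultdict(int)
for src, dst, app, pkts in flows:
    packets_by_app[app] += pkts
    packets_by_source[src] += pkts

print(dict(packets_by_app))     # e.g. {'https': 1840, 'dns': 15}
print(dict(packets_by_source))  # top talkers by source address
```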

Safeguarding networks and driving business growth

Network analytics offers a wide range of benefits beyond traffic analysis:

  • Enhanced Security: Network analytics improves cloud resource and device security by allowing real-time scanning of data packet transmissions. Administrators can track I/O data packet resource consumption by IP address to detect anomalous changes in activity and quickly identify intruders, malware, and infected devices. It also speeds up the detection of security threats, preventing hacking attacks from spreading deep into the corporate infrastructure. Network analytics can not only track the path of a compromise through the network in real time but can also be used to investigate retrospectively once a new attack vector has been identified and understood (a minimal anomaly-detection sketch follows this list).
  • SNMP and WMI Filtering: Data can be used to diagnose network device problems and reduce remediation time.
  • Real-time Analytics: Integration with AI and machine learning provides real-time and historical insights into network data, enabling tailored operations.
  • Streamlined Business Processes: Analytics optimizes enterprise-wide IT operations, security, and efficiency while streamlining business management.
  • Performance Monitoring: Administrators can monitor performance, including historical usage patterns that help predict future data center needs.
  • Track KPIs: Network monitoring tools can analyze KPIs and present them to administrators, simplifying complex cloud network reporting and alert processes. IT teams can track the KPIs most relevant to their industry and application.
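
As promised above, here is a minimal sketch of a baseline-deviation check using a simple z-score over hypothetical per-host byte counts; production analytics use far more sophisticated models.

```python
import statistics

def is_anomalous(history: list[float], current: float, z_threshold: float = 3.0) -> bool:
    """Flag traffic that deviates sharply from a host's own baseline."""
    mean = statistics.mean(history)
    stdev = statistics.stdev(history)
    if stdev == 0:
        return current != mean
    return abs(current - mean) / stdev > z_threshold

# Hypothetical hourly byte counts for one host; the latest hour spikes ~20x.
baseline = [1.1e6, 0.9e6, 1.0e6, 1.2e6, 1.05e6, 0.95e6]
print(is_anomalous(baseline, current=2.1e7))  # True -- investigate this host
```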

At Riverbed, we have been deploying network analytics solutions for over 15 years. As networks have become more complex and security requirements have increased, we need an automated way to correlate, interpret, analyze and respond. Or, to put it another way, we need more “IQ” out of network analytics solutions. That’s why we more recently built Riverbed IQ to address the needs of today’s complex and high-speed environments.

Monitoring the Cloud for End User Experience
https://www.riverbed.com/blogs/cloud-monitoring-for-end-user-experience/ | Thu, 17 Aug 2023

Cast your mind back to the last time you lost your house keys. I mean, really lost them. They weren’t in any of the usual places. You’ve checked about ten times. And when you did find them (in a sweaty panic and a flurry of overturned cushions because you were meant to leave the house 20 minutes ago), they were somewhere completely unexpected, like the laundry basket or in the fridge.

This scenario, so familiar to us all, is very much like monitoring application performance in the cloud. When it comes to finding problems and their underlying cause, the place you need to be looking is often very different from what you think.

Cloud adoption is standard for many businesses and is accelerating across all sectors. According to Gartner, 85% of organizations will embrace a cloud-first principle by 2025. While migrating to the cloud makes businesses more agile, resilient and able to provide a true remote/hybrid experience for employees, it also comes with its fair share of challenges.

When it comes to cloud, end user experience is the ultimate measure for success. But when organizations migrate to a cloud environment, the infrastructure on which business-critical apps run is no longer within your control, nor is it with the cloud vendor. Therefore, cloud monitoring tools play a critical role in alerting IT teams when something goes wrong.

As multi-cloud environments grow in complexity and the costs associated with app downtime grow, teams need more than an alert when there is a problem. They need insights into where the issue is, what has caused it, and how best to solve it. To deliver an optimal end user experience, cloud monitoring works best as part of a more holistic toolkit, which is where a more sophisticated jump to a unified observability platform may be a better option.

Benefits of cloud monitoring

Cloud monitoring plays an important role in making sure that service-level objectives are being met, which is essential for a consistent user experience. It offers an excellent option for growing businesses, as it allows them to scale resources up or down on demand and can track large volumes of data across different cloud locations. Its core value lies in assessing system health, analyzing long-term trends and sending out alerts when things go wrong. It also provides insights into how well apps are performing and how they are being used over time.

Additionally, cloud monitoring tools offer the flexibility to be used across desktop computers, tablets, and phones, making it easy for teams to track application performance from any location. This is especially helpful for distributed teams and remote workers who need to access company data no matter where they choose to work. Monitoring also strengthens the security of applications by identifying potential risks.

As cloud infrastructure and configurations are already in place, installing a monitoring tool is relatively straightforward. It strengthens business resilience because even if local infrastructure fails, cloud-based resources will still function, ensuring continuity of operations.

What cloud monitoring can’t do

While cloud monitoring provides numerous benefits, it does have limitations. Firstly, tools in this space often only track application usage and consumption. They can alert on a poor user experience but may not offer insights into why it was sub-par. IT teams are obliged to investigate every alert without context, which often results in alert fatigue. War rooms set up to deal with major outages are resource intensive because IT teams spend a lot of time chasing bad leads and looking in the wrong places.

To resolve problems impacting the end user experience quickly, IT teams need to ascertain both the location and cause of a problem to ensure that the problem doesn’t keep resurfacing. That is why cloud monitoring shouldn’t be used in isolation, but as part of a suite of tools that include network performance monitoring and diagnostics (NPMD), application performance monitoring (APM), infrastructure monitoring, and digital experience monitoring (DEM). This unified set of solutions tracks all moving parts in end user experience delivery, allowing IT teams to really zero in on the root cause of problems.

How unified observability fills the gap

Where monitoring tracks system performance and identifies known failures, observability goes the extra mile. If all the moving parts of delivering cloud-based applications are thought of as a single system, observability can look at that overall system with all its interdependencies and identify the root cause of a problem by analyzing the data it gathers from many different sources. An observability solution not only assesses the health of that system but provides actionable insights as well. This allows IT teams to proactively address problems and resolve them faster.

The Riverbed Unified Observability platform overcomes silos to capture full-fidelity data from networks, applications, servers, client devices and cloud-native environments. AI and ML are then used to analyze data streams, automating much of the troubleshooting work that would usually be carried out by IT engineers. This allows employees at any level to help solve user experience issues quickly. Insights are filtered, contextualized and prioritized, ready for action by the IT team.

Therefore, while cloud monitoring is crucial, meeting rising expectations for the end-user experience requires a more comprehensive and sophisticated solution. With a unified observability solution, you can set IT teams up for success by not only alerting them to problems but showing them where to look and automating the bulk of the troubleshooting process. This allows issues to be resolved before they escalate to outages, improving the end-user experience.

What Are the Three Major Network Performance Metrics to Focus On?
https://www.riverbed.com/blogs/what-are-the-three-major-network-performance-metrics-to-focus-on/ | Wed, 02 Aug 2023

In today’s hyper-connected world, where businesses rely heavily on network infrastructure to transmit data and deliver services, helping your clients understand network performance metrics is crucial in starting conversations about how Riverbed solutions can improve performance. Network performance metrics provide insights into the efficiency, reliability, and overall health of a network. In this blog, we will delve into three major network performance metrics: Throughput, Network Latency (Delay), and Jitter.

By understanding these metrics, you’ll be better equipped to help your clients optimize their networks and ensure seamless operations.

What is Throughput?

Throughput refers to the amount of data that can be transmitted through a network within a given time frame. It is commonly measured in bits per second (bps) or its multiples (Kbps, Mbps, Gbps). Throughput represents the network’s capacity to deliver data and is often associated with bandwidth. It measures how fast data can be transferred between devices, servers, or networks. Higher throughput indicates a network’s ability to handle larger data volumes and support bandwidth-intensive applications such as video streaming or large file transfers.
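
A quick worked example (using decimal units, 1 GB = 8,000 megabits) shows how throughput translates into transfer time:

```python
def transfer_seconds(size_gigabytes: float, throughput_mbps: float) -> float:
    """Time to move a payload at a given sustained throughput."""
    return size_gigabytes * 8000 / throughput_mbps  # 1 GB = 8,000 megabits

print(transfer_seconds(1, 100))   # 80.0 seconds on a 100 Mbps link
print(transfer_seconds(1, 1000))  # 8.0 seconds on a 1 Gbps link
```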

What is Network Latency (Delay)?

Network latency, also known as delay, is the time it takes for a data packet to travel from its source to its destination across a network. It is usually measured in milliseconds (ms). Latency can be affected by various factors such as the distance between network endpoints, network congestion, and the quality of network equipment. Lower latency signifies faster response times and better user experience. Applications that require real-time interaction, such as online gaming or voice/video conferencing, are particularly sensitive to latency. Minimizing latency is crucial to ensuring smooth and seamless communication.
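
One crude way to sample latency from an endpoint is to time a TCP handshake, as in the sketch below; this folds connection setup into the measurement, so treat it as an approximation rather than a true packet round-trip time.

```python
import socket
import time

def tcp_rtt_ms(host: str, port: int = 443) -> float:
    """Approximate round-trip time by timing a TCP connection handshake."""
    start = time.perf_counter()
    with socket.create_connection((host, port), timeout=5):
        pass  # connection established; we only care about elapsed time
    return (time.perf_counter() - start) * 1000

print(f"{tcp_rtt_ms('www.riverbed.com'):.1f} ms")
```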

What is Jitter?

Jitter refers to the variation in delay experienced by packets as they traverse a network. It is measured in milliseconds (ms) and represents the inconsistency or unevenness of latency. Jitter is caused by network congestion, routing changes, or varying levels of traffic. High jitter can lead to packet loss, out-of-order packet delivery, and increased latency, negatively impacting the performance of real-time applications. To ensure optimal performance, it is essential to minimize jitter and maintain a stable and predictable network environment.
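
A common simple way to quantify jitter is the mean absolute difference between consecutive latency samples (RFC 3550 defines a smoothed variant); here is a minimal sketch with made-up samples:

```python
def mean_jitter_ms(latencies_ms: list[float]) -> float:
    """Average absolute difference between consecutive latency samples."""
    diffs = [abs(b - a) for a, b in zip(latencies_ms, latencies_ms[1:])]
    return sum(diffs) / len(diffs)

steady = [20.1, 20.3, 19.9, 20.2, 20.0]
bursty = [20.0, 45.0, 18.0, 60.0, 22.0]
print(f"steady link: {mean_jitter_ms(steady):.1f} ms jitter")
print(f"bursty link: {mean_jitter_ms(bursty):.1f} ms jitter")
```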

Why are network performance metrics important?

Network performance metrics play a vital role in several aspects of network management. Here’s how Riverbed can help.

Capacity Planning

Understanding throughput helps network administrators determine the network’s capacity and whether it can handle the expected workload. With Riverbed Network Observability solutions, organizations can proactively manage network and application performance. Additionally, network performance monitoring allows Network Operations teams to effectively manage costs by investing only in upgrading critical infrastructure, consolidating underutilized resources and managing assets across multiple business units. Riverbed Network Observability delivers the ability to auto-discover topology and continuously poll metrics, automate analyses, and generate capacity planning reports that are easily customizable to changing business and technology needs.

Performance Optimization

Monitoring latency and jitter allows organizations to identify and troubleshoot network performance issues. By pinpointing the root causes of delays or inconsistencies, network administrators can optimize network configurations and minimize disruptions. For performance optimization, Riverbed Network Observability provides cloud visibility by ensuring optimal use and performance of cloud resources and helps organizations manage the complexity of Hybrid IT with agile networking across data centers, branches and edge devices. Riverbed Network Observability helps overcome latency and congestion by proactively monitoring key metrics and their effect on application performance.

Quality of Service (QoS)

Network performance metrics enable the implementation of effective Quality of Service policies. By prioritizing specific types of traffic based on their requirements, such as voice or video data, organizations can ensure a consistent and reliable user experience. The Riverbed QoS system uses a combination of IP packet header information and advanced Layer-7 application flow classification to accurately allocate bandwidth across applications. The Riverbed QoS system organizes applications into classes based on traffic importance, bandwidth needs, and delay sensitivity.
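
For context on the packet-header side (this illustrates standard DSCP marking, not Riverbed’s Layer-7 classification engine), an application can request a traffic class by setting the IP TOS byte, as sketched below; note that some operating systems restrict or ignore this option, and the destination address here is a placeholder.

```python
import socket

# Standard DSCP code points occupy the top six bits of the IP TOS byte.
DSCP_EF = 46 << 2    # Expedited Forwarding -- voice
DSCP_AF41 = 34 << 2  # Assured Forwarding 41 -- interactive video

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
# Mark outgoing packets so QoS policies along the path can prioritize them.
sock.setsockopt(socket.IPPROTO_IP, socket.IP_TOS, DSCP_EF)
sock.sendto(b"voice payload", ("198.51.100.10", 5004))  # placeholder address
```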

SLA Compliance

Service Level Agreements (SLAs) often include performance metrics that must be met by network service providers. Monitoring and measuring these metrics allow organizations to hold providers accountable and ensure that agreed-upon performance standards are being met. Riverbed Network Observability monitors metrics associated with the service components that make up each SLA. By proactively monitoring the health of the network, issues can be identified and escalated quickly, before end users are impacted.

Help clients gain insights into their networks

Network performance metrics, including Throughput, Network Latency (Delay), and Jitter, provide valuable insights into the efficiency and reliability of a network. Riverbed makes it easy for your clients’ Network teams to monitor, optimize, troubleshoot, and analyze what’s happening across their hybrid network environment. With end-to-end visibility and actionable insights, Network teams can quickly and proactively resolve any network-based performance issues.

Riverbed Network Observability collects all packets, all flows, all device metrics, all the time, across all environments—cloud, virtual, and on-prem—providing enterprise-wide, business-centric monitoring of critical business initiatives.

Analyst Insights for Trailblazing the Digital Workplace Landscape
https://www.riverbed.com/blogs/analyst-insights-for-dex-and-the-digital-workplace/ | Thu, 13 Jul 2023

Recently, I had the privilege of attending the 2023 Gartner Digital Workplace Summit in San Diego, where leading Gartner analysts shared their insights and predictions on the latest trends shaping the digital workplace landscape. When it comes to empowering employees, enhancing productivity, and fostering sustainable growth in the digital workplace, these are a few of the important messages that resonated with me during the event.

Digital dexterity for managers and employees

Gartner analyst Lane Stevenson predicted that organizations prioritizing digital dexterity enablement for both managers and employees will experience significant year-over-year revenue growth by 2027. Digital dexterity refers to the ability of employees to use and manipulate digital technologies effectively and efficiently. It involves understanding their proficiency in navigating digital interfaces, operating devices, and interacting with digital tools and applications.

Leveraging employee productivity monitoring tools

According to Gartner analyst Tori Paulman, many employees are open to being tracked by productivity monitoring tools if there is a system in place that helps them improve their skills. Robust Digital Employee Experience (DEX) solutions enable organizations to assess whether applications empower or frustrate employees. Understanding and enhancing the employee experience can have a positive impact on employee engagement, productivity and overall satisfaction.

Expanding the scope of DEX

Gartner analyst Dan Wilson asserted that DEX tool deployments focused solely on IT use cases will struggle to achieve sustainable ROI. It is crucial to consider non-IT use cases such as equitable experience and sustainability. DEX tools equipped with telemetry and sentiment analysis capabilities can identify digital friction experienced by individual employees or specific employee segments. This is particularly relevant for remote workers facing challenges like slow internet connectivity.

The intersection of sustainability and performance

During the conference, Gartner analysts emphasized the importance of evaluating DEX platforms based on their Green IT use cases. Autumn Stanish and Stuart Downes discussed the benefits of adopting a performance-driven refresh cycle for endpoint devices, rather than relying on calendar-based replacements.

DEX tools that monitor power consumption, optimize power-saving features, and encourage improved habits among employees lay the foundation for digital business leadership. However, the challenge lies in striking the right balance between sustainability and performance trade-offs. While extending the lifecycle of laptops reduces the annual total cost of ownership and carbon footprint, the number of performance risks for laptops increases. Examples include increased failure rates, compatibility issues with future OSes and apps, insufficient hardware support for new workloads and more.

Create a strong digital workplace

The Gartner Digital Workplace Conference shed light on crucial aspects of today’s digital workplace strategy.

Prioritizing digital dexterity, leveraging employee productivity monitoring tools, expanding DEX beyond IT use cases, and embracing sustainability through Green IT initiatives are key considerations for organizations aiming to thrive in the digital age. By staying informed and implementing these insights, businesses can create a digital workplace that empowers employees, maximizes performance, and drives sustainable growth.

Empowering Patient-Centered Healthcare with Visibility Solutions
https://www.riverbed.com/blogs/network-visibility-solutions-for-patient-healthcare/ | Tue, 20 Jun 2023

High-performance healthcare is achievable with a patient-centered approach guided by visibility solutions. In today’s healthcare environment, hospitals face the challenge of managing a complex IT infrastructure that must support a wide range of technology interfaces.

Some of the most significant challenges include:

  • Innovation: Patient-centered care means having the right expertise available wherever the patient may be located. Technologies such as the Da Vinci Surgical System allow surgeons to perform complex surgeries remotely.
  • Security: Healthcare organizations must protect sensitive patient data, including medical records, personal information, and financial data, from unauthorized access or theft. As healthcare systems are digitally managed, the risk of data breaches and cyberattacks increases, making data security a top concern. For example, healthcare organizations are often top targets of ransomware and data breaches including the 2018 SingHealth data breach.
  • Interoperability: Different healthcare systems and applications may use diverse data formats and protocols, making it difficult to share information and coordinate care across healthcare providers and settings. This lack of interoperability can result in errors, delays, and redundancies in care delivery.
  • System Integration: Integrating disparate technologies, applications and devices is critical for healthcare organizations. However, many systems are not designed to work together, resulting in inefficiencies, data inconsistencies, and difficulties exchanging data that can paralyze hospital operations.
  • Regulatory Compliance: Healthcare IT networks must comply with numerous regulatory requirements and data privacy regulations.
  • Cost: Implementing and maintaining healthcare IT networks can be expensive, particularly for smaller organizations. Healthcare providers must also ensure that their networks can support the demands of the clinical setting, such as high-volume data transfer and real-time data processing. Learn more about how you can reduce costs for devices, software, cloud and network with Riverbed.
  • User adoption and training: Healthcare providers and staff may resist new technology or need more skills to use it effectively. Adequate training and support are critical to ensure the optimal use of technology.


The role of IT in managing change

Implementing and managing healthcare tech is an IT job, but the planning process calls for collaboration with clinical leaders to ensure optimal care delivery when and where needed. Investing in appropriate resources, including people, processes, and technology, is vital to providing exceptional patient experiences. To keep up with these changes, hospitals need to invest in building futuristic architectures that can support technological advances to enhance the patient experience, empower the healthcare workforce, and streamline operations.

End-to-end network visibility to harmonize healthcare IT

New-age healthcare tech such as heart monitors, biosensors, oximeters, and BP monitors, along with the ability to view health reports online, can promote real-time collaboration and consultation with colleagues and specialists during hospital rounds or practice hours, from clinics in regional areas, or whenever and wherever needed. The benefits are not limited to accessing patient records and improving patient care: this technology can also significantly help manage daily hospital operations such as staff rostering, equipment sterilization, bookings for surgeries, and more. However, these devices increase the load on the hospital's LAN and WiFi network.

For the successful integration of technologies to enable communication, it’s crucial to have a dependable underlying network that supports them. To achieve this, IT teams need comprehensive end-to-end network visibility to keep all the applications connected.

Addressing challenges associated with new-age healthcare tech

Substantial use of IT heightens the risk of a data breach. Managing various endpoints, including mobile users, medical devices, and applications, is complex, and bring your own device (BYOD) policies only add to that complexity. An increase in the number of devices can also strain the infrastructure, bandwidth, and IT resources.

Hospitals can deploy tools such as Aternity Digital Experience Management (DEM) to address these challenges. Aternity DEM is a comprehensive platform that captures and stores technical telemetry from desktop and mobile endpoint devices. It enables IT teams to gain better visibility into the actual user experience and device performance, which can inform decisions on device replacement based on performance and help identify and eliminate redundant or underused software licenses. By curtailing shadow IT, IT teams can manage software usage more effectively, identify and eliminate wasteful solutions, and utilize budgets more efficiently.

Riverbed NetProfiler is particularly valuable for monitoring a complex hospital network, which requires constant communication between internal and external endpoints. With end-to-end network monitoring and visibility, a hospital can manage information flow, monitor patient health in real time, process insurance claims, maintain medical records, and improve overall operations.

Riverbed AppResponse enables hospitals to monitor and analyze network-based application performance, allowing them to quickly resolve issues and avoid disruptions in daily operations. Riverbed NetIM maps application network paths, providing granular-level monitoring and troubleshooting of the IT infrastructure. This mapping is particularly crucial in a hospital setting, where staff across various functions tend to use different applications. Lastly, the Riverbed Portal provides integrated network and application insights, enabling hospitals to gain control of their network and ensure that their IT systems are functioning properly.

Invest in operational excellence

In conclusion, end-to-end network visibility reveals the hidden aspects of the healthcare ecosystem, allowing caregivers to deliver high-quality and personalized patient care. To keep up with these advances, hospitals can invest in adding futuristic tools and applications supported by a high-performance network to enhance patient experience and operational excellence. Learn how to get more out of your IT budget with Aternity DEM before you integrate new technologies into your healthcare IT stack.

]]>
What Are the Three Major Network Performance Metrics? https://www.riverbed.com/blogs/what-are-the-three-major-network-performance-metrics/ Tue, 13 Jun 2023 12:42:00 +0000 /?p=21527 In today’s hyper-connected world, where businesses rely heavily on network infrastructure to transmit data and deliver services, understanding network performance metrics is crucial. Network performance metrics provide insights into the efficiency, reliability, and overall health of a network. In this blog, we will delve into three major network performance metrics: Throughput, Network Latency (Delay), and Jitter.

By understanding these metrics, you’ll be better equipped to optimize your network and ensure seamless operations.

What is Throughput?

Throughput refers to the amount of data that can be transmitted through a network within a given time frame. It is commonly measured in bits per second (bps) or its multiples (Kbps, Mbps, Gbps). Throughput represents the network’s capacity to deliver data and is often associated with bandwidth. It measures how fast data can be transferred between devices, servers, or networks. Higher throughput indicates a network’s ability to handle larger data volumes and support bandwidth-intensive applications such as video streaming or large file transfers.
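
Because throughput is simply data divided by time, it can be computed from any timed transfer. The short Python sketch below illustrates the arithmetic; the file size and duration are made-up example values.

```python
def throughput_bps(bytes_transferred: int, seconds: float) -> float:
    """Throughput = data transferred / elapsed time, in bits per second."""
    return (bytes_transferred * 8) / seconds

# Example: a 500 MB file transfer that completes in 40 seconds
bps = throughput_bps(500 * 1024 * 1024, 40.0)
print(f"{bps / 1e6:.1f} Mbps")  # ~104.9 Mbps
```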

What is Network Latency (Delay)?

Network latency, also known as delay, is the time it takes for a data packet to travel from its source to its destination across a network. It is usually measured in milliseconds (ms). Latency can be affected by various factors such as the distance between network endpoints, network congestion, and the quality of network equipment. Lower latency signifies faster response times and better user experience. Applications that require real-time interaction, such as online gaming or voice/video conferencing, are particularly sensitive to latency. Minimizing latency is crucial to ensuring smooth and seamless communication.

What is Jitter?

Jitter refers to the variation in delay experienced by packets as they traverse a network. It is measured in milliseconds (ms) and represents the inconsistency or unevenness of latency. Jitter is caused by network congestion, routing changes, or varying levels of traffic. High jitter can lead to packet loss, out-of-order packet delivery, and increased latency, negatively impacting the performance of real-time applications. To ensure optimal performance, it is essential to minimize jitter and maintain a stable and predictable network environment.
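
To make the idea concrete, the sketch below computes jitter as the average absolute difference between consecutive packet delays. Production tools typically use an exponentially smoothed estimator (as in RFC 3550); this simpler mean-of-differences version only illustrates the concept, and the delay samples are invented.

```python
def mean_jitter_ms(delays_ms: list[float]) -> float:
    """Jitter as the average absolute difference between consecutive delays."""
    diffs = [abs(b - a) for a, b in zip(delays_ms, delays_ms[1:])]
    return sum(diffs) / len(diffs)

# Five packets with varying one-way delay (ms)
print(mean_jitter_ms([20.0, 24.0, 21.0, 30.0, 22.0]))  # (4+3+9+8)/4 = 6.0 ms
```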

Why are network performance metrics important?

Network performance metrics play a vital role in several areas.

Capacity Planning

Understanding throughput helps network administrators determine the network's capacity and whether it can handle the expected workload. With Riverbed's Unified Network Performance Management (NPM) solutions, organizations can proactively manage network and application performance. Additionally, NPM allows Network Operations teams to effectively manage costs by investing only in upgrading critical infrastructure, consolidating underutilized resources, and managing assets across multiple business units. Riverbed NPM delivers the ability to auto-discover topology, continuously poll metrics, automate analyses, and generate capacity planning reports that are easily customizable to changing business and technology needs.
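
To show what a basic capacity-planning calculation looks like, the sketch below fits a linear trend to weekly link-utilization samples and estimates how many weeks remain before a threshold is crossed. This is a generic back-of-the-envelope method, not Riverbed NPM's reporting logic, and the sample data is invented.

```python
from statistics import mean

def weeks_until_capacity(utilization_pct, limit_pct=80.0):
    """Fit a least-squares trend to weekly utilization samples and estimate
    how many weeks remain until the link crosses the given limit."""
    xs = list(range(len(utilization_pct)))
    x_bar, y_bar = mean(xs), mean(utilization_pct)
    slope = (sum((x - x_bar) * (y - y_bar) for x, y in zip(xs, utilization_pct))
             / sum((x - x_bar) ** 2 for x in xs))
    if slope <= 0:
        return None  # utilization is flat or falling; no crossing ahead
    intercept = y_bar - slope * x_bar
    return (limit_pct - intercept) / slope - xs[-1]

# Six weeks of average link utilization (%): trend of ~2.8 points/week
print(weeks_until_capacity([52, 55, 57, 61, 63, 66]))  # -> 5.0 weeks left
```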

Performance Optimization

Monitoring latency and jitter allows organizations to identify and troubleshoot network performance issues. By pinpointing the root causes of delays or inconsistencies, network administrators can optimize network configurations and minimize disruptions. For performance optimization, Riverbed NPM provides cloud visibility by ensuring optimal use and performance of cloud resources and helps organizations manage the complexity of Hybrid IT with agile networking across data centers, branches and edge devices. Riverbed NPM helps overcome latency and congestion by proactively monitoring key metrics and their effect on application performance.

Quality of Service (QoS)

Network performance metrics enable the implementation of effective Quality of Service policies. By prioritizing specific types of traffic based on their requirements, such as voice or video data, organizations can ensure a consistent and reliable user experience. The Riverbed QoS system uses a combination of IP packet header information and advanced Layer-7 application flow classification to accurately allocate bandwidth across applications. The Riverbed QoS system organizes applications into classes based on traffic importance, bandwidth needs, and delay sensitivity.
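
As an illustration of class-based QoS, the sketch below maps application categories to common DSCP markings and relative priorities. The mappings follow widely used conventions and are not Riverbed's actual classification logic, which operates on packet headers and Layer-7 application signatures.

```python
# Illustrative QoS class table: application category -> (DSCP marking, priority).
# These mappings follow common conventions, not Riverbed's configuration.
QOS_CLASSES = {
    "voice":         ("EF",   1),  # Expedited Forwarding: lowest delay
    "video":         ("AF41", 2),  # Assured Forwarding: low delay, guaranteed bandwidth
    "transactional": ("AF21", 3),  # e.g. ERP/CRM traffic
    "bulk":          ("AF11", 4),  # file transfers, backups
    "default":       ("BE",   5),  # best effort
}

def classify(app_category: str) -> tuple[str, int]:
    """Return the (DSCP class, priority) for an application category."""
    return QOS_CLASSES.get(app_category, QOS_CLASSES["default"])

print(classify("voice"))  # ('EF', 1)
```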

SLA Compliance

Service Level Agreements (SLAs) often include performance metrics that must be met by network service providers. Monitoring and measuring these metrics allow organizations to hold providers accountable and ensure that agreed-upon performance standards are being met. Riverbed NPM monitors metrics associated with the service components that make up each SLA. By proactively monitoring the health of the network, issues can be identified and escalated quickly, before end users are impacted.

Gain insights into your network

Network performance metrics, including Throughput, Network Latency (Delay), and Jitter, provide valuable insights into the efficiency and reliability of a network. Riverbed makes it easy for Network teams to monitor, optimize, troubleshoot, and analyze what’s happening across their hybrid network environment. With end-to-end visibility and actionable insights, Network teams can quickly and proactively resolve any network-based performance issues.

Riverbed’s  unified NPM collects all packets, all flows, all device metrics, all the time, across all environments—cloud, virtual, and on-prem—providing enterprise-wide, business-centric monitoring of critical business initiatives.

]]>
What is Digital Experience Monitoring? https://www.riverbed.com/blogs/what-is-digital-experience-monitoring/ Thu, 08 Jun 2023 12:12:00 +0000 /?p=21429 Digital Experience Monitoring (DEM) is a user-centric approach that focuses on improving the performance of digital platforms to enhance the user experience. As more interactions move online, the need for smooth, intuitive, and responsive digital experiences becomes increasingly important. DEM provides the tools necessary to measure, track, and optimize these experiences in real-time.

The building blocks of Digital Experience Monitoring

At its core, DEM involves monitoring digital services from the end user’s perspective. This means understanding how different elements of a digital platform, such as web pages or mobile apps, perform for the user.

There are two key components in DEM: Real User Monitoring (RUM) and Synthetic Monitoring. RUM involves collecting data from real users in real-time to understand their experiences with a digital platform. This data provides valuable insights into how users interact with a platform, helping to identify any potential performance issues.

On the other hand, Synthetic Monitoring involves simulating user interactions with a platform to identify any potential bottlenecks or performance issues before they affect real users. This proactive approach helps to maintain optimal performance levels and ensure a seamless user experience.
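
A minimal synthetic probe can be as simple as a scripted, timed request against an endpoint, run on a schedule from known locations. The Python sketch below shows the idea; real synthetic monitoring platforms script full browser transactions and multi-step user journeys, so treat this as a conceptual illustration only.

```python
import time
import urllib.request

def synthetic_probe(url: str, timeout: float = 10.0) -> dict:
    """Time a scripted request against an endpoint, as a synthetic monitor would."""
    start = time.monotonic()
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            body = resp.read()
            elapsed = time.monotonic() - start
            return {"url": url, "status": resp.status,
                    "seconds": round(elapsed, 3), "bytes": len(body)}
    except Exception as exc:
        # A failed probe is itself a signal worth alerting on
        return {"url": url, "error": str(exc)}

print(synthetic_probe("https://www.riverbed.com/"))
```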

The importance of Digital Experience Monitoring

In today’s digital age, user experience is king. A smooth, seamless, and intuitive digital experience can be the difference between a one-time visitor and a loyal customer. By implementing DEM, businesses can gain a better understanding of how their digital platforms are performing, identify any potential issues, and take proactive steps to optimize the user experience.

A key benefit of DEM is that it provides actionable insights. Instead of just collecting data, DEM allows businesses to understand what the data means and how it can be used to improve performance. This could mean making changes to a website’s design to make it more user-friendly, or optimizing a mobile app to improve load times.

Moreover, DEM is not just about improving the user experience—it also has a direct impact on a business’s bottom line. A better user experience leads to higher customer satisfaction, which in turn leads to increased customer loyalty and higher revenue.

Implementing Digital Experience Monitoring

Implementing DEM involves a combination of technology and strategy. On the technology front, businesses need to invest in the right tools to collect, analyze, and interpret user experience data. These tools need to be able to provide real-time insights and identify any potential performance issues as soon as they occur.

On the strategic front, businesses need to have a clear understanding of what they want to achieve with DEM. This could be improving the load times of a website, increasing the responsiveness of a mobile app, or enhancing the overall user experience across all digital platforms.

Once the goals are defined, businesses can then use the insights gained from DEM to implement changes and track their impact. This iterative process of monitoring, implementing changes, and monitoring again ensures continuous improvement and optimization of the user experience.

Aternity’s role in Digital Experience Monitoring

Aternity Digital Experience Management (DEM) offers a unique, comprehensive solution for businesses aiming to optimize the digital experience for their end-users. Its key strength lies in measuring and analyzing every user interaction across applications, whether cloud-based, on-premise, or mobile. This provides businesses with a clear understanding of how their tech infrastructure affects daily productivity and customer satisfaction.

Aternity DEM captures data directly from the end-user’s device, providing a true “outside-in” perspective for a more accurate reflection of the user experience. Its flexibility allows for monitoring applications regardless of their delivery method, while proactive performance benchmarking and alert thresholds help identify potential issues before they significantly impact the user experience. This comprehensive, user-centered approach empowers businesses to enhance productivity, reduce downtime, and improve overall user satisfaction.

The future of Digital Experience Monitoring

As technology continues to evolve, so too will the field of Digital Experience Monitoring. New technologies such as AI and machine learning are already being used to provide more detailed and accurate insights into user behavior. These technologies will continue to play a vital role in the future of DEM, enabling businesses to provide personalized, intuitive, and seamless digital experiences that exceed user expectations.

Digital Experience Monitoring is a powerful tool for any business operating in the digital space. By providing valuable insights into user behavior and performance issues, DEM enables businesses to proactively optimize the user experience, leading to higher customer satisfaction, increased loyalty, and ultimately, greater success. Learn more about full-spectrum DEM here.

]]>
What Are the Four Main Areas of Digital Transformation? https://www.riverbed.com/blogs/what-are-the-four-main-areas-of-digital-transformation/ Thu, 01 Jun 2023 12:27:00 +0000 /?p=21401 In today’s fast-paced and interconnected world, digital transformation has become a critical driver of success for organizations across industries. It encompasses a profound shift in leveraging technology to enhance business processes, revolutionize customer experiences, and drive growth.

The digital transformation journey covers four key areas: domain transformation, process transformation, business model transformation, and organizational digital transformation. Riverbed’s Unified Observability platform, which offers complete visibility and actionable insights based on full-fidelity, full stack telemetry, provides companies the launching pad to a successful digital transformation program.

Domain transformation

Domain transformation focuses on redefining an organization’s core functions and offerings in the digital realm. It involves embracing technological advancements to deliver innovative products and services. This may involve leveraging artificial intelligence, the Internet of Things (IoT), cloud computing, or big data analytics to create new digital experiences for customers.

For example, traditional brick-and-mortar retailers are increasingly adopting e-commerce platforms, providing customers with seamless online shopping experiences. This domain transformation enables retailers to reach a global customer base, personalize product recommendations, and streamline logistics, ultimately enhancing customer satisfaction and driving revenue growth.

Riverbed offers several solutions to reduce the risk of change for domain transformation. For example, Riverbed's comprehensive cloud migration offering helps organizations avoid performance issues, unexpected delays, and unplanned costs. Riverbed delivers cloud visibility by providing insights into workload performance in hybrid cloud, multi-cloud, or SaaS environments, and by ensuring the security of these workloads. It enables IT Ops teams to plan for seamless application migrations by mapping application dependencies and predicting post-migration performance. Additionally, it helps reduce cloud costs by optimizing bandwidth utilization, resulting in up to a 95% reduction in cloud egress costs. By understanding traffic patterns and associated costs, organizations can plan more efficiently. Furthermore, Riverbed's solution optimizes cloud performance by delivering 33 times faster cloud app performance to users regardless of their location.

Whether it’s for cloud, Windows 11 or VDI, Riverbed offers a range of solutions to reduce the risk of IT change for digital transformation initiatives.

Process transformation

Process transformation entails reimagining and optimizing existing business processes by leveraging digital technologies. This involves automating manual tasks, improving efficiency, and enhancing collaboration through digital tools and platforms.

By implementing automation, organizations can automate repetitive tasks, thereby freeing up employees to focus on higher-value activities. Additionally, process transformation involves implementing cloud-based collaboration tools, enabling teams to work seamlessly across geographical boundaries and fostering innovation through enhanced communication and knowledge sharing.

Powered by the Riverbed LogiQ Engine, the Riverbed portfolio uses AI, correlation, and automation to streamline repeatable processes with minimal human intervention, lowering costs and improving user satisfaction. Riverbed uniquely offers broader automation use cases that extract insights across Riverbed monitoring data and existing 3rd party tool silos to enable faster time to resolution. With its powerful automation, analytical and integration capabilities, Riverbed delivers solutions such as automated incident response, intelligent ServiceNow ticketing, automated desktop remediation and intelligent incident response for IT Ops and Service Desk Teams.

Business model transformation

Business model transformation involves reinventing an organization’s fundamental approach to value creation and revenue generation. It requires identifying new opportunities and leveraging digital technologies to deliver unique value propositions to customers.

For instance, the rise of the sharing economy, powered by platforms like Uber and Airbnb, exemplifies business model transformation. These companies disrupted traditional industries by providing on-demand transportation and accommodation services, respectively, using digital platforms that connect customers with providers. By unlocking underutilized resources and delivering convenience and personalized experiences, they created entirely new business models and market opportunities.

Organizational digital transformation

Organizational digital transformation encompasses the cultural and structural changes necessary to support and sustain digital initiatives. It involves fostering a digital mindset across the organization, empowering employees to embrace change, and promoting a culture of innovation.

To successfully navigate organizational digital transformation, organizations must invest in a comprehensive Digital Employee Experience solution. Riverbed’s Aternity provides companies a complete view of the total digital employee experience by tightly correlating both quantitative and qualitative measures of experience. Aternity already offers the deepest quantitative insights, such as application and performance data, into the digital experience and the most powerful insights into the customer experience. With its ability to gauge employee feedback via Aternity Sentiment surveys and the ability to benchmark digital experience against industry peers, Aternity delivers aggregated insights based on application and device performance data along with human reactions, ultimately providing total experience management from an organization’s employees to their customers.

Start your digital transformation journey with Riverbed

By embracing domain transformation, process transformation, business model transformation, and organizational digital transformation, businesses can unlock new opportunities, enhance customer experiences, and stay ahead of the competition.

With its Unified Observability and Acceleration offerings, Riverbed can guide companies in their digital transformation projects from start to finish. Before kicking off the project, Riverbed professionals will help organizations ensure that their new investments are targeted and prioritized based on the issues that have the most impact on user experience. During implementation, Riverbed will track progress, recommend strategy adjustments, and provide guidance based on full data visibility.

So, let the digital transformation journey begin, and let innovation and growth propel your organization to new heights.

]]>
Ensuring Compliance for Better Business Resilience https://www.riverbed.com/blogs/ensuring-compliance-with-npm-for-business-resilience/ Thu, 25 May 2023 12:38:37 +0000 /?p=21041 In today’s hybrid environments, network performance management (NPM) is critical for any organization’s success. Networks are the backbone of modern businesses, enabling communication, collaboration, and information sharing. However, with the increasing complexity of networks and the rise in cyber threats, ensuring network performance can be a challenge.

Why compliance is a pillar of business resilience

Business resilience is the ability of an organization to withstand, adapt to, and recover from disruptions and challenges. One critical aspect of business resilience is compliance, which refers to adhering to the legal, regulatory, and organizational standards that apply to your business. Compliance plays a crucial role in network performance management and can help organizations fortify their network.

What compliance looks like for your hybrid network depends on your industry. For example, highly regulated industries like government, medical, and financial services usually have more stringent compliance requirements. Careful adherence to security and operational standards, however, is a necessity to some degree in every hybrid network.

When your network fails to meet internal and external compliance requirements, you risk creating security gaps and incurring fines. A hybrid network actively managed to operational and security standards, however, is able to remain compliant even in instances of network disruption. This level of compliance allows organizations to effectively maintain resilience on older applications and services while introducing new technologies.

Compliance can help improve network performance by:

  1. Enhancing Security: Compliance regulations often require the implementation of security measures that can help protect against cyber threats and minimize the risk of data breaches. By implementing mandated security measures, organizations can improve network performance by reducing downtime caused by security incidents and ensuring the confidentiality and integrity of sensitive data.
  2. Reducing Network Downtime: Compliance regulations also require organizations to establish failsafe procedures to ensure business continuity in the event of a cyberattack or system failure. By implementing organization or governmental compliance measures, businesses can reduce network downtime, ensuring that critical business processes can continue even during an outage. This can help improve network performance by minimizing the impact of network disruptions on business operations.
  3. Streamlining Network Management: Compliance regulations often require the documentation of network configuration and management processes. By implementing standardized processes and procedures, organizations can streamline network management, making it easier to monitor and troubleshoot issues. This can help improve network performance by reducing the time required to identify and resolve network problems.

Compliance is not just a legal requirement but is also a strategic imperative for businesses looking to optimize their network performance. By prioritizing compliance and implementing the required security measures, policies and procedures, organizations can ensure their networks are performing at their best while also meeting regulatory requirements. This ensures a secure and reliable digital experience for employees and customers, safeguarding people, assets and overall brand equity.

Ensure operational governance and compliance

The Riverbed NPM portfolio helps network teams with oversight through orchestration and data management. Compliance, whether directed by organizational or governmental requirements, is a way to safeguard the network in addition to the business. With new operational governance features like automated orchestration, IT teams can stand up, take down and redeploy Riverbed NPM products to a known safe state seamlessly. Riverbed NPM also now accommodates governmental standards like the Federal Information Processing Standard (FIPS) and Section 508 to ensure uniform practice and accessibility.

Failure to comply with such regulations can result in major fines, loss in revenue and negative customer sentiment. So, whether you are addressing requirements for security, fiduciary, accessibility, or other standards, Riverbed NPM ensures business resilience by leading the industry with regulatory compliance requisites for the modern hybrid network.

For more information on business resilience and how the Riverbed Network Performance Management portfolio can help your organization, please visit this page.

]]>
How Do You Reduce Your IT Costs? https://www.riverbed.com/blogs/how-do-you-reduce-your-it-costs/ Tue, 23 May 2023 12:53:42 +0000 /?p=21177 IT is the backbone of every business. Without a strong and robust IT team that can maintain a high level of performance and reliability, business can suffer due to a lack of employee productivity, decreases in customer satisfaction, and overall poor performance.

However, even given its critical nature, the reality is IT is a huge expenditure for many businesses and there are often ways to reduce the IT budget without sacrificing digital experience, employee productivity, or customer satisfaction. The tricky part is determining where exactly those cuts can be made and where to start.

In this blog, we’ll provide three tips on reducing your IT costs: how to identify the right devices to upgrade, the importance of IT budgeting, and an IT cost reduction checklist.

Identify the right devices to upgrade

The first step to optimizing IT costs is to evaluate your existing infrastructure. This evaluation will help you determine which devices need to be upgraded or replaced.

Here are three tips to help you identify the right devices to upgrade:

  1. Extend the life of devices: While many businesses replace devices based on their age, you can save money by focusing on device performance. Older devices that are still performing well don't have to be replaced, which can result in huge device cost savings.
  2. Right-size employee devices: Ensure you are providing your employees with appropriately powered devices. When refreshing employee devices, evaluate their needs: an employee who primarily uses light applications may not need a high-powered device, while an employee who spends the day in resource-intensive applications will need a device that supports that use case.
  3. Identify poorly performing devices: Just as older devices that still perform well can stay in service, some newer devices may not perform as expected. By identifying these devices, you may be able to proactively fix performance issues and save on expenses (see the sketch after this list).
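
The sketch below illustrates the performance-over-age principle from the tips above. The telemetry field names and thresholds are hypothetical, not Aternity's schema; the point is that the keep-or-replace decision keys on measured performance rather than device age.

```python
# Hypothetical device telemetry; field names are illustrative only.
devices = [
    {"name": "LT-0042", "age_years": 4.5, "boot_seconds": 38,  "app_launch_p95_s": 2.1},
    {"name": "LT-0187", "age_years": 1.5, "boot_seconds": 190, "app_launch_p95_s": 9.8},
]

def refresh_decision(d: dict) -> str:
    """Decide on performance, not age: slow devices get fixed or replaced,
    healthy devices stay in service regardless of age."""
    performing = d["boot_seconds"] < 60 and d["app_launch_p95_s"] < 4.0
    if performing:
        return "keep"                 # even an older device that performs well
    return "investigate/replace"      # even a newer device that performs poorly

for d in devices:
    print(d["name"], refresh_decision(d))
```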

Importance of IT budgeting

Once you have identified the devices that need to be upgraded, the next step is to develop an IT budget. IT budgeting is critical to managing IT costs effectively.

Here are a few key benefits of IT budgeting:

  • Optimize software licenses: Evaluate the software you use and the licenses you have. It is possible that you are paying for licenses that aren’t being used, employees are using redundant software, or shadow IT applications are increasing your software costs.
  • Assess network infrastructure: Assess the network infrastructure and determine where bottlenecks are occurring. This assessment will help you identify areas where you can either upgrade or streamline network infrastructure to reduce bandwidth costs.
  • Evaluate cloud spend: Cloud costs can rise quickly as you move to cloud-native or hybrid cloud environments. It’s critical you closely examine and understand the bills coming from your cloud provider and take steps to minimize unnecessary cloud traffic.
  • Prioritize spending: Identify the areas of IT investment that will provide the most ROI and focus on those spending areas first. Measure the impact of planned and ongoing changes on things like the digital experience, application performance, device health, and network performance to ensure you are getting the most bang for your buck.

IT cost reduction checklist

To help you reduce IT costs, follow this quick checklist to get started:

  • Align IT with business goals: Ensure your IT investments align with business goals, enabling you to optimize IT costs while driving business growth.
  • Determine a device refresh strategy: Identify the devices that need to be replaced, which ones can be fixed, and which ones can continue being used.
  • Identify savings opportunities: Look for ways to save on existing spend in areas like software licenses, cloud usage, and network bandwidth.
  • Automate IT processes: Automate IT processes to reduce manual labor and increase efficiency.

In conclusion, reducing IT costs is critical for every business, and the key is to optimize IT infrastructure while minimizing unnecessary expenses.


]]>
Enhanced Network Security for Better Business Resilience https://www.riverbed.com/blogs/enhanced-network-security-for-business-resilience/ Wed, 17 May 2023 12:17:55 +0000 /?p=21043 Imagine you’re the CEO of a business that relies heavily on your company’s network to keep things running smoothly. Hybrid network workflows combine both on-premise data centers and cloud environments, as well as users accessing applications from various devices and locations. All of these elements, as well as the data that passes through them, need to be protected. One day, you get a call from your IT department telling you that your network has been hacked. Panic sets in.

What do you do? Were you prepared for this? What is the financial or reputational impact to the business? Does your network have business resilience?

Why security is a pillar of business resilience

Business resilience is a critical factor in today’s fast-paced and dynamic business environment. Network performance management (NPM) plays a significant role in ensuring business resilience by managing network performance, compliance, and security. Security is one of the most crucial areas of focus for business resilience in the context of NPM. Improving your network’s security, making it more adaptable, can help it respond favorably to a rapidly evolving threat landscape. Not only will you weather potential attacks better, recovering faster and with less damage, but you may be able to avoid others altogether.

According to the Enterprise Strategy Group (ESG) 2023 Technology Spending Intentions Survey, 65% of IT professionals anticipate spending more on cybersecurity than any other area. Modern networks struggle to keep pace with an ever changing threat landscape. As threats and threat actors evolve and grow more sophisticated, you need a resilient hybrid network that leverages data to help your team find and fix issues faster, remediate threats, and avoid risks.

Taking steps to mitigate security risks

As this is often a daunting task for the IT organization to figure out and manage, NPM gives NetOps and SecOps teams the data and functionality to mitigate security risks. When evaluating potential NPM offerings in the context of security, identify solutions that have the following characteristics:

  • In the event of a cyberattack, look for NPM products that can be deployed, taken down, and restored to a safe state automatically, with no impact to the network.
  • Look for functionality like intelligent forensic analysis that can automate threat identification and reduce future risks.
  • For proactive threat hunting, look for full-fidelity data that captures every packet, flow, and device metric in your hybrid network without sampling.
  • For finding and fixing security issues faster, look for products with anomaly detection backed by AI/ML to automate data analysis (a simple illustration follows this list).
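
As a toy stand-in for the AI/ML-backed anomaly detection mentioned above, the sketch below flags a traffic sample that deviates sharply from its baseline using a simple z-score test. Real products use far richer models; the link byte counts here are invented.

```python
from statistics import mean, stdev

def is_anomalous(history: list[float], latest: float, z_threshold: float = 3.0) -> bool:
    """Flag the latest sample if it deviates from the baseline by more than
    z_threshold standard deviations."""
    mu, sigma = mean(history), stdev(history)
    if sigma == 0:
        return latest != mu
    return abs(latest - mu) / sigma > z_threshold

# Hourly bytes (GB) seen on a monitored link
baseline = [4.1, 3.9, 4.3, 4.0, 4.2, 3.8, 4.1, 4.0]
print(is_anomalous(baseline, 9.6))  # True: worth investigating
```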

To build resilience into security in network performance management, businesses need to take a proactive and holistic approach. Here are some best practices:

  1. Develop a comprehensive security strategy: This should include clear objectives, metrics, and processes for monitoring and ensuring security.
  2. Invest in the right tools and technologies: Effective security requires the right tools, such as traditional threat prevention tools and methods as well as products that produce forensic telemetry that find threats that traditional security tools might miss. Businesses need to evaluate their needs and choose the tools that best fit their requirements.
  3. Monitor and analyze network traffic: By monitoring network traffic, businesses can identify potential security threats and take action before they cause damage.

Engage intelligent security methods against cyber threats

Riverbed NPM products play a strategic role in the overall security of hybrid networks. NPM products need to be seamlessly integrated into an organization’s automated processes to remove the potential risk from manual administration.

With new features like automated orchestration, IT teams have the ability to restore Riverbed NPM products to a known safe state without manual intervention in the event of cyber-attacks or other potential internal or external network threats. In addition, the Riverbed NPM portfolio provides full fidelity data by capturing every packet, flow and device metric without sampling for forensic purposes. This helps identify potential risk exposures that traditional security tools might miss. Solid security competencies drive business resilience by reducing both the risk of negative business impacting events and the magnitude of when they occur.

For more information on business resilience and how the Riverbed Network Performance Management portfolio can help your organization, please visit this page.

]]>
Optimize the Digital Employee Experience with Aternity Sentiment https://www.riverbed.com/blogs/digital-employee-experience-with-aternity-sentiment/ Mon, 15 May 2023 12:55:00 +0000 /?p=21180 In today’s fast-paced business world, delivering a superior digital experience is essential for driving employee productivity, satisfaction, and customer experience. IT departments are constantly seeking ways to improve digital experiences, but the challenge lies in understanding users’ perceptions of device and application performance.

A recent Forrester Report states, “while many organizations focus on tools to measure and enhance DEX, the path to success starts long before the tools discussions. Your strategy must embrace a flexible philosophy for happier employees. Then you can explore a variety of technologies to fulfill that vision.” To truly understand the complete digital experience, IT teams need to correlate qualitative employee feedback with full-fidelity quantitative performance metrics.

This is where Aternity Sentiment comes in.

Introducing Aternity Sentiment

Aternity Sentiment empowers IT teams to identify user experience issues, take targeted prescriptive actions, and enhance employee productivity, satisfaction, service quality, and overall business performance. By tightly correlating quantitative and qualitative measures, Aternity Sentiment offers the most comprehensive view of the digital employee experience, setting a new standard for DEX.

Watch the video to learn how Aternity Sentiment empowers total experience management from employees to customers.

Explore benefits of Aternity Sentiment

Empower employees and drive productivity to improve business performance

Aternity Sentiment significantly enhances employee engagement and productivity, leading to improved business performance. By capturing real-time feedback through tailored surveys, Sentiment complements existing Aternity application and device performance data, offering a comprehensive understanding of employee satisfaction. This approach allows IT teams to pinpoint areas that require improvement and implement targeted measures to optimize the digital experience. The use of flexible survey components ensures an accurate assessment of user satisfaction across various devices and locations. Ultimately, this empowers employees and drives productivity, resulting in better overall business performance.

Accelerate digital transformation adoption with targeted employee engagement

Digital transformation is a complex process that requires broad adoption of new technologies and processes across organizational boundaries. Employee acceptance is crucial for successful technology and process changes. Aternity Sentiment facilitates this acceptance by providing workflow integration of qualitative telemetry and analysis in the context of actual user data. Customized branding and precise timing of survey deployment to targeted user groups foster user trust and raise response rates. By engaging employees and addressing their concerns, Aternity Sentiment accelerates the adoption of digital transformation initiatives, ensuring your organization remains competitive and agile.

Deliver total experience management for a comprehensive view of employee and customer experience

Aternity Sentiment enhances Aternity's total experience management capabilities, providing a comprehensive view of both employee and customer experiences. Aternity's unique click-to-render insights, end-user experience data, and user journey analytics offer valuable customer insights. By integrating Sentiment's qualitative feedback with these capabilities, Aternity enables IT teams to rapidly isolate the cause of delays, uncover hidden issues, and optimize the overall digital experience. This holistic approach ensures a seamless and enjoyable experience for employees and customers alike, leading to higher satisfaction and loyalty.

Manage IT more proactively and efficiently with real-time feedback collection

Aternity Sentiment extends Aternity's proactive incident management by offering an early warning system through periodic, real-time feedback collection. As a result, IT Operations teams can quickly identify problems before they become systemic, widespread issues. This proactive approach reduces downtime, prevents loss of productivity, and helps maintain a positive user experience. In addition, Sentiment's trending analysis of qualitative feedback helps identify patterns in user behavior, uncover recurring or common issues, and track service quality improvement efforts. This empowers IT teams to make data-driven decisions and manage resources more efficiently.

Improve IT service quality by implementing experience-level agreements (XLAs)

Aternity Sentiment supports organizations implementing XLA metrics, which focus on employee experience and understanding how IT influences productivity. Unlike traditional SLAs that measure transactional metrics by department, XLAs emphasize the importance of a positive employee experience. With Sentiment’s out-of-the-box and customizable surveys, organizations can analyze survey responses by various attributes and correlate employee satisfaction with device and application performance. This enables IT and LOB leaders to measure the productivity impact of technology changes, determine why a user (or group) may prefer one application over another, and analyze trends in the context of business processes. As a result, leaders can make informed decisions to improve policies, prioritize investments, and identify skills gaps, ultimately enhancing IT service quality and driving better business outcomes.
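
To make "correlate employee satisfaction with performance" concrete, the sketch below computes the correlation between survey scores and application response times. The paired samples are invented, this is not Aternity's analytics pipeline, and `statistics.correlation` assumes Python 3.10 or later.

```python
from statistics import correlation  # available in Python 3.10+

# Hypothetical paired samples: per-user survey score (1-5) and average
# app response time (seconds). Invented values for illustration only.
scores    = [4.8, 4.5, 4.6, 3.1, 2.2, 4.7, 2.9, 4.4]
resp_time = [0.8, 1.1, 0.9, 2.7, 4.0, 1.0, 3.1, 1.2]

r = correlation(scores, resp_time)
print(f"r = {r:.2f}")  # strongly negative: slower apps, unhappier users
```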

Sentiment is a game-changer for DEX, providing organizations with the ability to correlate qualitative employee feedback with quantitative performance metrics. This innovative approach empowers IT teams to deliver better digital experiences, drive productivity, and improve overall business performance.

Don’t let your organization fall behind—embrace the future of DEX with Aternity DEM. Learn more by visiting riverbed.com/DEX.

]]>
Riverbed Changes the Incident Response Paradigm with Intelligent Ticketing https://www.riverbed.com/blogs/intelligent-ticketing-with-alluvio-unified-observability/ Mon, 08 May 2023 12:24:00 +0000 /?p=21104 What if your solution could quickly identify and isolate the root cause of problems and provide intelligent recommendations for remediation before a ServiceNow ticket has been generated? What if the ServiceNow ticket was automatically assigned the right severity and routed to the right level based on the relevant context and insights? With Riverbed’s full fidelity insights, complex ticketing workflows become razor sharp, highly automated processes.

In today’s market, IT Operations teams need fast, context-driven insights to optimize business performance. The increasing complexity of managing incidents in multi-cloud environments has left Service Desk and Network Operations teams with overwhelming volumes of data, alerts, and tickets. However, siloed, domain-specific monitoring tools fail to provide context or actionable insights. Additionally, limitations in automation scope and diagnostic information gathering, along with time-consuming ticket documentation steps, negatively affect first-level resolution rates and costs, making it hard to effectively prioritize the flood of tickets.

Riverbed’s Unified Observability portfolio solves these problems by delivering full-fidelity telemetry and actionable insights for an organization’s entire technology stack, from applications and infrastructure to end-user experience. With its integration with ServiceNow, the market leader for ITSM platforms, Riverbed provides deep ServiceNow incident context to Service Desk agents and Network Operations teams. Riverbed’s triage, diagnostic and remediation automations streamline ServiceNow ticket creation and escalation.

Riverbed and ServiceNow cross-portfolio integration

Riverbed’s integration with ServiceNow aligns with ServiceNow’s vision of a single, unifying platform for companies advancing digital transformation programs. The combined solution delivers targeted incident response context and automation across Digital Experience Management (DEM), Application Performance Management (APM), and Infrastructure Management (IM) operational data domains. Riverbed offers direct integration within individual Riverbed portfolio products when event-driven ticketing makes sense, and delivers proactive ticketing with built-in intelligence across complex incidents.

Provide targeted, smarter incident response for IT operations teams

With its ability to easily integrate with third party observability tools, Riverbed IQ reduces the noise and eliminates the source of duplicated and false-positive ticketing, significantly reducing the volume of tickets created in ServiceNow. It replicates advanced investigative processes by correlating operational data across public cloud, private cloud, and data center infrastructure layers, looking for anomalous behaviors indicative of an emerging incident. When Riverbed IQ detects an anomaly, it automatically performs an investigation. If the behavior is identified as important based on anomaly thresholds, it creates a ServiceNow ticket with the right severity and assigns it to the right team, providing supporting incident context, cutting through the noise caused by event-based ticketing.
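
For a sense of what programmatic ticket creation looks like, the sketch below opens an incident through ServiceNow's standard Table API. The instance URL, credentials, and field values are placeholders, the `requests` library is assumed to be installed, and this is not Riverbed's actual integration code; it only illustrates creating a pre-triaged ticket with severity and routing already set.

```python
import requests  # third-party; pip install requests

# Placeholder instance and credentials -- illustrative only
INSTANCE = "https://example.service-now.com"
AUTH = ("api_user", "api_password")

def open_incident(summary: str, details: str, urgency: str, group: str) -> str:
    """Create a ServiceNow incident via the Table API and return its number."""
    resp = requests.post(
        f"{INSTANCE}/api/now/table/incident",
        auth=AUTH,
        headers={"Content-Type": "application/json", "Accept": "application/json"},
        json={
            "short_description": summary,
            "description": details,       # embed the correlated context here
            "urgency": urgency,           # "1" (high) .. "3" (low)
            "assignment_group": group,    # route to the right team up front
        },
        timeout=10,
    )
    resp.raise_for_status()
    return resp.json()["result"]["number"]  # e.g. "INC0012345"

# Only ticket what an automated investigation confirmed as significant
print(open_incident("Anomalous WAN latency on branch uplink",
                    "Correlated flow/packet evidence attached", "1", "network-ops"))
```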

Empower L1 service desk agents to resolve issues faster

Riverbed Aternity provides Service Desk teams with extensive insights and tools to troubleshoot issues faster, make accurate decisions and resolve incidents without escalation. Aternity monitors end user devices, correlates device and application performance with user behavior, and identifies potential issues with the end user digital experience. When a degraded end-user experience issue is detected, Aternity automatically creates a ServiceNow incident and embeds employee-specific insights within the ServiceNow ITSM UI. With just one click, Service Desk agents can remotely perform investigative actions on any device to accelerate their troubleshooting.

Deliver higher-order incident response for network operations

Together with Riverbed IQ, the Riverbed Network Performance Management (NPM) suite revamps the reactive stance of NOCs, which typically rely on manual correlation of event data, by automatically correlating full-fidelity operational data, not just events, and surfacing actionable situations as ServiceNow tickets. Riverbed IQ zeroes in on the root cause of a problem and provides the specific context immediately upon ServiceNow ticket creation. Riverbed offers end-to-end visibility and embeds actionable insights directly into the ServiceNow ticket, reducing escalations.

Riverbed’s Unified Observability portfolio, integrated with ServiceNow, empowers IT Operations teams to proactively resolve issues and optimize business performance. By delivering targeted incident response context and automation across operational data domains, Riverbed reduces noise and provides deep insights, enabling IT teams to resolve issues faster and reduce costs.

]]>
What Are Key Components of Digital Employee Experience? https://www.riverbed.com/blogs/key-components-of-digital-employee-experience-dex/ Fri, 28 Apr 2023 12:13:00 +0000 /?p=21075 The Digital Employee Experience (DEX) has become increasingly vital in today’s fast-evolving work landscape, particularly as organizations embrace remote and hybrid work environments. DEX encompasses every aspect of an employee’s interactions with digital tools, technologies, and resources that enable them to accomplish their tasks. Understanding and optimizing that employee experience and interaction with technology is essential for driving employee productivity, engagement, and satisfaction, ultimately leading to business success.

Key components of DEX

A comprehensive approach to Digital Employee Experience involves five critical components:

  1. Application performance and usability: Ensuring applications are fast, reliable, and user-friendly to support employees in their day-to-day tasks.
  2. Device performance and reliability: Providing employees with devices that are high-performing, dependable, and tailored to their specific needs.
  3. Connectivity and network performance: Facilitating fast and stable network connections that allow employees to work efficiently and collaborate seamlessly.
  4. Workspace environment and collaboration tools: Creating a digital environment that promotes effective communication and collaboration among team members.
  5. Security and data protection: Implementing robust security measures to safeguard sensitive company and employee information.

The importance of DEM solutions

To effectively manage and enhance DEX, Digital Experience Management (DEM) platforms are a vital tool in the IT leader's technology toolbox. This is why organizations looking to improve their employee experience partner with Riverbed to implement Riverbed Aternity DEM across their enterprise. Aternity provides visibility into the actual user experience and performance of applications, devices, and networks, enabling IT teams to proactively identify and address issues impacting employee experience.

Overall, DEM solutions like Riverbed Aternity play a crucial role in improving DEX by:

  • Gaining real user insights into employee experiences with applications and devices
  • Proactively addressing performance issues to maintain a seamless, productive work environment
  • Optimizing IT infrastructure and resources to support employee productivity
  • Evaluating the impact of IT initiatives on employee experience and business results

Organizations looking to improve their DEX should adopt these four best practices:

  1. Establish a baseline: Measure the current state of application performance, device health, and network connectivity to create a foundation for understanding and improving employee experience (a minimal example follows this list).
  2. Identify and address bottlenecks: Use data from DEM solutions like Aternity and direct employee feedback to proactively resolve performance issues and maintain a seamless, productive work environment for employees.
  3. Prioritize user-centric initiatives: Focus on improving employee experiences with digital tools and resources, ensuring your organization’s technology investments yield maximum returns in terms of employee satisfaction and productivity.
  4. Measure and monitor: Regularly measure and monitor DEX metrics to track progress and ensure continuous improvement. Encourage employee feedback and promote a culture of open communication to identify areas for improvement and drive positive change within the organization.
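
As a minimal example of the baselining step referenced in item 1 above, the sketch below summarizes a set of application launch-time samples into median and 95th-percentile figures that later measurements can be compared against. The sample values are invented and the metric choice is illustrative.

```python
from statistics import quantiles

def baseline(samples_s: list[float]) -> dict:
    """Summarize a metric (here, app launch time) so later changes can be
    compared against a known starting point."""
    q = quantiles(samples_s, n=100)  # percentile cut points 1..99
    return {"median_s": q[49], "p95_s": q[94], "samples": len(samples_s)}

launch_times = [1.2, 1.4, 1.1, 1.3, 6.5, 1.2, 1.5, 1.3, 1.4, 1.2,
                1.3, 1.1, 1.6, 1.2, 7.1, 1.4, 1.3, 1.2, 1.5, 1.3]
print(baseline(launch_times))
```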

Leveraging Riverbed Aternity for enhanced DEX

Being in Product Marketing at Riverbed, I’ve seen firsthand how our solutions have helped organizations measure and optimize employee experiences. Riverbed Aternity DEM offers valuable insights into employee interactions with applications and devices. It measures the employee’s actual experience, enabling IT teams to proactively identify and resolve performance issues, optimize IT infrastructure and resources, and measure the impact of IT initiatives on employee experience and business outcomes.

By leveraging Aternity’s DEM capabilities, organizations can:

  • Better understand the end-user perspective and identify opportunities for improvement
  • Foster a culture of continuous improvement focused on enhancing employee experiences
  • Streamline IT decision-making based on accurate and actionable insights
  • Enhance collaboration and communication across teams and departments


As remote and hybrid work environments continue to become the norm, organizations must prioritize digital employee experience. By implementing a robust DEM solution like Riverbed Aternity and following best practices, organizations can unlock the full potential of their workforce, driving overall business success.

Emphasizing the importance of DEX in decision-making processes and technology investments will lead to a more engaged, productive, and satisfied workforce, which in turn positively impacts customer experiences and business outcomes. By focusing on continuous improvement and fostering a culture of open communication, organizations can stay ahead of the curve and thrive in today’s rapidly changing digital landscape.

Forrester recently published a best practices report, Make Digital Employee Experience the Centerpiece of Your Digital Workplace Strategy, where they emphasize how optimizing the digital employee experience (DEX) has become a critical factor for today’s diverse, hybrid workforce, and how improving employee experiences translates to better business outcomes. Forrester states that “while many organizations focus on tools to measure and enhance DEX, the path to success starts long before the tools discussions. Your strategy must embrace a flexible philosophy for happier employees. Then you can explore a variety of technologies to fulfill that vision.”

A comprehensive DEX solution like Riverbed Aternity is crucial for improving employee experiences and driving success in hybrid and remote work settings. You can download a complimentary copy of the Forrester Report here.

]]>
Effective Network Performance for Better Business Resilience https://www.riverbed.com/blogs/effective-network-performance-for-better-business-resilience/ Mon, 24 Apr 2023 12:10:33 +0000 /?p=21033 Whether you are a small business or a major enterprise, network performance can make or break a business of any size. Now that networks are stretched far beyond the data center, maintaining a consistent level of performance in branches, campuses or even the cloud is massively challenging. To add to that pile, with users connecting from various locations, their expectation is that the network is always on and available.

With network performance and user experience as mission-critical initiatives, IT teams are constantly under the microscope. When NetOps teams lack visibility into their applications, servers, and cloud-native environments, they’re unable to correctly troubleshoot problems like unchecked security threats, application slowdowns, and other performance issues. For hybrid networks, a lack of visibility often stems from insight latency. The speed and clarity with which insights are delivered can be the difference between prompt action and a large outage.

Why performance is a pillar of business resilience

Little do people know what it takes to keep these modern hybrid networks going! Performance across the network is critical. This makes performance a key pillar of business resilience. Business resilience is the ability of a company to adapt and recover quickly from unexpected disruptions.

In today’s digital world, network performance management (NPM) plays a crucial role in ensuring business resilience. By effectively managing network performance, companies can build a more resilient network infrastructure that can withstand unexpected disruptions and provide a consistent user experience.

Effective monitoring, testing, and optimization of the network can help identify and resolve performance issues, such as bottlenecks, latency, or packet loss. Ensuring that the network is performing optimally can help avoid disruptions and provide a consistent user experience.

Elevate your network’s visibility and performance

The Riverbed NPM portfolio delivers increased business resilience, enabling and accelerating operational transformation from legacy to hybrid and multi-cloud networks. Our solution helps IT teams adapt to disruptions while maintaining continuous operations and safeguarding people, assets, and overall brand equity.

Unlike other NPM solutions, Riverbed NPM delivers granular visibility across network domains with full-fidelity data, extracted from packets, flows and device metrics giving insight across hybrid environments. With new performance enhancements like increased data capabilities, faster processing rates and third-party vendor support, Riverbed NPM sees more telemetry than ever before allowing real time visibility across networks, servers, applications, and the cloud.

New Riverbed NPM performance enhancements help address growing network demands and mitigate compromising network events by delivering full fidelity insights at lightning-fast speed to NetOps and SecOps teams. Corporate mandates require an IT environment that is nimble to accommodate new business requirements, particularly now that networks are evolving beyond the data center. The shift to complex, multi-cloud networks is driving the need for greater scalability, accelerated insights, integration enhancements and increased performance.

For more information on business resilience and how the Riverbed Network Performance Management portfolio can help your organization, please visit this page.

]]>
Spinning Plates, Readiness and Business Resilience https://www.riverbed.com/blogs/business-resilience-network-performance-management/ Tue, 18 Apr 2023 12:13:49 +0000 /?p=20936 Keeping up with your hybrid network can be overwhelming. Nowadays, a mixture of on and off premise technology is the new normal. Users are accessing the network and applications from various locations. The network is stretched well beyond the data center and users are accessing applications from the cloud.

IT teams are constantly spinning plates. It’s only a matter of time before something breaks.

Keep the plates spinning with NPM

So, what can you do to keep the network spinning?  Strengthen your network to be more adaptable and responsive, delivering a better digital experience to your organization’s employees and users. This is business resilience.

Building a solid prevention or contingency plan for a possible damaging event truly tests the mettle of IT teams, and readiness for all possible negative scenarios can seem an impossible task. Business resilience is crucial for companies to adapt and recover quickly from unexpected disruptions, such as natural disasters, cyberattacks, or economic downturns.

In today’s digital world, network performance management (NPM) plays a critical role in ensuring that objective. By effectively managing network performance, compliance, and security, companies can build a more resilient network infrastructure.

Three focus areas for business resilience

Network performance management is the process of monitoring and optimizing the performance of a company’s network infrastructure. Here are the key areas of focus for business resilience in the context of network performance management:

Performance

Performance is the cornerstone of network performance management. Effective monitoring, testing, and optimization of the network can help identify and resolve performance issues, such as bottlenecks, latency, or packet loss. Ensuring that the network is performing optimally can help avoid disruptions and provide a consistent user experience.

Compliance

In today’s regulatory environment, compliance is a critical concern for businesses. Compliance requirements vary depending on the industry and the region, but they all aim to protect the privacy and security of sensitive data. NPM can help ensure compliance with organizational or governmental regulations by providing visibility into network traffic, monitoring access controls, and delivering oversight and data management.

Security

With the increasing sophistication of cyberattacks, security is a top priority for businesses. A security breach can lead to data theft, financial losses, and reputational damage. NPM can help secure the network by monitoring for unusual traffic patterns, providing forensic analysis, and delivering granular network data for quick response and troubleshooting.

How to build resilience into NPM

To build resilience into network performance management, businesses need to take a proactive and holistic approach. Here are five best practices:

  1. Develop a comprehensive network performance management strategy: This should include clear objectives, metrics, and processes for monitoring and optimizing network performance, compliance, and security.
  2. Invest in the right tools and technologies: Effective network performance management requires the right tools, such as network monitoring hardware/software that focuses on packet capture, flow monitoring and device metrics. Businesses need to evaluate their needs and choose the tools that best fit their requirements.
  3. Automate routine tasks: Automation can help reduce manual effort (and mistakes from human intervention) and improve efficiency. This includes automating network configuration and patch management, as well as implementing machine learning and artificial intelligence to detect and resolve issues (a small configuration-drift sketch follows this list).
  4. Build a culture of security: Security is everyone’s responsibility. Businesses need to educate employees on security best practices, establish clear security policies and procedures, and regularly test and audit their security measures.
  5. Continuously monitor and adapt: The network environment is constantly changing. Businesses need to continuously monitor network performance, compliance, and security, and adapt their strategies and tools to keep up with the evolving threat landscape.
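To make best practice 3 concrete, below is a small, hypothetical sketch of automating one routine task: flagging configuration drift by comparing collected device configs against saved “golden” baselines. The directory layout and file names are assumptions for illustration; a real deployment would collect configs through its configuration-management or NPM tooling.

```python
# Flag configuration drift by hashing collected configs against baselines.
# "baselines/" holds golden configs, "current/" the latest collected ones;
# both locations are illustrative placeholders.
import hashlib
from pathlib import Path

def fingerprint(text: str) -> str:
    """Stable hash of a config, ignoring per-line leading/trailing whitespace."""
    normalized = "\n".join(line.strip() for line in text.splitlines())
    return hashlib.sha256(normalized.encode()).hexdigest()

baseline_dir = Path("baselines")
current_dir = Path("current")

for baseline in sorted(baseline_dir.glob("*.cfg")):
    current = current_dir / baseline.name
    if not current.exists():
        print(f"{baseline.stem}: no current config collected")
    elif fingerprint(current.read_text()) != fingerprint(baseline.read_text()):
        print(f"{baseline.stem}: drift detected, flag for review")
```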

So, to keep the plates spinning, NPM has to be a critical component of business resilience. By focusing on performance, compliance, and security, businesses can build a more resilient network infrastructure that can withstand unexpected disruptions and provide a secure and consistent user experience.

For more information on business resilience and how the Riverbed Network Performance Management portfolio can help your organization, please visit this page.

Eliminate Application Performance Bottlenecks to Improve User Experience https://www.riverbed.com/blogs/application-performance-monitoring-improves-user-experiences/ Mon, 10 Apr 2023 21:34:00 +0000 /?p=20929 Last week our house flooded. It wasn’t a major flood but we did get some damage in a couple of rooms. A storm came out of nowhere and deluged the house with water for about 30 minutes. Turns out that our storm water drainage had bottlenecks that we weren’t aware of and just wasn’t up to the task.

The same thing can happen with application performance. How do you know that your network and applications are going to give you the reliability and performance your business needs? Just like a plumber could have helped us find the bottlenecks before disaster struck, Application Performance Monitoring (APM) can help you identify where your applications are going to slow down.

Get a complete view of Application Performance

APM helps organizations improve user experiences by tracking key software application performance metrics using monitoring software and telemetry data. Without Application Performance Monitoring, teams struggle to identify and resolve the numerous problems that can arise, causing customers to become frustrated and abandon the app altogether, impacting revenue and brand image.

Application monitoring is a great way to gain a full view into the user experience, application performance, and database availability. Businesses of all sizes use various applications daily for different processes and need to deploy tools throughout the application environment and supporting infrastructure to monitor real-time performance. To get a complete view of application performance, you need to monitor the following:

  • Digital/user experience encompassing both real-user and simulated experience for assessing performance in production and non-production environments. This type of monitoring collects performance metrics, including load time, response time, uptime, and downtime, by analyzing the user interface on the end-user device (a minimal synthetic probe sketch follows this list).
  • Application performance monitoring involves overseeing the complete application and infrastructure. This comprises the application framework, database, operating system, middleware, web application server and user interface, CPU usage, and disk capacity. Monitoring applications at this level can help identify code segments that could be causing performance issues and check the availability of software and hardware components.
  • Database availability monitoring helps assess the performance of SQL queries or procedures and the availability of the database.
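As a rough sketch of the simulated (synthetic) side of that first monitoring type, the snippet below times a single HTTP health check. The URL is a placeholder, and real synthetic monitoring tools script full multi-step transactions rather than a single request.

```python
# Time one HTTP request to an application endpoint and report up/down
# status plus response time. The URL is a hypothetical placeholder.
import time
import urllib.request

URL = "https://app.example.com/health"
TIMEOUT_S = 10

start = time.monotonic()
try:
    with urllib.request.urlopen(URL, timeout=TIMEOUT_S) as resp:
        up = 200 <= resp.status < 300
except OSError:  # covers URLError, timeouts and HTTP error statuses
    up = False
elapsed_ms = (time.monotonic() - start) * 1000

print(f"up={up} response_time_ms={elapsed_ms:.0f}")
```

Run on a schedule from several locations, even a probe this simple yields uptime and response-time trends before real users notice a problem.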

But is it any different for cloud-native applications? The rise of cloud-native applications poses several challenges despite their well-established benefits. Complex applications, composed of numerous microservices, generate huge amounts of data, which needs to be centrally managed and analyzed to proactively identify performance issues. The speed at which data is generated is also a challenge. These factors have made application performance management more challenging in cloud-native environments.

The many benefits of APM

A good Application Performance Monitoring solution offers many capabilities, such as:

  • Dynamically maintain real-time awareness of application and infrastructure components through automatic discovery and mapping.
  • Gain end-to-end visibility into the application’s transactional experience to comprehend its impact on business outcomes and user experience.
  • Monitor mobile and desktop applications on browsers to track user experience across different platforms.
  • View root-cause and impact analysis to identify performance issues and their impact on business outcomes for faster, more reliable incident resolution.
  • Integrate and automate service management tools and third-party sources to scale up or down with the infrastructure.
  • Analyze the impact on user experience and the resulting effect on business KPIs.
  • Monitor endpoint devices and application performance issues.
  • Monitor virtual desktop infrastructure to maximize employee productivity.

Clearly, businesses can benefit in many ways by gaining visibility and intelligence into application performance and its dependencies. Real-time monitoring helps detect performance issues before they affect real users, expanding the technical and business benefits list, which includes:

  • Increased application stability and uptime
  • A reduced number of performance incidents
  • Faster resolution of performance problems
  • Improved infrastructure utilization

Investing in a good Application Performance Monitoring solution ensures reliable intelligence and insights that enable teams to align quickly, and to identify and isolate issues for faster problem resolution. Performance monitoring has rapidly expanded to encompass a broad range of technologies and use cases. Modern applications built from many microservices are highly complex and run in containerized environments hosted across multiple cloud services, making end-to-end visibility even more essential.

And just like good storm water drainage, you’ll get the performance when you really need it.

Transforming Global Financial Services with Riverbed Aternity DEM https://www.riverbed.com/blogs/digital-experience-management-for-global-financial-services/ Tue, 28 Mar 2023 13:22:00 +0000 /?p=20720 The global financial services industry has undergone a rapid digital transformation in recent years, driven by evolving customer expectations, remote and hybrid work, infrastructure modernization, and an increasingly competitive landscape. Financial institutions now face the challenge of ensuring exceptional digital experiences for their customers and employees while adhering to strict regulatory standards and maintaining optimal performance across a diverse range of digital services. This challenge is especially acute as both customers and employees demand a lot more when they interact with technology—whether that is a customer interacting with the financial institution’s site and mobile app or the employees who work for these organizations.

This is why the leading global financial organizations leverage the Riverbed Aternity Digital Experience Management (DEM) platform—a comprehensive solution designed to address the unique challenges they face while maintaining their competitive advantages. With Riverbed Aternity, companies can gain insights into customer journeys, both converting and non-converting, and track user experience at every step of their journey, identifying and optimizing the highest-converting paths and eliminating any roadblocks. In addition, the platform can monitor the employee experience for critical business applications used to support customers, reducing friction during customer journeys due to broken links and other issues.

Enhancing customer experience

At the heart of every financial institution’s success lies exceptional customer experience.  Riverbed Aternity DEM helps global financial services companies monitor and optimize the performance of their digital services, ensuring seamless and satisfying experiences for customers. With Riverbed Aternity’s advanced analytics and insights, financial institutions can proactively identify and resolve performance issues, thereby minimizing customer frustration and maximizing satisfaction.

One of these ways is with Aternity’s User Journey Intelligence (UJI) functionality. With UJI, Riverbed Aternity leverages advanced Real User Monitoring (RUM) technology to track user journeys and analyze web page load time, providing insights into top-line business metrics such as revenue, customer engagement, and customer abandonment. The platform also offers Synthetic Transaction Monitoring (STM) for proactive issue identification. This is unique to Riverbed Aternity, where financial services companies can gain a competitive edge over pure-play DEM vendors lacking RUM, STM, or both.


Boosting employee productivity & accelerating digital transformation

In addition to improving customer experience, Riverbed Aternity DEM also enhances employee productivity by providing insights into the performance of internal applications and systems. By identifying bottlenecks and performance issues, Riverbed Aternity allows financial institutions to optimize their internal processes and workflows, resulting in increased efficiency and reduced operational costs.

As the global financial services industry continues to evolve, organizations must be agile and adaptable to stay ahead of the curve.  Riverbed Aternity DEM has supported financial institutions in their digital transformation journey by providing the tools and insights needed to optimize and innovate their digital services. With Riverbed Aternity, financial institutions across the globe have confidently embraced new technologies and delivered exceptional digital experiences to both customers and employees.

When Swiss Re, a leading provider of insurance, reinsurance and other forms of insurance-based risk solutions, embarked on its digital transformation strategy, it aimed to simplify collaboration among global teams and between Swiss Re and its external partners and customers. However, its existing device performance monitoring tool couldn’t provide a comprehensive understanding of the workforce’s experience, making it hard to interpret and scale. With Riverbed Aternity, Swiss Re was able to remotely, proactively, and non-invasively measure actual end-user experience, improving visibility and delivering efficiency gains.

“What we particularly liked with Aternity was the ease in which we could analyze and correlate data. Riverbed Aternity makes this insight easily available to a broader audience in a format that is scalable and sharable with our internal stakeholders,” said Joost Smit, Digital Workplace Solution Architect and Engineer at Swiss Re.

Looking ahead

These are just a couple of examples of how the Riverbed Aternity Digital Experience Management platform is the ideal solution for global financial services companies looking to navigate the complexities of today’s digital landscape. By providing real-time insights into application performance, end-user experience, and overall system health, Riverbed Aternity DEM empowers financial institutions to deliver exceptional digital experiences, maintain regulatory compliance, and drive business growth.

Help Your Customers Get More Out of Their IT Budgets with Riverbed Aternity DEM https://www.riverbed.com/blogs/help-your-customers-get-more-out-of-their-it-budgets-with-alluvio-aternity-dem/ Mon, 13 Mar 2023 20:46:01 +0000 https://www.riverbed.com/?p=76124 As we continue to hear and read about rising inflation, ongoing supply chain challenges, and a potential recession, enterprises around the world are tightening their budgets. IT teams are clearly feeling the pressure with CIOs and IT buyers predicting their tech spend will only increase by 5.5% this year—a meaningful deceleration from previous expectations and below last year’s annual inflation rate of 8.3%. In other words, despite rising costs, IT teams will spend less this year when adjusted for inflation, reflecting stagnant IT budgets that aren’t keeping pace with economic realities.

Having to make do with less purchasing power is challenging, but there are opportunities to help your customers generate efficiencies within IT and get more out of every penny. In this blog, we explore how Riverbed Aternity DEM can help your customers’ teams reduce costs while maintaining flawless digital experiences.

What is Aternity DEM?

Riverbed Aternity DEM (Digital Experience Management) is a full-spectrum digital experience management platform that provides insight into the business impact of customer and employee digital experiences. It achieves this by capturing and storing technical telemetry at scale from employee devices, business applications, and cloud-native application services.

Equipped with this comprehensive visibility into the actual user experience and device performance, IT teams can create better experiences for users and leaders can make informed business decisions on IT spend. Here’s how:

Smart Device Refresh

Typically, IT teams will refresh devices based on their age, say, every three or four years. But age alone doesn’t speak to the actual health or performance of a device. Some perfectly good devices may be thrown out too soon, while faulty devices may need to be replaced sooner so employees can stay productive. Riverbed Aternity DEM offers insight into actual user experience and device performance, informing teams on when to replace devices based on performance.

What it means for your clients: intelligent device replacement helps save them money by refreshing devices exactly when they need to be replaced, and not a moment sooner.

Eliminated Software Bloat

We all keep subscriptions longer than necessary, and the same is true for enterprises. A SaaS trends report found the average company wastes more than $135,000 annually on unused, underused, or duplicate SaaS tools, and this cost increases dramatically for large enterprises. Riverbed Aternity DEM gives IT the power to automatically identify software licenses that are going unused or aren’t used often.
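As a purely hypothetical illustration of the underlying idea (this is not the Aternity implementation or its data format), the sketch below scans a made-up CSV export of per-user application usage and flags licenses that look unused:

```python
# Flag rarely used licenses from a hypothetical usage export with columns:
# user,app,active_minutes_90d. File name and threshold are illustrative.
import csv
from collections import defaultdict

IDLE_THRESHOLD_MIN = 30  # under 30 active minutes in 90 days ~ unused

usage = defaultdict(list)  # app -> [(user, minutes), ...]
with open("app_usage_90d.csv", newline="") as f:
    for row in csv.DictReader(f):
        usage[row["app"]].append((row["user"], float(row["active_minutes_90d"])))

for app, rows in sorted(usage.items()):
    idle = [u for u, minutes in rows if minutes < IDLE_THRESHOLD_MIN]
    if idle:
        print(f"{app}: {len(idle)}/{len(rows)} licenses look unused; reclaim candidates")
```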

What it means for your clients: Instantly reduce software bloat by cutting licenses that are going mostly unused and redeploy those savings in ways that can better help the business.

Curtailed Shadow IT

All too frequently, teams across an enterprise will purchase SaaS tools without going through the proper IT channels. This inevitably leads to redundancies, increased risk, and headaches for IT. But Riverbed Aternity DEM can identify shadow IT software, and either direct usage to an approved application to eliminate the additional expense, or leverage approved purchasing channels to better handle the spend.

What it means for your clients: By curtailing shadow IT, IT teams can better understand and manage the software being used by employees. At the same time, it helps IT identify and eliminate duplicate and wasteful solutions so budgets are more effectively and efficiently utilized.

Cut costs and improve performance

Many IT departments have room to gain operational efficiencies by eliminating waste, thus maximizing every dollar. These efficiencies don’t have to come at the expense of the user experience. On the contrary, reducing wasteful spending can add money back into budgets that can then be used to hire talent and fill labor gaps, reducing the burden on IT departments so they’re more productive. Riverbed Aternity DEM helps organizations save on their IT costs while at the same time enabling even better digital experiences. It’s a win-win.

To learn how you can put this into action for your team, register for our upcoming webinar “Budget Getting Tight? How IT Leaders Reduce Costs Without Sacrificing User Experience.”

Get More Out of Your IT Budget with Riverbed Aternity DEM https://www.riverbed.com/blogs/get-more-out-of-your-it-budget-with-alluvio-aternity-digital-experience-management/ Mon, 13 Mar 2023 12:24:58 +0000 /?p=20448 As we continue to hear and read about rising inflation, ongoing supply chain challenges, and a potential recession, enterprises around the world are tightening their budgets. IT teams are clearly feeling the pressure with CIOs and IT buyers predicting their tech spend will only increase by 5.5% this year—a meaningful deceleration from previous expectations and below last year’s annual inflation rate of 8.3%. In other words, despite rising costs, IT teams will spend less this year when adjusted for inflation, reflecting stagnant IT budgets that aren’t keeping pace with economic realities.

Having to make do with less purchasing power is challenging, but there are opportunities to generate efficiencies within IT and get more out of every penny. In this blog, we explore how Riverbed Aternity DEM can help IT teams reduce costs while maintaining flawless digital experiences.

What is Riverbed Aternity DEM?

Riverbed Aternity DEM (Digital Experience Management) is a full-spectrum digital experience management platform that provides insight into the business impact of customer and employee digital experiences. It achieves this by capturing and storing technical telemetry at scale from employee devices, business applications, and cloud-native application services.

Equipped with this comprehensive visibility into the actual user experience and device performance, IT teams can create better experiences for users and leaders can make informed business decisions on IT spend. Here’s how:

Smart Device Refresh

Typically, IT teams will refresh devices based on their age, say, every three or four years. But age alone doesn’t speak to the actual health or performance of a device. Some perfectly good devices may be thrown out too soon, while faulty devices may need to be replaced sooner so employees can stay productive. Riverbed Aternity DEM offers insight into actual user experience and device performance, informing teams on when to replace devices based on performance.

What it means for you: intelligent device replacement helps save you money by refreshing devices exactly when they need to be replaced, and not a moment sooner.

Eliminated Software Bloat

We all keep subscriptions longer than necessary, and the same is true for enterprises. A SaaS trends report found the average company wastes more than $135,000 annually on unused, underused, or duplicate SaaS tools, and this cost increases dramatically for large enterprises. Riverbed Aternity DEM gives IT the power to automatically identify software licenses that are going unused or aren’t used often.

What it means for you: Instantly reduce software bloat by cutting licenses that are going mostly unused and redeploy those savings in ways that can better help the business.

Curtailed Shadow IT

All too frequently, teams across an enterprise will purchase SaaS tools without going through the proper IT channels. This inevitably leads to redundancies, increased risk, and headaches for IT. But Riverbed Aternity DEM can identify shadow IT software, and either direct usage to an approved application to eliminate the additional expense, or leverage approved purchasing channels to better handle the spend.

What it means for you: By curtailing shadow IT, IT teams can better understand and manage the software being used by employees. At the same time, it helps IT identify and eliminate duplicate and wasteful solutions so budgets are more effectively and efficiently utilized.

Cut costs and improve performance

Many IT departments have room to gain operational efficiencies by eliminating waste, thus maximizing every dollar. These efficiencies don’t have to come at the expense of the user experience. On the contrary, reducing wasteful spending can add money back into budgets that can then be used to hire talent and fill labor gaps, reducing the burden on IT departments so they’re more productive. Riverbed Aternity DEM helps organizations save on their IT costs while at the same time enabling even better digital experiences. It’s a win-win.

To learn how you can put this into action for your team, register for our upcoming webinar “Budget Getting Tight? How IT Leaders Reduce Costs Without Sacrificing User Experience.”

Deliver Total User Experience with Aternity Sentiment https://www.riverbed.com/blogs/enable-total-digital-experience-management-with-aternity-sentiment/ Tue, 07 Mar 2023 13:44:00 +0000 /?p=20243 As companies shift towards hybrid IT models, measuring device and application performance metrics alone is not enough to provide a comprehensive understanding of the employee experience. Directly engaging with employees can provide visibility beyond app and device performance data, providing a pathway to improve the digital experience.

Objective data can provide insights into digital experience. However, it doesn’t capture the actual feelings of the user that are necessary to truly understand how they’re interacting with technologies, or the frustrations that go beyond device and app performance metrics. Service desks can send out email surveys, yet these tend to have poor response rates, as users will either ignore or overlook those types of touchpoints. Having a blind spot on what and how employees feel on a day-to-day basis can negatively impact business outcomes—impeding digital transformation initiatives, slowing adoption of new services or increasing turnover.

The solution needs to be frictionless and fast for users. To bridge this gap, Riverbed Aternity has released Aternity Sentiment in public beta, a holistic solution for digital experience management that captures both quantitative and qualitative data. By integrating Sentiment with digital experience management (DEM) workflows, organizations can assess the total user experience and discover hidden issues tied to how users feel about the technologies they interact with. IT teams can then analyze this data and prioritize where to make investments to meet XLAs (experience level agreements).

How does it work?

By capturing both objective and qualitative data, Aternity Sentiment gives IT leaders a comprehensive understanding of the digital employee experience by adding the human element to the data they collect. IT teams create customizable surveys to capture accurate feedback from users and address issues. For example, IT can get details on how a certain application may be running on a user’s Windows 10 desktop after an update rolls out, or assess a user’s experience with battery performance on a laptop model that other users have reported issues with.

IT leaders can even analyze the users’ feelings regarding a digital transformation initiative by having multiple checkpoints during rollouts of new services, enabling a direct, two-way communication of real-time information. They can even engage directly with employees on potential issues related to their systems and provide details to address those issues without the need to log in a ticket, saving time for both end-users and IT teams.

By also viewing Aternity Digital Experience Index (DXI) data, IT can identify hot spots that require employee engagement by gathering their actual experience where unaddressed issues could result in poor Aternity DXI scores. With built-in filtering capabilities that show performance by business unit, device manufacturer, and more, Aternity’s DXI capabilities show immediate, targeted performance insights and set IT on the right path to diagnose root cause and solve issues that go beyond the device and app metrics.

Aternity Sentiment survey
IT can create customizable surveys to capture accurate feedback from users.

Aternity Sentiment empowers end users by giving them a voice: a channel to provide feedback on new technology rollouts, application and device experiences, and overall company initiatives. When they log into their machines, they will get a notification indicating a survey question the IT team has targeted them to address.

Sentiment survey response data
IT teams can analyze Aternity Sentiment survey response data.

From there, IT teams can analyze the results (OOTB or Create Your Own) or export them to their own tool via our REST API and augment that with the data that Aternity DEM already collects. They can then focus on which areas to improve. For example, when Aternity detects that Excel is performing poorly, do users notice, or is it a background process that went undetected? Based on the responses, IT teams can prioritize accordingly. This ensures they’re leveraging all the tools at their disposal to meet experience goals.
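As a purely hypothetical sketch of that export-and-analyze flow, the snippet below pulls survey responses over REST and tallies the answers. The endpoint, token, and JSON shape are placeholders, not the actual Aternity API; consult the product documentation for the real contract.

```python
# Pull survey responses from a placeholder REST endpoint and tally answers.
# URL, token, and response shape are assumptions for illustration only.
import json
import urllib.request
from collections import Counter

API_URL = "https://example.invalid/api/sentiment/responses"  # placeholder
TOKEN = "REDACTED"

req = urllib.request.Request(API_URL, headers={"Authorization": f"Bearer {TOKEN}"})
with urllib.request.urlopen(req, timeout=30) as resp:
    responses = json.load(resp)  # assumed shape: [{"survey": ..., "answer": ...}]

for answer, count in Counter(r["answer"] for r in responses).most_common():
    print(f"{answer}: {count}")
```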

Aternity Sentiment is a game-changer in digital experience analytics. By capturing qualitative feedback along with objective performance data, Aternity Sentiment provides a complete understanding of the digital employee experience, enabling organizations to drive increased customer satisfaction and employee productivity. To learn more, check out the Aternity Sentiment Beta user guide.

Rubber Bands, Bad Apples and Automated Orchestration https://www.riverbed.com/blogs/rubber-bands-bad-apples-and-automated-orchestration/ Fri, 03 Mar 2023 13:24:13 +0000 /?p=20233 As modern networks stretch well beyond the data center, vulnerabilities are being exploited more and more by threat actors. Much like a rubber band, as you stretch it out and pull it tighter, there is always the risk of it breaking.

When networks were confined to just the data center, they were easier to monitor. But now that networks stretch significantly outside the data center—all the way to the campus, remote office or cloud—the threats to the network become more prevalent because your rubber band is stretched to its limits. As a result, keeping your distributed network compliant and secure is a challenge.

Beware bad apples

Bad actors are always trying to exploit those vulnerabilities in the far stretched network. It only takes the baddies one time to find their way into a network by running a corrupt file, non-compliant application or old operating system. If an application or OS is out of compliance, no longer supported, or is riddled with security issues, this can translate to serious loss of productivity, customer sentiment and revenue. In addition, a company can end up paying millions in fines and can have additional financial impact just in the recovery process alone.

Many highly regulated industries like finance, government or healthcare are trying to police themselves when it comes to such issues of compliance and security. It’s better to monitor themselves and hold their businesses up to a higher standard instead of the government passing major regulations that cause sweeping change, often at great expense to the business. Government regulation is often a last resort, so it’s better to look after yourselves. If the government gets involved, it’s often due to a catastrophic event propagated by a single vendor affecting a client and millions of their customers. And those regulations ultimately could impact every network vendor in that particular industry.

One bad apple can financially ruin the apple cart for all parties involved.

Secure networks with Riverbed NPM

Riverbed’s Network Performance Management (NPM) portfolio recently implemented a feature enhancement across its products as a result of regulated institutions requiring this change of all their vendors. In response to the prevalence of Ransomware attacks happening across various sectors, industry leaders mandated their vendors who operated products within their networks to implement what Riverbed calls Automated Orchestration.

This recent feature, integrated across Riverbed AppResponse, NetProfiler, NetIM and Portal, allows any of these products to be stood up and, in the event of an internal or external threat, automatically taken down and redeployed to a known safe state. This in turn saves time and money and mitigates the risks associated with manual intervention. Automated Orchestration across your NPM portfolio will ensure compliance as well as security, so that your network keeps running and you avoid the risk of potential fines or negative financial impact.

For more information on the Riverbed NPM portfolio of products, please visit this page.

Rubber Bands, Bad Apples and Automated Orchestration https://www.riverbed.com/blogs/rubber-bands-bad-apples-and-how-automated-orchestration-solves/ Thu, 02 Mar 2023 03:59:07 +0000 https://www.riverbed.com/?p=76125 As modern networks stretch well beyond the data center, vulnerabilities are being exploited more and more by threat actors. Much like a rubber band, as you stretch it out and pull it tighter, there is always the risk of it breaking.

When networks were confined to just the data center, they were easier to monitor. But now that networks stretch significantly outside the data center—all the way to the campus, remote office or cloud—the threats to the network become more prevalent because the customer’s rubber band is stretched to its limits. As a result, keeping their distributed network compliant and secure is a challenge.

Beware bad apples

Bad actors are always trying to exploit those vulnerabilities in the far stretched network. It only takes the baddies one time to find their way into a network by running a corrupt file, non-compliant application or old operating system. If an application or OS is out of compliance, no longer supported, or is riddled with security issues, this can translate to serious loss of productivity, customer sentiment and revenue. In addition, a company can end up paying millions in fines and can have additional financial impact just in the recovery process alone.

Many highly regulated industries like finance, government or healthcare are trying to police themselves when it comes to such issues of compliance and security. It’s better to monitor themselves and hold their businesses up to a higher standard instead of the government passing major regulations that cause sweeping change, often at great expense to the business. Government regulation is often a last resort so it’s better for your customers to look after themselves—with your help. If the government gets involved, it’s often due to a catastrophic event propagated by a single vendor affecting a client and millions of their customers. And those regulations ultimately could impact every network vendor in that particular industry.

One bad apple can financially ruin the apple cart for all parties involved.

Secure networks with Riverbed Network Observability

Riverbed’s Network Observability portfolio recently implemented a feature enhancement across its products as a result of regulated institutions requiring this change of all their vendors. In response to the prevalence of Ransomware attacks happening across various sectors, industry leaders mandated their vendors who operated products within their networks to implement what Riverbed calls Automated Orchestration.

This recent feature, integrated across Riverbed AppResponse, NetProfiler, NetIM and Portal, allows any of these products to be stood up and, in the event of an internal or external threat, automatically taken down and redeployed to a known safe state. This in turn saves time and money and mitigates the risks associated with manual intervention. Automated Orchestration across your client’s NPM portfolio will ensure compliance as well as security, so that their network keeps running and they avoid the risk of potential fines or negative financial impact.

For more information on the Riverbed Network Observability portfolio of products, please visit the Riverbed Network Observability solution section of the Partner Portal.

Solving Remote Work Visibility Challenges for NetOps https://www.riverbed.com/blogs/solving-remote-work-visibility-challenges-for-netops/ Mon, 13 Feb 2023 13:44:03 +0000 /?p=18791 When employees work from an office, the network team is responsible for application access and delivery. That includes identifying issues where employees cannot access applications or where application performance is degraded due to network problems.

In a remote work or work-from-anywhere environment, the responsibility of identifying and troubleshooting access and performance issues still falls on the network team. When it comes to remote workers, Level 1-2 techs need to be able to identify network access and performance issues for end users accessing business applications.

They need to be able to:

  • Understand the scope and severity of the issue so that they can prioritize appropriately and understand if they need to escalate to level 3.
  • Understand the impact on end users so that they can document and communicate the incident to the affected end users.
  • Understand the cause of the issue so they can know which resources to call (ISP, CASB supplier, application owner, Security team, device issue, etc.) and understand when the issue might be resolved.

However, the problem space has changed. There are several environmental challenges that limit NetOps visibility into application performance.

Remote work visibility challenges for NetOps teams

Split Tunnels

In modern remote work environments, it’s common to have three different routing options for traffic: direct to internet (no tunnel), corporate VPN, and a Cloud Access Security Broker (CASB). There are often routing rules in place where specific applications use one route (such as the CASB) and other applications go direct to internet. The routing or tunnel being used can have a significant impact on application performance and end user experience.

CASBs

CASBs are widely adopted and, while optimizing for security, create a performance bottleneck. CASBs are often implemented by the security team. They make it more difficult for the network team to troubleshoot, as the tunnels add complexity and reduce visibility through encryption of traffic. In a few ad hoc tests, CASB bandwidth was as low as 3 Mbps, with security scanning time adding a further slowdown.

Multiple Gateways

There are typically multiple gateways being used by each type of tunnel. For example, users in the northeastern United States may have CASB traffic tunneled to gateway X, while users in the central US connect to gateway Y. If only one gateway is causing problems, it’s difficult to determine that. This gateway issue is also applicable to corporate VPNs.
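One way to make that determination tractable, sketched below with invented sample data, is to group end-user latency samples by the gateway they traverse and compare medians; in practice the samples would come from your monitoring telemetry:

```python
# Group latency samples by gateway and flag any gateway whose median is
# far above the overall median. Sample data is invented for illustration.
from collections import defaultdict
from statistics import median

samples = [  # (gateway, latency_ms)
    ("gateway-x", 38), ("gateway-x", 41), ("gateway-x", 350), ("gateway-x", 362),
    ("gateway-y", 35), ("gateway-y", 39), ("gateway-y", 37), ("gateway-y", 42),
]

by_gw = defaultdict(list)
for gw, ms in samples:
    by_gw[gw].append(ms)

overall = median(ms for _, ms in samples)
for gw, vals in sorted(by_gw.items()):
    m = median(vals)
    flag = "  <-- investigate" if m > 2 * overall else ""
    print(f"{gw}: median {m:.0f} ms over {len(vals)} samples{flag}")
```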

SaaS vs Corporate Applications

The percentage of companies using SaaS to meet their software needs is steadily increasing, with 80% of companies relying on SaaS apps in 2022, whereas the remaining corporate applications are usually hosted in a data center. Remote user traffic to those applications traverses a physical network, which can cause additional slowdowns that the network team is still responsible for diagnosing.

ISP Variables

Remote workers typically use their own ISP. This variability is an additional challenge when trying to identify root cause.

Home Network Variables

Remote workers are also responsible for their home network. Variables such as poor Wi-Fi or congestion on the home network are additional challenges when trying to identify root cause.

Many Locations

Finally, in remote work environments, location is less specific than with on-premises users. There may be users in a general geographic area that are having issues due to an ISP or gateway, but it is not as easy to use a specific site or location to identify problems.

Riverbed IQ provides NetOps with rich visibility into remote work issues.

Riverbed IQ brings visibility to remote work

By adding Aternity end user experience metrics to Riverbed IQ, Riverbed’s SaaS-based unified observability solution, NetOps teams gain basic visibility into traffic that leaves the home computer and goes to a data center or SaaS application.

IT teams can now answer questions such as:

  • Which applications are having network performance issues?
  • How many users are impacted? And how severe is the impact?
  • How are the impacted users accessing the application? (CASB, VPN, Direct to internet)
  • Which locations are affected?
  • What’s causing the problem? Is the CASB / VPN causing the problem? Or a specific gateway? Is it an ISP problem, a VPN problem, or a problem with the user’s device itself?

Visit this page to learn more about how Riverbed IQ helps organizations shift left.

Don’t Wait for Zero Day – Proactively Detect Threats with Riverbed https://www.riverbed.com/blogs/dont-wait-for-zero-day-proactively-detect-threats-with-alluvio/ Wed, 25 Jan 2023 23:00:00 +0000 /?p=19778

Your personal information being leaked or sold online is something that strikes fear into the hearts of most people. Identity theft takes this one step further and can destroy your credit ratings and land you on blacklists for services such as utilities, rental housing or mobile phone plans.

In September 2022, Optus announced that an unknown actor had compromised their systems and accessed current and former customers’ personal information (passport, driver’s license and Medicare numbers). The unknown actor then posted proof (about 10,000 of the 2.1 million records), exposing this personal information in a bid to sell the remainder.

While the impact of this leak cannot be overstated and is devastating for the people involved, there is some small comfort that various government agencies and Optus are offering assistance to replace exposed identity documents.

The reputational and financial damage to Optus (or any organization that has their customer data compromised) is massive. Some customers will want to discontinue services, and potential customers may reconsider their options. Even if an organization strengthens its security posture, the memory of this incident will last for decades to come.

Attacks steal the headlines, but threats lie in wait

What we know about the Optus cyberattack is that it wasn’t a sophisticated one, and they could have avoided it by securing all their ports and APIs. This is a very common slip-up—which occurs most often due to rushed development or integration—and one that shouldn’t happen, but when it does, it can become a major issue.

Alternatively, when an actor decides to attack a well-secured target, they become an APT (Advanced Persistent Threat). APTs do not make much noise, as their role is to stay under the radar so they can learn as much about the target as possible. The reconnaissance period can be as long as a year—they take their time to learn the environment and find things such as:

  • Where is the sensitive information saved?
  • Where is the data backed up (in the case of a Cryptolocker ransomware attack)?
  • What cyber defenses are in play?
  • What are the skills of the DFIR (Digital Forensics and Incident Response) team?
  • What does a regular usage pattern look like?

With the average APT able to remain in an environment for over 200 days without being discovered, APTs can hide in plain sight using normal protocols and authentication standards to avoid detection by signature-based and machine learning defenses. This is where proactive threat hunting becomes a crucial defense in your arsenal. Threat hunting is the process of looking at traffic patterns, log files and other telemetry to identify unusual activities that could be IOCs (Indicators of Compromise).

Games make the process a bit more interesting

I like to talk about the gamification of threat hunting, which can make the process more enjoyable. We use games that offer high value and leverage the power of Riverbed NetProfiler and AppResponse full-fidelity data. If you have not already played cybersecurity games, I highly recommend trying them. These games are a testament to how real-life simulations can advance cybersecurity skills. While playing them you learn to see failure as a learning opportunity and prepare for real-life incidents.

APTs often use zero-day threats, since signature-based tools do not detect them because the IOCs don’t exist until after the threat has been identified. It’s not enough to only detect these threats after they are known; we need to go back in time as well and see if they have happened in the past. NetProfiler is able to run historical reports on threats based on some types of known IOCs because of its full fidelity flow storage.

[Screenshot: NetProfiler historical log4j report, built on full-fidelity data]

The other benefit of these games is that they prepare you for the questions you’re going to be asked about whatever is in the news anyway.

Let’s look at how Riverbed is helping its customers proactively find such vulnerabilities so that they can safeguard the valuable data and privacy of their end customers.

With APTs using normal traffic to blend into the environment, it’s a smart idea to monitor for administrative traffic in places and at times that you may not expect. Things that don’t make sense, such as large data transfers, open APIs or weak passwords, are signals that need to be picked up. Riverbed can help catch the red flags and send alerts notifying you about unusual activity, so you can take action before getting locked out of your network.

Riverbed to the rescue

In the following example we have used NetProfiler to detect SSH traffic between midnight and 6 AM. While we might detect the occasional developer performing a late-night change, we might also find some things we weren’t expecting. Another example might be database traffic directed to places it shouldn’t go, in an attempt to exfiltrate records.

[Screenshot: NetProfiler report of SSH traffic between midnight and 6 AM]
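NetProfiler runs this kind of report natively. As a rough stand-in for readers without it, the sketch below applies the same idea to flow records exported to a CSV; the file name and column names (timestamp, src, dst, dst_port, proto) are assumptions for illustration.

```python
# Scan exported flow records for SSH flows between midnight and 6 AM.
# Assumed CSV columns: timestamp (ISO 8601), src, dst, dst_port, proto.
import csv
from datetime import datetime

with open("flows.csv", newline="") as f:
    for row in csv.DictReader(f):
        ts = datetime.fromisoformat(row["timestamp"])
        if row["proto"] == "tcp" and int(row["dst_port"]) == 22 and ts.hour < 6:
            print(f"{ts:%Y-%m-%d %H:%M} off-hours SSH {row['src']} -> {row['dst']}")
```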

Security auditing or threat hunting can easily become a full-time job, but with Riverbed you can invest some time in a host of activities to keep adversaries at bay:

Detect unencrypted data transfers

[Screenshot: NetProfiler report of unencrypted data transfers]
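The same exported-flow approach can provide a first pass here too, as in this illustrative sketch that flags flows on well-known cleartext ports. Port-based matching is only a heuristic, since services may run plaintext on any port, so deeper payload or TLS-handshake inspection should follow.

```python
# Flag flows whose destination port implies a cleartext protocol.
# Uses the same assumed CSV columns as the previous sketch.
import csv

CLEARTEXT_PORTS = {21: "ftp", 23: "telnet", 25: "smtp", 80: "http", 110: "pop3"}

with open("flows.csv", newline="") as f:
    for row in csv.DictReader(f):
        proto = CLEARTEXT_PORTS.get(int(row["dst_port"]))
        if proto:
            print(f"unencrypted {proto}: {row['src']} -> {row['dst']}")
```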

Analyze DNS traffic

[Screenshot: NetProfiler DNS traffic analysis]

Analyze certificates

[Screenshot: NetProfiler certificate analysis]

Dedicating a bit of time to these activities will help you understand your environment better and know what normal looks like.

Full fidelity observation speeds up recovery and saves millions in downtime when under attack. You can go back in time and look at everything to find the extent of damage—when it all started and what services/data have been compromised.

You don’t know today what you will need tomorrow. Make Riverbed monitoring a crucial part of your overall cyber strategy.

Riverbed and Riverbed IQ on the Road https://www.riverbed.com/blogs/riverbed-on-the-road/ Wed, 19 Oct 2022 12:15:47 +0000 /?p=19131 Wow, what a month it’s been! Just over four weeks ago, Riverbed announced General Availability of our cloud-native, SaaS-delivered Riverbed IQ unified observability service that empowers IT with actionable insights and intelligent automation to solve problems more quickly and improve the digital experience for users everywhere.

At the same time, we kicked off our Riverbed EMPOWEREDx user community road show across nine cities globally, and launched a new ‘Get Shift Done’ campaign that initially appeared on the NASDAQ digital board in New York City and is now running on digital media platforms globally. The campaign is focused on the concept of ‘shift left,’ in which all IT staff are now able to tackle jobs that once only a very few experienced IT experts could handle. That’s the AI power behind our Riverbed portfolio and Riverbed IQ. And this all follows our brand launch in April.

Riverbed Nasdaq Digital Billboard in NYC
Riverbed’s ‘Get Shift Done’ campaign launched on the NASDAQ digital board in New York City.

As a CMO and marketer, this has been a BIG moment for our company. We’ve been on a journey the past 18 months, driving innovation to deliver a differentiated unified observability solution to the market—one that contextualizes full-stack, full-fidelity telemetry across networks, applications and users, enabling customers to transform massive amounts of data into actionable insights. We believed we had something special—but to finally take the wraps off this solution, and bring it to customers live is what’s most rewarding.

With events starting to take place in person again, our CEO, leadership team, and technology evangelists have had the opportunity to engage face-to-face with our customers and partners to demonstrate the value of our Riverbed and Acceleration solutions. I was fortunate to travel to Paris to meet with customers at our EMPOWEREDx event, and to Dubai, where last week I attended the GITEX event, which is in full swing again! Riverbed also hosted EMPOWEREDx events in London, San Francisco, Washington DC, Dubai and Dallas, as well as New York City yesterday, with Melbourne on October 26 and Singapore in November still to come.

CMO Jonaki Egenolf spoke with customers at Riverbed's EMPOWEREDx event
With Riverbed’s EMPOWEREDx events occurring in cities globally, we’ve had the opportunity to engage directly with our partner and customer community.

Here are some of the things we’ve heard the past few weeks from our customer community:

  • IT is now synonymous with business, and is top of mind for the C-Suite.
  • One of the biggest challenges organizations face is data overload, including receiving too many alerts without enough context; IT leaders say they need greater context around the data and various monitoring tools they have in place.
  • Resources in IT are tight and often scarce, and there’s a need for more automation, enabling broader IT teams to fix issues faster and ensure digital service quality.
  • Acceleration of apps and networks, regardless of user location, still matters.
  • Before the pandemic, digital transformation was starting to take shape, but today it’s in full motion and delivering on the digital experience is central to organizations.

What we heard from our enterprise and government customers really validates our technology direction. At Riverbed, we’re fully focused on meeting critical customer needs, including delivering a unified approach to observability that unifies data, insights and actions across all IT. Ultimately, this empowers IT teams to empower digital experiences.

Riverbed team at GITEX
Many of our customers joined us at GITEX for open labs and live demos of Riverbed IQ.

Many of our customers joined us for open labs and live demos of Riverbed IQ at EMPOWEREDx, GITEX or other events. The feedback on this solution has been overwhelmingly positive. If you are in Melbourne or Singapore, please join us in person over the next few weeks to experience Riverbed for yourself. Otherwise, sign up now to request a demo of Riverbed IQ or other Riverbed or Acceleration portfolio solutions. We’re ready to help you on your journey—to scale IT, turn data into actionable insights, and Empower the Experience. Let’s do this!

Transform Data into Actionable Insights to Empower Digital Experiences https://www.riverbed.com/blogs/transform-data-into-actionable-insights/ Tue, 13 Sep 2022 12:15:54 +0000 /?p=18856 Today we are living and working in a world that is digital-first and hybrid by design, with cloud, SaaS and legacy technologies working together, and employees working from everywhere.

In this world, a click is everything. That action comes with intent and expectation—of a flawless digital experience. These experiences are the heartbeat of the fierce and competitive landscape we all work in. And when digital services fail to deliver a flawless experience, it can impact your brand, and undermine your ability to achieve important objectives tied to revenue, cost, productivity and risk.

In this complex and distributed environment, many IT teams are finding it more challenging to deliver seamless digital experiences to customers and employees. IT organizations are overwhelmed by massive amounts of data and alerts flooding them from siloed tools that provide little context or actionable insights, when issues occur. As a result, IT teams rely on a few highly skilled individuals, who are in short supply and high demand, to manually investigate and troubleshoot issues.

This is one of the industry’s most daunting problems: how to provide seamless digital experiences that are high performing and secure in a hybrid world of distributed users and applications, exploding data, and soaring IT complexity.

Although observability was meant to solve these problems, current solutions fall short—failing to capture all relevant telemetry, and instead sample data to deal with the scale of today’s distributed environment.

Until Now. 

Riverbed saw the need for a differentiated approach to solve the challenges resulting from this IT complexity and to go beyond the basics of monitoring, testing, and management. For the past 18 months, we’ve been investing in a unified approach to observability—unifying IT data, insights and actions to empower IT to deliver exceptional digital experiences to users everywhere. Today, we’re proud to introduce you to Riverbed IQ—our new cloud native SaaS-delivered unified observability solution.

Riverbed IQ transforms the overabundance of data and alerts into actionable insights and intelligent automation for IT organizations. Powered by AI/ML correlation, Riverbed IQ’s scripted investigations replicate expert IT workflows to gather event context, filter noise, and identify the most business-impacting events to act on. With full stack, full-fidelity telemetry, intelligent correlation, and workflow automation, Riverbed IQ delivers actionable insights that empower all IT skill levels to resolve problems quickly and improve digital service quality. Enabling IT organizations to “shift left” allows all staff to do the job of more experienced IT experts, ultimately freeing up resources to focus on strategic business initiatives.

Riverbed IQ is the first service to be delivered on the Riverbed Unified Observability platform—a secure, highly available and scalable SaaS platform for cloud-native observability services. Riverbed IQ and the platform are part of the Riverbed portfolio, which also includes industry-leading visibility tools for network performance management (NPM), IT Infrastructure Monitoring (ITIM) and Digital Experience Management (DEM), which encompasses application performance management (APM) and end user experience monitoring (EUEM).

The Riverbed Unified Observability platform and Riverbed IQ enable faster, more effective decision-making across business and IT. To learn more about Riverbed IQ, our approach to Unified Observability, and how we can help you deliver on the click and the digital promise behind it, visit Riverbed IQ.

6 Proven Strategies to Protect Networking Teams from Burnout https://www.riverbed.com/blogs/protect-networking-teams-from-burnout/ Sun, 14 Aug 2022 22:00:39 +0000 /?p=18292 Is your team feeling overworked, undervalued and frustrated? You’re not alone: the pandemic has put more pressure on almost all of us, especially on those in charge of maintaining network performance. As well as dealing with relentless demands for more bandwidth, network and IT teams have a host of other problems on their plate. Many are already at full capacity, and are now being asked to manage increasingly complex hybrid environments with tools that are no longer fit for purpose. It’s no surprise that burnout and attrition within networking teams are on the rise. How do you protect your networking teams from burnout?

The response to this crisis typically comes in the form of well-meant gestures: free lunches, or a set of guidelines that are high on good intentions but low on substance. These are band-aid solutions that don’t address the underlying problem. Businesses need to support and listen to staff and improve processes and tools to reduce the stress associated with network management. The key to that lies in first understanding the root cause of burnout and what it looks like.

Learn to identify stress when you see it

The corporate world has a mental health problem, and IT teams have it worse than most. In a survey by Harvard Business School, 84% of workers reported at least one workplace factor that had a negative impact on their mental health. Among Australian tech workers, the problem was pronounced: over half would not recommend their workplace, while three-quarters said they’d experienced stress at work that made them less productive.

Individuals struggling with their mental health in the workplace may go into survival, or “fight or flight” mode. They may become less productive, be increasingly absent or feel less engaged. They may display anxiety, anger or uncharacteristic behaviour. But in the hustle and bustle of everyday tasks, these problems are not always easy to spot.

Look beyond the symptoms

To help employees, IT leaders must start by listening. That may come in the form of one-to-ones, in which managers ask workers not just “How are you”, but also “How can I help”—and actively listen to their responses. Survey tools can help you build wider, more consistent feedback, and application and network logs can identify the technical blockers that are holding individuals back.

While your research may uncover company-specific issues, you’re also very likely to come across these common IT complaints:

  • Overwork: Long hours are common in many roles, and if a network outage hits, you can forget about a work-life balance.
  • Churn: With the job market stretched and conditions often challenging, teams can feel they are in a constant state of flux—particularly in startups, where staff turn over every 1.2 years on average. Churn means lower morale and more overwork, as the team strains to pick up the slack.
  • Unrealistic expectations: Timelines for projects are often needlessly optimistic, and may not factor in extra tasks, such as employee onboarding and basic maintenance.
  • Manual tasks: Network and application issues are often raised via user help-desk requests because IT staff do not have the right tools or visibility to identify and resolve them early. The result? A time-consuming game of catch-up that never ends. Across most organisations, portals and tools are rarely unified, which means workers have to keep switching between different tools to get a handle on the big picture. The resulting work can quickly become repetitive and frustrating.

Once you’ve found the source of the problem, you can start to solve it

Whether it’s through one-on-one feedback or wider data, these learnings should help you prioritise issues, and make a case for the best way to solve them on an individual or structural level.

At company level, that may mean appointing a leader who is directly responsible for mental health, rather than treating it as a general HR responsibility. It might mean encouraging managers to include mental health checks into every catch-up and share their own experiences. By encouraging both top-down and bottom-up approaches to mental health, you can help make it part of both daily conversations and long-term strategy.

Other actionable steps may include better resourcing, setting boundaries around projects, using ticketing or Agile processes to break workflows into manageable pieces, and working to clarify job expectations.

Giving workers the agency to choose exactly where and how they tackle their workload is another crucial shift. It’s increasingly clear that remote and hybrid work (involving a mix of office work and remote work via both local networks and cloud services) can deliver real benefits. In a Riverbed survey, 94% of respondents agreed that a hybrid work environment helped organisations recruit talent and remain competitive, with greater employee happiness among the main benefits. These flexible work practices can be a huge benefit to IT staff, but unless they’re planned properly and backed by the right tools, they will only exacerbate issues that network teams are all too familiar with.

IT workers need tools to suit the modern age

The more complex and widely distributed the IT environment, the more strain is placed on networks and applications. And the wider your organisation’s perimeter, the more attack surfaces cybercriminals have to exploit. Without the right tools to manage hybrid applications and networks, IT staff may be left screaming in frustration.

These issues are particularly damaging because they combine a feeling of powerlessness—since engineers struggling to get networks up to speed again may have little choice but to use tools that aren’t fit for purpose—with pressure from other staff who are desperate to get crucial operations back online. Network engineers and IT teams need tools, technology and training that match the demands placed on them.

Provide the right network support

Slow networks, poor monitoring, limited metrics and regular outages are common problems that heap pressure on even the most patient IT teams. They may be particularly noticeable when your organization uses platforms and monitoring that are a poor fit for the existing network architecture. However, it may not be necessary to overhaul the entire network. Depending on your situation, network performance can be improved by:

  • Optimising performance via application performance management platforms and application acceleration.
  • Using best-in-class network performance management and monitoring to ensure tools and hardware are working at peak capability.
  • Using software-defined WAN to increase efficiency while reducing bandwidth use.
  • Producing timely, relevant dashboards that can be customised and shared with different stakeholders.

Integrated, end-to-end platforms that manage multiple functions will be far easier to deal with than separate solutions. They should offer the flexibility to remedy problems and accelerate functions across multiple networks. And they should be able to produce and share data in real-time. The right alerts and metrics give IT staff a crucial advantage, and the chance to spot an issue before it turns into a network crisis that will hammer both your business and your team’s mental health.

Protecting networks and the teams that manage them

Workplace mental health issues can impact individuals whatever their role, but as we’ve seen, the specific pressures that make life difficult for IT staff have worsened in recent years. Companies must give IT staff and network engineers the right support and the right tools. That means listening and undertaking concrete actions to build a sustainable workplace. But it also means giving workers the right software and hardware to remove blockers and frustrations and help them keep data flowing.

Riverbed Unified Network Performance Monitoring unifies device monitoring, flow monitoring, and full packet capture and analysis solutions. These solutions are tightly integrated so that you can troubleshoot complex performance issues more quickly. They also integrate into the Riverbed Portal, which combines them into collated, easy-to-use dashboards that streamline the analysis of complex problems. Book a consultation with a specialist here.

]]>
Application Acceleration for Today’s Distributed Enterprise https://www.riverbed.com/blogs/application-acceleration-distributed-enterprise-today/ Thu, 11 Aug 2022 12:30:14 +0000 /?p=18219 Today’s IT teams are challenged like never before—expected to support work from anywhere and provide secure, fast access to needed applications from any location. Then there’s the matter of where those applications are based, which is complicated as it varies from app to app. This makes providing the acceleration needed to drive employee productivity on those apps more challenging than ever. And let’s not forget the challenges presented by the network these applications run on, which becomes increasingly distributed and complex as it evolves. Networks no longer support just on-premises MPLS; they must also handle mobility and internet-based applications.

The enterprise network needs a differentiated solution for networking, connectivity, and acceleration for every app.

It’s relatively easy to find vendors to address one issue or another. But, how can any one of them handle the complex set of issues you face? Could there possibly be a company with a holistic solution? Could that company address such a highly complex, multi-faceted challenge? You can dream, right?

Fortunately, the answer is, Yes! There IS one company uniquely qualified to provide a holistic solution—better yet, fast, agile, and secure acceleration of any app over any network to users, anywhere. We are that company. We’re in the business of application acceleration. Our solutions are trusted by 95% of the Fortune 100 as well as 83% of the Forbes 500. To learn more about our application acceleration solutions, watch this.

Riverbed optimizes network performance & accelerates applications

Our acceleration solutions are based on 15+ years of industry leadership and innovation. They boost end-user digital experience and productivity by enabling up to 33x faster app performance anywhere. And, bandwidth consumption is also reduced by up to 95%—even under sub-optimal network conditions.

Application Acceleration Benefits

Riverbed maximizes cloud value

Our Application Acceleration solution can speed migration and access to workloads for multiple IaaS platforms. This includes Microsoft Azure, AWS, Nutanix, and Oracle Cloud. We also accelerate cloud-to-data-center replication flows by 50x or more through proven data transport and application streamlining innovations. Our fully managed cloud service accelerates SaaS performance by overcoming network inhibitors such as latency, congestion, and the unpredictable last mile of today’s mobile workforce for leading SaaS applications. These include Microsoft 365, Salesforce, ServiceNow, Box, etc.

Riverbed accelerates app performance for today’s remote workforce

Riverbed Application Acceleration boosts performance by 10x or more, directly from the user desktop. Workers get the application performance they need no matter where they’re working. We extend best-in-class WAN optimization and the industry’s only application acceleration to remote users, providing fast, secure access to on-premises, IaaS, and SaaS-based applications. And we do this across any network.

Riverbed speeds video content delivery for today’s dynamic workforce

Riverbed provides a reliable, secure, and easy-to-deploy video distribution solution. And even better, we do this without the need to change or upgrade any existing network infrastructure. Our scalable, cloud-based platform speeds the delivery of bandwidth-hungry video content directly to users by up to 70%. We can also reduce bandwidth by up to 99%.

Riverbed Acceleration boosts performance, productivity, and digital experience.

To learn more about our application acceleration solutions, go here.

]]>
Under Pressure: How Network Performance Takes A Mental Toll on IT https://www.riverbed.com/blogs/network-performance-takes-mental-toll/ Sun, 26 Jun 2022 22:00:00 +0000 /?p=18147 In our increasingly connected world, network slowdowns and outages can cripple a business. Outages hit organisations’ operations, reputation and profits, and the pressure to get the online wheels turning again is immense. The stress falls squarely on IT teams, and the impact on individuals’ mental health can be brutal. Companies must offer better support to teams that may be stretched thin even before an outage strikes.

Downtime can be catastrophic

No one is safe from network outages. Apple, the BBC, Coinbase and Reddit have all suffered in recent years. In October 2021, a seven-hour outage cost Facebook $100 million. Two months later, Amazon was out of action for hours, leaving customers unable to operate their networked fridges, doorbells, and speakers, and leaving thousands of robot vacuum cleaners to twiddle their smart thumbs.

Network outages can result from power failure, network congestion, cyberattacks, human error or configuration issues. They may be widespread (like the March DNS incident that saw 15,000 Australian websites taken offline) or specific to your organisation. Either way, they’re expensive, costing larger corporations an average of $144,000 per hour in revenue loss, and smaller organisations (those with fewer than 20,000 employees) $2,000 per hour.

The cost of downtime is rising year on year. Lost revenue, reduced productivity, customer complaints, regulatory issues and reputational damage can all throw your organisation into a tailspin. A blocker to one process can ripple right through your organisation, leaving staff baffled and senior executives fuming. And panicked stakeholders will be looking in one direction for resolution: the IT team.

Network teams are already feeling the strain

The pressure to get networks running again as your organisation haemorrhages money would be bad enough if it landed on an empty to-do list. But many IT teams are already overworked, under-resourced, and plagued by employee churn. Workers may be expected to deal with tickets at pace while procuring hardware and software, helping less technically minded staff, and advising on new technologies.

That pressure has been exacerbated by rising cybercrime and new COVID protocols, while the rise in remote work demands more bandwidth from IT infrastructure and the expectation of timely troubleshooting. Outage emergencies put the heat on people who may already be near boiling point.

Network outages pile on the pressure

Network outages can hammer your organisation’s bottom line commercially, but there will be a psychological impact for many workers, too. Internet addiction is recognised as a condition by the World Health Organisation, and for many of us being deprived of the internet feels deeply personal, more like losing a limb than having a tech problem. Being cut off from networks, key projects and messaging services can be deeply frustrating for employees. That great wave of emotion comes crashing down just as IT workers need to be at their most focused and methodical.

Network outages will generally mean team members being pulled out of key tasks, many of which are time-sensitive. That once-urgent project, remote-work flexibility and project visibility all go out the window: even employers who are sympathetic to work-life balance will crack the whip. Network engineers may need to travel to remote sites, expose themselves to COVID risks or spend long hours desperately tracing the problem while the clock ticks and panicked communications rain down from executives. Unfortunately, at this stage, it’s not just about fixing the technical problems: stakeholder management becomes a major factor. Individuals whose attention to detail and concentration make them relish coding projects may suddenly be roped into a high-pressure environment that requires not only a quick technical response but also diplomacy and the ability to manage stakeholder expectations.

Recovery may be complex and frustrating

Network outages may be relatively short-lived, especially if you have a viable secondary connection. But all too often, recovery is complex and frustrating.

A single failure may trigger multiple issues, affecting different offices, partner organisations, and online portals in different ways. For network teams, that means reviewing everything from IoT endpoints to data packets and cloud-service infrastructure to trace the source of the problem. Outages can also result from cyber-attacks, operational errors, power surges, network congestion, loose connections, or cables damaged by fire or water.

Teams may find themselves scrambling through dashboards and logs to get a holistic picture. Systems may need to be rebooted remotely. Weighing up the possible causes and the results of corrective action is hard at the best of times, and even harder while the business breathes down your neck. Without clear guidance, staff may use unsecured public wi-fi or shadow IT to keep projects on track, exposing data to theft and opening up your network to malware attacks—and the risk of further outages—down the track.

Network outages can have a profound mental health impact

To get a sense of just how profound the impact of a network outage can be on IT teams, it can be revealing to consider it in the light of six classic causes of burnout:

  1. Workload: dealing with a network outage can take up all the hours in the day.
  2. Lack of control: outages are sudden, and the pressure to resolve them quickly gives workers very little agency.
  3. Lack of recognition: resolving an outage might be met with fanfare—or with cries of “What took you so long?”
  4. Poor relationships: friction is inevitable with emotions running high.
  5. Lack of fairness: the outage may not be anyone’s fault, but IT is likely to shoulder the blame.
  6. Values mismatch: your security team may be preaching safety first, while sales want channels reopening as soon as possible. Guess who’s caught in the middle? That’s right, the network team.

Preparing your organisation for network outages

So how can IT teams be better supported? One solution is to make network outages rarer. Better monitoring can mean you identify problems earlier and take steps to resolve them. The best Network Performance Monitoring (NPM) tools integrate device and flow monitoring with full-packet capture and analysis solutions, allowing you to assess data flow, security threats, and network issues. Smart, real-time dashboards take the strain out of assessment and troubleshooting.

Other changes that can help you stave off network outages include using a backup connection, installing an uninterruptible power supply (UPS), and improving your organisation’s cybersecurity posture (particularly to mitigate Distributed Denial of Service attacks).

These measures will reduce the risk, but you should still ensure you have a clear, frequently updated disaster recovery plan—and that plan needs to be shared and agreed upon by relevant stakeholders.

Measures to improve staff wellbeing can make a real difference to mental health (and staff retention), but they need to address fundamental business processes rather than superficial signs of burnout. There’s no point in offering workers more time off if their workload remains unmanageable. And if you want to learn from the trauma of network outages, you should listen to the individuals who have worked to solve them, hear their pain points, and assess their resource needs.

Managing and preventing network failure

When an outage does occur, the pressure on IT teams can be unbearable, and that has an inevitable impact on mental health. Appropriate measures such as Network Performance Monitoring can help reduce the risk of an outage and give your network teams the tools they need to quickly resolve problems when they occur. With the right tools and policies, your organisation can support IT staff to quickly resolve network performance issues, even in the eye of the storm.

Riverbed Unified NPM unifies infrastructure monitoring, flow monitoring, and full packet capture and analysis solutions. These solutions are tightly integrated so that your teams can troubleshoot complex performance issues more quickly. They also integrate into the Riverbed Portal, which provides collated, easy-to-use dashboards to streamline the analysis of complex problems. Book a consultation with a specialist here.

]]>
Riverbed Empowering the Experience https://www.riverbed.com/blogs/empower-the-experience/ Wed, 27 Apr 2022 00:01:12 +0000 /?p=17959 Riverbed Empowering the Experience – Dan Smoot

Today marks an exciting new chapter for Riverbed, our partners, and our customers. With the launch of our new brand, including Alluvio, and a new unified observability portfolio and strategy that unifies data, insights and actions across IT, we are embarking on a mission to enable organizations everywhere to deliver seamless digital experiences and drive enterprise performance. This launch reflects the evolution of the Company, our technology, and our intent to disrupt the market.

We are capitalizing on our trusted brand, and the dynamic growth and market momentum of our visibility solutions, to drive a differentiated approach to observability that solves one of the industry’s most daunting problems: how to provide seamless digital experiences that are high performing and secure in a distributed and hybrid world.

Let me set the stage. Today, users are everywhere. Applications and their components are everywhere. Data is everywhere and growing rapidly in volume, variety, and velocity. In fact, data is projected to reach 180 zettabytes in 2025, up 3x from 2020. Modern IT architectures are exponentially more complex, making it difficult for IT to manage performance effectively and proactively.

Yet every click represents an activity that is vital to your organization and there is a relentless expectation for a flawless digital experience. The quality of these experiences is the heartbeat of the fiercely competitive digital-first world we live and work in.

When issues occur, IT is overwhelmed by massive amounts of data and alerts from siloed tools that provide little context or actionable insights. Troubleshooting requires war rooms and the expertise of highly skilled IT staff to manually connect and interpret data across domains. And when tools limit or sample data, IT may not even be aware of other potential issues or opportunities for proactive improvement.

Observability is meant to solve these problems, but current solutions fall short. Even so-called “full-stack” observability solutions fail to capture all relevant telemetry, sampling data to cope with the scale of today’s distributed environments. Most solutions only collect three or four types of data and are limited to DevOps or Site Reliability Engineers for cloud-native use cases. And they offer nothing beyond the alert, so IT still relies on their resident expert to manually investigate events.

Enter Alluvio by Riverbed—a different, unique and superior approach to observability. Our Alluvio unified observability portfolio unifies data, insights, and actions across all domains and environments, enabling IT to cut through massive complexity and data noise to provide seamless digital experiences that drive enterprise performance for both the employee experience (EX) and customer experience (CX).

The Alluvio portfolio leverages our industry-leading visibility tools (available today) for network performance management (NPM), IT infrastructure monitoring (ITIM), and digital experience management (DEM)—application performance management (APM) and end-user experience monitoring (EUEM)—used by thousands of customers around the world. Unlike other observability solutions that limit or sample data, the Alluvio vision for unified observability is to capture full-fidelity user experience, application, and network performance data on every transaction across the digital ecosystem, and then apply AI and ML to contextually correlate disparate data streams and provide the most accurate and actionable insights. This intelligence will empower IT staff at all skill levels to solve problems fast. Visit our Alluvio page to learn more and read today’s press announcement.

Complementing the Alluvio portfolio, Riverbed Acceleration solutions provide fast, agile, secure acceleration of any app over any network to users, whether mobile, remote, or on-premises. Built on decades of WAN optimization leadership and innovation, Riverbed’s industry-leading acceleration portfolio delivers cloud, SaaS, client and eCDN (video streaming) applications at peak speeds, overcoming network speed bumps such as latency, congestion, and suboptimal last-mile conditions to empower the hybrid workforce. Additionally, Riverbed’s enterprise SD-WAN provides best-in-class performance, agility, and management of MPLS, LTE, broadband and Internet-based networks.

Only Riverbed provides the collective richness of telemetry, insight, and intelligent automation—from network to app to end user—that illuminates and then accelerates every interaction. With the powerful combination of our Alluvio Unified Observability and Riverbed Acceleration solutions, IT teams are empowered to provide a seamless digital experience for customers and employees, and end-to-end performance for the business.

We’re looking forward to helping support our customers on this digital journey.

Together, let’s Empower the Experience.

Read more on our new brand here.

]]>
A ‘Brand’ New Day for Riverbed… Meet Alluvio! https://www.riverbed.com/blogs/brand-new-day-riverbed-meet-alluvio/ Wed, 27 Apr 2022 00:01:10 +0000 /?p=17975 Today is literally a ‘brand’ new day for Riverbed. It’s a moment we’ve been preparing for over the last several months, and we are eager to unveil it to our customers, partners and the world.

First, as you’ll see in our CEO Dan Smoot’s blog and press announcement, Riverbed is launching a broad strategy to bring unified observability to customers globally and accelerate growth. As part of the strategy, we’re developing an expanded unified observability portfolio to unify data, insights and actions to solve one of the industry’s most daunting problems: how to provide seamless digital experiences that are high performing and secure in a hybrid world of highly distributed users and applications, exploding data and soaring IT complexity.

In conjunction with this announcement, I’m pleased to share that we’re launching the new Riverbed brand, including the introduction of Alluvio, our portfolio for unified observability. With a fresh and vibrant visual identity, and a sharpened articulation of our solutions, the brand refresh reflects the evolution of the Company, our technology, and the momentum we are driving in the market.

The launch of Riverbed’s new brand identity and Unified Observability strategy comes nine months after Riverbed reunited with Aternity—which had been operating independently—to capitalize on the tremendous market opportunity around unified visibility and observability. We initially went to market as Riverbed | Aternity to signify the unification of these companies and our industry-leading solutions. Collectively, the companies’ intense focus on NPM and DEM (Aternity), delivering actionable insights on performance and acceleration, has positioned Riverbed to fully capitalize on both the Unified Observability and Acceleration markets.

Now is the right moment to emerge as the new Riverbed—a Company that is visionary but grounded; agile yet proven; dynamic while trustworthy. We understand that every click brings an expectation of a flawless digital experience. And Riverbed enables organizations to transform data into actionable insights and accelerate performance for a seamless digital experience.

Riverbed will go to market with two exciting product lines—Alluvio by Riverbed and Riverbed Acceleration.

Alluvio pays homage to Riverbed, while also underscoring our unified observability value proposition. The name Alluvio derives from alluvium—the place where riverbeds unite and create the most nutrient-rich environment to mine for gold—with the ‘o’ standing for observability. Metaphorically, it represents the coming together of discrete IT telemetry streams (network, application, end users) where insights that are hard to find, but worth their weight in gold, reside. The “o” likewise represents how we apply observability as a process to harness the value across those streams of telemetry—ultimately finding the “gold” for our customers across the flood of data in their IT ecosystems.

Our Alluvio unified observability portfolio of solutions helps customers find that gold as fast as possible, turning actionable insights into business value so companies can stay competitive, productive and satisfy users’ fierce appetite for seamless digital experiences.

Our second portfolio is Riverbed Acceleration, which provides fast, agile, secure acceleration of any app over any network to users anywhere. Built on decades of WAN optimization leadership and innovation, Riverbed’s industry-leading Acceleration portfolio delivers cloud, SaaS, client and eCDN (video streaming) acceleration, as well as enterprise-grade SD-WAN.

When we bring these solutions together, Riverbed enables organizations to illuminate, accelerate and empower the digital experience. As we usher in the new Riverbed, I welcome you to view our new brand and learn more about Alluvio. We look forward to continuing to deliver on our brand promise and helping our customers empower the experience across their organizations.

]]>
Why You Need Wireless LAN Monitoring https://www.riverbed.com/blogs/why-you-need-wireless-lan-monitoring/ Thu, 07 Apr 2022 19:08:31 +0000 /?p=17814 As employees begin returning to the office and enterprises adopt hybrid work policies, enterprise IT teams are being forced to accommodate a more unpredictable workforce. To provide more flexibility and foster collaboration, many enterprises have done away with assigned desks and offices in favor of hoteling and more communal work areas. This has placed an emphasis on the need for strong and reliable Wireless LAN Monitoring to ensure mobile and unpredictable employees maintain constant, uninterrupted wireless connectivity.

Let’s start at the beginning. First: what is a Wireless LAN? A Wireless LAN is a computer network that links multiple devices using wireless communication to form a local area network within a specific space, such as an office. Basically, when you move from your desk to a conference room for an important meeting, the Wireless LAN is the reason you stay connected to the closest and most effective access point.

To address the need for consistent Wireless LAN health, Riverbed has released NetIM 2.5 to give users predefined support for identifying and fixing Wi-Fi stability issues. NetIM 2.5 achieves this through insights into access point status and quantity by model, OS version, and controller.

In this blog, we explore the importance and benefits of Wireless LAN monitoring and how the availability of Riverbed NetIM 2.5 will improve your Wireless LAN monitoring capabilities.

Why you need Wireless LAN monitoring

Every time a user’s connection falters, their productivity takes a hit. Not only does this demand time and attention to diagnose and fix the issue, but it also takes time away from an employee that would have otherwise been spent on important business-related tasks. This can cause significant employee and customer frustration, stalled projects, and loss of revenue. In fact, employees lose an average of 71 productive hours annually because of poor network connectivity. And whether you have two or 20 access points, connection issues can be hard to diagnose, especially since the primary form of connectivity is wireless, not tangible cords and cables.

So, if your network connection falters, do you know how to identify which access point is causing the issue?

Wireless LAN monitoring provides visibility into which controllers are being accessed across your device network, and which users are connected to specific access points. If the quality of your Wi-Fi connection suddenly decreases or is lost altogether, a Wireless LAN monitoring tool can find and mitigate the connectivity issue efficiently and effectively.

The data provided by Wireless LAN monitoring—which includes information on access point status, quantity, etc.—helps IT teams to answer questions like:

  • Do any access points have stability issues?
  • Are issues correlated with specific models or OS versions?
  • Do issues occur at specific times of the day?
  • Are issues correlated with the number of active clients for the access point?
  • Do we have too many clients connected to an access point?
  • Are any access points down? If so, how many clients were connected before they went down?
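
To make these questions concrete, here is a minimal sketch in Python of how they might be answered from polled access point data. The record layout and the threshold are assumptions for illustration, not NetIM’s actual data model:

    # Hypothetical AP poll records; field names are illustrative only.
    polls = [
        {"ap": "ap-3f-01", "model": "C9120", "os": "17.9.4", "up": True,  "clients": 62},
        {"ap": "ap-3f-02", "model": "C9120", "os": "17.9.4", "up": False, "clients": 0},
    ]

    MAX_CLIENTS = 50  # assumed site-specific oversubscription threshold

    for p in polls:
        label = f"{p['ap']} ({p['model']}/{p['os']})"
        if not p["up"]:
            print(f"{label}: access point is down")  # flag outages
        elif p["clients"] > MAX_CLIENTS:
            print(f"{label}: oversubscribed with {p['clients']} clients")

Grouping the same checks by model or OS version over time is what surfaces the correlations the questions above describe.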

To fully realize the benefits of Wireless LAN monitoring, you need an easy-to-use platform that continuously identifies access point issues and provides proactive solutions.

Introducing: Riverbed NetIM 2.5

NetIM provides a scalable network and server infrastructure monitoring platform to help customers detect, diagnose, and troubleshoot infrastructure availability, performance, and configuration-related problems and outages. Network traffic data can be displayed within NetIM to help IT understand how device outages and slowdowns are affecting broader network and application performance. It is often combined with NetProfiler and AppResponse as part of Riverbed’s Network Observability. NetIM provides visibility into your devices (physical, virtual or cloud), giving insight into the health and status of your network environment—and, by extension, into what your users are experiencing.

In the latest update, Riverbed NetIM 2.5 makes Wireless LAN metrics available with predefined support for Cisco and HP-Aruba Wireless LAN Controllers, along with Wireless Access Point Views.

The update also features new security-related capabilities to ensure that NetIM operates within the security parameters of customers’ IT environments, including TLS 1.3 support for all communication activities and the ability to add or update SSL certificates via the web UI.

Learn more about the benefits and new security features of Riverbed NetIM 2.5.

]]>
Accelerating Enterprise Video Delivery https://www.riverbed.com/blogs/accelerating-enterprise-video-delivery/ Tue, 22 Mar 2022 12:35:00 +0000 /?p=17734 Content delivery networks, or CDNs, have been used by content providers for years to deliver high-quality video to people’s homes over an unpredictable public internet. Video consumes a lot of bandwidth, and high-definition audio requires low latency with minimal jitter. So, to deliver great video and clean audio, CDNs deployed content servers in strategic points of presence. That brought the content closer to the customer and solved the problem of delivering a bandwidth-intensive application over the public internet.

Today’s dynamic, hybrid workplace is facing a very similar problem. Enterprise organizations rely on video content — live, recorded, and collaborative — to operate efficiently and effectively. Even the most mission-critical activities now rely on video conferencing and reliable, high-quality video content.

The problem, however, is how much bandwidth video consumes at the local branch office or at someone’s home. This is especially true in locations where many end-users consume the same content: the network connection can be overwhelmed, and the quality of the video stream will suffer. This is where eCDNs (enterprise content delivery networks) shine.

The Riverbed eCDN Accelerator

The Riverbed eCDN Accelerator solves this problem by mimicking a CDN’s distribution of content throughout a local region. Instead of placing a rack of content servers in data centers every 100 square miles, Riverbed eCDN Accelerator uses WebRTC peer-to-peer technology to deliver a single stream to a dedicated computer in one location, which then distributes the content to its local peers.

With this method in place, WAN (wide area network) traffic for video content is reduced by up to 99%, while video delivery is sped up by up to 70%. This means video distribution can scale without additional service provider connections or locally installed hardware.
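
The arithmetic behind that WAN saving is easy to sanity-check. The viewer count and bitrate below are hypothetical, chosen only to illustrate the effect of pulling one stream instead of one per viewer:

    # Back-of-the-envelope illustration; figures are assumed, not measured results.
    viewers = 200        # employees in one office watching the same live stream
    stream_mbps = 4.0    # bitrate of the video stream

    naive_wan_mbps = viewers * stream_mbps   # every viewer pulls its own copy
    peered_wan_mbps = 1 * stream_mbps        # one peer pulls; others share locally

    saving = 1 - peered_wan_mbps / naive_wan_mbps
    print(f"naive: {naive_wan_mbps:.0f} Mbps, peered: {peered_wan_mbps:.0f} Mbps, "
          f"saving: {saving:.1%}")           # saving: 99.5%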

End-users working in the same office will immediately benefit from the eCDN peering relationship, and so will end-users working remotely over a VPN connection or when video traffic is being backhauled. In fact, it doesn’t matter where the source of the video content originates. The solution works just as well with video content delivered from the cloud or a SaaS provider.

In the graphic below, notice the difference in the volume of streams between a network using Riverbed’s eCDN Accelerator and one that isn’t.

A cloud-based solution

The eCDN solution is cloud-based with computers peering with each other via a browser or a software agent, not a piece of hardware. This provides several benefits to both IT operations and to the end-user.

First, computers peer with each other via a browser for certain applications, making zero-touch deployment quick and easy. Other applications benefit from an agent that can also be deployed via a browser, eliminating the need for manual installations or local hardware.

Policy is pushed from the cloud, giving IT centralized control over the environment. From there, IT can manage the ports that are used, set the cache size on each local computer, cap the bitrate for video streams, and configure location-based eCDN parameters to accommodate security requirements.
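
As a rough illustration of those controls, here is a minimal sketch of what such a policy might look like. Every key and value here is hypothetical, invented for this example; this is not Riverbed’s actual configuration schema:

    # Hypothetical policy object mirroring the controls described above.
    ecdn_policy = {
        "allowed_udp_ports": [3478, 49152],     # ports peers may use
        "cache_size_mb": 512,                   # per-machine video cache
        "max_peer_bitrate_kbps": 4000,          # cap on peer-served bitrate
        "site_overrides": {
            "branch-tokyo": {"peering_enabled": False},  # e.g. security policy
        },
    }

    def peering_allowed(policy: dict, site: str) -> bool:
        """Check whether peer-to-peer delivery is enabled for a site."""
        return policy.get("site_overrides", {}).get(site, {}).get("peering_enabled", True)

    print(peering_allowed(ecdn_policy, "branch-tokyo"))  # False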

Second, for the end-user, there’s nothing to do but enjoy high-quality video. Deployment and ongoing operation happen behind the scenes, so for an end-user consuming recorded or live video content, it’s completely hands-off with instant results.

Video Consumption is on the Rise

Streaming of live video on Microsoft Teams, live virtual events hosted on platforms like ON24, and recorded video-on-demand have ballooned in the last few years. In fact, 82% of all IP traffic is expected to be internet video by the end of this year.

Without a solid enterprise content delivery solution in place, this increase will overwhelm many internet connections and crush the video quality needed to conduct business in today’s world while also impacting other applications running across the same networks.

Riverbed’s eCDN Accelerator is a powerful solution that improves the digital experience for end-users and ensures optimal video delivery.

For more information, reach out to your Riverbed representative and visit riverbed.com to learn more.

]]>
Facebook and Slack Outages Show That Visibility Is Mission Critical https://www.riverbed.com/blogs/facebook-and-slack-outages-show-e2e-observability-is-mission-critical/ Tue, 21 Dec 2021 21:00:00 +0000 /?p=17477 In early October, two major outages related to DNS configuration changes affected the customer experience for users of the digital giants Slack and Facebook. The failure of Facebook and its WhatsApp and Instagram services extended for several hours and was catastrophic in nature–a routine maintenance change effectively took down Facebook’s global backbone. The company was forced to communicate with its own staff and customers via its rival service Twitter.

The Slack outage was less widescale, affecting only a proportion of corporate users for up to 24 hours, but it too was the result of an erroneous maintenance command. In both incidents, time to recovery was extended, as DNS servers had to be rapidly reconfigured, BGP routes re-propagated across the internet, and multiple data centres powered up again. In Facebook’s case, this put an extensive strain on power systems.

Mistakes like these do and will always happen–so what’s the best way to mitigate them and minimize outages when they occur? You have two choices: passive or proactive.

1. Wait for customers to complain (or leave)

The Facebook outage caught front-page news because it affected so many individuals and businesses. For many smaller organisations, Facebook and Instagram are their primary digital connection with their customers, often because they’re cheaper and easier to maintain than a standard website, even if they have one. One example is the number of retailers and restaurants offering click-and-collect or delivery during lockdown via their Instagram accounts. Influencers–one of today’s growth businesses–would also have lost revenues.

In some countries, WhatsApp has become the de facto call and SMS service provider—even for government departments. Inability to access it (and its stored contact details) would have put many millions of people out of touch.

Further, Facebook is used for authentication for accessing other online services, making it the ‘digital front end’ for millions of other businesses. It is also the greatest connector of family and friends for the western world. While a single outage is unlikely to lose the behemoth a large swag of disciples, few digital platforms are as resilient, especially where there are alternatives.

Overall, Facebook’s outage is variously estimated to have cost the company US$60-100 million in ad revenue and wiped US$40 billion off its market capitalisation. Other estimates reckon the outage could have cost the wider economy hundreds of millions each hour.

You are unlikely to have quite as many customers dependent on your digital services as Facebook, but such a catastrophic failure could cost your business considerable revenues. And worse, you could lose customers for good. Banking customers, for example, often operate accounts with multiple providers. If your service goes down, it could be the last straw that will see them walk.

2. Be proactive through early warning and diagnosis

Whether an outage is due to erroneous commands, as in these cases, or due to hacking, you need the tools to pinpoint the precise issues so you can fix them fast. As Facebook’s engineers reported, “All of this happened very fast. And as our engineers worked to figure out what was happening and why, they faced two large obstacles: first, it was not possible to access our data centers through our normal means because their networks were down, and second, the total loss of DNS broke many of the internal tools we’d normally use to investigate and resolve outages like this.”

A Border Gateway Protocol (BGP) session, for example, can go down in just 90 seconds–or potentially sub-second, depending on how it’s deployed. Using Riverbed’s Unified Network Performance Monitoring platform of integrated online services, you can set synthetics at the packet level to post alarms if any changes occur. NetIM can monitor BGP passively, while AppResponse can look at packets to detect failure. This enables you to be on the front foot–before people complain.
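
The idea behind a synthetic check is simple: perform the operation a user would perform, time it, and alarm on failure. Here is a minimal sketch using only the Python standard library; it illustrates the concept and is not the Riverbed implementation (the 200 ms threshold is an assumption):

    import socket
    import time

    def dns_synthetic(hostname: str, warn_ms: float = 200.0) -> None:
        """Resolve a name, time it, and alarm on failure or high latency."""
        start = time.monotonic()
        try:
            socket.getaddrinfo(hostname, 443)
        except socket.gaierror as exc:
            print(f"ALARM: DNS resolution failed for {hostname}: {exc}")
            return
        elapsed_ms = (time.monotonic() - start) * 1000
        level = "WARN" if elapsed_ms > warn_ms else "OK"
        print(f"{level}: {hostname} resolved in {elapsed_ms:.1f} ms")

    dns_synthetic("example.com")

Run on a schedule from several vantage points, even a probe this simple can flag DNS failures before users begin to complain.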

In the 11.12 release of AppResponse, we’ve introduced DNS Reporting and Alerting. AppResponse 11.12 includes brand-new DNS analysis that previously required inspection with tools like SteelCentral Packet Analyzer or Wireshark. These new insights allow us to identify problems with DNS performance as well as compliance: we can quickly and accurately determine which clients are making which queries to which DNS servers, and whether those queries are answered.

The AppResponse DNS policies also allow us to identify changes in our DNS traffic profiles. For example, we can alert on clients making connections to foreign DNS servers as an indicator of compromise, or on increased DNS timeouts and errors.
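
Conceptually, the foreign-DNS-server check boils down to comparing observed DNS conversations against an approved resolver list. The sketch below shows that logic on hand-written flow tuples; the addresses are made up, and a real deployment would draw on captured traffic rather than a hard-coded list:

    # Approved corporate resolvers (assumed addresses for illustration).
    APPROVED_RESOLVERS = {"10.0.0.53", "10.0.1.53"}

    # (client_ip, server_ip, dst_port) -- simplified stand-in for flow records.
    flows = [
        ("10.1.4.22", "10.0.0.53", 53),   # client -> corporate resolver: fine
        ("10.1.4.23", "8.8.8.8", 53),     # client -> external resolver: flag it
    ]

    for client, server, port in flows:
        if port == 53 and server not in APPROVED_RESOLVERS:
            print(f"possible indicator of compromise: {client} querying {server}")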

These new features ship with AppResponse 11.12 and are included if you are running the ASA feature license.

Here’s an example of the types of metrics you will find with the DNS Servers Insight.

Stronger security makes it harder

As Facebook found, the strong security measures they have in place slowed their ability to bounce back up: “We’ve done extensive work hardening our systems to prevent unauthorized access, and it was interesting to see how that hardening slowed us down as we tried to recover from an outage caused not by malicious activity, but an error of our own making.”

As I wrote in my recent blog, Customer Experience Lessons from the Akamai Outage, major outages highlight the importance of redundancy for essential services like global load balancing. Moreover, they emphasise the need for end-to-end visibility to pinpoint any network, application or third-party service fault within minutes rather than hours. In today’s economy, digital customer experience and business continuity are what it’s all about.

]]>
Log4J Threat Hunting with NetProfiler and AppResponse https://www.riverbed.com/blogs/log4j-threat-hunting-with-netprofiler-and-appresponse/ Wed, 15 Dec 2021 17:29:45 +0000 /?p=17530 A recently discovered vulnerability in the Java logging utility Log4J (CVE-2021-44228)[1] enables remote code execution exploits in a variety of common software. The exploit works through the download and execution of malicious code triggered via the Java utility, with the triggering string sometimes nested in such a way that it is difficult to identify.

Compared to a more directed malware campaign, this vulnerability has many potential exploits. However, Microsoft is maintaining a list of IPs believed to be taking advantage of this vulnerability as detected in their Azure service. Keep in mind, though, that because these bad actors are also scanning systems that are not vulnerable, we should be careful to examine positive indicators closely. The scanners are looking for vulnerable systems, and so receiving an incoming communication from them is not as conclusive as an outgoing communication would be.

Because this vulnerability can affect so many systems, it’s very important to examine network history immediately. We can use Riverbed NetProfiler to identify flows and hosts affected by this threat both now and in the past, and Riverbed AppResponse to analyze web application traffic to identify the vulnerability in action.

Log4J Threat Hunting with NetProfiler

Using NetProfiler with the Advanced Security Module, we can go back in time to verify exposure to the Log4J vulnerability. NetProfiler uses a frequently updated threat feed as its source for vulnerability information including the specific criteria to run against newly captured and stored flow data.

In the graphic below, notice that we can select Log4Shell Known Exploits from the threat feed and run a report to see if we are impacted, when it first appeared on the network, and which hosts are affected.

Notice in the graphic below that we’re running a report for the last week.

Traffic Report showing a week’s worth of traffic. New Connections can be suspicious traffic.

Some experts believe this vulnerability first appeared around December 1, so we can extend our search further back in time. In the graphic below we’re searching back to December 1.

Extending our Traffic Report back to Dec. 1 when Log4J is thought to have started.
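
Under the hood, this kind of retrospective report amounts to matching stored flow records against the threat feed’s IP list, weighting outbound matches more heavily than inbound ones (per the scanner caveat above). A conceptual sketch, with made-up addresses and a simplified record layout rather than the NetProfiler API:

    # Example threat-feed entries (documentation-range addresses, made up).
    THREAT_IPS = {"203.0.113.7", "198.51.100.9"}

    # (src_ip, dst_ip, first_seen) -- simplified stand-in for stored flows.
    flows = [
        ("10.2.0.14", "203.0.113.7", "2021-12-02T03:11:00Z"),
        ("198.51.100.9", "10.2.0.80", "2021-12-03T09:42:00Z"),
    ]

    for src, dst, ts in flows:
        if dst in THREAT_IPS:
            print(f"HIGH: outbound to threat IP {dst} from {src} at {ts}")
        elif src in THREAT_IPS:
            print(f"INFO: inbound from threat IP {src} to {dst} at {ts} (may be scanning)")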

Log4J Threat Hunting with AppResponse

The Web Transaction Analyzer module, or WTA, within AppResponse 11 allows us to search for the Log4J vulnerability from an application perspective using byte patterns to search the HTTP header and payload.

In the graphic below, notice that within WTA we can use some custom variables to search within the body, URL, and header of application traffic and report back if any of the conditions are true. Here we’re looking specifically for the JNDI lookup since it’s a key part of the exploit mechanism used by the vulnerability. These conditions could be extended to exclude certain source IPs that are legitimately running vulnerability scans as part of your security posture.

Using AppResponse WTA to set custom variables to detect certain conditions within the body, URL, and header of application traffic.
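
To illustrate what such a byte-pattern search looks for, here is a small, self-contained detector for the JNDI lookup string in HTTP data. It is a teaching sketch, not WTA’s matching engine, and the obfuscation handling is deliberately minimal—real exploit strings use many more evasions:

    import re

    # Matches plain ${jndi: plus simple ${lower:j}ndi-style obfuscations.
    JNDI_PATTERN = re.compile(r"\$\{(?:jndi|[^}]*lower:j[^}]*)", re.IGNORECASE)

    def looks_like_log4shell(http_chunk: str) -> bool:
        """Return True if a header, URL, or body fragment matches the pattern."""
        return bool(JNDI_PATTERN.search(http_chunk))

    print(looks_like_log4shell("User-Agent: ${jndi:ldap://203.0.113.7/a}"))    # True
    print(looks_like_log4shell("X-Api-Version: ${${lower:j}ndi:ldap://x/y}"))  # True
    print(looks_like_log4shell("User-Agent: Mozilla/5.0"))                     # False

As the post notes, known scanner source IPs can be excluded from such checks to cut down on noise.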

Mitigation Steps

After a thorough scan using NetProfiler and AppResponse, also consider these steps to protect systems from the Log4J vulnerability:

  • Enable application layer firewalls such as AWS WAF, which can significantly reduce the risk by protecting cloud-based and some SaaS applications.
  • Block outbound LDAP to the public internet.
  • For Log4j versions 2.10 and later, set log4j2.formatMsgNoLookups to true (for example, via the JVM flag -Dlog4j2.formatMsgNoLookups=true).
  • And of course, always remember to install the latest patches on both internet-facing and internal systems—a small version-check sketch follows this list.

The old tech adage “you can’t secure what you can’t see” has never been more apt. Visibility is the cornerstone of network security, and by using powerful visibility tools such as Riverbed NetProfiler and AppResponse, we can gain a deep and wide awareness of everything going on in our network, both in real time and historically.

[1] https://www.cisa.gov/news/2021/12/11/statement-cisa-director-easterly-log4j-vulnerability

]]>
A High-Performing Hybrid Workplace: Are You Ready or Too Late? Executive Insights from Our Hybrid Work Global Survey https://www.riverbed.com/blogs/high-performing-hybrid-workplace-executive-insights-hybrid-work-global-survey-2021/ Mon, 01 Nov 2021 18:59:32 +0000 /?p=17437 Hybrid work is the new norm and critical to organizational success—but the question is, are you ready to support the hybrid workplace? To assess the benefits and challenges of a hybrid workplace and the role technology plays in enabling or impacting its long-term success, Riverbed | Aternity conducted a global survey across eight countries in September 2021 of nearly 1,500 business decision-makers and IT decision-makers. The findings are eye-opening and a reality check for us all. A full 83% of decision-makers believe 25%+ of their workforce will be hybrid post-pandemic and 42% say 50%+ will be hybrid.

We get it: hybrid work is important and here to stay. But shifting to a high-performing hybrid work model is challenging and elusive for most, with only 32% believing they are completely prepared to support the shift to hybrid work. What do we do? Address both human- and technology-related barriers NOW!

As the survey noted, 80% of business decision-makers believe technology disruptions negatively affect them, their teams, and employee job satisfaction. To gain the maximum benefits from hybrid work, organizations must invest in technologies and modernize their IT environment. Under-investing in technologies that ensure IT services are performing and secure can have severe consequences to business success and the employee experience.

Now the good news. More than 90% of respondents agree hybrid work helps with recruiting talent and competitiveness and 84% agree hybrid work will have a lasting and positive impact on society and the world. So, they are investing in critical capabilities such as end-to-end visibility, cybersecurity and acceleration technologies to enable long-term success. This is important, as the need for end-to-end visibility and actionable insights intensifies in a hybrid workplace. And when networks, digital services and SaaS applications operate at peak performance, so do employees and the business.

Are you ready? Take a look at the full Riverbed | Aternity Hybrid Work Global Survey 2021 and discover key executive insights and investment areas to create a high-performing hybrid workplace.

For the success of your business, happiness of your employees and satisfaction of your customers, do it today, before it is too late.

]]>
Visibility and Performance from the Client to the Cloud: Riverbed at Microsoft Ignite 2021 https://www.riverbed.com/blogs/visibility-performance-client-cloud-riverbed-microsoft-ignite-2021/ Wed, 27 Oct 2021 21:00:00 +0000 /?p=17422 Today, we can work from almost anywhere. Sometimes we’re at home, sometimes we’re at the office, and other times we’re in a coffee shop or an airport. This poses two significant problems for IT departments:

  • First, how can we have visibility into a network we don’t manage?
  • Second, how can we ensure peak application performance over a network we don’t own?

Riverbed’s visibility and performance solutions address these modern problems head-on to provide end-to-end visibility from the client to the cloud as well as optimal application performance no matter where people connect to do their work.

The two sides of a conversation are no longer static. In the past, end-users were grouped together at an office managed by the IT department, and applications were down the hall in a server room or in a data center owned by the organization. That’s not at all the case anymore. Now, end-users could be virtually anywhere there’s an internet connection, and they’re accessing applications that live in the cloud.

We’ll be addressing these two problems during three short demonstrations prepared for Microsoft Ignite 2021. Read on for details on what we’ll be covering in each of the demos below:

Aternity provides remote worker visibility

In the first scenario, we’ll see how Aternity gives us historical visibility of our remote client computers when they’re working remotely over a VPN. Aternity’s VPN usage and trends dashboards provide very specific information on how end-users are connecting, over which VPNs, and how connections perform.

Application Acceleration ensures peak performance of on-premise applications

In our second scenario, we’ll see Riverbed Application Acceleration in action as it dramatically improves the performance of an on-premises application. Because it’s an agent-based solution, we can provide the benefits of application acceleration regardless of where end-users are located.

SaaS Accelerator optimizes SaaS performance for remote workers

In our last scenario, we’ll see a real-time demonstration of how Riverbed SaaS Accelerator optimizes Microsoft Azure and SharePoint traffic, again regardless of where end-users work. In this case, our end-users are remote, and they’re accessing applications in the cloud, delivered by our Microsoft SaaS provider. Riverbed SaaS Accelerator was designed for this very scenario – ensuring top performance of SaaS applications for end-users at the office or working remotely.

For today’s hybrid workforce, a simple internet connection just isn’t enough anymore. We don’t manage the network many of our end users are connecting to, but we still need the visibility and application performance we had when everyone worked at the office. As the workplace continues to be in a state of flux, our visibility and performance solutions help to keep your end users and applications productive from anywhere.

Check out Riverbed’s virtual booth at Microsoft Ignite 2021, where you can watch our demo videos and learn more about our solutions. To register, visit the Microsoft Ignite Registration page here.

]]>
Visibility, Actionable Insights and Performance for the Modern, Hybrid Enterprise https://www.riverbed.com/blogs/visibility-actionable-insights-performance-modern-hybrid-enterprise/ Mon, 04 Oct 2021 14:35:50 +0000 /?p=17410 This is an exciting week, as we host the Riverbed Global User Conference with thousands from both the Riverbed and Aternity user communities joining us at our annual conference.

As we look forward, the New Horizon is upon us—it’s digital-first and hybrid by design, with cloud, SaaS and legacy technologies working together and employees collaborating and engaging with customers anywhere, anytime on any device.

To meet the needs of today’s enterprise, businesses and customers are demanding that IT organizations take the following steps to successfully move forward:

  • Embrace a hybrid culture
  • Modernize IT
  • Leverage end-to-end visibility to improve security postures

Embrace a hybrid culture

Embrace a hybrid culture: hybrid networks, hybrid workplaces and hybrid workstyles. Network and application usage is accelerating among users both internal and external to every organization. IT must deliver seamless virtual and physical experiences that are consistent, reliable and secure for all employees and customers, regardless of when and where they work or how they choose to connect.

Modernize IT

Digital acceleration requires IT modernization and the use of public and private cloud infrastructure. The challenge is how to combine legacy environments with new cloud infrastructure and validate the efficacy of this digital transformation. With the goal of creating modern, hybrid cloud environments, you must overcome the complexity of operating resources on-premises, in the cloud and at the edge. This requires investing in technologies that give you end-to-end visibility and control—empowering your teams to deliver rock-solid, secure performance and digital experience.

Leverage end-to-end visibility to secure performance and digital experience

There have never been greater challenges for IT than the current complexity of the network and the aggressiveness of cyberattacks. IT teams must gain full visibility and control over the security and performance of their modern, hybrid environment. To accomplish this, organizations must capture full-fidelity data—not samples—across networks, apps and end-users, which is what our visibility solutions enable.

End-to-end visibility, and the rich, broad set of data it provides, is more important than ever in the modern hybrid enterprise for ensuring productivity, end-user experience, high-quality digital experiences and security. But that treasure trove of data is most valuable when analyzed in context to deliver actionable insights to the multitude of stakeholders driving organizational goals—transforming IT operators into technology leaders who connect insights to business outcomes.

Our vision is to deliver actionable insights that extend from the technology infrastructure through the network all the way to the customer to protect and extract the value behind every click.

To help you maximize this transition, we’re hyper-focused on bringing unique value in two critical areas: Network Performance and Acceleration, and End-to-End Visibility.

Network performance and acceleration

Riverbed | Aternity application acceleration and WAN optimization technologies overcome the effects of latency, bandwidth saturation and network contention to ensure the fastest, most reliable delivery of any application—including popular SaaS applications—to any user, regardless of location or network type. Delivering performance is vital in a hybrid environment. Today there is a race to the edge. The agility and efficiencies gained by investments in cloud and SaaS won’t matter if your mobile and remote workforce’s productivity suffers as a result of unacceptable application performance. Networks must optimize to the edge and accelerate applications—on-prem to the edge, cloud to the edge and application to the edge—while securing edge access.

End-to-end visibility

Today, there is an abundance of data being generated. Many organizations are burdened with mountains of siloed data from disparate monitoring tools that are difficult to normalize and interpret for immediate action. More digital devices and sensors, plus broader, more complex networks, mean more data sources. Decision-makers are being inundated by data and alerts, yet they still lack business intelligence and actionable insights, a problem that is even more acute with hybrid networks, cloud and dispersed workforces. However, pulling all that siloed data from the user, the application and the network up into the cloud allows you to contextualize the data and provide actionable insights.

We recently made a strategic decision to bring together more closely the best-in-breed assets of Riverbed and Aternity (previously a division of Riverbed) to offer our customers the most comprehensive end-to-end visibility solution in the industry. This is powerful: it combines full-fidelity visibility that will enable us to deliver unified observability of your entire digital ecosystem: applications, networks, servers, cloud and devices.

If you are an organization seeking to Optimize Performance—Maximize Productivity—Reduce Risk—Eliminate Waste—Improve Customer Experience—you need actionable insights that extend from the technology infrastructure through the network all the way to the customer to protect the value behind every click. Technology organizations, especially IT, become invaluable when they can gather information that is easily understood and used to make faster, better, more accurate decisions—fueling innovation, which in turn drives business and organizational performance. IT can be an accelerator, not an inhibitor, of your organization’s productivity.

Unified observability provides a single source of truth for your data, delivering actionable insights that transform IT operators into technology leaders who can drive value for the business by delivering:

  • Frictionless Performance
  • Unyielding Productivity and Efficiency
  • Seamless Business Continuity

We are here to help our customers and the market prepare for this new horizon that is hybrid and all about digital experience and performance. Join us this week at the Riverbed Global User Conference to learn more about our strategy, vision and how you can master the New Horizon.

Riverbed at Networking Field Day 26: Demonstrating End-to-End Visibility from the Client to the Cloud https://www.riverbed.com/blogs/demonstrating-visibility-client-to-the-cloud/ Thu, 30 Sep 2021 15:41:00 +0000 /?p=17375 Riverbed has presented at Networking Field Day a bunch of times, but for the most recent event, we took a different approach than usual. We wanted to show a real example of how Riverbed’s solution provides both deep and wide visibility from the client to the cloud.

Rather than present 100 PowerPoint slides, we walked through troubleshooting an actual application performance problem. We still had a slide here and there to introduce the tool we’d use in that segment, but other than that we wanted our presentation to be as much demo as possible.

We built an environment of real client computers running over a real SD-WAN to real web servers in three AWS regions. We connected everything to our SQL backend, and we stood up internal and external computers running internal and public DNS. And to set the stage for our presentation, we purposefully caused poor application performance of our demo web application.

Level 1 helpdesk

I started with Portal, similar to how a level 1 helpdesk person would. We immediately saw a problem with our AWS East region and no indications that there was a problem with our SD-WAN. So, just like in an actual troubleshooting workflow, I escalated the ticket to the next engineer.

Visibility from the client perspective

Jon Hodgson, VP of Product Marketing at Aternity, analyzed the client-side with Aternity. Aternity uses agents installed locally on endpoints, whether those be workstations, mobile devices, servers, or even containers. Jon used the Aternity dashboard and DXI, or the Digital Experience Index, to confirm poor application performance on all computers, but he also discovered an unauthorized crypto miner on three machines.

Investigating a security breach

This was a security breach, so it was time to escalate to John Murphy, Technical Director at Riverbed, who played the role of a security engineer. John used NetProfiler to dig into the crypto miner application flows to determine where they were going, when they started, and what else on our network was infected. We believe that visibility is the foundation for robust network security, so to us it’s only natural to incorporate automated security investigation functions into our flow analyzer.

Though John got some great info about the breach, he didn’t find the root cause of our application performance problem. So he escalated the ticket to the network team to see if there was a problem with the network itself.

Escalating to the network team

Brandon Carroll, Director of Technical Evangelists, used NetIM to look at the path in between clients and AWS. SD-WAN gateways looked healthy, core switches looked fine, and all our regions showed green in the dashboard. It was time to get more granular, so Brandon introduced Riverbed’s synthetic testing tool, built right into NetIM.

Several tests were already running – in this case, HTTP tests which monitored successes, failures, and response times to our web servers. The metrics didn’t look good. Response times were high, and success rates were around 80%. And using some synthetic monitoring tests he created on the fly, he began to see strange DNS issues.
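
To make this concrete, here’s a minimal sketch of what a synthetic HTTP test boils down to: repeatedly probing an endpoint while tracking success rate and response time. The URL and probe count are hypothetical placeholders, and this illustrates the general technique rather than NetIM’s built-in tests.

```python
import time
import urllib.request
import urllib.error

# Hypothetical endpoint standing in for the demo web application.
URL = "https://webapp.example.com/health"
PROBES = 20

successes, latencies = 0, []
for _ in range(PROBES):
    start = time.monotonic()
    try:
        with urllib.request.urlopen(URL, timeout=5) as resp:
            if resp.status == 200:
                successes += 1
    except (urllib.error.URLError, TimeoutError):
        pass  # a failed or timed-out probe counts against the success rate
    latencies.append(time.monotonic() - start)
    time.sleep(1)  # pace the probes

print(f"success rate: {successes / PROBES:.0%}, "
      f"avg response: {1000 * sum(latencies) / len(latencies):.0f} ms")
```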

With this red flag, Brandon escalated the ticket to our last engineer, Vince Berk, CTO at Riverbed. Vince used AppResponse to analyze the specific TCP connections between our clients, DNS servers, and web servers.

Digging deep to find the root cause

AppResponse is a powerful analytics tool. It gives us the macro view of how applications are doing using visualizations of server response time, retransmission delay, connection failure and setup time, and an entire host of metrics that can be looked at individually or taken together as the application’s User Response Time. And since AppResponse gathers every single packet we throw at it, it’s also a full-fidelity visibility tool down to the most granular micro level.

And that’s exactly how Vince used AppResponse. He analyzed TCP flows, looked at individual packets, and ultimately found that DNS wasn’t load-balancing but was instead pointing all requests to the AWS East region. All this unexpected traffic overwhelmed our AWS East web server which negatively affected the performance of our application.
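
A quick way to sanity-check that kind of DNS misbehavior is to resolve the application’s name repeatedly and see whether the answers actually rotate across addresses. The hostname below is made up, and this is just a complementary command-line check, not a substitute for AppResponse’s packet-level analysis.

```python
import socket
from collections import Counter

# Hypothetical hostname; DNS should spread answers across web servers
# in three AWS regions.
HOST = "webapp.example.com"

answers = Counter()
for _ in range(50):
    for info in socket.getaddrinfo(HOST, 443, proto=socket.IPPROTO_TCP):
        answers[info[4][0]] += 1  # the sockaddr tuple starts with the IP

print(answers)
# If one address dominates the counts, DNS is not load-balancing,
# which is the failure mode described above.
```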

Remember that Portal is our macro view and usually our first step in troubleshooting, so in many cases the helpdesk may figure out the root cause right away.

You can visit our NFD26 presentation to see each of our visibility tools used independently to analyze different pieces of the puzzle.

Riverbed’s end-to-end visibility solution operates at the macro level to provide high-level metrics of application performance, but when it’s time to roll up your sleeves and get into the weeds, our tools provide the depth and breadth of end-to-end visibility at the micro level from the client, through the network, and to the cloud.

Visit Tech Field Day’s event page to watch our entire presentation at Networking Field Day 26, and visit the Riverbed Community to join in on the discussion!

Using the Network to Contain Supply Chain Attacks https://www.riverbed.com/blogs/using-network-contain-supply-chain-attacks/ Fri, 06 Aug 2021 12:30:00 +0000 /?p=17188 These days we’re hearing more and more about ‘supply chain attacks’. That’s when a component of an application has a weakness with the potential to make the entire system or service vulnerable.

Consider a soft drinks manufacturer. If a competitor wanted to damage its market share, rather than targeting the bottling plant, it would be easier to target the supplier making the bottle caps. Loss of fizz, unhappy customers switching to the ‘other’ cola—all achieved without needing to hack highly guarded systems and the ‘secret recipe’.

Lurking in Linux in plain sight

On 10 June 2021, a security specialist reported a serious bug that had been sitting in Linux code for seven years. Located in polkit, an ‘under the hood’ system service used by default in many Linux distributions, it effectively allows an unprivileged user to assume administration rights. It’s also quite easy to exploit with just a few commands.

Obviously, the first step for any organization using the affected releases is to close this dangerous hole with a patch. But, given that it took the extensive open-source community seven years to spot, how can you know if and when it was exploited on your own systems?

It’s just one example of the potential vulnerabilities you may not be aware of within your application infrastructure—and it won’t be the last. Many applications encompass a thousand or more components, and you can’t possibly test them all against your own security posture. Products are built by product managers and developers of varying skill levels; there is plenty of scope for human error, or for someone deliberately creating a back door into a service or software product made up of multiple components. Until a new zero-day is announced, there won’t be patches available. So until then, you’re running blind.

The security community is well aware of the risks. So-called white hats have been deliberately publishing benign packages under typo-ed versions of popular component names; when developers accidentally include one, it alerts them to the fact that they have pulled in the similarly named, albeit harmless, package. The intent is to alert developers to the problems and risks of supply chain vulnerabilities.

How does Unified NPM help?

Riverbed’s Unified Network Performance Monitoring (NPM) platform is typically used by NetOps and application teams to troubleshoot, pinpoint and resolve performance issues, whatever their cause. But it is also proving invaluable to a growing number of SecOps teams by enabling them to go back and collect empirical evidence of data breaches in order to deal with any consequences.

Because Unified NPM records all data flows all of the time and maintains historical records, it makes it easy to go back and see whether any data was breached after an event. It does this by recording ‘indicators of compromise,’ which may be IP addresses associated with an attack, or command-and-control activity indicating where attacks are coming from.
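
As a simple illustration of that idea, the sketch below scans flow records for addresses on an indicator-of-compromise list. The records and the IoC set are invented for the example; a real Unified NPM deployment does this at scale against full-fidelity flow history.

```python
# Hypothetical flow records: (timestamp, source IP, destination IP, bytes).
flows = [
    ("2021-07-01T02:14:00Z", "10.1.4.23", "203.0.113.57", 48_200),
    ("2021-07-01T09:30:00Z", "10.1.4.23", "198.51.100.12", 1_024),
    ("2021-07-02T03:05:00Z", "10.1.7.88", "203.0.113.57", 612_000),
]

# Indicators of compromise published after the event, e.g. known C2 hosts.
iocs = {"203.0.113.57"}

for ts, src, dst, nbytes in flows:
    if src in iocs or dst in iocs:
        print(f"{ts}: {src} -> {dst} ({nbytes} bytes) matches an IoC")
```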

Essentially, Unified NPM retains comprehensive flight-recorder data that enables you to discover, after the fact, both whether and how your security has been compromised.

Making unknown unknowns known

Another vulnerability exposed by supply chain attacks is endpoint software. Unless you only allow users to access corporate applications via strictly controlled SOEs (standard operating environments), you have no way of managing what people are using—devices, services or applications—and what they are potentially bringing into your environment. In the current ‘from-any-device, from-anywhere’ world, and considering the prevalence of Shadow IT, it is extremely difficult to know your level of risk.

At least with Unified NPM deployed, you will have the ability to identify indicators of compromise, enabling you to spot and investigate external reconnaissance of your systems or illegitimate data exfiltration. In addition to proactively reducing the impact of performance issues across your environment—on-premises or in the cloud—it’s another extremely useful weapon in your cybersecurity armory.

If you’d like to know more about the security potential of Network Performance Monitoring, our recent webinar Why Network and Security Monitoring are Merging is available on demand.

Detection vs. Protection: Painting a Complete Picture of Your Security Position with Unified NPM https://www.riverbed.com/blogs/detection-vs-protection-unified-visibility-for-cybersecurity/ https://www.riverbed.com/blogs/detection-vs-protection-unified-visibility-for-cybersecurity/#comments Mon, 02 Aug 2021 14:49:00 +0000 /?p=17140 I’ve spent 20 years trying to help people understand IT problems (and solutions) and to dispel confusion. I really enjoy finding new ways to map IT to the physical world and analogies that turn on that lightbulb in people’s minds. My favorite analogy today describes how network performance monitoring is a huge part of ensuring cybersecurity for your business.

First, we need to clear up one thing. The way we approach security needs to change: it’s a question of WHEN, not IF, your network and data will be attacked. We have seen a huge rise in ransomware attacks. We have also seen major supply chain attacks. What does this tell us? Even if you follow the best security principles and have excellent perimeter security solutions in place—you are still at risk. If you download a digitally signed, verified software patch that happens to contain malware, the attackers are in. There isn’t much your perimeter security tools can do to help. You have effectively, if unwittingly, opened the door to the attack.

Now that attackers are in the network, how do we know they are there and what they are doing? Here is my analogy: think of an art gallery with priceless works hanging on the walls. The gallery has:

  • An outer wall or fence (firewalls)
  • External doors (controlled internet connectivity)
  • Security personnel at each entry point (IPS/IDS systems)
  • Internal doors that permit or deny entry to secure areas (application security)
  • Cameras (particularly around high-value items), and
  • Sensors that detect motion, pressure, etc.

The gallery is designed to have people come and go as they please, with the perimeter security teams checking visitors for potential risks (bag searches, etc.) and tracking their arrival (logbooks, camera systems, etc.). Vehicles arriving at the loading bay will undergo additional checks on arrival and departure.

Security Guard and Visitors at Art Gallery

It is normal and expected to have people standing a few feet from a Van Gogh masterpiece at 3pm on a Thursday, and museum security will not be alerted by that. However, if someone were detected in the same place at 2am on a Sunday morning, it would raise the alarm as abnormal behavior.

If someone got into the gallery and removed an item from the wall, we would spot that it was missing the following day by noticing the gap in the exhibition. But what if the intruder stole a second item, swapping it with a forgery? There would be no gap on the wall to alert us. A gallery would have lots of cameras, though, revealing the intruders’ actions.

Back to the world of IT…

If we assume that the perimeter security solutions merely make it harder to access the network and that we are going to be attacked, understanding the attackers’ actions within the network is crucial to both detecting the damage and preparing a recovery plan.

The sensors and the cameras are the equivalent of Network Observability tools, alerting us to unusual activity (the 2am Sunday moment) on the network and telling us where people have been and what they have been doing (the forgery swap). It’s like having a recording so you can play back the whole incident.

If we think of a scene in a film where thieves move acrobatically between laser beams across a room, the sensors and the cameras in the room are there to detect the activity, not stop the heist. You could easily walk past the cameras and through the beams, take the painting off the wall and walk out again. NPM is the same—it is not a security tool, it does not stop the attack, but it does alert you when abnormal behavior occurs.

IT security threats come in all shapes and sizes, and there are attacks that you can’t really protect against, such as state-sponsored activity. Others are just hard to secure against.

You have users on the network (just like a gallery has staff and visitors) and you expect them to be there—in fact, you want them to come in! They need to access systems and data to do their jobs. Hopefully, you have security tools in place to check the identity of the users and allow them access to the right places (applications and data).

What if a user who has legitimate access to a system starts to engage in malicious activity? Would your perimeter security tools detect this? Perhaps not. However, because NPM understands normal behavior on the network, it can alert you to abnormal behavior, too. Perhaps the user usually transfers a few hundred MBs a day, in the office, between 9 and 5, Monday to Friday. But suddenly, they access 10GB on a Sunday afternoon from home. What are they doing with this data? Perhaps it’s nothing, just a mistake, or maybe they are going to sell it to a competitor or take it to a new company. Either way, it is an anomaly that needs to be investigated.
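
The logic behind that kind of alert can be sketched in a few lines: learn a user’s normal daily transfer volume, then flag days that deviate wildly from it. The figures below are invented for illustration; production anomaly detection is considerably more sophisticated.

```python
import statistics

# Hypothetical daily transfer volumes (MB) for one user on recent workdays.
history = [220, 180, 250, 210, 190, 230, 205, 240, 215, 200]
today = 10_000  # the sudden 10 GB Sunday transfer, in MB

mean = statistics.mean(history)
stdev = statistics.stdev(history)
zscore = (today - mean) / stdev

if zscore > 3:  # a common rule of thumb for "abnormal"
    print(f"Anomaly: {today} MB vs. baseline {mean:.0f} MB "
          f"(z = {zscore:.1f}) -- investigate")
```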

As a final thought, if you are subject to a ransomware attack in which systems are encrypted and data is stolen, you have to report the breach to the relevant authorities and may be exposed to significant fines. These attacks are typically two-fold now: 1) pay to regain access to your data and 2) pay to stop the stolen payload from being released to the public. You need to know exactly where the attackers went and what they did, and this may help you decide whether or not to pay the ransom.

In summary, security threats are going to happen. Attacks come in a range of types, and traditional security measures may not protect you. To better prepare for the inevitable, it’s vital that you have complete visibility of all activity on the network to detect rogue behavior and enable a quick recovery. And as their primary function, NPM tools also track the performance of applications on the network, helping to give your users the best possible performance.

Unified NPM from Riverbed

Networks are mission-critical to business success. Digital businesses need secure, reliable networks more than ever before. But, with today’s hybrid cloud architectures, maintaining a high-performing and secure network requires a broad view across IT domains.

Relying on a hodgepodge of narrowly focused, siloed performance monitoring tools does not provide the breadth and depth needed to diagnose complex network performance problems. Unified NPM gathers all packets, all flows, all device metrics—all the time. The solution maintains visibility across all environments, on-premises and cloud, to enable business-centric views across all your domains. It also integrates with end-user experience and application performance monitoring so that you can understand the impact of network performance on critical business initiatives.

Identify, remediate and protect against cybersecurity threats

Today’s enterprises, with modern applications migrating from the data centre to cloud and SaaS platforms, are facing an uphill battle when it comes to cybersecurity. Despite heightened awareness, high-profile breaches continue to occur at alarming rates.

In order to quickly diagnose and respond to a full range of attacks, IT teams need visibility to identify threats of all shapes and sizes, from campus to cloud. Riverbed’s full-fidelity network security solution provides essential visibility and empowers users with fast, secure connectivity to the resources they depend on for business execution. The results: stronger security and better business performance.

Customer Experience Lessons from the Akamai Outage https://www.riverbed.com/blogs/customer-experience-lessons-akamai-outage/ Wed, 28 Jul 2021 12:30:00 +0000 /?p=17154 On Saturday, 17 June 2021, a small configuration error by a usually ‘invisible’ cloud service provider had a massive impact on some of the world’s leading businesses. The Reserve Bank of Australia plus three of the Big Four banks were severely affected, along with Australia Post and Virgin Australia. Online services halted, staff couldn’t access the internet, contact centres went down, planes couldn’t take off—destroying the end-user experience and damaging the affected brands’ reputations with their customers.

What happened?

Big brands are constant targets for a range of ideological, political, commercial or sheer criminal reasons. They must remain proactive against persistent cyber threats, including Distributed Denial of Service (DDoS) attacks originating from anywhere in the world. DDoS scrubbing is a powerful form of defence, and Prolexic from US-based global content delivery network (CDN) Akamai is a leading choice.

Prolexic monitors traffic entering large networks—such as web queries or mobile app requests—then establishes whether it is valid or malicious. If valid, traffic is forwarded to the network of the bank, airline or other business. If not considered valid, the traffic isn’t allowed in.

Unfortunately, an erroneous value in a routing table caused a failure in Prolexic which affected around 500 organisations globally. Some were automatically rerouted, while for others it was a manual operation.

All up, it took around 30 to 120 minutes for services to be restored, causing widespread angst and frustration for the customers of affected brands. All-points apologies via social media compounded the reputational damage. “We’re aware some of you are experiencing difficulties accessing our services and we’re urgently investigating,” tweeted CBA. “We’ll be back soon… We are currently experiencing a system outage which is impacting our website and Guest Contact Centre,” said Virgin Australia. For some consumers, it might even have been the last straw, causing them to switch providers.

How would Unified NPM have helped?

Customers with Riverbed’s Unified NPM platform have the advantage of visibility in both directions: up and down. The cause of the fault would quickly have been placed outside the network, as no traffic would have been detected in the GRE tunnel. In other words, “Everything’s fine, but there’s no load!” This would have sped up remediation: simply turn off the Akamai DDoS scrubber or switch over to another one.

Unified NPM is able to protect customer experience by monitoring all key metrics—packets, flows and device data—all of the time. This gives you end-to-end visibility to:

  1. Understand what normal looks like. How much traffic should we be expecting? Where is that traffic coming from, or not coming from?
  2. Baseline the traffic leveraging passive (packets/flows) and active (synthetics).
  3. Alert on KPI deviations to help isolate the problem (see the sketch after this list).
  4. Implement a mitigation or business continuity strategy.
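
To show what steps 2 and 3 can amount to in practice, here is a minimal sketch that baselines a throughput KPI and alerts when it deviates, using the “no load in the GRE tunnel” scenario described above. The numbers are invented; real NPM platforms do this continuously across packets, flows and device data.

```python
import statistics

# Hypothetical GRE tunnel throughput samples (Mbps), one per minute.
baseline = [840, 812, 871, 902, 855, 833, 880, 860, 845, 870]
current = 0  # "Everything's fine, but there's no load!"

mean = statistics.mean(baseline)
stdev = statistics.stdev(baseline)

# Alert when the KPI falls far outside its learned normal range.
if abs(current - mean) > 3 * stdev:
    print(f"ALERT: tunnel throughput {current} Mbps vs. baseline "
          f"{mean:.0f} Mbps -- fault likely upstream of our network")
```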

This level of granularity provides NetOps and SecOps teams with quantitative, empirical evidence of precisely where faults lie, so they can be remediated fast. If, as in the Prolexic case, the fault lies beyond the network, the indicated service provider can be alerted and services diverted or switched off.

Unified NPM also provides valuable forensic information after an event. Once systems are up and running again, you have solid evidence to use in the development of mitigation tactics internally between teams and with your external service providers—with the objective of avoiding such outages in the future.

What have we learned?

The Akamai incident highlights the importance of redundancy for an essential service like DDoS scrubbing and a ready-to-go mitigation strategy. Once network and applications teams worked out that Akamai was the problem, they could have switched to an internal DDoS scrubber. In fact, many organisations principally use these less costly options and only switch to cloud providers like Akamai and Fastly when they are overwhelmed by a high level of incoming threats.

Network, application and security engineers could have been spared extended, high-intensity troubleshooting on a Saturday afternoon if they had been able to pinpoint the fault in minutes rather than hours. Most importantly, faster recovery would have meant fewer consumers suffering a poor customer experience.

If you’d like to know more about Network Performance Monitoring, our recent webinar The Art of Troubleshooting is Back! is now available on-demand.

Expanding Gig Economy Raises Security Concerns https://www.riverbed.com/blogs/expanding-gig-economy-raises-security-concerns/ Fri, 16 Jul 2021 14:40:00 +0000 /?p=17171 COVID-19 has fundamentally changed traditional labor models and employment conditions. Many 9-5 office workers, having proved they can be just as productive working from home, expect flexible arrangements to continue post-pandemic, including the option to work from anywhere. And, at a time when all organizations are carefully managing human capital expenses, the demand for gig workers to fill resource gaps grew at an exponential rate. In fact, amid the pandemic, 23 million new participants—in the US alone—joined the gig economy to supplement their income or to become full-time independent workers.

According to a study by ADP Research Institute, the gig economy accounts for a third of the world’s working population and includes a wide variety of positions. Whether hiring artistic labor or deep technical expertise, or arranging for the short-term help of personal assistants, the gig economy enables organizations to be increasingly nimble and efficient in making use of outside talent at just the right times with as few hurdles or delays as possible.

As the demand for alternative labor arrangements grows, the use of software and web-based platforms to facilitate and automate gig work has evolved. Early examples include the use of technology to facilitate peer-to-peer transactions (e.g., Airbnb, Uber). Today, gig platforms support a wide array of digital transactions involving the exchange of goods and services, as well as sensitive data.

Gig workers are unique insider threats

While the benefits of the gig economy are evident for both employers and workers, the practice of hiring outside talent or leveraging unvetted platforms fundamentally clashes with the business imperative to monitor and safeguard sensitive data. Several large-scale breaches of corporate networks have been tied to outside contractor and vendor firms. For example:

  • In 2013, the large-scale hack of retailer Target was traced back to its HVAC vendor
  • In 2018, cybersecurity firm BitSight found that over 8% of healthcare and wellness contractors had disclosed a data breach since January 2016, along with 5.6% of aerospace and defense firms
  • In 2020, a ransomware attack on Visser Precision exposed NDA and product plans for Tesla and SpaceX

In these cases, firms rather than individuals were implicated, but the threat is clear: trusted insiders of any stripe pose a security risk. Unfortunately, gig workers, who require remote access to corporate data to do their work, are the least visible to security teams.

To complicate matters further, gig workers often use their own equipment and network connections to perform work for multiple companies at the same time. This means traditional visibility instrumentation such as client agents or VPNs may be restricted. Direct oversight in many cases is not feasible, resulting in a reliance on automation to provision, facilitate, and de-provision appropriate network and application access.

Machine learning is increasingly used to help security teams grapple with growing scale and shrinking visibility. Here too, gig work poses unique problems: how does one produce a behavioral baseline for an actor who only uses the network for a few days or weeks and then never again? Once produced, how can such baselines be effectively managed and utilized?

Are your security controls adequate?

Despite these challenges, organizations still need effective strategies to determine whether their data is safe and to feel confident that they can identify and deal with any threats.

Emerging security approaches such as Secure Access Service Edge (SASE) and Zero-Trust Network Access (ZTNA), coupled with well-defined, role-based access control (RBAC) will be necessary to effectively manage gig workers according to principles of least access. But provisioning access is only part of the security story.

Network performance monitoring has always been a critical component of ensuring that security controls are effective. New sources of telemetry will be needed to complete the picture, coupling events from SASE components with traditional packets and flows to paint a full picture of interactions from start to finish. Policy-aware visibility and population-based machine learning techniques will be needed to help analysts make sense of what they’re looking at—alongside, perhaps, techniques not yet dreamed up.
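
One way to picture a population-based technique: rather than baselining each short-lived gig worker individually, compare each worker’s activity to the peer group performing the same role. A toy sketch with invented numbers follows, using a robust median-based outlier test.

```python
import statistics

# Hypothetical daily data access (MB) by contractors in the same role.
population = {
    "gig_worker_a": 120, "gig_worker_b": 95, "gig_worker_c": 140,
    "gig_worker_d": 110, "gig_worker_e": 4_800,  # suspicious outlier
}

values = list(population.values())
median = statistics.median(values)
mad = statistics.median(abs(v - median) for v in values)  # robust spread

for worker, mb in population.items():
    if mad and abs(mb - median) / mad > 5:
        print(f"{worker}: {mb} MB vs. peer median {median} MB -- review access")
```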

In addition to technology-based controls, organizations should establish clear, contractually imposed requirements for gig workers, covering everything from basics like antivirus software on their laptops to expectations for handling data upon finishing their assignments. Essentially, when it comes to gig workers, organizations can’t sacrifice proper vetting and due diligence for speed.

Flexible, distributed work is here to stay

The gig economy has brought dynamic growth to companies and flexible opportunities to workers. But business and IT leaders need to be prepared for the visibility and security challenges posed by gig workers—as well as their own employees who are working remotely—because these trends represent the future of work.

At Riverbed, we see our role as trusted visibility advisor to our customers to help guide them through the challenges of maintaining visibility—and thus security and auditability—while staying nimble. We continually monitor, plan and innovate to address these trends so that our customers can take full advantage of modern work practices, as well as transformative technologies, without giving up control over security and performance.

Simple, Secure SSL Certificate Management at Scale https://www.riverbed.com/blogs/making-ssl-certificate-management-simple-secure/ Thu, 01 Jul 2021 22:31:00 +0000 /?p=16957 SSL and TLS traffic are among the most common forms of secure network traffic in today’s enterprise. The Riverbed Application Acceleration solution has been ensuring optimal service delivery of SSL and TLS traffic for years. Our solution optimizes SaaS application traffic, internal traffic, and even traffic used for service chaining with CASBs, IDS solutions, and so on. On one side of our bookended solution is a SteelHead appliance in a data center or in the cloud, and on the other end is a SteelHead in the branch or installed as an agent on an end-user’s computer. However, creating, deploying, and managing the certificates we need for each internal or external HTTPS application can be a lot of management overhead for a network operations team.

Optimizing secure traffic

When we optimize SSL and TLS traffic, all these components need to be part of the organization’s PKI, or in other words, the method we use to secure digital communication. Typically, that’s done by using certificates deployed on the server-side SteelHead and the branch SteelHead. And, each HTTPS application uses its own unique certificates.

Think about how many new applications get rolled out these days, and how often—especially SaaS applications. That means manually installing new certificates and updating expiring ones whenever there’s a change or a new application is deployed.

Simplifying certificate management

To solve this, we’ve integrated a certificate management component into the Client Accelerator agent already installed locally on an end-user’s computer. With this simple software update, the Client Accelerator has the ability to generate, host, and manage the certificates we need.

There’s no longer a requirement to host certificates on the server-side SteelHead. There’s also no longer the management overhead of manually creating, configuring, and storing certificates. And since certificates can be generated locally right on the computer, we eliminate the need for a central certificate authority.

We still use the Client Accelerator controller to manage all the agents deployed in the organization, but now we also use it to manage the certificate peering, certificate rules, and installation packages. What we end up with is a simplified, modular, and largely automated method for managing all the growing number of certificates we need to optimize SSL and TLS traffic.
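
To give a feel for what “generating a certificate locally” involves, here is a minimal sketch using Python’s cryptography library to create a key pair and a self-signed certificate on an endpoint. This illustrates the concept only; the Client Accelerator’s own mechanism is internal to the agent and governed by the controller’s certificate rules.

```python
import datetime
from cryptography import x509
from cryptography.x509.oid import NameOID
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import rsa

# Generate a private key locally on the endpoint.
key = rsa.generate_private_key(public_exponent=65537, key_size=2048)

# Hypothetical application name the certificate is issued for.
name = x509.Name([x509.NameAttribute(NameOID.COMMON_NAME, "app.example.com")])

now = datetime.datetime.now(datetime.timezone.utc)
cert = (
    x509.CertificateBuilder()
    .subject_name(name)
    .issuer_name(name)  # self-signed for this sketch
    .public_key(key.public_key())
    .serial_number(x509.random_serial_number())
    .not_valid_before(now)
    .not_valid_after(now + datetime.timedelta(days=365))
    .sign(key, hashes.SHA256())
)
print(cert.subject)
```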

Optimizing SSL and TLS traffic is a no-brainer. It’s one of the most common types of secure traffic on the network, and we’ve been doing it for years. And with Riverbed’s latest update to the Client Accelerator agent, we’ve removed the complexity and overhead of managing certificates, making it that much easier to deliver SSL and TLS traffic at peak performance.

Check out this video diving into the solution in detail here: 

To learn more about other ways in which you can strengthen your security posture with Riverbed, visit: riverbed.com/security

Riverbed + Microsoft: A Force Multiplier for Public Sector Mission Success https://www.riverbed.com/blogs/force-multiplier-for-public-sector-mission-success-microsoft-partnership/ Thu, 24 Jun 2021 21:51:06 +0000 /?p=17111 The public sector is built on collaboration. The vast services of the nation would cease to exist if it weren’t for the collective effort between the public and the private sector to rally around the mission of governing.

This unique dynamic is something the Riverbed Public Sector Team thrives on, because it forces us to be extraordinarily collaborative and creative in both our approaches and our solution offerings. Public sector networks are some of the most complex in the world. There is no downtime, and often the words ‘life or death’ have very real implications.

This has taught us to look beyond the network needs and strategically approach customer engagements with partners like Microsoft who bring complementary solutions to bear and ensure our public sector customers have the network and applications needed to drive their missions forward. In short, we’re better together.

This is critical because agencies and organizations throughout the public sector are in the middle of massive digital transformation initiatives, either planned or brought on by the pandemic. They’re modernizing legacy infrastructures, adapting how they deliver and deploy IT resources, and moving staggering amounts of data, workloads, and applications into the cloud.

This transformation was accelerated at a breathtaking pace during the pandemic. The entire public sector displayed incredible resilience in adapting their networks to ensure that their dispersed users could continue to provide, and in many cases expand, critical services here at home and across the globe.

As Microsoft CEO Satya Nadella recently said, “organizations underwent two years of digital transformation in two months.”

Ensuring mission success requires partners that share a unified vision of what it takes to help agencies accelerate out of the pandemic, build on silver-lining successes, and transform from where they are now to where they want to be tomorrow. We’ve been incredibly fortunate to have an active and engaged partner in Microsoft in this endeavor. I think a lot of our experiences over the past year could serve as a valuable lens to look through in other customer engagements across both Riverbed and Microsoft.

Partnerships Proven by the Pandemic

Almost overnight, entire public sector agencies were accessing networks through TICs (Trusted Internet Connections) designed for a fraction of those users, laying bare the inefficiencies of legacy network environments.

While agencies were able to nimbly adapt and leverage Microsoft’s SaaS-deployed Modern Workplace, Collaboration, and Cloud solutions, many quickly discovered that those solutions were only as capable as the networks they traverse. Networks that hadn’t been designed for SaaS or cloud suffered from reduced visibility, network congestion, latency, poor application performance, and poor user experiences.

This is where our partnership and telework solutions, including Unified NPM, SteelHead, and Client Accelerator, were able to act as a force multiplier for success. We were not only able to give agencies crucial visibility into their networks, applications, and users, but we also enabled them to unlock desperately needed capacity on their overburdened networks so they could optimize the performance of their Microsoft solutions.

If you want to dive deeper into some tangible examples of this partnership in action, I encourage you to view our Pairing IT Investments Webinar: How Riverbed and Microsoft Create Greater Value for Public Sector Networks where public sector leaders from both Riverbed and Microsoft outline the strengths of our partnership and how our complementary solutions have prepared federal agencies for post-pandemic realities.

Jointly, we’ve enabled our public sector customers not only to respond to the crisis in front of them, but also to reimagine telework for the future and the role of the network as a driver of mission success.

The Government-from-Anywhere is Here to Stay

So what is next? Whether public sector employees continue to work from anywhere, return to the office in a reduced capacity, leverage hoteling and collaboration spaces, or adopt a combination of all three, it’s safe to say that a hybrid environment is here to stay.

This is truly a paradigm shift in the public sector, because the benefits of telework are not just anecdotal; the data is clear. The public sector workforce was not only more productive teleworking during the pandemic, but also more engaged, collaborative, and content. So how do we pivot from the short-term strategies that got the job done to long-term solutions that are sustainable, capable, and easily implementable?

Recently, we gathered leaders from across the federal landscape at Riverbed’s Network Transformation Summit where we dissected this very question. The consensus among the federal CIO, CTO, and CISO speakers was that even with the rapid onset of the pandemic and shifting demands of agency networks, the public sector workforce not only survived, but thrived!

The benefits of telework are simply too large to ignore. Telework allows agencies to reduce costs, improve delivery of services, recruit and retain a modern workforce, and create new avenues for citizen engagement.

Even the Department of Defense, an agency that has historically been risk-averse about telework, conceded during a panel session with Javier Vasquez, Microsoft’s GM of Technology and Solutions, that it is looking at telework as a permanent part of the agency’s operational plan.

It should be noted that reimagining what it takes to ensure mission success isn’t just being discussed at the federal level. We recently co-hosted a Government Technology Webinar with Microsoft where we examined how SLED organizations fared in responding to the pandemic. Their challenges and outlooks were extraordinarily similar to those of their federal counterparts. As they prepare for the next stage of the pandemic, they too are evaluating what worked, what didn’t, and how their networks can adapt, evolve, and improve delivery of services to users and citizens alike.

Better Together

As agencies redefine and reimagine operations for a post-pandemic reality and look beyond the network needs to ensure mission success, it’s more critical than ever to maximize current IT investments and to proactively deploy solutions that enable the peak performance and productivity of the network, applications, and users alike.

For over 12 years, Riverbed and Microsoft have worked together to deliver seamless network and app performance, visibility, security, and the successful convergence of legacy architectures and the cloud. We’re stronger together and we’re fortunate to have the honor of providing innovative public sector solutions to reimagine Government-from-Anywhere and ensure mission success.

Parallels Between the Roman Empire and Zero Trust Network Security https://www.riverbed.com/blogs/roman-empire-and-zero-trust-network-security/ Mon, 14 Jun 2021 20:39:18 +0000 /?p=17088 It seems that the term “zero-trust” is emerging as the latest buzzword in network security and cybersecurity communities. To explain it, one can look to the Days of Antiquity, at the height of the Roman Empire when its borders encompassed most of Europe, Northeast Africa and the Middle East. For much of its early years, the Empire focused on what was known as “Preclusive Security,” an expansionist approach of fighting opponents either in their own lands or at a heavily fortified border.

The problem was that as the Empire expanded, so did its borders, which increasingly proved difficult to staff and resupply with loyal legionnaires, and ultimately became significantly harder to defend. Once invaders like Attila the Hun were able to breach the heavily guarded border, there was little that stood in their way from nearly capturing both Constantinople and Rome.

These challenges associated with the ever-sprawling border precipitated a shift in the Empire’s strategy to what’s called “defense-in-depth,” which established a series of lightly-defended sentry posts at the borders instead of heavily fortified outposts.

While the border may not have been hardened any longer, the sentry posts served as the eyes and ears of the Empire. In the event of an enemy invasion, instead of holding their ground and fighting their opponents at the border, sentries retreated to reinforced positions within their own territory for a better chance to repel invaders.

Fast Forward Two Millennia

In the 1980s and beyond, we began applying this same defense-in-depth philosophy to our IT networks, layering protection and redundancies to reduce vulnerabilities, instead of a hardened border. In “those days of antiquity” with .rhosts files and unencrypted telnet protocols, often simply penetrating the firewall could lead to a total compromise of an entire network.

As our networks evolved into their modern-day, software-as-a-service-heavy, hybrid-cloud equivalents, much like the Romans, we find our networks pushed further to the edge than ever before. Many contend that they are so far-flung and distributed that it is difficult to clearly define a border to defend.

Nemo Sine Vitio Est (No One is Without Fault) – Seneca the Younger

At its core, zero trust is the idea that your networks are already compromised. From simple malware running cryptominers to advanced foreign nation-state attackers who are carefully working to stay hidden to sabotage or steal your data, much like Attila, the invaders are inside your networks.

Complicating matters is that for every line of code written worldwide, new vulnerabilities may be introduced, hackers create more capable malware, and the number of possible attacks, backdoors and persistence tricks grows as well.

The defenses that we have traditionally erected—like firewalls, UTMs, IDS/IPS, and malware filters—remain critical but are no longer sufficient without greater visibility. While they create barriers and tripwires, a zero-trust environment requires acknowledging that these will be scaled, circumvented and tip-toed around to gain access to your networks. Think of these traditional static defenses as barriers that force your adversary to change their behavior, giving you a chance to identify them. This only works, however, if you are paying attention.

Despite these protective efforts, visibility is often poor in dispersed, hybrid network environments. Without either a well-defined border to defend or cybersecurity sentries keeping watch, it may be difficult to determine exactly when or where intruders have penetrated your networks.

It should not escape anyone that the complex SUNBURST supply chain attack of last year went undiscovered for the better part of a year despite compromising dozens, if not hundreds, of organizations and agencies. The alarm bells simply did not go off, because the attack vectors were never seen.

Nil Desperandum (Never Despair) – Horace

So how does one defend a sprawling network with shifting borders and an ever-increasing number of ways in which the adversary may slip in and stay in? It takes a paradigm shift in thinking and approach.

With the network border blurry at best, we no longer have a single, convenient point of telemetry collection to force the attacker into the open. Instead, we must rely on a patchwork of overlapping barriers and telemetry sources across the entire network stack.

Endpoint detection solutions must be combined with endpoint forensics and log collection. Infrastructure as a service requires a more traditional firewall approach while enabling the capture of packets and flows for cyber hunting. SaaS solutions will increasingly need to expose usage and security APIs to detect and gain insight into potential adversarial behavior.

The mantra of the next decade is going to be overlapping angles—do not deploy a defensive solution without sources of forensic visibility. Apply policy on the endpoint, the data center, IaaS and SaaS while collecting, storing and creating visibility angles on all.

Visibility telemetry, much like the Roman sentries of yesteryear, is the eyes and ears of the cyber hunter. This is how we spot the most dangerous threat of all: the one that knows how to stay hidden.

The Transformative Power of Technology: Understanding the Business and Economic Impact of Digitization https://www.riverbed.com/blogs/the-transformative-power-of-technology/ Tue, 01 Jun 2021 13:10:14 +0000 /?p=17070 According to the Harvard Business Review, only 23% of companies are non-digital, with few, if any, products or operations that depend on digital technologies. The vast majority of organizations are technology organizations. They have seen the benefit of automating tasks with computer-based systems, monitoring manufacturing environments with IP-based tools, migrating applications to the cloud, and using technology to streamline business processes.

Technology is no longer viewed as a cost center. It has become integral to almost every facet of business. Today, it’s not about embracing the latest and greatest technology for its own sake. Instead, today’s digital transformation is about evaluating how technology can help businesses do things faster, better, and cheaper.

In a 2020 Deloitte survey, digitally mature companies were three times more likely to report annual net revenue growth significantly above their industry average—across industries. And it’s not just about top line growth. Digital technologies create economic value in multiple ways. Here are three examples based on my professional experience:

Shifting from CAPEX to OPEX

Early in my career I worked with a large law firm in the New York area that wanted to go paperless. The goal was to reduce how much space they used to store thousands of boxes of files. At first glance that may seem like a minor initiative, but they owned a commercial building in a New York City suburb for just this purpose. The cost of the mortgage and maintenance was a huge drain on the business, and the logistics of moving and searching for files resulted in an incredible amount of lost time.

In other words, needing so much physical space along with this glaring inefficiency in their operational workflow was costing them money.  

Their end goal wasn’t to adopt a new technology. No one cared about cloud-based file storage. Their goal was to reduce costs and improve business processes. Technology, in this case, was a means to decrease expenses thereby increasing the law firm’s monthly bottom line.

The law firm had no desire to build out a data storage solution because it was too expensive. However, they were immediately able to see the direct benefit of deploying a cloud-based storage solution that saved them the enormous cost of the building and the cost of implementing a physical data storage solution of their own.

For many organizations, technology isn’t a profit center in the sense that it directly generates profit for the company. Instead, technology is a means of decreasing the cost of doing business in the first place. In the case of this law firm, I saw them transition from seeing technology as a capital expense to an operational one.

Improving Efficiency

This same idea applies to non-business entities, too. Several years ago, I helped design and implement a sensor network for a city’s wastewater treatment facility. The initiative was prompted by a major failure of one of the main intake pumps the year before. The root cause of that failure boiled down to unreliable, tedious, manual inspection of each treatment basin and the associated pumps and machinery.

The new sensors were IP-based, both wireless and wired, some with LTE backup connections. Readings would be taken programmatically and relayed continuously to a centralized sensor management system. Almost no manual intervention would be necessary. The sensor rollout included new infrastructure, collaboration endpoints, ruggedized tablets for plant workers, and an on-premises sensor management system with cloud backup.

The result was a highly efficient, reliable, and safe mechanism to manage the city’s entire facility. No one who ran the treatment facility cared about the cloud-based disaster recovery design. No one cared about the latest silicon chip the sensors used. No one cared what methods we used to collect packet information for the visibility tools. Instead, plant managers cared about reducing risk and improving operational workflow.

The results were an immediate decrease in incidents, far fewer calls to the pump manufacturer’s TAC, and visibility into systems operations they never had before. 

Competing on the World Stage

The examples above involved large organizations. However, remember that today all organizations are technology organizations. Consider a financial services firm in upstate New York with only 14 employees. The only way to survive during the recent pandemic was to rethink how they used technology to compete with much larger companies and generate more profit for the business.

We often think of financial services companies as huge organizations that span the world and have the most sophisticated technology running behind the scenes. However, there are also many small companies—even sole proprietors—that offer many of the same services, and these small companies have to find a way to compete with some of the largest financial services names in the world.

My goal was to work with this small company of 14 to do just that. We developed a new web platform with self-service functionality for their customers. Managing one’s own financial account isn’t a luxury anymore; it’s a standard. And part of the new platform was a collaboration solution for customers to engage a financial expert in a high-definition video chat from the comfort and safety of their home. Offering these features put this small company on the same stage as the global companies it competed with.

We also moved as many applications as we could to the cloud so that all 14 financial experts could sell and process transactions for any product, for any customer, from any location. This small company now had the ability to sell the same products their huge competitors offered, and they could serve their customers quickly, reliably, and with that special touch only a small company could provide.

There was a lot of new technology as part of that project. We used the latest hardware, software, and cloud solutions. However, all of it was centered on one thing—making the company more competitive and ultimately creating more revenue.

They saw an immediate benefit to sales, a dramatic increase to inbound leads, higher customer retention, and they were able to expand their portfolio of financial products.

This small upstate New York company is not alone, either. In fact, according to a McKinsey report in 2020, 38% of executives plan to invest in technology to make it their competitive advantage.

Digital Transformation to Transform the Business

Digital transformation used to be centered on the latest and greatest technology. Maybe it was upgrading analog to VoIP. Perhaps it was installing a new wireless network. Those technologies in themselves may be great, but today, the question isn’t how sophisticated or cutting edge a technology is.

Today’s concern is laser-focused on how we can use that technology to improve business operations, increase efficiency, decrease unnecessary expenses, and generate revenue. In other words, today’s digital transformation recognizes that technology is no longer a cost center even for the smallest organizations. Indeed, it’s one of the main tools we have to help businesses do things faster, better, and cheaper.

Securely Optimize SMB Traffic with the Riverbed WinSec Controller https://www.riverbed.com/blogs/securely-optimize-smb-traffic-riverbed-winsec-controller/ Mon, 17 May 2021 15:30:00 +0000 /?p=16936 Server Message Block (SMB) traffic is a very common type of network traffic in most organizations, and it’s one of the most common types optimized by Riverbed’s application acceleration technology. For years we’ve been able to ensure optimal delivery of SMB traffic using our SteelHead WAN Optimization solution. However, dealing with SMB in a Windows domain poses some problems.

A security and administrative problem

SMB optimization requires the server-side SteelHead to interact with the domain controller as a Tier 0 device. Many domain admins consider this a security and operational concern.

The Microsoft Active Directory Administrative Tier Model (recently renamed the Enterprise Access Model) is used to organize domain elements. The framework is made up of three tiers:

  • Tier 0 is made up of the most valued and secure elements of a Windows domain. Normally these are domain controllers, ADFS, and the organization’s PKI.
  • Tier 1 devices are domain-joined servers and domain admin accounts with reduced privileges. These could be application and database servers, but they could also be a variety of cloud services as well.
  • Tier 2 is comprised of the remaining domain-joined elements such as workstations and user accounts. Tier 2 elements are considered the least secure and by extension the least valuable in the operation of the domain.

For SMB optimization to function, the SteelHead appliance needs to operate as a Tier 0 device, right alongside the domain controllers.

SMB optimization also requires the SteelHead to use the replication user account to communicate with the domain controller. The replication user account has elevated privileges within a Windows domain compared to standard user and computer accounts or mundane utility accounts. It’s not best practice for a network device to use this type of account, especially when that device isn’t managed by domain administrators.

This leads to our second problem.

A SteelHead appliance is normally managed by the network operations team, not domain administrators.

This poses a problem for the overall IT operational workflow. Normally, Tier 0 devices are managed by domain administrators.

The solution

Riverbed solves these problems by introducing a proxy in between the domain controller and SteelHead appliance.

The WinSec Controller is a completely dedicated, non-network appliance that interacts with the domain controller as a Tier 0 entity. It isn’t used for unrelated daily network operations tasks, and it’s meant to be managed by a domain administrator.

To optimize SMB, the SteelHead intercepts the authorization request the client computer makes to the file server. The SteelHead then interacts with the domain controller as a Tier 0 device, using the replication user account to retrieve the file server’s key. With that key, the SteelHead can decrypt the user session key and the SMB flow, and ultimately optimize the traffic.

Sitting between the SteelHead appliance and the domain controller, the WinSec Controller proxies requests and responses between the server-side SteelHead and the domain controller. And, to secure communication between server-side SteelHead and the WinSec Controller, we use a standard IPsec tunnel.

Currently, the WinSec Controller has a physical form factor only, though there are plans to develop a virtual deployment option with complete feature parity.

SteelHead WAN Optimization appliances are the cornerstone of SMB traffic optimization. However, maintaining proper operational, administrative, and security workflows is also extremely important. The WinSec Controller gives us the opportunity to accommodate our Windows, systems, and security teams while at the same time providing the same level of optimization we’ve benefited from for years.

Watch the video below to learn more about Riverbed’s WinSec Controller solution.

NetIM Simplifies Alert Notifications For Splunk Users https://www.riverbed.com/blogs/netim-simplifies-alert-notifications-for-splunk-users/ Thu, 13 May 2021 20:01:00 +0000 /?p=16915 Application performance is significantly influenced by the performance of underlying infrastructure. IT organizations constantly monitor alerts originating from thousands of network nodes to ensure the highest degree of performance. Riverbed NetIM and Splunk integration allows enterprises using Splunk’s data platform for operational and security intelligence to ingest infrastructure alerts easily.

Built on a microservices architecture and a Kafka messaging framework, NetIM delivers the scale and performance necessary to monitor large hybrid enterprise infrastructures. NetIM simplifies operational workflows and day-to-day monitoring with a plethora of advanced capabilities, some of which include:

Splunk Alert Notification

Customers can send infrastructure alerts to Splunk Enterprise or Splunk Cloud through HTTP Event Collector (HEC) APIs. Splunk integration allows NetIM to consolidate infrastructure alerts for Security Ops, IT Ops, and DevOps workflows. NetIM provides an out-of-the-box template for Splunk notifications and the flexibility to customize that template to meet specific business needs.

Customizable Splunk Alert Notification Template
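
To make the integration concrete, here’s a minimal sketch of delivering an alert to Splunk’s HTTP Event Collector from Python. The HEC endpoint path and Authorization header are standard HEC usage; the alert fields themselves are illustrative, not NetIM’s actual notification template.

```python
# A minimal sketch of sending an infrastructure alert to Splunk's HTTP
# Event Collector (HEC). The endpoint and header are standard HEC usage;
# the alert fields below are illustrative, not NetIM's template.
import requests

HEC_URL = "https://splunk.example.com:8088/services/collector/event"
HEC_TOKEN = "REPLACE-WITH-YOUR-HEC-TOKEN"

alert = {
    "event": {
        "source_system": "NetIM",        # illustrative field names
        "device": "core-switch-01",
        "metric": "interface_errors",
        "severity": "critical",
        "message": "Interface Gi1/0/1 error rate above threshold",
    },
    "sourcetype": "netim:alert",
}

resp = requests.post(
    HEC_URL,
    json=alert,
    headers={"Authorization": f"Splunk {HEC_TOKEN}"},
    timeout=10,
)
resp.raise_for_status()
print("Alert delivered to Splunk HEC")
```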

Windows Visibility

Gain deep visibility into Windows environments by gathering instrumentation and telemetry from Windows computing systems. NetIM supports Windows Management Instrumentation (WMI) using PowerShell and aggregates Windows system information with other network metrics such as those obtained through SNMP or CLI. Data from NetIM can be presented through Riverbed Portal, a comprehensive operations dashboard across the hybrid enterprise.

NetIM WMI Data Collection with PowerShell

Self Diagnostics

When NetIM deviates from its expected performance, the root cause can be within the NetIM application or its hosted container environment. Isolating the cause is especially challenging when microservices are distributed across multiple physical hosts. NetIM provides self-diagnostic tools to isolate issues across guest and host environments, or even at the individual container or microservice level, for faster resolution.

Analytics

NetIM provides powerful analytics capabilities and simplifies troubleshooting through automation, real-time monitoring, and anomaly and violation detection. NetIM’s unique health scoring system for every device and interface can quickly communicate health status based on multiple metrics. Site-level or group-level summarized scores allow users to see global health status at a glance across the entire enterprise. NetIM’s intelligent analytics algorithms guide users to where performance issues originate, saving time and effort.

Automation

NetIM provides both northbound and southbound APIs to integrate with other IT systems and automate everyday tasks. Through these APIs, IT teams can automate adding, deleting, and updating devices, groups, interfaces and many other functions. By automating repeated and structured tasks, IT staff have more time to focus on projects of strategic importance to the organization.
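
As a sketch of what such automation can look like in practice, the snippet below onboards devices through a REST-style northbound API. The base URL, endpoint paths, and payload fields are hypothetical placeholders for illustration, not NetIM’s documented API.

```python
# A hypothetical sketch of automating device inventory through a
# northbound REST API. Endpoint paths and payload fields below are
# illustrative assumptions, not NetIM's documented API.
import requests

BASE_URL = "https://netim.example.com/api/v1"   # hypothetical base URL
SESSION = requests.Session()
SESSION.headers["Authorization"] = "Bearer REPLACE_ME"

def add_device(name: str, address: str, group: str) -> dict:
    # Create a monitored device and place it in a device group.
    payload = {"name": name, "accessAddress": address, "group": group}
    resp = SESSION.post(f"{BASE_URL}/devices", json=payload, timeout=10)
    resp.raise_for_status()
    return resp.json()

def remove_device(device_id: str) -> None:
    # Retire a device that is no longer monitored.
    resp = SESSION.delete(f"{BASE_URL}/devices/{device_id}", timeout=10)
    resp.raise_for_status()

# Example: onboard a small batch of branch routers from a list.
for i, addr in enumerate(["10.1.0.1", "10.2.0.1"], start=1):
    add_device(f"branch-router-{i:02d}", addr, group="Branch Routers")
```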

NetIM is built from the ground up to tackle the complexities of monitoring hybrid enterprise infrastructure. NetIM simplifies NetOps, SecOps, and DevOps workflows with capabilities such as Splunk integration, WMI support, and supportability tools. NetIM is a component of the Riverbed Network Performance solution, which tightly integrates device monitoring, flow monitoring, and full packet capture and analysis for faster troubleshooting of complex performance problems.

The Challenges of Enterprise Monitoring with TLS and PFS https://www.riverbed.com/blogs/enterprise-monitoring-with-tls-and-pfs/ Tue, 04 May 2021 19:47:38 +0000 /?p=16689 One thing that’s certain today is that network security is a moving target. As attackers become more sophisticated, there is a need to adjust the protocols we use and offer better data protection for end-to-end communication. This is true in the case of TLS. It’s long been a practice of many vendor products to load the server private key on an in-path network device so that the device can decrypt the payload in transit, do whatever it needs to, and re-encrypt and send it along its way. Most security organizations don’t recommend this practice; however, it’s interesting to note that many security vendors themselves use this method to provide IPS/IDS and other functionality.

So, what adjustments have been made in TLS to improve overall security? In previous versions of TLS, up to TLS 1.2, Perfect Forward Secrecy (PFS), also known as forward secrecy, is optional, not mandatory. In TLS 1.3, PFS becomes a mandatory function of the protocol and must be used in all sessions. This is significant because PFS negates the ability to load the server private keys on the in-path devices to perform decryption. Before getting too far along, let’s cover a few TLS points from a high level.

What is TLS?

For most who find this article, you’ll probably be familiar with TLS. TLS stands for Transport Layer Security. It’s a protocol that sits behind the scenes and often doesn’t get credit for the work it does. When you navigate to a secure website and the URL has “https” at the beginning, it’s TLS that gives you the “s.” In fact, some may even refer to it as an SSL (Secure Sockets Layer) connection, but it’s been TLS for quite some time now. The idea behind TLS is that it provides a secure channel between two peers. The secure channel provides three essential elements:

  1. Authentication
  2. Confidentiality
  3. Integrity

Authentication happens per direction. The server side is always authenticated; client-side authentication is optional. This happens via asymmetric crypto algorithms like RSA or ECDSA, but that’s beyond the scope of this article. Confidentiality is another way of saying “encrypted.” The integrity portion of TLS ensures that data can’t be modified in transit.

There are several major differences between TLS 1.2 and TLS 1.3, namely that static RSA and Diffie-Hellman cipher suites have been removed in TLS 1.3, and now all public-key exchange mechanisms provide forward secrecy. This raises the question, “What is PFS?”

PFS is a property of key agreement protocols that ensures your session keys aren’t compromised even if the server’s private key is. Each time a set of peers communicate using PFS, a unique session key is generated. This happens for every session that a user initiates. The session key is used to encrypt and decrypt the traffic of that session only.

The way it works without PFS is that during session establishment, a Pre-Master Secret is generated by the client and encrypted using the server’s public key. It is then sent to the server, which decrypts it using its private key. From there, each side generates a symmetric session key known as the Master Secret. This is used to encrypt data during the session. If an attacker gets the server private key, it can also generate the Master Secret, meaning it can decrypt any session it has captured, past or future.

To oversimplify the function of PFS, the client and server use Diffie-Hellman to generate and exchange new session keys with each session. Make sense from a security perspective? It should. PFS makes it much more difficult to get at user traffic and that’s the goal. But what does that mean to enterprise IT?
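
Before we get to that, here’s a minimal sketch of the idea in code, using Python’s third-party cryptography package. It illustrates an ephemeral elliptic-curve Diffie-Hellman exchange, the mechanism behind forward secrecy; it’s a conceptual illustration, not an implementation of TLS itself.

```python
# A minimal sketch of an ephemeral (Diffie-Hellman-style) key exchange,
# the mechanism behind PFS. Each session uses fresh key pairs, so a
# leaked long-term server key never exposes past session keys.
# Requires the third-party "cryptography" package.
from cryptography.hazmat.primitives.asymmetric import ec
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.kdf.hkdf import HKDF

def new_session_key(my_private, peer_public):
    # Derive a shared secret via ephemeral ECDH, then stretch it into a
    # symmetric session key with HKDF.
    shared = my_private.exchange(ec.ECDH(), peer_public)
    return HKDF(algorithm=hashes.SHA256(), length=32, salt=None,
                info=b"session key").derive(shared)

# Fresh ephemeral key pairs for ONE session; discarded afterwards.
client_priv = ec.generate_private_key(ec.SECP256R1())
server_priv = ec.generate_private_key(ec.SECP256R1())

client_key = new_session_key(client_priv, server_priv.public_key())
server_key = new_session_key(server_priv, client_priv.public_key())
assert client_key == server_key  # both sides derive the same session key
```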

TLS, PFS, and the Impact on Enterprise IT

With the idea of PFS in mind, what’s the impact on enterprise IT? The task of not only securing traffic within an enterprise but also providing the required performance and monitoring of that traffic puts things in a bit of a grey area. We don’t want attackers to get to the data, so we protect it. But we also need to see the traffic to monitor it and accelerate it. So, what is an enterprise to do?

Traditionally, we’ve used a proxy to rip open the packets, do something with them, pack them back together, and securely forward them. If you’ve worked with firewalls and IPS devices, then this isn’t a new concept. In fact, it’s quite common in today’s networks. For performance and security monitoring, the process is no different.

Still, even though security teams make use of this traditional method of exposing encrypted data, requests to do the same from the infrastructure and monitoring teams often meet with pushback. A visibility platform should be able to employ the same tricks as our security products (Riverbed NPM solutions do, for what it’s worth), and up until TLS 1.3 came about, they could.

But now we have a new hurdle to tackle. TLS 1.3 is making its way onto the scene, and the traditional methods of exposing the data are no longer an option. In fact, since PFS is optional in TLS 1.2, current NPM solutions will have problems wherever it’s used, even before TLS 1.3 is rolled out. Why? Because each session key is unique to a session and used for only a limited amount of data transmission. This makes packet inspection very challenging.

Addressing the Challenge with AppResponse

Despite the challenges that we face with advancements in security protocols, here at Riverbed we have continued to look for new ways to overcome them. Before AppResponse 11.9, we supported RSA key exchange. In this scenario, private keys are uploaded to decrypt the traffic: the pre-master secret is transmitted during the handshake, and we decrypt it using the provided private key. This is pretty much how any vendor would do it.

However, in some cases with TLS 1.2, and down the road with TLS 1.3, this is no longer a possibility. Therefore, we’ve started to make significant changes to how we handle the decryption of packets. With our feature enhancements, we can still provide deep visibility and performance monitoring for today’s IT organizations while maintaining the level of security that TLS 1.2 provides. How can we do this?

The PFS API

When a Diffie-Hellman key exchange is used, a unique master secret is used for each session. Using integration partners, we can retrieve the required keys, giving us the ability to decrypt and inspect the traffic. A new, always-on PFS API in AppResponse communicates with external sources, such as a load balancer or an SSL proxy, and retrieves the Master and Session Keys. I won’t go into all the details here, but it solves the problem in an elegant way. A glimpse of the functionality is seen in the image below.

The PFS API Process for Retrieving Master and Session Keys.

As the traffic is being sent to AppResponse, you might want to buffer it until the keys are received. If not, you’re going to lose visibility into those packets. For this reason, there’s a new feature that you can toggle on as seen below.

Buffering for PFS option in AppResponse

PFS processing in AppResponse may require a lot of data to be buffered while waiting for the key. For this reason, buffering is not enabled by default even if SSL encryption is enabled. Finally, if you disable PFS then all the keys received by the REST API will be discarded.

Wrap-up

It’s evident that the world is changing. When it changes in the name of security, enterprises can’t compromise visibility. Riverbed understands this and continues to provide innovative ways to address these unique challenges. To take a detailed look at Riverbed AppResponse and the capabilities discussed in this blog, watch our webcast Breakthrough Visibility & Performance for Encrypted Apps & Traffic and let your packets, whether sent in plain text or encrypted with TLS, start providing actionable answers to what’s happening in your network.

The Elevated Role of IT: Driving Business Forward and Beyond COVID-19 https://www.riverbed.com/blogs/the-elevated-role-of-it/ Wed, 21 Apr 2021 13:46:38 +0000 /?p=16885 At the end of April, I will celebrate my one year anniversary at Riverbed. I joined the company when Riverbed had already started working remotely due to the COVID-19 pandemic. To date, I have only met one person face-to-face—our CEO, Rich McBee—who I met when I was interviewing to join Riverbed. I don’t have an employee company badge (an extreme rarity in technology) and I’ve only met my team and fellow leaders at Riverbed via computer screens and cell phones. But today, almost 365 days later, I can honestly say that this last year has been both a challenging and incredibly rewarding journey. So, what are some of my biggest takeaways?

IT plays a critical role to make everything work together

As the CIO of Riverbed, my role is to make sure that IT systems that run the business operate smoothly and that we meet our deliverables to our customers. I also need to make sure our systems work for every employee—regardless of their location.

With employees becoming acclimatized to the idea of work from home (WFH), as well as meeting and transacting online, organizations will shift to WFH or work from anywhere (WFA) as a norm rather than an exception. That’s why the connection to company resources via a laptop is so vital. If your employees are not connected consistently and securely, then they’re unable to collaborate and be productive.

As with most organizations, the use of video and audio-conferencing tools at Riverbed has increased significantly. As a result, we’ve ramped up our technology infrastructure to account for the surge. We’ve also increased our investment in bandwidth expansion, network equipment, and software that leverages cloud services.

Digital transformation has been accelerated by 10x

Most IT organizations have a short-term action plan and a long-term strategy. But COVID-19 served as a forcing function for both simultaneously. Digital transformation strategies that were five years out needed to be implemented in weeks. And having a strategy in which IT sits directly with the CEO is the only way to truly drive business forward at such a rapid pace.

When we look back on last year, we realize the milestones we accomplished and the radical change that was accelerated to connect people with technology. At Riverbed, we accelerated our cloud-first strategy to operate more effectively. I would have never dreamed that it was possible to work 100% remote in a matter of weeks, but having a framework in place enabled the transition to happen quickly. And because Riverbed helps customers with network and security operations, as well as accelerating access to networks and applications regardless of a user’s location, we’re also leveraging our own solutions to help ensure user productivity and the overall performance of our business.

A remote team is a more productive and happier team

You may think of IT as technology forward, but in the end we are a people-driven function. This last year, as we faced incredible pressure to keep our IT systems driving the business, I’ve seen a huge performance increase in our team. The time we used to spend commuting is now spent closer to home, and that work/life balance makes for a happier workforce that is just as productive.

As I look back to the last year, I’m very proud of the work that we’ve accomplished at Riverbed. Nothing impacted our ability to run productively and meet all of our customers’ deliverables. It wasn’t easy, but our team has left a lasting impression on the business and our employees. I just can’t wait to meet my fellow Riverbed-ers face-to-face in the coming months.

The Do’s and Don’ts of Marketing in a Crisis https://www.riverbed.com/blogs/b2b-marketing-in-a-crisis/ Mon, 05 Apr 2021 14:41:56 +0000 /?p=16845 In March 2020, I was in London meeting with customers, partners and teammates. Upon my return to the Bay Area, organizations were preparing for what was believed to be a brief lockdown to slow the spread of COVID-19. Weeks turned into months, and here we are, more than a year later, still dealing with the virus and its effects on the global economy.

When a prolonged crisis like a pandemic takes place, market conditions are highly unpredictable—making it a real challenge to create and implement relevant marketing strategies. Here are my thoughts on how B2B marketers can maintain relevance and drive stability and growth for their organizations during times of uncertainty:

1. Don’t stop marketing

In the early days of the pandemic, the fear of being perceived as “exploiting a crisis” caused many organizations to pull back their marketing efforts. The fact is, during difficult times, it’s essential to remain visible in your customers’ minds. The key is to be relevant and get laser focused on what your customers need, WHEN they need it.

For example, in late Q1 and through Q2 2020, we concentrated exclusively on marketing and selling our remote work solutions to our existing customers. At that time, they were all dealing with the IT challenges of enabling remote work for their entire workforce. What they needed were solutions that gave them visibility and control over network and application performance so their work-from-home employees could stay productive.

With their immediate needs met, our customers began looking at the long-term ramifications of the pandemic: the future of work, digital acceleration, cloud transformation, business resiliency and network security. So, in the second half of 2020, the timing was right to broaden our marketing activities. We expanded our target audience, enabled our partners, and launched new campaigns around the visibility and performance solutions germane to those concerns.

Even if your organization is in a situation where your customers are not buying right now, keeping your brand in front of them improves perception. Just sending a reassuring message or helpful resource goes a long way in establishing trust and loyalty.

2. Do reevaluate your go-to-market mix

According to McKinsey’s B2B Decision-Maker Pulse Survey, 96 percent of businesses have changed their go-to-market model since the pandemic hit, with the overwhelming majority turning to multiple forms of digital engagement with customers.

With digital and remote engagement proving to be as effective as, or more effective than, traditional field sales, it’s imperative that sales and marketing leaders reevaluate their go-to-market mix through the lens of their buyers’ digital experience.

For marketers, this means taking a close look at the effectiveness of virtual events, online content, and digital channels such as social media, search, and email. And pay special attention to your websites, your primary channel for engagement. Are they working for you 24×7? Are you harnessing the behavioral data generated online to personalize and optimize engagements?

We have a long way to go at Riverbed in this regard, but we’re making progress by adding conversational marketing, hiring more data analysts, and implementing AI and ML powered technologies to automate customer/prospect engagement across our websites and through our sales organization.

3. Do double-down on account-based marketing

In an economic downturn, when everyone is asked to do more with less, use data to guide your spending decisions. Last year, we took the time to conduct an extensive analysis of our installed base to inform our GTM and RTM models for 2021. This data also proved valuable in making choices on where to invest our marketing dollars.

What the analysis found was that the law of the vital few is true. Within our base is a set of really loyal customers who continue to buy from us, year after year. These “franchise” customers have all been impacted by the pandemic, but in different ways. Some were hit hard, others thrived, and every one of them is transforming in some way to adapt to their current environment and an uncertain future.

That’s why we made the decision to further invest in targeted account-based marketing into these franchise customers as well as a set of “look-alike” accounts. In our case, it was critical that we developed a deeper understanding of each customer’s unique situation so that we could help our sales counterparts accelerate, solidify, or increase opportunities. ABM gives you an ability to do full lifecycle marketing, not just top-of-the-funnel acquisition. And that’s the ultimate goal of marketing.

4. Do strengthen your agility

During unpredictable times, agility and flexibility are key to responding to constantly and rapidly changing customer needs. It’s essential that organizations place a premium on building these skills within their teams, investing in technologies or process improvements that enable faster decision making, and streamlining operations for maximum efficiency.

I’m still amazed at how our employees demonstrated their ability to pivot based on new priorities, try new tactics, and double down on those that worked or move on from those that didn’t. This mindset, and their ability to navigate ambiguity, enabled us to execute and drive results, even while making the transition to working from home.

5. Don’t deprioritize innovation

Riverbed was founded on the creation of a new technology category: WAN optimization. Our product called SteelHead was so entrenched into our brand identity that to this day, customers say “Oh yes, I have a Riverbed on my network.”

When you are a category creator, you win. Followers may have some success, but they will never come close to being the market leader. However, category relevance changes over time. That’s why it’s so important for organizations to always prioritize innovation, even in times of crisis.

According to a 2021 Chief Outsiders Survey, nearly half of CMOs believe the pandemic created as many, if not more, opportunities for businesses than it eliminated. Think about how many industries have already transformed their business models—retail, restaurants, entertainment, healthcare, just to name a few. Yes, the pandemic wreaked havoc, but disruption fuels innovation.

As marketers, it’s our job to know what customers are thinking about now AND what they will be thinking about in the next 1-3 years. This insight is key to both reevaluating existing portfolios and identifying new categories that seize opportunities created by changing customers’ needs.

Closing thoughts

During the worst of the pandemic, the value of marketing in driving business priorities such as brand awareness and customer retention was evident. Now, as we look to the future, we must improve our ability to understand our buyers’ mindset, provide an exceptional brand experience, and respond with agility to drive innovation and growth.

Our customers are accelerating their digital transformation initiatives to stay competitive in this new business and economic environment. I’m looking forward to engaging with them on ways Riverbed can add value in their journey—and hopefully soon, gathering with customers, partners and teammates, in person!

4 Trends Guiding HR in 2021 and Beyond https://www.riverbed.com/blogs/4-trends-guiding-hr-in-2021-and-beyond/ Fri, 26 Mar 2021 19:05:30 +0000 /?p=16811 It’s hard to believe that it’s been over a year since the onset of the global pandemic. I remember thinking initially that our offices would be shut down for a couple of weeks, maybe a month, and then life and work would return to normal. But 365+ days later, we’re still battling COVID-19 and employees are still primarily working from home.

The pandemic presents challenges HR leaders have never faced before—challenges made more complex by constantly evolving requirements and restrictions that differ city by city, state by state, country by country. There was no playbook for how to reengineer every aspect of the employee experience in a pandemic, yet that is exactly what HR teams have had to do.

Even as we begin to recover, it’s clear that COVID-19 will have a lasting impact on how organizations operate and manage their workforce. HR leaders need to prepare for new realities facing businesses and evolve their people strategies accordingly. Here are four global work trends guiding 2021 and beyond:

Trend #1: Significant and permanent increase in remote work

Pre-pandemic, an estimated five percent of full-time employees with office jobs worked from home at least three or more days per week. Now that many organizations have experienced the benefits of remote work (cost savings, increased productivity, improved recruitment and retention, etc.), that figure is expected to be at least 40% one year after the pandemic subsides.1

In flexible, hybrid work models, having adequate and reliable technology is essential to employee engagement and productivity. HR and IT teams must work together to deliver remote work solutions that provide seamless and secure access to the resources employees need to perform their jobs, no matter where they work.

Be sensitive, however, to technology burnout. The phenomenon of “Zoom fatigue” is real. I’ve encouraged my team to vary their communication methods (voice, text, email, instant message) and to schedule 45-minute video conference meetings so that there’s time for breaks in between.

Trend #2: Greater focus on employee wellness programs

In a recent Global Human Capital Trends survey, 80% of business leaders identified well-being as their top-ranked priority for organizational performance and success.2  That’s no surprise given the abnormal difficulties of 2020. The pandemic, economic uncertainty, political turmoil, social injustices, and natural disasters have taken an enormous toll on us—mentally and physically.

People are like icebergs in that you can’t see what’s beneath the surface. We recognized that our employees would need additional resources to help them cope with these crises, as well as the added stress of making the transition to work from home. We enhanced our global wellness program to address the “whole person” with new services covering mental fitness, financial well-being, and confidential counseling that extends to every member of an employee’s household.

Trend #3: Spotlight on diversity and inclusion (D&I)

History tells us that during times of crises, D&I initiatives are at risk as businesses focus on their most pressing needs. But that certainly wasn’t the case in 2020. As the world combatted COVID-19, a massive protest against systemic racism and social injustice erupted, prompting business leaders to take a serious look at their organizations’ D&I practices.

Riverbed has always been a place of inclusion, diversity and community, but we know we can and must do more. With full support from our Board of Directors and senior leadership, we’ve extended our D&I programs to include a special task force focused on creating new opportunities for employees to connect and get involved. This task force also looks at D&I barriers within our recruitment, retention, advancement and onboarding practices.

With so many organizations repledging their commitment to diversity and inclusion last year, the spotlight will be on how these organizations, and the business community at large, can make an impact on issues of racism and social inequity, both within and beyond the workplace.

Trend #4: Untethering talent from location

In a virtual world, enabled by the right technology, talent plans are no longer restricted by location or a candidate’s willingness to move. This means employers can source the best talent from anywhere in the world and reduce costs associated with relocation and office setup. And it means more opportunities for job seekers, who perhaps live in rural or more remote areas of the world, to pursue roles that were once off limits to them because of where they call home.

Larger talent pools won’t necessarily make recruiting easier, especially in the tech sector, where there continues to be fierce competition to attract and retain talent. This is why factors such as culture, honest and empathetic leadership, and proven resilience are so important. These are the differentiators that give employers an advantage over the competition.

Looking back to move forward

While the pandemic has been difficult for all of us, we can find positive outcomes. At Riverbed, we’ve reached even higher levels of frequency and transparency in our communications. We’ve significantly advanced our diversity and inclusion efforts, which are core to our company culture and values. We’ve quickly transformed our learning and development courses, exceeding pre-pandemic enrollment. And through it all, we’ve kept our employees’ safety and well-being front and center.

Eventually, as restrictions are lifted, we’ll begin the complex task of returning to the workplace—a workplace that will be quite different than the one we left some 365 days ago. It will be a gradual process that safeguards employees in every way and acknowledges varying levels of personal readiness.

Looking back, I’m inspired by the resilience we’ve shown as an organization. How we have become closer to each other despite being physically apart. And I am confident in moving forward, knowing that it’s in our DNA to power through any challenge that comes our way.

1. Source: The Conference Board online survey of 330 HR executives, September 14 and 25, 2020, published as Adapting to the Reimagined Workplace: Human Capital Responses to the COVID-19 Pandemic

2. https://www2.deloitte.com/us/en/insights/focus/human-capital-trends/2020/designing-work-employee-well-being.html

Alerting and Troubleshooting Network Performance with Unified NPM https://www.riverbed.com/blogs/alerting-and-troubleshooting-network-performance-with-unified-npm/ Thu, 25 Mar 2021 21:59:00 +0000 /?p=16746 Unfortunately, IT is often the last to know when there’s a problem with the network. In fact, it’s often the end users experiencing problems who alert the help desk. A proactive IT department needs to be able to detect incidents as they happen, and ideally anticipate a potential issue before it happens. The result is a reduced time to resolution, an improvement to overall uptime, and the ability to solve a smaller problem before it turns into a catastrophe.

Riverbed’s network visibility portfolio brings together sophisticated monitoring, alerting, and troubleshooting capabilities under one banner. NetIM, NetProfiler, and Portal integrate seamlessly to bring you a robust solution that can detect and alert on very specific network anomalies and application performance issues.

Visibility for the Underlying Infrastructure

NetIM is focused on the actual network infrastructure itself. In other words, it looks at what’s going on with the routers, switches, firewalls, and all the devices that underpin your applications. With NetIM, you can monitor device health and drill down into specific metrics such as interface errors, packet drops, CPU utilization, link utilization, and packet discards.

NetIM leverages a variety of approaches for real-time monitoring, including:

  • Device APIs
  • SNMP
  • Device CLI
  • Syslog
  • WMI
  • Synthetic testing

With the information coming from the network, NetIM can then build robust visualizations of your network topology to help you understand the path an application takes through the network.
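
As a simple illustration of the raw data behind approaches like SNMP polling, the sketch below reads a single interface-error counter using the third-party pysnmp package. The device address and community string are placeholders.

```python
# A minimal sketch of the kind of raw SNMP polling that underlies this
# style of monitoring: read one interface-error counter from a device.
# Requires the third-party "pysnmp" package (v4 hlapi shown here).
from pysnmp.hlapi import (SnmpEngine, CommunityData, UdpTransportTarget,
                          ContextData, ObjectType, ObjectIdentity, getCmd)

error_indication, error_status, error_index, var_binds = next(getCmd(
    SnmpEngine(),
    CommunityData("public", mpModel=1),              # SNMPv2c community
    UdpTransportTarget(("192.0.2.1", 161)),          # placeholder device IP
    ContextData(),
    ObjectType(ObjectIdentity("IF-MIB", "ifInErrors", 1)),  # interface 1
))

if error_indication or error_status:
    print("SNMP poll failed:", error_indication or error_status.prettyPrint())
else:
    for name, value in var_binds:
        print(f"{name.prettyPrint()} = {value.prettyPrint()}")
```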

Monitoring Applications

NetProfiler is a very powerful monitoring platform you can use to analyze application flows. It combines flow data and packet-based flow metrics to provide full-fidelity traffic monitoring. NetProfiler takes you beyond the underlying infrastructure to provide behavioral analytics such as baseline traffic patterns and dependency mapping.

Network data and flow records are captured using NetFlow and sFlow, but NetProfiler also collects AWS VPC Flow Logs, IPFIX information, and data directly from Riverbed tools such as the Riverbed NPM Agent and AppResponse.

For cloud visibility, NetProfiler can be deployed in AWS and Azure. And whether NetProfiler is deployed in the cloud or on premises, it can always collect flow telemetry from resources in the public cloud.

Bringing Everything Together

Portal is the dashboard that brings it all together in one place. It aggregates telemetry from NetIM and NetProfiler, and also integrates with other Riverbed visibility tools including AppResponse, UCExpert, Aternity EUEM, and Aternity APM.

With Portal, you have an active launchpad of interactive dashboards, application discovery mechanisms, and network path visualizations. A network operator can begin their daily monitoring and troubleshooting with Portal, a single source of truth for what’s going on in the environment.

Remember that Riverbed’s visibility portfolio is a collection of powerful tools that integrate with each other. Each individual component is designed to focus on one aspect of NPM. Together, they cover all the bases of network and application performance monitoring. And with Portal as our visibility homepage, they work together as one visibility solution.

Watch the video below to walk through a scenario in which end users are reporting bad network performance. Using Portal, NetIM, and NetProfiler, we diagnose the problem and discover a specific application overloading the WAN interface of the branch router.

Watch Video

Adapting Work Environments for a New Normal https://www.riverbed.com/blogs/adapting-work-environments-for-a-new-normal/ Sun, 21 Mar 2021 12:48:03 +0000 /?p=16795 I can’t believe it has been 12 months since the world locked down from the pandemic. A year ago, I was just finishing up a business trip from Europe and was back in my San Francisco office when we decided to close the office for two weeks. We all packed up and prepared to work remotely for the next few weeks, but then two weeks quickly became two months, then four, etc. We all know the story. Fast forward, and here we are today, still working from home one year later. It is amazing how we have learned to adapt, changing the way we work and the way we engage with our customers.

Virtually overnight, we enabled our employees to safely work from home and introduced new capabilities to make everyone productive. Company plans were suddenly challenged, and significant amounts of time were unexpectedly allocated to understanding and guessing the implications of the pandemic. There was no playbook for what to do when one billion workers suddenly start working entirely from home because their offices have shut down, while other businesses shut down completely. For businesses around the globe, discussions of growth and expansion were replaced with discussions contemplating furloughs, layoffs and pay cuts in an effort to maintain business continuity. I think we all underestimated the increase in our personal workloads, the reality of Zoom fatigue, and how quickly it would set in, but we have continued to adjust and thrive. The ability to pivot in times of crisis is something we have all mastered this past year, and it is a lesson we will not soon forget.

While some markets felt pain, other markets flourished. Cloud and security became top of mind. Companies began to accelerate their cloud-first strategies to address the growing demands of a nomadic workforce and rethought their future physical infrastructure requirements, including real estate. A new normal was forming that will change the way we work moving forward. For Riverbed, that meant accelerating our strategy to move from an appliance-heavy business to an end-user performance business. Our Client Accelerator product enabled users to have an in-office experience accessing applications while they worked from home or anywhere, without negatively impacting productivity.

Companies also faced a new challenge: visibility into their infrastructure and new vulnerability risks with everyone away from the safe haven of the office. Increasing demands on monitoring networks and applications put pressure on IT organizations to provide solutions to ensure business continuity and security. Riverbed’s Network Performance solutions helped customers overcome these challenges and, more importantly, helped customers through challenges when other products proved insufficient or exposed vulnerabilities.

The word “hybrid” has taken on a new meaning this past year and we have proven that many industries can and will work from anywhere. The need to create a secure and seamless experience for employees and customers, both in the office and remotely, will remain part of the fabric of our work life going forward.

Another learning through the pandemic was adapting sales and customer relationships to this new environment. Being able to create a connection with the customer without being face to face can be challenging. However, new creative communication vehicles became critical for developing and maintaining customer connections, and establishing value in a virtual environment became the new art. Through virtual customer events, sales kickoffs, partner summits and customer executive sponsorships, we were able to stay ahead of the curve and provide our sales teams and customers what they needed to be successful. We began to master virtual events and create experiences that could scale much more broadly than being trapped in a single location. This allowed us to share experiences with broader audiences that may not have an opportunity to travel even in normal circumstances.

The pandemic has been challenging for everyone on both a personal and professional level. It has forever changed us, the way we live and the way we work. As leaders in the business, the crisis has changed the way we communicate, engage and lead in times of ambiguity and uncertainty, with a higher sense of empathy and purpose. As we look ahead to a world where the pandemic is not at the forefront, I know there is a new normal on the horizon, and we have learned a great deal this year that will help us all navigate what’s next. Finally, we must all take note of our experiences and pass our learnings on to the next generation of business leaders, to prepare them for something that we did not have the opportunity to prepare for ourselves.

Troubleshoot Financial Services Applications with AppResponse https://www.riverbed.com/blogs/troubleshoot-financial-services-apps-with-appresponse/ Fri, 19 Mar 2021 12:30:00 +0000 /?p=16658 The financial services industry has led the way in network security and data encryption, but it also relies on frequent transactions that need to go through no matter what. Troubleshooting financial services applications can be difficult because it means having visibility into traffic that’s usually encrypted.

Riverbed’s Network Performance solution can decrypt certain types of traffic, giving network operators the visibility they need to troubleshoot problems quickly. AppResponse, part of our Unified NPM suite, focuses on application performance monitoring and provides continuous, full-fidelity packet capture of targeted applications. With AppResponse, no data is lost.

Network visibility derived from a completely reliable packet capture is very powerful, but AppResponse can go further by decrypting certain PFS, SSL, and TLS traffic. This gives network operators the ability to troubleshoot problems with traffic that they would otherwise be blind to.

Decrypting Application Traffic

First, IT provides the private server key to AppResponse. We can then intercept the session key and decrypt certain non-PFS traffic in real time. And though non-PFS cipher suites are actively discouraged today, they’re still commonly used in enterprise environments.

Next, to decrypt traffic that does use PFS, AppResponse exposes an API that allows an external entity such as an SSL proxy to send ephemeral keys to it. Typically, this means deploying software agents to Linux and Windows systems, which then send their private server keys to AppResponse. We can also run a relatively simple script on an F5 load balancer to send the necessary keys.
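
To make the idea concrete, here’s a hypothetical sketch of the sending side of such an integration, expressing each session secret in the standard NSS key log line format. The endpoint URL, token, and payload shape are illustrative assumptions, not the documented AppResponse API.

```python
# A hypothetical sketch of an external source pushing per-session TLS
# secrets to a key-ingest endpoint. The secrets are shown in the
# standard NSS key log format; the URL and payload shape below are
# illustrative assumptions, not the documented AppResponse API.
import requests

INGEST_URL = "https://appresponse.example.com/api/pfs/keys"  # hypothetical
TOKEN = "REPLACE_ME"

def push_session_secret(client_random_hex: str, master_secret_hex: str) -> None:
    # One NSS key log entry: "CLIENT_RANDOM <client_random> <master_secret>"
    keylog_line = f"CLIENT_RANDOM {client_random_hex} {master_secret_hex}"
    resp = requests.post(
        INGEST_URL,
        json={"format": "nss-keylog", "entries": [keylog_line]},
        headers={"Authorization": f"Bearer {TOKEN}"},
        timeout=5,
    )
    resp.raise_for_status()
```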

AppResponse isn’t able to decrypt all public web traffic, but for internal applications it can see what’s happening with encrypted traffic on a transaction-by-transaction basis. Whether the issue is with the network, the server environment, or the end-user’s client, AppResponse can provide granular visibility into every component of an IP conversation.

Finding Correlation with AppResponse

When we open AppResponse, we start with a view of all traffic. We can locate our applications on the list in the lower left of the page, or we can open the Insights menu and select Applications there.

View all traffic and applications from the main screen of AppResponse

If AppResponse has the API key and/or server key, it will be able to show a network operator details for a secure application with full fidelity and granularity. For example, notice in the image below that we can see the transaction metrics of encrypted application traffic including page times, payload transfer times, and server response times.

See the transaction metrics of encrypted application traffic

We can also visualize patterns in network activity, which is a great way to see if there’s a correlation between specific metrics and application behavior. We call this a TruePlot visualization and it can be modified to focus on specific metrics or date ranges.

Use TruePlot visualization to see correlation between specific metrics and application behavior.

Correlation is just a clue, though, so from AppResponse a network operator can select a single transaction and launch Transaction Analyzer, a companion tool that allows us to look at every single step in an IP conversation.

Going Deeper with Transaction Analyzer

Transaction Analyzer can look at specific protocols and applications to provide a readout of everything going on between two hosts. For example, a financial services organization experiencing slow application performance can start with AppResponse to identify the possible cause of the behavior. Then they can use Transaction Analyzer to drill down into the back-and-forth communication between the specific client and server. Look at the image below and notice how easily we can focus on one transaction.

Transaction Analyzer makes it easy to drill down into the back-and-forth communication between the specific client and server.

Because AppResponse stores all of the packets captured when an application is in use, we can use Transaction Analyzer to get as granular as we need to. Transaction Analyzer works in real time, so we can also use it to take ad-hoc traces between any hosts in the network as the problem is happening.

Network security and data encryption are certainly important to the financial services industry, but so is the ability to resolve application performance problems as quickly as possible. Blind spots in network activity aren’t an option when every application transaction is money.

AppResponse and Transaction Analyzer, two foundational components of the Riverbed Unified Network Performance Management solution, provide IT the ability to troubleshoot encrypted application problems in real time and keep the business moving.

Visit our content hub to learn more about our solutions for financial services organizations.

365 Days of 100% Remote Work https://www.riverbed.com/blogs/365-days-of-100-percent-remote-work/ Tue, 16 Mar 2021 19:22:32 +0000 /?p=16779 Just over a year ago today, I wrote a note to our employees that we were planning to close our offices for at least two weeks due to COVID-19. While we didn’t know it then, two weeks would turn into 52 weeks, and still counting.

As CEO of Riverbed, and in every leadership role throughout my career, I never went a week without either working at headquarters, visiting another office, or meeting in person with customers, partners or investors. But for the last year, I have worked 100% from home, have not used a traditional office phone once, and have seen members of my leadership team in person only on a few occasions. Our entire organization has been remote for a full year! After dozens of meeting interruptions from my dogs and 365 pots of coffee, here are some of my lessons learned and what’s next for the future of how we work.

People always come first

When the pandemic hit, our top priority was the health and safety of employees, customers, partners, and the community. And in times of crisis, the role and importance of communications is crucial. We communicated with employees frequently and transparently in town halls, Q&A forums, BU sessions, staff meetings, and one-on-ones. Things were changing fast—by the hour at times. It was important to the leadership team to stay close to our people and help them through this challenging time. We also asked our employees to stay close to our partners and customers—many of whom were also dealing with challenging and stressful times at home and work. When you are there for your customers, partners and employees during tough times, they remember.

We worked 100% remote and stayed very productive

At the onset of the pandemic, we did an initial test with one business unit in early March, and a week later, every single employee was remote. I believed we’d be okay, but this was new territory for our company, our systems, managers and culture. What we experienced with our team is that when you have high performers, they perform and deliver regardless of location.

Fortunately, our company also offers software that helps. Client Accelerator delivers application acceleration to mobile workers by optimizing laptops and PCs; and SaaS Accelerator boosts the performance of popular SaaS applications such as Microsoft Office 365 and Salesforce by reducing network latency. These solutions, along with video and collaboration apps gave our employees an in-office experience at home and ensured our team was able to stay highly productive even in the midst of a pandemic where we were 100% remote.

Business does not stop in a pandemic

It may pause momentarily, but it keeps moving. You can take the opportunity or be paralyzed by it. Riverbed, like many others, took the challenge and navigated the business through the storm. There were good days and there were turbulent days. But we kept going, and I’m very proud of our team’s perseverance. We pivoted to meet the pressing needs of customers, leveraged video collaboration tools for sales calls, and did a lot of contingency planning.

For our customers, we placed greater focus on our work-from-anywhere solutions as well as our Network Performance offerings. As employees started working remotely, organizations needed greater visibility across the network to ensure users were up and running and that productivity and performance were not impacted. Enterprise and government customers are also finding that as more users go remote, the security perimeter greatly expands. Leveraging network visibility can play an important role in identifying and mitigating cybersecurity threats by helping with threat hunting, incident response and forensics.

We’re capable of doing a lot more than we think

When I worked at Mitel, we were planning for a day when collaboration tools in business would become ubiquitous and digitization would drastically change business models. We were beginning to see progress, but mass market adoption looked to be five years out. And then COVID hit, and a five-year build happened in THREE months! Businesses, healthcare organizations, government agencies, manufacturers, banks, retailers all evolved their business models, while supporting one billion remote workers overnight (up from approximately 350 million).

There were many heroes. As a CEO of a technology solutions provider, I can’t say enough about our customers—and how big of a role the CIO and IT organizations played in helping businesses and governments handle this massive and immediate change, and help maximize productivity and performance in their organizations during a very tough environment. Yes, we are capable of doing a lot more than we think.

The future of work will be different and better

Post COVID, many organizations will shift toward hybrid models, with employees increasingly remote or working from anywhere (#WFA). Offices won’t fully vanish. However, HQ and regional offices will increasingly become collaboration centers, with employees coming in for critical meetings or projects and using large hoteling and collaboration spaces. This is the direction Riverbed is moving. Prior to COVID, approximately 70% of our employees worked in offices full-time. After COVID, this will drop to 20%, with most employees working remote and coming into the office a day or two a week. While the percentages may vary by industry and region, and there are still roles that are best served in person, there is a clear shift toward hybrid and remote work, with 600 million people expected to work remote by 2024, up approximately 70% from before the pandemic. And while traveling to meet customers and colleagues will still remain important, we will also find ourselves doing some of those meetings face to face using video.

What’s encouraging is the future of work will bring forward a number of benefits we always strived for—fewer commutes for better work/life balance and less impact on the environment; greater conveniences and experiences with new digital models; and the democratization of talent, where opportunities once out of touch due to location will be within reach for future generations. With the future of work, we are starting to unlock the true promise of technology.

In closing, this has been a very challenging year with a global pandemic that has impacted so many. But it has also taught us many lessons in business and life—what matters most, what we’re capable of, and how the future of work will be better for us and our world.

Troubleshoot Multi-cloud Applications with AppResponse https://www.riverbed.com/blogs/troubleshoot-multi-cloud-applications-with-appresponse/ Mon, 15 Mar 2021 12:30:00 +0000 /?p=16629 For years network visibility has been about looking at traffic flowing through switches and routers on our local and wide area networks. Those of us who were a little ambitious might also take a packet capture every so often only to spend the entire afternoon studying hundreds of lines in our capture file.

However, the way we do business today requires visibility beyond our switches and routers. We need visibility into what’s going on with our resources in the public cloud.

Hosting resources in Azure and AWS poses its own unique challenges to an IT department. It’s difficult to troubleshoot what can’t be seen, and that’s exactly where the Riverbed AppResponse visibility solution comes in.

Visibility for the Cloud

IT departments don’t own the environment their cloud resources reside in. An IP conversation between an end user and a server in Azure will traverse the local network, the service provider’s network, and the cloud’s network.

It’s difficult to see what’s going on with servers and network devices when almost the entire environment belongs to someone else.   

AppResponse solves this problem by continuously capturing traffic between cloud-hosted applications and everything they communicate with. The packets never lie, so with a reliable, full-fidelity capture of activity at the most granular level, a network operator has both real-time and historical data down to microsecond intervals.

This works in hybrid cloud and multi-cloud environments as well. A hybrid cloud is a combination of a private data center and public cloud environment, and that has become a standard deployment method for many organizations. Multi-cloud environments introduce complexity by virtue of having multiple public cloud vendors often using disparate technologies.

AppResponse Cloud captures traffic in hybrid and multi-cloud environments just as well as it does for local resources because it focuses on packets, the purest and highest-fidelity information that exists.

How it Works

AppResponse is deployed as a virtual machine in Azure or AWS and works with Riverbed Agents, AWS VPC Traffic Mirroring, Azure Virtual Network Taps, and various cloud brokers. In this way, AppResponse can capture all the packets flowing to and from a particular cloud-hosted application.
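
As an illustration of one of these building blocks, the sketch below uses boto3 to create an AWS VPC Traffic Mirroring session that copies an instance’s traffic toward a capture appliance. The resource IDs are placeholders, and the mirror target and filter are assumed to exist already.

```python
# A minimal sketch of mirroring an instance's traffic to a capture
# appliance with AWS VPC Traffic Mirroring via boto3. Resource IDs are
# placeholders; the mirror target and filter must already exist.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

session = ec2.create_traffic_mirror_session(
    NetworkInterfaceId="eni-0123456789abcdef0",     # source ENI to mirror
    TrafficMirrorTargetId="tmt-0123456789abcdef0",  # e.g., the capture VM
    TrafficMirrorFilterId="tmf-0123456789abcdef0",  # which traffic to copy
    SessionNumber=1,
    Description="Mirror app traffic to packet-capture appliance",
)
print(session["TrafficMirrorSession"]["TrafficMirrorSessionId"])
```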

Keep in mind that Riverbed’s entire visibility portfolio is really one solution with several tools under the hood. AppResponse integrates with our other visibility tools such as NetProfiler, NetIM, Transaction Analyzer, Packet Analyzer, and Riverbed Portal. For external domain information, AppResponse provides links to ARIN WHOIS Search, Geotool, and a variety of third-party network tap aggregators.

When network operators open up the AppResponse dashboard, right away they’re presented with a wealth of information for their cloud applications. Drilling down is a matter of choosing an application, a time range, an IP address, or whatever information is relevant.

Network teams can view trends, patterns, baselines and also drill down into individual transactions and packets for root-cause analysis. Because they’re looking at individual packets between clients and application servers, they get a much better picture of what’s happening end-to-end. That means with AppResponse Cloud, they can determine if the problem is with the network, the application, or with the client.

Packet captures on LANs are certainly very powerful for troubleshooting, but being able to capture application traffic in hybrid and multi-cloud environments brings us to the next level. Today’s applications are hosted in on-premises data centers, in hybrid cloud, and in multi-cloud environments, so visibility means seeing what’s going on beyond just our switches and routers. AppResponse enables us to find the root cause of performance problems with our applications—no matter where they are.

Cyber Security Threat Hunting Using Network Performance Management Metrics https://www.riverbed.com/blogs/threat-hunting-using-network-performance-metrics/ Mon, 08 Mar 2021 09:30:38 +0000 /?p=16640 If you are familiar with Network Performance Management (NPM) metrics, you’ll recognize the following key performance indicators. But did you know that these same KPIs, along with many other metrics, are helpful for cyber security threat hunting?

  • Top-Talking IP Addresses
  • Typical Port and Protocol Usage
  • HTTP Return Code Ratio
  • Traffic Volume Metrics, and many more…

Threat hunting is what cyber security analysts do, but they need data sources that can’t be compromised, like full-fidelity network wire data or network flow data. Why network wire data? It is clean and consistent across the network. Attackers can manipulate logs and event sources, and break through deployed security infrastructure, but they can’t manipulate network packet/wire data.

Let’s focus on two key aspects of cyber security:

1. Threat Hunting: Proactive threat identification applies new intelligence to existing data to discover unknown incidents.

What you should be looking for: Threat intelligence often contains network-based indicators such as IP addresses, domain names, signatures, URLs, and more. When these are known, existing data stores can be reviewed to determine if there were indications of the intel-informed activity that warrant further investigation.

2. Post-Incident Forensic Analysis: Reactive detection and response examines existing data to more fully understand a known incident.

What you should be looking for: Nearly every phase of an attack can include network activity. Understanding an attacker’s actions during each phase can provide deep and valuable insight into their actions, intent, and capability.

Why Threat Hunting is Important

No evidence of compromise is not evidence of no compromise. Hackers are always busy trying to avoid detection, and you don’t know today what you’ll need to know tomorrow! You need to investigate. If you are not putting telemetry in place, you don’t have a recording of what’s happening, which means you will not see who’s doing what, with whom, and so on.

If you have a Network Performance Management background and are not a professional threat hunter, then let’s start by describing the phases of an attack and how the attacker sees your network. There are seven specific phases of cyber attacks, several of which include network activity:

  1. Reconnaissance (recon) to know the target
  2. Scanning to find something attackable
  3. Gaining an initial point of compromise into the target network to create a foothold and use it for a pivot point for additional recon and scanning
  4. Pillaging the network for valuable resources (e.g., useful info, internal DNS, username enumeration, passwords, other attackable machines)
  5. Extracting data and resources from the network (i.e., data exfiltration)
  6. Creating back doors to stay in the network, including creating listeners and/or backdoor C2 channels, installing software, maintaining persistent access
  7. Covering tracks by cleaning up logs, backing out of changes, and patching systems

You’ll notice many familiar KPIs related to network performance management. That’s because nearly every phase of a cyber attack can include network activity—which is why monitoring for traffic anomalies is a great starting point for threat hunting.

Practical Advice

Here are a few examples of how Riverbed Network Performance can help you leverage network KPIs for threat intelligence and hunting:

Top-Talking IP Addresses (data source: Full-Fidelity NetFlow)
Existing usage: the list of hosts responsible for the highest volume of network communications by byte count and/or connection count. Calculate this on a rolling daily/weekly/monthly/annual basis to account for periodic shifts in traffic patterns.
Threat hunting usage: unusually large spikes in traffic may suggest exfiltration activity, while spikes in connection attempts may suggest Command & Control activity.

Traffic Volume Metrics (data source: Full-Fidelity NetFlow)
Existing usage: maintaining traffic metrics on time-of-day, day-of-week, day-of-month, and similar bases. These identify normative traffic patterns, making deviations easier to spot and investigate.
Threat hunting usage: a sudden spike of traffic or connections during an overnight or weekend period (when there is typically little or no traffic) would be a clear anomaly of concern.

Top DNS Domains Queried (data sources: Network Wire Data and Full-Fidelity NetFlow)
Existing usage: the most frequently queried second-level domains, based on internal clients' request activity.
Threat hunting usage: the behaviors of a given environment don't drastically change on a day-to-day basis, so the top 500-700 domains queried on any given day should not differ too much from the top 1000 from the previous day. Any domain that rockets to the top of the list may signal an event that requires attention, such as a new phishing campaign, C2 domain, or other anomaly.

Typical Port and Protocol Usage (data source: Full-Fidelity NetFlow)
Existing usage: the list of ports and corresponding protocols that account for the most communication in terms of volume and/or connection count. Calculate this on a daily/weekly/monthly/annual basis to account for periodic shifts in traffic patterns.
Threat hunting usage: as with top-talking IP addresses, knowing the typical port and protocol usage enables quick identification of anomalies that should be explored for potentially suspicious activity.

HTTP GET vs POST Ratio (data source: Network Wire Data)
Existing usage: the proportion of observed HTTP requests that use the GET, POST, or other methods, which establishes a typical activity profile for HTTP traffic.
Threat hunting usage: when the ratio skews too far from the normal baseline, it may suggest brute-force logins, SQL injection attempts, server feature probing, or other suspicious or malicious activity.
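As a concrete illustration of the first two entries, here is a minimal sketch that flags top-talker volume spikes against a rolling baseline. It assumes flow records have been exported to a CSV with timestamp, src_ip and bytes columns; the column names and file layout are illustrative assumptions, not a NetProfiler export schema:

```python
# Minimal sketch: flag top-talker volume anomalies in exported flow records.
# The CSV columns (timestamp, src_ip, bytes) are illustrative assumptions,
# not a NetProfiler export format.
import csv
from collections import defaultdict
from statistics import mean, stdev

def daily_bytes_per_host(path):
    """Aggregate bytes sent per host per day (timestamps assumed ISO-formatted)."""
    totals = defaultdict(lambda: defaultdict(int))      # host -> day -> bytes
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            day = row["timestamp"][:10]                  # YYYY-MM-DD prefix
            totals[row["src_ip"]][day] += int(row["bytes"])
    return totals

def flag_spikes(totals, zscore=3.0, min_history=8):
    """Flag hosts whose latest daily volume towers over their own baseline."""
    alerts = []
    for host, days in totals.items():
        series = [days[d] for d in sorted(days)]
        if len(series) < min_history:
            continue                                     # not enough history yet
        baseline, latest = series[:-1], series[-1]
        mu, sigma = mean(baseline), stdev(baseline)
        if sigma and (latest - mu) / sigma > zscore:
            alerts.append((host, latest, mu))
    return alerts

for host, latest, mu in flag_spikes(daily_bytes_per_host("flows.csv")):
    print(f"{host}: {latest} bytes today vs ~{mu:.0f}/day baseline -- investigate")
```

A real deployment would of course query the flow store directly rather than a CSV, but the rolling-baseline-plus-deviation pattern is the same.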

Network forensics is a critical component for most modern incident response and threat hunting work. Network data can provide decisive insight into the human or automated communications within a compromised environment. Network forensic analysis techniques can be used in a traditional forensic capacity as well as for continuous incident response/threat hunting operations.

What you really need is complete data so threat hunting can be meaningful, not sample data that retains only statistics. It’s best to use Riverbed AppResponse and NetProfiler to start collecting full-fidelity network packet and network flow data for threat hunting.

Riverbed NetProfiler Advanced Security Module is a full-fidelity network flow solution that watches for changes in behavior. These changes could be new services on a sensitive host, connections to untrusted systems, or unexpected data movement. The network fingerprinting process creates a statistical profile of network connections to identify the abnormal sessions.
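To make the fingerprinting idea tangible, here is a deliberately simplified sketch (not the module's actual algorithm) that keeps a profile of the services expected on a sensitive host and alerts on anything outside it:

```python
# Simplified illustration of connection fingerprinting: alert when a
# sensitive host starts using a service outside its learned profile.
# The addresses and profiles are hypothetical examples.
known_profile = {
    "10.0.0.5": {("tcp", 443), ("tcp", 22)},   # sensitive host -> expected services
}

def check_flow(src_ip: str, proto: str, dst_port: int) -> None:
    expected = known_profile.get(src_ip)
    if expected is not None and (proto, dst_port) not in expected:
        print(f"ALERT: {src_ip} using unexpected service {proto}/{dst_port}")

check_flow("10.0.0.5", "tcp", 443)    # expected service, no alert
check_flow("10.0.0.5", "tcp", 6667)   # unexpected port -> alert
```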

The threat hunting process is data- and time-intensive. Focus on filtering key assets, unique threat identifiers, or other known aspects in the search—these are great starting points for threat hunting!

Pinpointing Application Performance Issues with Unified NPM https://www.riverbed.com/blogs/pinpoint-app-performance-issues-with-unified-npm/ Thu, 11 Feb 2021 22:03:00 +0000 /?p=16511 With employees and business partners working from home due to the pandemic, non-enterprise internet connections have exacerbated the issue of application performance. If services to your premises are degraded, you have recourse to your communication service providers. But if your users are dependent on mobile broadband, ADSL or NBN links, network performance is far from guaranteed—which can lead to frustration and reduced productivity.

However, application performance issues are not always caused by communication links. Sometimes issues and delays are created by the applications themselves, or the third-party services they depend on. In other instances, organisations delivering application services to anyone off their network need to identify and eliminate application performance issues for their own competitive advantage.

The challenge is how to comprehensively monitor, pinpoint and diagnose any issues across network links, the application and its dependencies, to avoid unproductive finger pointing and lengthy resolution times.

‘It’s a network problem!’

Time-sensitive applications—from ERP systems to online shopping to trading platforms—depend on optimal network performance. It's easy to simply attribute performance issues to the network or internet access, even in these days of cheaper bandwidth. In fact, degraded application performance is often due to other issues—so throwing more bandwidth at your WAN or cloud service links won't necessarily fix it.

It is best practice to closely monitor and troubleshoot performance bottlenecks to isolate the true causes. Think about applications that rely on an external service as part of their operations. If your application calls out to another service to check postcodes, ABNs, credit ratings or the like, you should have an SLA with that service provider. But if they are not meeting their SLAs, how can you quickly tell?

Breaking down a transaction: queries are sent to servers, which may in turn query other servers, before the response is finally returned to the user. If there is an unacceptable delay at any of these stages, it is critical to pinpoint exactly where it occurs so it can be resolved, as the sketch below illustrates.
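As a rough illustration of stage-by-stage breakdown, the following sketch times each phase of a single web transaction separately. The service hostname is a placeholder, and a one-off probe like this is only a sketch of what continuous, packet-based monitoring does properly:

```python
# Minimal sketch: time each stage of one web transaction (DNS, TCP connect,
# TLS handshake, server response) to see where the delay lives.
# "postcode-service.example.com" is a hypothetical external dependency.
import socket, ssl, time

host, port, timings = "postcode-service.example.com", 443, {}

start = time.perf_counter()
addr = socket.getaddrinfo(host, port)[0][4][0]            # DNS resolution
timings["dns"] = time.perf_counter() - start

start = time.perf_counter()
raw = socket.create_connection((addr, port), timeout=5)   # TCP handshake
timings["tcp"] = time.perf_counter() - start

start = time.perf_counter()
tls = ssl.create_default_context().wrap_socket(raw, server_hostname=host)
timings["tls"] = time.perf_counter() - start              # TLS handshake

start = time.perf_counter()
tls.sendall(b"GET / HTTP/1.1\r\nHost: " + host.encode() + b"\r\n\r\n")
tls.recv(1024)                                            # first response bytes
timings["response"] = time.perf_counter() - start

tls.close()
for stage, secs in timings.items():
    print(f"{stage:>8}: {secs * 1000:.1f} ms")
```

If the "response" stage dominates while DNS, TCP and TLS are quick, the delay is on the server or its downstream dependencies rather than on your network path.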

Success = technology, process, people

As in many complex situations, overcoming challenges efficiently and continuously depends on a combination of factors working in concert.

To gain visibility across performance bottlenecks, you need the right technology. Riverbed's Network Performance solution not only monitors network connections, it uses packet capture to be 'application-aware'—providing you with a holistic view of the factors involved in application performance. When you can identify issues accurately, they're faster to fix. Further, when you have evidence, you can ensure the responsible third parties resolve issues, rather than looping endlessly while fingers are pointed in all directions.

Stringent processes for monitoring, escalating, diagnosing and resolving issues—fast—are also essential. Over the years, Riverbed has developed the processes and methodologies for timely resolution and can share them with our customers.

Finally, your people need the right skills to go beyond network monitoring in these days of ‘working from anywhere’ to rapidly diagnose and resolve—or escalate to the right service provider—in real time. Again, Riverbed offers consulting services for skills transfer or ongoing support, as well as training services to get your team up to speed.

If you’d like to know more about the possibilities of application-aware network performance monitoring, our recent webinar Network Performance Metrics That Matter is available on demand and is a good start. Or, if you would like a demonstration of our Unified Network Performance Management solution, talk to your ICT service provider or contact us.

Evolving IT Operations to Support New Ways of Working https://www.riverbed.com/blogs/evolving-it-operations-to-support-a-hybrid-workplace/ Fri, 05 Feb 2021 16:19:21 +0000 /?p=16543 The challenges of working from home have caused organizations to reevaluate how they look at networks for enterprise workloads and hybrid workplaces. The range of at-home networks and devices now engaged in critical business operations has grown by an order of magnitude. With more diverse and dispersed operations, IT decision-making processes—and IT teams themselves—will need to evolve to meet new technical challenges, new attitudes towards privacy, and fundamentally new ways of working.

With this in mind, here are four actions organizations must take to support the future of work:

1. Invest in deeper structural changes

Up to this point, businesses have been learning as they go when it comes to optimizing the ability of their teams to work remotely. No one expected the massive disruption that COVID-19 caused, so there was never any detailed plan regarding how to optimize existing IT infrastructure for work-from-home environments. With no definitive end to the pandemic or the WFH experiment, many organizations opted for a patching approach—making small fixes as the need for them became obvious. This may have been acceptable at first, but as continual data breaches and security mishaps have taught us, a patching approach won’t cut it as a viable, long-term IT strategy.

Instead, organizations need to take a deeper look at their core operating models and invest in structural changes that will prepare them for the future of work. We're at a point where the scales are finally tipping, and decision makers recognize that the ROI of making these changes is far greater than that of continuing to make small fixes in the hopes that the old ways of working will return. This is an important moment in the story that began in March 2020, and we'll look back at it as the time when the 'winners' laid the groundwork necessary to emerge from the pandemic as truly evolved, resilient enterprises.

2. Move enterprise networks and workplace policy ‘closer to home’

What does this look like in practice? One fundamental change organizations will make is to offer WFH-conducive alternatives to in-office enterprise networks. While the concept of BYOD has been around for some time, its definition has changed with COVID-19. Working from home has created scenarios where individuals using two different devices may be regularly tapping into the same home network to access proprietary or otherwise sensitive information from two different organizations. Employees are also often using the same device and network for both personal and work-related tasks.

How do you ensure the security of proprietary data and separation of personal digital identities from professional digital identities? The answer may lie in dedicated 5G networks that remote employees can access from their personal devices. This gives companies a single dedicated network to focus their security efforts and may help keep personal data flows separate from enterprise-specific activity, while also addressing at-home bandwidth issues. With dedicated 5G networks or other solutions, hard boundaries (both for the network and for workplace policy) will need to be established between personal and professional digital identities. This will require new kinds of digital workplace norms, organization-wide understanding of security, and intelligent IT policy working together to ensure that employees are both protected and empowered in hybrid work environments.

3. Establish privacy as its own business category

Privacy has long been placed under the broader security umbrella when it comes to corporate policy, team responsibilities, and investment strategy. With the growing impact of GDPR and new conversations started by the shared experience of working from home, privacy considerations are branching out into their own category and sometimes even find themselves at odds with security interests. Going forward, these distinctions will become even clearer as organizations settle on the extent of visibility they can and will impose on employees working remotely.

Stronger consumer privacy rights, highlighted on the political stage by the Big Tech Senate hearings, may push employees to advocate for similar protections within their companies. This will create the need for more Chief Privacy Officers and privacy-focused teams down the chain of command that understand local regulations and the distinct challenges and sensitivities around privacy. These challenges will reinforce the need for the kind of distinct digital identities discussed earlier, and how organizations choose to articulate their privacy posture can have an impact on company culture writ large.

4. Evaluate SD-WAN in the context of hybrid work environments

With changing employee expectations and many organizations now realizing that they can stay productive while working remotely, a shift to hybrid, mobile-first environments in many industries is inevitable. We'll see scenarios where employees go into the office once or twice a week, causing enterprises to want to rent, rather than own, much of their IT infrastructure. This will create a new demand for multi-tenant SD-WAN environments. Two primary capabilities of SD-WAN—connecting branches with data centers and onboarding to the internet—will need to be more deeply explored in the context of hybrid work environments. Whether SD-WAN deployments will slow remains to be seen. What is clear is that the relationships between IT teams, SD-WAN vendors, and other solution providers will need to evolve to meet the new needs of a hybrid workforce.

Looking back to look ahead

The changes and challenges of 2020 hit the enterprise at breakneck speed. While organizations have adapted quickly and admirably, many are still taking a thorough look at their performance. Rather than a sign of what's to come, the past year is an indication of what's already here, and here to stay. Decision makers will need to reflect quickly, develop clear strategies around privacy, BYOD, SD-WAN, and network performance, and then make investments to support their workforce as it continues to evolve.

Takeaway Only! The Parallels of Restaurants and IT During Lockdown https://www.riverbed.com/blogs/parallels-of-restaurants-and-it-during-lockdown/ Thu, 04 Feb 2021 16:52:01 +0000 /?p=16515 Like most Melburnians and many other people around the world, my family counts a leisurely group meal at a local restaurant as a strong part of our social life. For much of 2020, this was impossible, as the restaurants and cafes that stayed open during lockdown bore 'Takeaway Only!' signs. This has led many of us to try to recreate the restaurant experience in our own homes.

With the mass evacuation of corporate offices so we can practice social distancing, our work is also in ‘takeaway only’ mode. With the implementation of work-from-home (WFH) initiatives, the initial focus of IT teams was on equipping staff with laptops and establishing access and security. As individuals, we had to work out the best place for necessary work equipment in our homes—often having to fit in with housemates, partners and children home-schooling.

Once equipped and settled, we were subject to unmonitored home internet connections of every description put under great pressure from the significant increase in video conferencing with our colleagues, as well as the conflicting needs of others in the household.

The question for IT teams turned to “How can we recreate an in-office experience for our WFH staff in terms of application performance and productivity?”

First look at the network

Many organisations accelerated their plans—or initiated new ones—to place workloads in the public cloud during the early months of 2020 in an effort to make them more accessible for distributed employees. As VPN and remote user infrastructure now had to support many more users, more consistently, there was increased reliance on IaaS and SaaS.

This involved reconfiguring links between these workloads and the enterprise data centre—making monitoring and management to optimise application performance in new configurations a critically important task. Access and security also required modification, increasing the amount of change and therefore potential performance degradation.

Clear visibility over a rapidly evolving network is essential to assure acceptable performance. Riverbed Network Performance tools are able to help you to understand just what the user experience should look like, and to identify where and why performance issues happen. This knowledge enables you to pinpoint actual or potential bottlenecks, providing the forensic evidence to present to your network service providers for rapid resolution.

Then look at remote connections

With far greater numbers of employees now relying on their home connections to work productively, there is a simple way to deliver performance: Riverbed Application Acceleration solutions, including Client Accelerator and SaaS Accelerator, can provide an office-like experience for these workloads.

A word of caution about letting users accept poorer performance and productivity at home: they may assume this is simply the way it has to be when working away from the office—but what is the cost to your business in lost productivity?

Poor performance should not be a given. One organisation that discovered this is Landform, a professional architecture and engineering services company. Landform had to factor in the real impact of reduced productivity on their revenues given they had the same wage bill, but their employees could produce less. The solution was Riverbed Client Accelerator on employee laptops and desktops, with SteelHead CX in the data centre. This enabled 92% faster opening of CAD files from home—in one case, down from 20 minutes to just 90 seconds.

Into the future

Much research indicates that this year's pandemic has escalated the move to working from anywhere—and that even when we can all go back to the office, some of us will spend less time there in the coming years. In fact, we are now at a point where many employees have settled into a pattern of WFH—so now is the time to address the productivity and performance of these new work habits. This means that the above-mentioned technologies will continue to help us ensure ongoing productivity for our people, wherever they choose to work.

Meanwhile, if you are working from home and enjoying your favourite takeaway food at the same time, try not to spill that laksa on your keyboard!

If you’d like a 60-day trial of our Client Accelerator solution, talk to your ICT service provider or visit our website.

New Data Security Challenges in the Rush to the Cloud https://www.riverbed.com/blogs/data-security-challenges-in-the-cloud/ Wed, 27 Jan 2021 14:51:16 +0000 /?p=16519 The challenges of working from home have caused organizations to reevaluate how they look at their networks and the data that lives on them. The range of at-home networks and BYO devices now engaged in critical business operations has grown exponentially, amplifying our reliance on cloud-based infrastructure and solutions and scattering our data into what is frequently the unknown.

In their rush to the cloud, enterprises will need to take into consideration three new data security challenges as they reevaluate where their data is and whether they have taken enough responsibility for it:

1. Cloud whiplash

Accelerated by the dramatic shift to remote work, organizations have been steadily moving all of their data outside the enterprise and into the cloud. What this means in reality is that all the data that makes up our digital enterprise is on someone else's computer. With the rise of SaaS, the applications that serve as the foundation of our businesses are maintained by someone else, and although that generally ensures the security of the application, visibility into the data stored within is significantly diminished. Whereas in days past a company had its own datacenters and computers, today the paths our corporate data takes are no longer owned by—and therefore no longer visible to—the company. And whether the infrastructure owned and operated by another company is monitored is frequently (and frighteningly) unknown.

We already rely deeply on fundamental business applications like Office 365, Salesforce and Slack—among the most used applications—that have moved to the cloud. Even the more tailored applications that don't yet have a SaaS equivalent are moving from the corporate datacenter to IaaS to be consumed as a service.

As a result, we see enterprises starting to grapple with the complex questions of where their data is, who really has access to it, and how they might audit or track this. Many will suddenly realize that their ability to govern data is limited at best, that they have few processes in place to understand who is accessing what data and from where (internally and externally), and that the actual costs are unclear. Visibility will become the new watchword.

2. Diminishing returns on cloud storage

As corporate entities, we generate an awful lot of data. Inevitably, the path of least resistance is to keep buying more and more storage to stuff all of our data into the cloud. And the reality is that all the data we create ends up stationary, i.e., 'sitting around' and frequently untouched or unused for long periods of time. For example, just consider the SharePoint files of former employees. We lose sight of where that data really is, what's happening to it, and whether or not someone may be moving it out of the organization.

We expect many enterprises will start to recognize that the path of least resistance cloud storage represents—when not used thoughtfully and strategically—turns all that data into a liability. Companies will start to understand that we have passed the point of diminishing returns with a haphazard approach to cloud storage, both from a security and a cost perspective.

In addition to acting on the understanding that not all data is worth paying to keep, especially considering its potential liability, enterprises will focus more than ever before on how they will apply cloud storage smartly, securely and affordably.

3. Think global, act local privacy

In the big picture, we have seen broad protection for consumer and individual privacy enacted through regulations like GDPR and CCPA that say people must be told what data is being collected about them. National measures in the United States have failed to pass so far, but we did see California forge ahead and New York and Massachusetts are considering following suit. But what will happen if a more progressive city, like San Francisco, decides that consumers need stronger protection of their personal data than California deemed acceptable?

We expect to see that some municipalities will begin to impose more restrictive data privacy laws than those adopted on a federal or state level. For companies who store consumer data in the cloud, their model is to use very few, but very large, datacenters to hold all that information. Such companies, like Fitbit, may find themselves forced to find local datacenters so that they can meet new municipal requirements to do business in a city like San Francisco. In turn, we may see the large cloud service providers capitalize on this dynamic by starting microfacilities across many locations and regions in order to help their customers comply.

Doing a double-take

The changes and challenges of 2020 hit the enterprise at breakneck speed and accelerated a rush to the cloud. While organizations have adapted quickly and admirably, many will start to take a second look at what they’ve done with their data, and what they need to do going forward. In the coming years, we expect organizations will implement new ways of ensuring responsibility for data, wherever it lives.

The Future of End-to-End Network Management https://www.riverbed.com/blogs/future-of-end-to-end-network-management/ Wed, 20 Jan 2021 03:00:45 +0000 /?p=16490 Due to the global pandemic, enterprises have had to accelerate digital initiatives in a matter of weeks, rather than years, as a top priority to overhaul their business processes and transform services to deliver value to their customers and employees.

As organizations continue to support remote workforces and shift toward work-from-anywhere models and hybrid work environments, network technology will play a critical role in connecting every individual, device and organizational structure that together form the digital enterprise.

With this in mind, here are five trends that will shape the future of end-to-end network management:

1. Continued consolidation of the SD-WAN market

As markets begin to take shape and mature, it often becomes increasingly difficult for smaller players to compete as larger entities begin to invest more fully. As Covid-19 has elevated the importance of how we manage and operate networks for remote work, many smaller SD-WAN players now face increasing market pressures to enter acquisition deals with larger enterprises.

A primary example is the acquisition of SD-WAN vendor 128 Technology by Juniper Networks in October 2020, a move intended to bolster the latter's networking portfolio. Larger vendors see significant potential for incremental business growth, in particular with big existing customers, and see acquisitions as a way to expand their roster of SD-WAN features and capabilities that they can use to expand existing service subscriptions.

In the coming year, the consolidation of SD-WAN vendors will continue as larger players such as Juniper, Cisco and HPE continue to buy up smaller players in the SD-WAN space that no longer have the resources to compete.

2. The rise of predictive operations

AI and ML have increasingly played an important role in approaches to network monitoring. We expect the value of analytics and the number of real-world implementations to continue to grow, especially for identifying active and potential threats as part of securing the network.

The predictive power of AI and ML is useful not only against threats, but for operational purposes as well. Taken together, AI-enhanced security and operational capabilities can give us the ability both to recognize existing breaches and to predict faults and threats before they happen, determining how they are likely to evolve over time. Significantly, this may open the door to predictive security suites within network performance management. Taking the concept of predictive operations a step further, we even see predictive analysis and rank analysis coming together, allowing us to rank predictions based on their likelihood.

3. The fall of static development

The Covid-19 pandemic has been a remarkable accelerant for the concept of remote work. Organizations of all kinds were pushed, essentially overnight, to connect their entire workforce and ensure business continuity. We realize that the new approaches to remote work—how each company has chosen and implemented technology solutions—may be permanent in some cases and temporary in others. Which technologies remain and what percentages of people work remotely versus in-office may vary, but it’s becoming increasingly evident that ‘anywhere’ is the new axis, rather than the branch.

Increasingly, we expect to see developers grasp this new reality and begin to leave static development behind. Developers will see limited return on the idea of developing solutions oriented toward the branch office and gravitate toward anywhere as their primary development environment. In doing so, they will need to consider the proliferation of entry points and end points, and are likely to make notable advances in securing “the anywhere.” In a sense, developers will adapt their thinking to accommodate the reality that every endpoint has become a microbranch. Developers will see the client as the new branch, finding new scenarios that optimize the capabilities of the client while also ensuring that new applications and services can be managed by IT from a single point of control.

4. The emergence of cross-vendor visibility

We regard visibility into the network, and its implications for the business overall, as essential for the new way of working. Being able to monitor and manage everything that happens on the network will continue to be a business-critical capability in the work-from-anywhere world. Providing comprehensive visibility will rapidly become a priority in the coming year, which will push a number of vendors to reach beyond the purview of their own solutions. We expect to see more and more companies developing solutions that offer visibility into other vendors' solutions in 2021.

5. A new chapter in the client-to-cloud story

How well applications perform in the work-from-anywhere environment will continue to be a priority for businesses moving forward. A number of vendors have taken runs at accelerating applications in the past, from one end or the other, with limited success. Even so, the power to accelerate applications is a claim we will see re-emerge in 2021, likely rolled into SDN offers.

How the network delivers and handles applications has changed. Luckily, Riverbed was a very early mover in approaching application acceleration from both the data center side and the client side; neither of which is a simple proposition. The acceleration technologies developed for the data center and the branch can also be implemented on AWS or Azure, accelerating the cloud, or placed in front of a SaaS application like Office365 or Salesforce. This bookends performance with acceleration in a real client-to-cloud approach. Client-to-cloud acceleration is a capability that many vendors will promote in the future, but few will be able to deliver it in a masterful way.

A year of change

2021 will be a year of rapid evolution for the networking technology that has become so fundamental for new ways of working and operating models in the Covid-19 era. With the whiplash shift to remote work somewhat stabilized, IT professionals will focus on the bigger picture and enduring opportunities that smarter network management holds. Seeing end-to-end, accelerating end-to-end, developing for end-to-end and innovating end-to-end will dominate the network for years to come.

Answering: “Am I Affected by SUNBURST?” https://www.riverbed.com/blogs/answering-am-i-affected-by-sunburst/ Tue, 12 Jan 2021 16:30:00 +0000 /?p=16409 When a high-profile hack or malware campaign hits the news, everyone’s first question is, “How do I know if I’m affected?” Security analyses and official guidance frequently contain indicators of compromise, but they rarely explain how to make use of them. Network visibility tools such as Riverbed NetProfiler and AppResponse can form an important part of any enterprise’s plan to scour its infrastructure for signs of compromise. This post references a recent, widely-reported cyberattack, SUNBURST, to illustrate how to use Riverbed NPM solutions to find and root out malicious actors based on common indicators of compromise.

Malware and similar malicious software must often use the network in executing a cyberattack, which may include communicating with Command and Control (C2) servers, downloading malicious payloads, uploading stolen data or spreading through the network. Oftentimes, these actions are designed to appear innocuous, but can still be identified as suspicious through indicators such as domain names, IP addresses, file names or unique ports. Enterprises can use these indicators to search their networks for malicious activity.

In December 2020, cybersecurity firm FireEye released an investigation of a global network intrusion campaign where hackers managed to insert a vulnerability within certain SolarWinds® Orion® Platform software builds and software updates released between March and June 2020. The Cybersecurity and Infrastructure Security Agency (CISA) followed suit with its own analysis and advisory. This cyberattack, also known as SUNBURST, has had pervasive reach due to its roots in the compromised vendor supply chain. Investigations published so far have included several indicators of compromise that are potentially of use.

Riverbed NetProfiler: long history, global reach

One of the biggest challenges in detecting compromises is that by the time the details and behavior of malware are known, it may have been weeks or months since that malware first started circulating. This is why flow tools, like Riverbed NetProfiler, are indispensable in looking for malware: it is possible to search historical network flow data for even small connections that would otherwise fly under the radar. In order to scale well, flow records must be sparse, but they contain IP addresses and ports and show how hosts connect to each other.

FireEye identified a number of IP addresses of forensic interest in relation to the SUNBURST cyberattack. NetProfiler customers can easily search for these hosts by scanning historical flow data. In the excerpt below, the traffic expression is simply built up from one or more hosts:

Sunburst Traffic Expressions
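For readers who want to experiment with the same idea outside the product, here is a minimal sketch that sweeps exported historical flow records for known-bad addresses. The CSV layout and the indicator IPs (documentation-range placeholders) are illustrative assumptions, not NetProfiler's export format or a published IoC list:

```python
# Minimal sketch: sweep exported flow records for known-bad IP addresses.
# Column names and the IOC_IPS set are illustrative assumptions.
import csv

IOC_IPS = {"198.51.100.7", "203.0.113.21"}   # hypothetical indicator addresses

def sweep_flows(path):
    hits = []
    with open(path, newline="") as f:
        for row in csv.DictReader(f):        # expects timestamp,src_ip,dst_ip columns
            if row["src_ip"] in IOC_IPS or row["dst_ip"] in IOC_IPS:
                hits.append(row)
    return hits

for row in sweep_flows("flows.csv"):
    print(row["timestamp"], row["src_ip"], "->", row["dst_ip"])
```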

Very often, further analysis turns up new IP addresses—either because new features of the malware have been discovered, or because the attackers have made changes to their infrastructure in response to news coverage. Using NetProfiler, it is easy to adjust the filter and check again, looking at as much history as possible.

Another indicator to watch for is the malicious use of cloud services. In the excerpt above, the first host listed is in Amazon AWS. Threat actors may use public services so that their IP addresses look more innocuous, and their use of such public services tends to be short-lived. It is important to look for these indicators, keeping in mind that communications with public-service hosts could be another service reusing the same IP. Time frame is key to understanding the risk, and malware analyses frequently include discussions of the times in which that malware was active: with respect to SUNBURST, FireEye's countermeasures list includes "First Seen" and "Last Seen" time frames ranging from February to December 2020.

Visualize the attack

NetProfiler's ability to visualize patterns of connections over time is another key feature that can be used to better understand cybersecurity threats. In a typical forensic investigation, analysis of changing patterns of network behavior would be used once a host has been identified as potentially compromised—for example, after seeing a suspicious communication to a C2 server. In the example below, however, it is a particular appliance that has been compromised.

Using Riverbed NetProfiler, customers can filter communications involving the IP of the appliance in question, and then use the network graph to examine the connections the appliance makes within the customer’s network. Unusual external connections may represent new indicators of compromise or other assets belonging to the cyber-attacker. Unusual connections within the network may indicate behaviors such as reconnaissance, lateral movement or attempts to initiate secondary compromise.

Fig. 1. NetProfiler lets you discover connections in your network.

Each suspicious connection illuminates a potential move within the network by the threat actor. Duration, size, and type can shed light on what purpose a connection might serve. Context, in the form of patterns displayed before infection, can help weed out ordinary connections and expose unusual ones. The availability of historical data in Riverbed NetProfiler means that customers never have to wonder if a pattern is usual or not: just go back further in the historical record to see.

Riverbed AppResponse: the truth is in the packets

For ground truth in network investigations, nothing beats actual copies of the packets being sent. There are many packets to trawl through, however, so making use of them requires forethought. Alerts can be set up based on suspicious transactions with considerably more depth than is available in flow records, and capture jobs can be created to have full access to potentially malicious traffic.

Reading through technical descriptions of malware behavior can yield useful results. These analyses frequently uncover valuable indicators, in particular the domain names used by adversaries and packet captures of the communications themselves.

Leveraging DNS in your security search

Domain names can be useful in a number of ways. In the case of the SUNBURST cyberattack, a particular domain name, avsvmcloud[.]com, was identified as important to the attack progression. First, of course, Riverbed AppResponse customers can look up the IP addresses the domain currently and previously resolved to, and search for those addresses in past and ongoing traffic. Riverbed Packet Analyzer Plus customers have the additional option of starting a capture job on UDP port 53 to examine DNS queries. Looking to see who is requesting DNS resolution of malicious domains in this way can be a powerful tool for quickly identifying affected hosts.
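As an illustration of the technique (separate from Packet Analyzer Plus itself), a saved port-53 capture can be scanned for queries to a known-bad domain with the open-source scapy library; the capture filename here is an assumption, while the domain is the one named in the SUNBURST reports:

```python
# Minimal sketch: scan a saved DNS capture for queries to a known-bad domain.
# Requires the open-source scapy library; "dns_capture.pcap" is a placeholder.
from scapy.all import rdpcap, DNSQR, IP

BAD_DOMAIN = b"avsvmcloud.com"

for pkt in rdpcap("dns_capture.pcap"):
    # Only inspect packets that carry both an IP header and a DNS question.
    if pkt.haslayer(IP) and pkt.haslayer(DNSQR) and BAD_DOMAIN in pkt[DNSQR].qname:
        print(f"{pkt[IP].src} queried {pkt[DNSQR].qname.decode()}")
```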

It is important when examining domain names to understand some of the ways adversaries use them. FireEye identified sandbox-detection behavior in the SUNBURST cyberattack, in which the malware generated domains in a loop and tried to resolve them, checking whether they resolved to local IP addresses (an indicator that the malware was in a monitored environment, so that it could stop execution). Forming a watch list of these names is not always a feasible way to search for indications of a particular malware strain, but a stream of randomized domain names in DNS requests is itself a red flag.

Just as web applications are enterprise-critical, they are critical to many malicious campaigns as well. HTTP is often used to transfer files, commands or other information. Although malicious actors can and do use custom or encrypted protocols, just as often, they use standard protocols for the same reasons that commercial developers do, including reliability and ease of development.

Fig. 2. Saving a packet capture in Packet Analyzer Plus for off-line analysis.

FireEye's SUNBURST analysis provides several examples of the use of HTTP, including communication with C2 servers via JSON payloads with a variety of fields. For example, the key "EventType" is hardcoded to "Orion" and "EventName" to "EventManager." The Riverbed AppResponse Web Transaction Analysis (WTA) module is very useful here. Just as Riverbed AppResponse customers can analyze business transactions, they can analyze the adversary's transactions and search for indicative fields like these key/value pairs. Another analysis, by GuidePoint Security, identified a set of HTTP requests including "logoimagehandler.ashx" and query parameters such as "clazz" that indicate potential webshell communications.
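As an illustration of the kind of check this enables, here is a minimal sketch that tests an extracted HTTP body for those hardcoded fields; extracting bodies from a capture is assumed to have happened already, and only the field check is shown:

```python
# Minimal sketch: test an extracted HTTP body for the hardcoded SUNBURST
# C2 JSON fields described by FireEye.
import json

def looks_like_sunburst_c2(body: str) -> bool:
    try:
        payload = json.loads(body)
    except ValueError:
        return False                       # not JSON at all
    if not isinstance(payload, dict):
        return False                       # JSON, but not an object
    return (payload.get("EventType") == "Orion"
            and payload.get("EventName") == "EventManager")

print(looks_like_sunburst_c2('{"EventType": "Orion", "EventName": "EventManager"}'))  # True
```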

AppResponse customers can also look for web transfers of files in the same way or web requests to malicious domains. Reading through the details of how malicious actors communicate will reveal what to watch for in their traffic.

Summary

While this post outlines indicators of network compromise specific to the SUNBURST cyberattack, the important lesson here is not the indicators and security analytics tied to any one malware campaign. Instead, it is how to read reports and analyses on malware to quickly identify key indicators that can be leveraged using the tools Riverbed NetProfiler and AppResponse customers already have. More specifically, IP addresses and domain names can be simple and reliable indicators of which network hosts to examine. Watch for key information such as the devices targeted and the time frames in which the malicious campaigns took place. And when in doubt, please reach out to Riverbed for help and advice.

Modern Use Cases for Application Acceleration https://www.riverbed.com/blogs/modern-use-cases-for-application-acceleration/ Thu, 07 Jan 2021 17:18:00 +0000 /?p=16405 Riverbed recently had the opportunity to speak with a panel of industry experts—bloggers, analysts, hardcore in-the-weeds technical folks. It was an opportunity to spread the word about what we’re doing and where we’re headed with network and application visibility and performance.

We kicked off with the theme of “work from anywhere,” so it was interesting to see the Tech Field Day 22 delegates in home offices, in comfy living room chairs, or, like me, in cold, unfinished basements. We used Zoom for the event which made the theme of our presentation more palpable than even the most colorful marketing slide.

Not long ago, when we were in branch offices, we had the benefit of sophisticated network tech on the backend to make our applications perform the way they should. There was QoS on our switches and routers, MPLS with strict SLAs, high bandwidth commercial-grade internet links, direct connections to cloud providers, and WAN optimization appliances.

Rest assured, all that technology is still there. The only issue is that these days, very few people are in the office to make use of it. And, this is why Riverbed’s Application Acceleration portfolio is so relevant today.

Technology for the Way We Work Today

Application Acceleration solves the problems caused by low-bandwidth broadband, DSL, satellite, LTE, and the typical connectivity we have outside the office. It improves application performance over any type of connection and for almost any application, whether it's on-premises, in the cloud, or delivered as a SaaS app.

Look at Application Acceleration as a single technology that is applied in different ways based on where resources are. Sometimes resources are in traditional private data centers, often they’re hosted in public cloud, and today many apps are delivered by SaaS providers like Microsoft, Dropbox, Slack, and Salesforce.

For years, Riverbed has made those applications perform extremely well for someone in a branch office. We used an end-to-end solution with a SteelHead appliance at the branch and another SteelHead in the data center. The results were—and still are—pretty awesome.

Branch SteelHead at the Client Level

Today, we can replace the branch SteelHead with an agent that lives right on a client computer. That means a software version of Riverbed’s SteelHead appliance is with someone no matter where they are and no matter what kind of internet connection they have.

The Client Accelerator agent is very similar to a branch SteelHead, though it’s optimized for a single computer. It’s managed by the Client Accelerator Controller—a virtual machine deployed on premises or in the cloud. This way, an IT department can manage acceleration policies all from one place.

Using the Client Accelerator Controller, we create application acceleration policies that tell the agent what to do with certain traffic. The policies look a little like firewall rules because they use source and destination IPs and TCP ports to identify traffic, though we also use URL learning and correlate local processes with network activity.

The Application Acceleration Ecosystem

Riverbed offers three Application Acceleration solutions: 1) Client Accelerator, 2) Cloud Accelerator, and 3) SaaS Accelerator.

1) Client Accelerator

With Client Accelerator, we’re not accelerating a client computer. We’re accelerating the data transfer that an application relies on. The local agent communicates with the remote SteelHead to reduce bandwidth consumption on the local link. It’ll also identify applications running on that link and apply whatever acceleration policies it receives from the controller.

2) Cloud Accelerator

In the case of public cloud, the local Client Accelerator agent communicates with a virtual SteelHead in Azure, AWS, or Oracle Cloud. A network operator can control both ends, so we still have a bookended solution that dramatically improves application performance even for cloud-hosted apps.

3) SaaS Accelerator

SaaS Accelerator leverages the same technology under the hood, but because we don’t own SaaS applications or the data centers they live in, we approach it differently. We host SaaS Accelerator in Azure and offer it as a managed service. That means Riverbed is responsible for deployment and backend management of the application acceleration service instances.

Going Under The Hood

As application traffic goes back and forth between the client and the remote server, regardless of where it is, we can pick out unnecessary packets that we don't need to send anymore once the stream is established. We look for frequently accessed data that we can cache locally using byte-level data deduplication and data referencing. That way, we cache chunks of data and tag them with markers so they can be looked up when the client requests them. A simplified sketch of the idea follows.
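Here is a deliberately simplified sketch of that cache-and-reference idea. Real acceleration engines use content-defined chunking, rolling hashes, and disk-backed stores; this fixed-size version only demonstrates the mechanism, not Riverbed's actual implementation:

```python
# Simplified illustration of byte-level deduplication with reference markers.
import hashlib

CHUNK = 4096

def encode(stream: bytes, sent_cache: dict) -> list:
    """Replace chunks the peer has already seen with short markers."""
    tokens = []
    for i in range(0, len(stream), CHUNK):
        chunk = stream[i:i + CHUNK]
        marker = hashlib.sha256(chunk).digest()[:8]
        if marker in sent_cache:
            tokens.append(("ref", marker))        # send 8 bytes instead of 4 KB
        else:
            sent_cache[marker] = chunk            # the peer will learn this chunk
            tokens.append(("raw", chunk))
    return tokens

def decode(tokens: list, recv_cache: dict) -> bytes:
    """Rebuild the original stream, learning new chunks as they arrive."""
    parts = []
    for kind, payload in tokens:
        if kind == "raw":
            recv_cache[hashlib.sha256(payload).digest()[:8]] = payload
            parts.append(payload)
        else:                                     # "ref": look the chunk up
            parts.append(recv_cache[payload])
    return b"".join(parts)

# Round trip: a second transfer of the same data shrinks to pure references.
tx, rx = {}, {}
data = b"A" * 10000
assert decode(encode(data, tx), rx) == data
assert all(kind == "ref" for kind, _ in encode(data, tx))
```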

We also need to deal with the adverse effects of latency. We do that by regulating TCP window sizing, which provides a type of flow control and makes the transfer of data much more efficient. We also repackage TCP payloads to make the back-and-forth communication between a client and a server more efficient. And because we have a local agent on the client computer, we can correlate specific application processes with local network activity. Ultimately, this helps reduce round trips, thereby reducing the effects of latency.

Application Acceleration is an ecosystem of components that solves the problem of poor application performance caused by the mediocre, sometimes outright bad, internet connections we rely on when we're not in the office.

In response to how we all work today, Riverbed has taken a technology we’re already experts in and brought it right down to an individual computer. And we’ve also expanded that functionality right out to the cloud—whether that’s a private cloud, public cloud, or one of today’s most popular SaaS providers.

Check out the overview of Riverbed’s Application Acceleration solution below, and make sure to watch all of our presentations from Tech Field Day 22. Watch Video

Maximizing Network and Application Performance: A TFD22 Recap https://www.riverbed.com/blogs/maximizing-network-application-performance-tfd22-recap/ Wed, 23 Dec 2020 13:31:53 +0000 /?p=16351 The last three months may have been a whirlwind of activity for you. Many organizations are trying to wind down projects, wrap up the spending on their budgets, and start the new year off right with new projects and plans to take on the world. Simultaneously, we’ve all been facing the same old story of working from home and making the best of the current world situation.  

Here at Riverbed, it's been no different. The Technical Evangelist Team was busy with online events like ONUG Fall 2020, the Riverbed Global User Conference, and Tech Field Day 22. If you missed any of these events, I recommend you look at the videos from them. We shared live demonstrations. We presented several technical sessions showing you how to maximize the visibility and performance of networks and applications. We also spent a few hours with the famed Tech Field Day delegates, digging into our Unified NPM solution to show how it can help you spot issues now and how Riverbed thinks the future of this space will unfold.

It's very apparent to me that people still regard Riverbed as a WAN Optimization company. I hear people refer to having a "Riverbed" in their network when they mean that they have a SteelHead Appliance in their network. The truth is that we still optimize the WAN, but that's just a small fraction of the overall goodness that Riverbed provides. Our CEO, Rich McBee, is very clear in the video embedded below when he states that the Riverbed mission is to "help organizations maximize visibility and performance across networks and applications for all users anywhere they reside." Watch Video

This mantra is the fundamental driver of Riverbed, and everything we do leads back to this.  

In this article, I want to highlight what we discussed at Tech Field Day 22 and why it’s crucial that organizations seriously consider their visibility posture going into 2021.

Understanding the portfolio

In the first two sessions, Phil Gervasi and I talked about our portfolio. If you don't understand our product line, you need to watch these two videos. They don't take you through specific hardware models, sizing and such; they cover, at a high level, how our solution fits together. Here's why it's essential to visualize the solution.

At a high level, I look at it like this: two areas impact network and application performance. Let me explain in the following sections.

Latency, protocol shortcomings, chatty applications

Several factors contribute to latency, but a network operator can't control all of them. The same is true of chatty applications and protocol shortcomings, because you don't necessarily control those attributes. For these scenarios, Riverbed provides acceleration solutions. By deploying these solutions in your branches, data centers, cloud, SaaS applications and endpoints, you get full coverage no matter where your users perform their work. Here's Phil's session:

Watch Video

Configuration, security, routing, hardware, and application issues

When it comes to configuration, security, routing, hardware, and application issues, we have more control. The catch, however, is that we must identify these issues before we can prevent them or stop them from impacting performance. For these scenarios, Riverbed provides a Unified NPM solution for the branch, data center, cloud, SaaS application and endpoint. Here's my session:

Watch Video

Digging into Unified NPM

Now that you get what we are trying to do, let's show you how we do it. For this, we turn to John Pittle, Vince Berk and Gwen Blum. In the following videos, we jump back and forth between John showing how we can identify issues right now and Vince discussing how AI and ML will shape the future. Sprinkled in there, Gwen does two demonstrations:

Watch Video

Takeaway from TFD22

So, let’s bring this back around to the point. Here at Riverbed, we do accelerate traffic, and that’s one way we help organizations maximize performance for their networks and applications. That’s not what we focused on at TFD22. No, at TFD22, we focused on the visibility aspect. So, the key takeaway from these videos is that here at Riverbed, we capture all the packets, all the flows, SNMP data, and more. By gathering all this data, we can then provide you with the best view of what’s going on in your network environment. This information allows you to fix configuration, security, routing, hardware, and application issues impacting your network and application performance. That’s why Rich opened the way he did. At Riverbed, we “help organizations maximize visibility and performance across networks and applications for all users anywhere they reside.”

Navigating the Lockdown Part 3: Back to the Office…or Not? https://www.riverbed.com/blogs/navigating-the-lockdown-back-to-the-office-or-not/ Mon, 21 Dec 2020 19:00:49 +0000 /?p=16355 In the third of a series of HR-focused blogs on Navigating the Lockdown, Riverbed’s HR Director for APJ, Ravi Abbott, looks at how the necessity to work from home has had some unexpected benefits. 

As the world carefully comes out of lockdown, many of us are seeing it in a different way. Do we go back to exactly how it was before the pandemic or do we take this opportunity to embrace lasting change?

If COVID-19 has shown us one thing, it’s our ability to adapt and respond during times of crisis. Projects that would normally take years to implement were rolled out in weeks. We achieved in days what would ordinarily take months.

Simply going back to our old way of life now would be a waste of that superhuman effort. So, how can we hold on to some of these changes for good?

WFH might be here to stay

Inhabiting office space versus working from home is currently a hot topic of discussion—but it is not a new concept.

A Gartner survey of 229 organisations found that 30% of employees were already working from home at least some of the time before the pandemic. Since COVID-19, that number has jumped to 80%. The world was already moving slowly towards a distributed workforce with more and more people working remotely. The pandemic just made it happen more quickly.

Riverbed CEO Rich McBee predicts that 15-20% of employees previously working out of an office will work remotely in the future. He believes there will be more focus on flexible working hours and 'results-based' work, instead of the number of hours spent in an office.

Companies are rethinking their investment in office space and instead looking at ways to enable employees with ‘at-office capability’ working from anywhere.

Physical versus virtual presence

At Riverbed, we drink our own champagne. When the pandemic hit and social distancing was enforced, our people continued to work remotely with the same capacity that they had in our offices. A survey of our employees taken at around two months into lockdown showed that the majority felt they were just as, if not more, productive at home than in the office.

In today’s world, collaborative technology is improving in leaps and bounds while domestic bandwidth is no longer a bottleneck. Increasing numbers of workers are from the ‘born-digital’ generations and perfectly comfortable with newer ways of socialising and working together in teams. All of this means that physical office space is becoming less and less relevant for progressive companies.

“The ‘individual cube’ of yesterday can be your home office,” says McBee. “It’s private, you’re working, you’re concentrated. Then, when it’s time to collaborate, the human-to-human interface will be done in a pseudo-office environment.”

A glimpse into the future

Despite all this, I think that the office will still have an important role to play in our post-pandemic lives. However, this time it’s going to look and feel different. Organisations will either move towards shared space options or redesign their current office layouts to allow for more collaboration and socialisation. Cubicles and closed offices will be a thing of the past.

Here at Riverbed, it’s a fundamental commitment to our people that we’ll balance the extraordinary work we do with their lives. Work life after lockdown may just be another way in which we can fulfill that promise to the exceptional people who work for us.

If you’d like to learn more about working at Riverbed, including current roles, visit our website.

Navigating the Lockdown Part 2: How Traditional Onboarding Has Changed https://www.riverbed.com/blogs/navigating-the-lockdown-onboarding-new-employees/ Thu, 17 Dec 2020 02:04:08 +0000 /?p=16335 In the second of a series of HR-focused blogs on Navigating the Lockdown, Riverbed’s Technical Recruiter for APJ, Mahesh Thyagaraj, looks at how onboarding new employees has evolved.

All organisations are currently facing unique challenges in their workplaces due to the outbreak of COVID-19. This said, it is critical that we continue to support and manage all new hires as normally and consistently as possible when they join the Riverbed family.

Our traditional onboarding process had new employees participate in a series of in-person meetings with HR, managers, leadership and team members to build their first impression of the company and its culture. Since March 2020, however, like many other businesses, Riverbed has had to onboard its new hires virtually. As a result, we’ve made a huge shift in our processes to adapt.

Going virtual

In order to successfully onboard new employees remotely, we pre-planned the virtual experience, making note of all the people they should meet, the tools and equipment required and the experiences each new employee must go through in order to fast-track their ramp up.

First, we ensured they had the hardware, software and information resources they'd need on Day 1 by asking our IT team to set everything up in advance and deliver the equipment to the new employee's home office. As soon as they're on board, we make sure they understand how to use essential communication tools, online meeting solutions and file-sharing applications. We also brief them on who to go to with their different questions, and how best to contact those individuals whilst we're all working remotely.

By preparing in advance, we can share our plan with the new employee and give them full visibility of their schedule for the first few weeks. We created a comprehensive resource page for new hires to access information on whatever they may need as they settle into working remotely in their new role at Riverbed.

We have numerous virtual social gatherings and the first port of call is to ensure that our new hires are added into these social groups so that they can get to know their colleagues on a more personal level.

Getting into the culture

As a new employee, understanding who you'll be working with on a daily basis and how to develop those relationships is critical. We have worked hard to adapt our onboarding processes to allow strong bonds to develop within teams, despite lockdown conditions.

Each new hire is made aware of their team culture through department-specific onboarding discussions about values and expectations, including links to our employee handbooks and company policies and procedures. Their manager will brief them on their new job responsibilities and discuss their learning and development plan, and they'll have regular virtual meetings with the rest of their team so they can feel comfortable with their colleagues and become a part of the Riverbed family!

Managing under lockdown

As our new employees settle into their daily routine, frequent catch-up calls are scheduled by their managers and colleagues. These calls keep managers apprised of how their new team member is settling in and make them aware of any help they may need. During these calls, managers check in to understand what their new employee needs to be successful in their new role, whether that's support, resources, or additional work, and ensure that they provide for these needs. Each employee has different needs, and being attentive to them is our top priority as we onboard our new hires.

Managers set specific goals and expectations for their new hires outlining short and long-term goals and scheduling 1:1 meetings to discuss upcoming tasks and resolve potential concerns.

Help is at hand

All new Riverbed employees are assigned a Riverbuddy; we believe a supportive, caring and helpful culture is very beneficial. Providing a Riverbuddy to new employees helps them to settle in quickly and gives them someone to go to no matter what help they need or questions they have.

COVID-19 has created a uniquely challenging time for anyone starting a new job. That’s why we’ve taken all the measures we can to ensure a smooth onboarding for all new hires. Our aim is to induct newbies into the Riverbed family with a warm and informative virtual welcome and have them thriving in their new roles as quickly as possible!

What they say

The success of any new process lies in its implementation, and we're delighted to have had some encouraging feedback! Here are some testimonials from our new employees.

“Onboarding is both an exciting and an anxious period, especially when the whole world is going through a pandemic. But, from the very beginning of my journey with Riverbed, everyone has made me feel welcome.” 

“From the time of my interview to virtual onboarding and finally understanding the workflow of the organisation … The whole Riverbed experience has been amazing and I wholeheartedly thank each and every one who made it easy for me to join the organisation virtually from the comfort of my own home!”  

“The management architecture of Riverbed is clean and smooth. Team coordination is good and transparent. I wasn’t sure how the whole process of hiring could be done virtually, however the interaction and support from all the departments made it easy for me.” 

“Training and induction were organised in a well-planned manner and done online. I was introduced to the complete team online and received a very warm welcome. Since then I’ve had a number of conversations with everyone whether during training, team meetings or case troubleshooting help. My Manager, HR and team have been in continuous contact with me and provided all the support and guidance required.”

If you’d like to learn more about working at Riverbed, including current roles, visit our website.

]]>
Speed CAD File Downloads & Uploads for Your WFH Professionals https://www.riverbed.com/blogs/speed-cad-file-downloads-uploads-wfh-professionals/ Tue, 01 Dec 2020 13:25:44 +0000 /?p=16246 Why design professionals at architectural, engineering, construction and related firms struggle with running CAD applications remotely—and how to help them, fast.

Of the organizations I've worked with over the past few months, many of those experiencing particular difficulty with people having to work at home come from the architectural, engineering, construction (AEC) and related sectors. With teams typically collaborating on and sharing large Computer-Aided Design (CAD) files all day long, network performance is business-critical.

The WFH factor

When staff work in the office, local network connections are closely monitored for reliability and performance. But once large numbers of staff were forced to work out of their homes early this year, productivity dropped due to unpredictable 'last mile' connections. Many professionals are also sharing a single internet link with housemates, working partners and homeschooling children. These factors can significantly increase the time it takes to download and upload large CAD files.

This, in turn, has a clear connection to their firms’ profitability and reputation. Slower project delivery time means reduced margins—because labor costs are increased by the reduced productivity of highly paid employees or contractors. Deadlines are missed, clients are unimpressed, and repeat business becomes less certain.

Besides the business risks, having skilled professionals 'watching paint dry' as they wait for their work to cross to and from their data center, or those of business partners, is frustrating and demotivating.

So what’s the solution?

The performance of ‘heavy’ CAD data for users working from home is dependent on three factors: network congestion, network latency and network unpredictability. Removing these inhibitors is the way to give users a great experience—wherever they are working.

Heavy design data can really clog networks and, depending on the distance between the CAD application server and the user, latency can significantly impact application behavior. Add the unpredictability when every user is working over unique last-mile conditions, and all of these elements can really slow things down.

Riverbed Client Accelerator software on user laptops, combined with Riverbed SteelHead on the application server side, significantly speeds up these workflows: tasks that users are accustomed to performing in the office complete in seconds or minutes rather than hours. Essentially, some of the 'interesting behavior' that negatively impacts application performance over networks is eliminated, and users experience CAD as if the application were local.

When remote users are running Client Accelerator:

  1. Network congestion is reduced by eliminating up to 90% of the data that travels back and forth between the user's device and the CAD application in your data center (or beyond, to the cloud, if you're using SaaS). See the back-of-envelope sketch after this list.
  2. Network latency is mitigated to improve application performance by up to 33 times.
  3. Network unpredictability—not the least of challenges for WFH professionals—is thus reduced over last-mile connections.
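
To make the data-reduction arithmetic concrete, here's a back-of-envelope sketch in Python. Every figure is hypothetical, chosen only to illustrate the math; these are not measured Riverbed results.

    # Hypothetical numbers: a large CAD file over a typical home broadband link.
    file_mb = 500             # CAD assembly size in MB (assumed)
    link_mbps = 20            # usable home broadband throughput (assumed)
    dedup_ratio = 0.90        # fraction of bytes served from the local data store
                              # (the "up to 90%" figure above; actual results vary)

    naive_s = file_mb * 8 / link_mbps                           # no acceleration
    accelerated_s = file_mb * (1 - dedup_ratio) * 8 / link_mbps

    print(f"Unoptimized transfer:    ~{naive_s:.0f} s")         # ~200 s
    print(f"With 90% data reduction: ~{accelerated_s:.0f} s")   # ~20 s

Note that this sketch models only the bandwidth term; latency mitigation (item 2 above) compounds on top of it, because far fewer round trips cross the last mile.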

Fortunately, Client Accelerator is a relatively simple and fast solution to trial and then roll out to users for rapid results.

CAD file acceleration in action

One firm that has dramatically improved network performance for its WFH staff is US-based Landform Professional Services. The multi-disciplinary consulting firm delivers integrated site design services including civil engineering, landscape architecture, planning, urban design, and land surveying.

Wanting to enable staff to open large CAD files from home and remote sites as quickly as they do in its office, Landform deployed Riverbed Client Accelerator and experienced immediate improvements. After deploying a proof of concept in just 30 minutes, it experienced 92% faster opening of large CAD files from home (from 20 minutes down to 90 seconds). This resulted in daily savings of up to 2-3 hours per employee—achieving ROI in less than a month.

Another business benefiting from optimized performance for remote workers is British firm, Hilson Moran. With more than 250 people in five offices across the UK and the Middle East, this engineering consultancy plans, designs, manages and operates built assets for a range of clients.

Using Riverbed Client Accelerator (formerly called SteelHead Mobile) it has improved network efficiency by as much as 80%, encouraging greater team collaboration through productive remote working. According to Hilson Moran CFO Roger Waters-Duke, “staff can work out of the office, on-site, with a client. We can move data faster and with more resilience… even in locations with thin broadband.”

If you’d like to learn more and take a 60-day trial of Riverbed Client Accelerator, visit our website.

]]>
5 Key Takeaways from Riverbed’s Global User Conference https://www.riverbed.com/blogs/5-key-takeaways-from-riverbed-global-user-conference/ Thu, 19 Nov 2020 20:59:33 +0000 /?p=16223 Wow—what an event! In keeping with current norms of social distancing and remote work, Riverbed held its first virtual global user conference, and what a day it was! I want to thank everyone who attended and participated. It’s your interaction and knowledge sharing that made the conference a success. Additionally, I need to remind all attendees—and those who didn’t register—that more than 30 sessions and keynote replays from our conference are available on-demand, here.

The Riverbed Global User Conference theme “Maximizing Performance and Visibility for Any User, Network, App, Anywhere” couldn’t have been more timely given the impact the recent pandemic has had on organizations of every kind. With the reality of a work-from-anywhere world, and with it the expanding complexity of the network, the challenges IT teams are facing in our modern, digital era have never been greater.

Uniting subject matter experts from across many domains who all share a passion for delivering the best user experience and productivity for their constituencies was core to the design of this conference. The SMEs took center stage and didn't fail to deliver. Sharing the latest knowledge of how you develop a holistic, end-to-end view of exactly what's happening from the user client—across hybrid networks—and to the cloud was foundational to truly understanding where we are now and where we must go. Building on that knowledge with AI and machine learning to provide actionable insights and forward visibility has become absolutely critical. Leveraging all of the above and providing guidance on how to apply numerous innovations in network and application acceleration—from client to cloud—brought the conversations full circle.

The day-long event was abuzz with energy, but there are five key takeaways that should be reinforced:

  1. Every business needs to accelerate its digital journey. Digital transformations were already well underway in 2019. With the new year came COVID-19, a global pandemic that would challenge our norms and shift digital transformation into overdrive, overnight. IT teams pivoted quickly and responded in herculean fashion, and there's no turning back. The path forward for every operating model—from supply chains to service delivery to crisis management—is clearly to be led by digital transformation.
  2. Hybrid and multi-cloud adoption continue to grow rapidly. Collaboration tools and SaaS app usage are up significantly, including Office 365, Microsoft Teams, Zoom, and Slack. For example, Teams grew 70% to 75 million daily users in April alone. The shift to the cloud continues as organizations move more workloads to IaaS and further adopt SaaS apps.
  3. With remote work growing at 50%+ on the heels of the pandemic, flexible work environments are the future. The 'global experiment' has made business leaders more comfortable with remote work (95%), and 80% of employees expect more work-from-anywhere flexibility. Overnight, nearly 1.1 billion people were working from home (up from ~350 million in 2019), and more than half of them won't return to the office when the pandemic subsides.
  4. End-to-end visibility is required for our work-from-anywhere reality. Digital transformation, the expanding workplace and explosive growth in cloud/SaaS have only magnified the complexity of today’s network and extended the visibility challenge for IT organizations. And, the old adage has never been more appropriate: you can’t control what you don’t see. End-to-end visibility is more critical than ever. Riverbed has helped many of our customers through this time with the tools to provide better visibility across modern, complex networks. View any of the 15 sessions focused on Network and Application Visibility—ranging from a deep dive on packets and flows to machine learning and AI for NetOps and Troubleshooting—on demand, now!
  5. Performance must extend from the user client through the network and to the cloud—regardless of your team members’ location. Delivering optimization across the modern network includes acceleration for all environments and infrastructure—on-prem, cloud, SaaS, mobile/client—regardless of where the user is located. Riverbed saw 3x growth initially on Riverbed Client Accelerator, which boosts app performance on laptops for remote workers and approximately 100% QoQ growth in Q3 for our application acceleration solutions that include Client Accelerator and Riverbed SaaS Accelerator, which boosts performance of SaaS apps like O365, Microsoft Teams, Salesforce and ServiceNow. You can view any of the 10 sessions focused on Network and Application Performance—ranging from optimizing encrypted SSL traffic to best practices for O365 acceleration—on demand, now!

The Riverbed Global User Conference clearly reinforced just how valuable our peer experts can be (make sure to view the many great success stories customers shared during the event) and also how passionate the IT community is with respect to providing the best user experience and productivity for their teams.

This is exactly where Riverbed’s been focused for years, with you, and even more so today. Partner with us to help your business or government agency maximize visibility and performance—for any user, network, application. Anywhere your users reside.

]]>
Maximizing Visibility & Performance for a Work-From-Anywhere World https://www.riverbed.com/blogs/maximizing-visibility-performance-for-a-work-from-anywhere-world/ Thu, 05 Nov 2020 14:29:11 +0000 /?p=16146 When I joined Riverbed in October 2019, one of my first priorities was to present a clear vision and strategy for the company going into 2020. And after weeks of learning and listening to our customers, partners and employees, the value Riverbed brings to the market and our mission became evident—we exist to help our customers deliver exceptional visibility and performance for any network, any application, to all users, anywhere.

Now as most IT professionals will attest, that’s not an easy thing to do, especially in complex hybrid cloud environments. But that’s where Riverbed has always proven to be most valuable—in the world’s largest, most sophisticated networks—helping IT teams effectively manage diverse and distributed infrastructure, multiple clouds and third-party services.

Fast forward to March 2020 when the global pandemic caused many executives, including myself, to revisit well-planned strategies for the year. For Riverbed, this meant pivoting all our efforts to helping our customers quickly scale work-from-home models with application acceleration and network performance management solutions that kept remote workers productive and networks running and secure. For our customers, it meant accelerating digital initiatives like never before. Projects that would have taken years to execute were completed in weeks or months as IT organizations worked to support 1 billion employees suddenly working from home.

With urgent needs met, our customers are looking ahead to a future that is increasingly hybrid—both from an IT infrastructure and workplace perspective. As a result of the pandemic, 61% of CIOs are fast-tracking digital transformation efforts[1] and 59% of enterprises are accelerating adoption of cloud services[2] and MPLS alternatives. In addition, 74% of companies plan to expand the number of remote workers[3], creating hybrid workplaces where employees will split their time between the office and working remotely. And regardless of location, these employees will require the very best network and application performance to do their jobs.

In this hybrid network, hybrid workforce environment, the same IT professionals who navigated their organizations through the initial waves of COVID-19 disruption are being called upon again—this time to lead the most critical priorities that have emerged since the crisis. These priorities, which are all IT dependent, include accelerating digitization, enabling work-from-anywhere models, and strengthening operational resilience.

With all eyes on IT leaders and their teams to deliver against these priorities, it’s absolutely vital that they have the tools they need to succeed. First and foremost, they need end-to-end visibility—from the client, to the network, to the application, to the cloud—because it’s impossible to manage what isn’t measured or control what can’t be seen.

Visibility provides insight into where performance and security problems exist, what they are, when they occurred and why. Insight, in turn, informs action. As issues are uncovered, IT teams need to be able to quickly apply network changes, including optimization and acceleration, exactly where it’s needed to improve application performance, bolster security and ensure end-user satisfaction.

With these capabilities in place, IT teams are better equipped to deliver the quality of service, resiliency and innovation their organizations, customers and end users expect. And in doing so, technology leaders will take their rightful seat at the table, alongside other business leaders who are empowered to make strategic decisions and effect change for their organizations. Because never before has the technology strategy and execution of the IT organization been so closely linked to the productivity and performance of organizations as a whole.

These are unprecedented times. But, I believe the value of Riverbed and the mission we set forth prior to the pandemic remains true and ever more relevant to our customers as they enter 2021 and beyond. If you are interested in learning how we help organizations deliver exceptional visibility and performance for any network, any application, to all users, anywhere, you’ll find more than 30 sessions and keynote replays from our Riverbed User Conference here. I hope you take advantage of one or more of these sessions offered to position yourself and your organization for future success. It will be time well spent.

[1] IDG Research: CIO COVID-19 Impact Study, April 2020

[2] Flexera 2020 State of the Cloud Report

[3] Gartner: COVID-19 Bulletin, Executive Pulse, 3 April 2020

]]>
Strengthening Operational Resilience: A Crucial Goal for Surviving the Next Threat https://www.riverbed.com/blogs/strengthening-operational-resilience/ Fri, 23 Oct 2020 00:10:07 +0000 /?p=16110 It’s an interesting moment in time, to say the least. Across industries, every company is looking inward at their own operations to determine how they can weather this period in history. At the same time, they’re looking outward at how they can support their customers in doing the same.

The reality is that in a changed world, we all have to look at our businesses differently. That’s why so many organizational leaders are rethinking their priorities to focus on what’s critical to maintaining both short- and long-term relevance: accelerating digital transformation, enabling work-from-anywhere models, and strengthening operational resilience.

If you try to run your business with its pre-pandemic focus and cadence, you’ll miss big. At best, the results will be off the mark and at worst, they’ll prove disastrous for your company, your customers and your employees. This is the time to be incredibly proactive in analyzing and addressing the operational challenges that are unique to your business in a pandemic landscape.

But it’s not solely about weathering this particular storm. According to global consulting firm PwC, the definition of operational resilience is “an organization’s ability to protect and sustain the core business services that are key for its clients, both during business as usual and when experiencing operational stress or disruption.” So it’s clear that business and IT leaders need to look with a long eye to a horizon that may have other pandemic-level disruptions and ask, “Do we have what it takes to survive the next big hit?”

This is about seizing the opportunity now to build operational resilience in real time to address this current crisis—and then evolve that resilience to keep your organization strong and flexible enough to absorb external shocks and keep on going.

At Riverbed, we’ve seen the interest in operational resiliency firsthand. As companies went from workforces tightly clustered in physical offices to a far-flung, work-from-anywhere model, the sudden hit to IT visibility into application and network performance was unnerving and unproductive. How were critical apps and systems running? Could employees connect with business-critical apps when and where they needed them? How were these applications performing across the network? Could there be a better experience? Could IT departments understand security threats to the network—or network performance at all, especially with the workforce going remote?

We’re fortunate that our innovations help customers stay ready and able to deliver their own innovative products and services. When you’re trying to keep things moving in a crisis, it’s important that employees are able to work efficiently using applications in complex hybrid environments. For example, that’s where our ability to deliver ten times the acceleration for SaaS applications is essential.

Our real-time visibility tools make it possible to understand network and application performance and resource utilization across these complex hybrid cloud environments. The way we manage network performance blends telemetry from every packet, flow, and device, in context, with machine learning, AI-powered analytics and visualization to ensure action can be taken. This is how IT teams get to the bottom of issues faster, detect security threats before they become catastrophes, and automate remediation.

Moving forward, operational resilience will increasingly become a differentiator for companies large and small. Customers want reassurance that when disaster strikes, the companies they choose to engage with can still deliver on their commitments—from delivering products that inspire to helping them troubleshoot and solve problems to developing new services that address emergent needs.

If the pandemic has taught us anything, it’s that operational resiliency is paramount. And, this was certainly validated at the Riverbed Global User Conference, where more than 1,000 attendees gathered virtually to discuss every angle of operational resilience and more. If you were unable to attend the event, we’ve compiled more than 30 sessions and keynote replays from our conference to give you the essential capabilities and how-to advice needed to maximize performance and visibility of any network for any application to all users, anywhere. Register to access the full library of content, here.

]]>
Enabling Work-From-Anywhere Models https://www.riverbed.com/blogs/enabling-work-from-anywhere-models/ Thu, 15 Oct 2020 19:39:49 +0000 /?p=16062 In Part 1 and Part 2 of this blog series, we established that forward-thinking organizations are prioritizing technology investments to ensure business growth and long-term relevance. This includes getting prepared—and fast—to enable work-from-anywhere models.

The concept of remote work is not new. Yet, according to Riverbed's Future of Work survey, 69% of business leaders said they were not completely prepared to support extensive remote work at the start of the COVID-19 outbreak. And technology performance issues amongst their remote workers impacted both individuals and the business as a whole through reduced employee productivity (37%); increased anxiety (36%); and increased difficulty engaging with customers (34%). As a result, 61% of business leaders plan to make investments over the next year to enhance remote work performance.

These are wise investments, given the widely held belief that the office of the future will be increasingly hybrid and distributed. As employees become more comfortable in the post-pandemic world and begin to move about, work-from-home will transition to a work-from-anywhere model. Employers also realize there are benefits to remote work—cost savings, employee retention, talent acquisition—to name a few. In fact, many leading brands, including Twitter, Square and Nationwide, are already paving the way by expanding their remote work policies and/or extending them "forever."

CIOs and their teams are at the heart of helping their organizations enable work-from-anywhere models. But for remote employees, the unpredictability of network and application performance dramatically increases. They face unique issues—poor network stability and saturated local connections due to simultaneous access of bandwidth-intensive collaboration apps like video streaming and large file sharing—all of which negatively impact workforce productivity. Riverbed helps enterprises address these productivity challenges, maximizing application performance through massive data reduction and latency mitigation. Workforces can stay productive anywhere, anytime with fast, consistent, and available applications they need to get work done.

In a highly distributed world, cross-domain visibility of the expanded network is a must for security and resiliency. This requires a network performance management (NPM) solution that captures telemetry from every packet, flow, and device in context and then leverages machine learning, AI analytics and visualization to empower action. This gives organizations the control they need to enable work-from-anywhere models and to proactively identify and quickly troubleshoot network and application performance and security problems.

Anywhere can be an office, but only with the right technology. If you are interested in learning how we help organizations deliver exceptional visibility and performance for any network, any application, to all users, anywhere, you’ll find a wealth of information at our global user conference site. Don’t miss this opportunity to take advantage of more than 30 sessions and keynote replays offered to position yourself and your organization for future success.

]]>
8 Keys to Choosing an Ideal NPM Solution https://www.riverbed.com/blogs/8-keys-to-choosing-an-ideal-npm-solution/ Thu, 15 Oct 2020 14:15:00 +0000 /?p=15949 I’m sure you’ll agree that cloud environments and new application architectures have drastically evolved over the past five years. With this evolution, network performance management (NPM) and application performance management (APM) solutions are pushed to the limits. Application migration from on-premises to the cloud, the popularity of SaaS applications, and the transition from virtual environments to containers have all contributed to fundamental and profound changes. As a result, there are significant blind spots that make it extremely challenging for IT teams to effectively monitor and manage the holistic hybrid infrastructure.

Complicating the lives of IT operations teams further, their responsibilities now reach far beyond the corporate network boundaries. We have all witnessed an unprecedented shift to remote work as a result of the pandemic, extending the responsibilities of IT operations well into home and work-from-anywhere environments. Not only do IT teams have to grapple with performance issues, they have to deal with increased security vulnerabilities as cyber attackers step up their game against vulnerable home-office safeguards. Ensuring remote workers remain productive while keeping corporate data and applications secure is imperative. Yet, Digital Enterprise Journal found that it takes 197 days on average just to identify that a security breach has occurred. That is too long for operations to fly blind.

8 keys to selecting the best NPM solution 

In this new norm, how do business and IT leaders ensure their organizations operate at peak performance? To start, they should consider these 8 keys to providing NetOps and SecOps teams with an ideal NPM solution:

  1. Monitor digital experiences beyond the network
  2. Integrate packets, flow and device metrics
  3. Proactively alert NetOps before users notice
  4. Auto discover applications
  5. Map application and network dependencies
  6. Enable NetOps and SecOps with common datasets
  7. Provide insights into end-user experience
  8. Gain enterprise-wide visibility

As you develop your short list of potential NPM providers, download The Essential Network Monitoring Solution Checklist and be sure to evaluate Riverbed’s Unified Network Performance Management solution. With Riverbed, you can:

  • Understand how network performance and security threats impact business initiatives
  • Proactively detect and fix network performance and security problems
  • Remove cloud and hybrid infrastructure blind spots
  • Eliminate the finger pointing among operations teams

What criteria do you use to choose your NPM solution? Share your thoughts in the comments below.

_______________

References:

  1. Digital Enterprise Journal “19 key areas shaping IT performance markets in 2020” — Apr 22, 2020

]]>
Enterprise SD-WAN Trade-Offs Part 4: User Experience vs. Security https://www.riverbed.com/blogs/enterprise-sdwan-tradeoffs-user-experience-versus-security/ Tue, 13 Oct 2020 12:30:00 +0000 /?p=15907 Is it possible to meet user expectations and maintain SD-WAN security?

One benefit of SD-WAN is that it makes it easy to steer certain traffic from remote sites toward your on-premises data centers and steer other traffic from remote sites directly to the Internet. Once selective traffic steering is made easy, there’s less of a reason to backhaul Internet-bound traffic from remote sites through your data center. Doing so only adds latency between users and their Internet-hosted apps and adds unnecessary traffic on your network. Instead, steer Internet-bound traffic directly from the branch to the Internet. Less latency. Less overall network traffic. Better performance. There’s a catch, however.
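
To make selective traffic steering concrete, here's a minimal, hypothetical sketch of an app-based steering policy in Python. The app names, path labels and structure are invented for illustration; real SD-WAN controllers express this through their own policy models.

    # Hypothetical app-based steering table (all names invented for illustration):
    STEERING_POLICY = {
        "office365": "direct-internet",   # trusted SaaS: break out at the branch
        "salesforce": "direct-internet",
        "internal-erp": "backhaul-dc",    # private app: stay on the overlay to the DC
    }
    DEFAULT_PATH = "backhaul-dc"          # unknown traffic stays behind central security

    def steer(app_id: str) -> str:
        """Return the path decision for a classified application flow."""
        return STEERING_POLICY.get(app_id, DEFAULT_PATH)

    print(steer("office365"))     # direct-internet: lower latency, no backhaul
    print(steer("unknown-app"))   # backhaul-dc: inspected centrally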

The problem is that steering traffic directly from the branch to the Internet brings with it the cost of an expanded threat perimeter for your network. You've traded network security for app performance. In order to navigate this trade-off, let's investigate the following:

  • What are the best ways to effectively protect the edges of my network without breaking the bank?
  • What if I have to continue backhauling Internet-bound traffic (e.g. due to regulatory compliance or corporate policy)?
  • Is there a way to overcome the negative effects of higher latency that may arise?

Protect the edges of your network without breaking the bank

A decision about which security solution(s) to use is a critical one for an IT department—and one which is rarely met with casual points of view. First of all, when considering network security services as part of an SD-WAN transformation, start by making sure your SD-WAN solution has you covered regardless of the path you choose. Namely…

  • Your SD-WAN solution should make it easy to service chain with 3rd party security services, AND
  • Your SD-WAN should offer a set of native security functions out of the box

Let’s double click on each of those statements to further explore why it’s important and what to look for in each.

Your SD-WAN security should make it easy to service chain with third-party security services

It's important that your SD-WAN solution does not require you to abandon the use of security services from vendors that are already in use and trusted within your organization. It's typical (and recommended) that an SD-WAN transformation project be done in collaboration with the IT security team, a critical stakeholder. You want to offload Internet-bound traffic at the source, near the user; they may see that as throwing a bomb into their traditional approach to security, which seeks to limit the number of access points to the big bad Internet.

As a starting point, look for an SD-WAN solution that enables the network team to meet your security team. Be mindful of the following:

Does the SD-WAN solution integrate with ANY other third-party security vendor products?

You'll find with basic SD-WAN solutions, as well as those offered by vendors who began life as network security vendors, that there's little choice about which security solutions integrate well with the SD-WAN functions. This is obviously the least desirable scenario.

Does the SD-WAN solution integrate with a specific but limited number of third-party security vendor products?

Obviously, this is better than nothing but only works well if the integration includes support for the security vendor required by your security team.

Does the SD-WAN solution provide third-party security service chaining in a one-box configuration?

As you evaluate different SD-WAN offerings, this is what really separates the wheat from the chaff. Very few SD-WAN solutions provide one-box service chaining supporting the integration of virtual instances of third-party security services. This can make a big difference in both the capital and operational cost of managing the edge of your network. Multiply the number of boxes in each site by the total number of sites and the numbers can get really big, really fast.
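
As a quick illustration of that multiplication, with invented figures:

    # Hypothetical fleet: 400 branches, 3 security functions per branch.
    sites = 400
    separate_boxes = sites * 3   # SD-WAN edge + firewall + IPS as discrete appliances
    one_box = sites * 1          # the same functions service-chained on a single box
    print(separate_boxes, "devices vs", one_box)   # 1200 devices vs 400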

Your SD-WAN security should offer native security functions out of the box

While it's often wise and pragmatic to first focus on integration with third-party security functions (e.g. from a vendor your security team already knows and/or uses), there's an opportunity to further reduce total costs by leveraging native security functions provided by your SD-WAN solution out of the box. Look for SD-WAN solutions that provide a complete set of capabilities to maximize your savings, including:

  • Next-Gen Firewall
  • Next-Gen IPS/IDS
  • Malware Protection
  • Antivirus Protection
  • Unified Threat Management

Deliver exceptional user experience for backhauled Internet traffic

While SD-WAN may unlock new opportunities to steer Internet-bound traffic from remote sites directly to the Internet, bypassing any backhaul to a centralized data center or hub, it’s unlikely this will happen all at once for all traffic types. It’s more likely that many sites will continue to backhaul for some time (e.g. those that haven’t yet migrated to SD-WAN). Even once a site has migrated to SD-WAN, it’s likely that certain Internet-bound traffic will continue to be backhauled. For example, a business application delivered via SaaS may be more trustworthy than recreational Internet traffic. In this case, it’s prudent to keep backhauling all Internet-bound traffic except for a specific whitelist of apps that are steered directly from the branch to the Internet.

Every site and/or app that leverages backhauling will continue to face higher latency vs. direct steering from the branch. And if the backhauled traffic is traversing conventional circuits (e.g. MPLS), you may be facing bandwidth constraints as well.

Your SD-WAN solution should overcome high latency and limited bandwidth for backhauled traffic

Most SD-WAN solutions use app-centric policies to determine when Internet-bound packets are steered directly from branch to the Internet or backhauled. But, once the packets are placed on the network, the user’s experience is entirely determined by circuit conditions of the chosen path.

Look for an SD-WAN solution that offers WAN optimization and app acceleration services, especially for SaaS and cloud-hosted apps.

SD-WAN security and user experience should not be a trade-off

As you modernize your WAN, you will face trade-offs between network security and user experience / app performance. There’s no question about that. However, you can break through these trade-offs so long as your SD-WAN solution provides the right set of capabilities. Ensure your solution supports: (i) extensible service chaining, (ii) advanced native security functions and (iii) app acceleration for SaaS/cloud-based apps.

With those capabilities in hand, you'll have the freedom to transform your WAN over time. You can maintain SD-WAN security requirements AND meet user expectations for fast and reliable app performance.

Resources:

  • You can find an SD-WAN solution that provides all of the functions described in this blog post here.
  • This blog is part of a broader series on breaking through important trade-offs you’ll encounter while modernizing your network with SD-WAN.
  • Learn more about the differences between SD-WAN and WAN optimization.
]]>
Accelerating Digital Transformation: The Race for Relevance https://www.riverbed.com/blogs/accelerating-digital-transformation/ Thu, 08 Oct 2020 14:01:01 +0000 /?p=16017 As established in Blog 1 of this series, three critical CEO priorities have emerged as a result of the pandemic. At the top of that list is accelerating digital transformation.

It’s not a secret that COVID-19 disrupted the very carefully planned digital transformation trajectory most companies were on. CIOs and internal technical organizations had mapped out a steady-state pace of investments in cloud services from IaaS to SaaS to PaaS while simultaneously adopting mobile capabilities and exploring technologies like artificial intelligence, machine learning, Internet of Things and Big Data to drive digital innovation. But the pandemic shifted dollars from longer-term priorities to address immediate needs: getting employees up and running, securely and productively, in their home environments. This became—and continues to be—about the race for relevance, the effort to remain competitive no matter the circumstances.

Today, across industries, businesses have adapted and the universal hope is that the pandemic environment will end quickly. But the reality is that the timing around the pandemic’s conclusion is unpredictable and the ripple effects will last much longer. In the meantime, there’s no path to a clean network transition that encompasses thousands of “sites” (employees’ homes) that are not on company-owned networks. Hence, the pressure most leaders feel to get at least some of their workforces back in the office when it’s safe to do so.

Think of the enterprise-owned premises as a castle; IT knows how to support and protect its inhabitants as long as they’re behind the moat and thick castle walls. But send them back to the village into their own places and IT’s typical mechanisms for support and protection no longer work. In a pandemic world, IT teams are blind. They can’t ensure consistency of experience and security also becomes much more difficult. Compounding that problem, there’s still a desire for digital transformation but that transition is not in the hands of a single group or person. Any major shifts that require buy-in from multiple stakeholders are inherently a slower proposition.

However, digital technologies can provide an immediate reprieve, solving the problems of today while company leadership sorts out the priorities and timelines for tomorrow. Many of our customers, for example, are turning to Client Accelerator with SaaS Accelerator to optimize the performance of critical productivity apps such as O365 for users anywhere, even when the traffic origination point is in the control of third parties. We continue to see the value such solutions have in sustaining remote workforce productivity and quality of experience.

We also see continued value in foundational capabilities every IT organization must have to support digitization. This includes next-generation, software-defined networks and most importantly, unified visibility and real-time insights into IT infrastructure—every packet, flow and device—that comprise an experience for the end user. Knowing the good and the bad as they happen is critical for CIOs and IT organizations to either stay the course or course correct as need be.

It’s clear that in a world where the vast majority of interactions are now virtual, there is an acute and immediate need to fast-track digitization to not only survive the crisis but to ensure long-term relevance. Companies need to select for forward momentum in every technical decision, policy, and purchase that’s made. Even those actions taken for the short term should still be evaluated against one primary metric: How does this accelerate our longer-term digital transformation efforts?

Riverbed is laser-focused on delivering the innovations that help companies generate real, lasting momentum on their digital transformation journey. We’ve compiled more than 30 sessions and keynote replays from our Riverbed User Conference to give you the essential capabilities and how-to advice needed to maximize performance and visibility of any network for any application to all users, anywhere. Register to access the full library of content, here.

]]>
Enterprise SD-WAN Trade-Offs Part 3: Cost vs. Performance https://www.riverbed.com/blogs/enterprise-sdwan-tradeoffs-cost-versus-performance/ Thu, 08 Oct 2020 12:30:00 +0000 /?p=15847 SD-WAN makes it easy to incorporate less-costly bandwidth options like Internet Broadband and LTE at remote locations. What are the performance-related SD-WAN trade-offs to consider? Here’s a question: What is the increase in capacity going to do to your app performance? In this third part of the Enterprise SD-WAN Trade-Offs blog series, we will examine the factors you should consider when incorporating inexpensive bandwidth options.

You might be thinking, “Wait! Doesn’t more capacity always equate to better app performance?” Well, like most things in life, it depends.

The reality is that more WAN capacity can lead to any range of possible effects concerning app performance:

  1. More WAN capacity could yield NO DIFFERENCE to app performance, or…
  2. More WAN capacity could make app performance BETTER, or…
  3. More WAN capacity could even make performance WORSE!

It all depends on the underlying bottleneck which is limiting app performance in the first place. If you don’t know the situation you’re in, you may be surprised to find your app performance is no better—or is even worse—with higher capacity bandwidth circuits in place.

SD-WAN trade-offs: performance factors to consider

There are three key bottlenecks to be aware of as well as how they map to the results mentioned above:

  • High Network Latency: more capacity will yield NO DIFFERENCE.
  • Low WAN Capacity: more capacity will make app performance BETTER.
  • Poor Link Quality: more capacity of lower quality can make performance WORSE.

Note that when it comes to maximizing your application performance, it’s an iterative process. You need to identify the current bottleneck, apply the appropriate remedy and then repeat the same process over again. As one bottleneck is alleviated, a different one may emerge. This means that you need to have a solution with a full complement of capabilities to overcome each bottleneck along the way.

Here’s a very common example: Let’s say that you’re dealing with the performance of large file transfers across your WAN using a file-sharing protocol like Microsoft CIFS/SMB. Each of the bottlenecks above can emerge, and increasing bandwidth only addresses one of the problems.

Network latency

The first factor in this SD-WAN trade-off is network latency, which inhibits the performance and throughput of the network protocols (TCP) and application protocols (CIFS/SMB). One indicator of this situation is that available WAN capacity remains unused even while the file transfer occurs.

How is latency having this impact? In the case of network protocols, the TCP stacks residing in the client and/or server operating systems are configured by default to send a maximum amount of data (in IP packets) onto the network before receiving a response that the data has been received. Only after the data is transmitted across the WAN, and an acknowledgment of its receipt is transmitted back across the WAN, will the operating system send more data onto the network. Similarly, the file-sharing application protocol (CIFS) transmits a maximum number of data "blocks" and then waits for an application-level acknowledgment before sending more.
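
The ceiling this imposes is easy to quantify with the standard window/RTT arithmetic. Here's a rough sketch, assuming a fixed 64 KB window with no window scaling, purely for illustration:

    # Max TCP throughput is roughly window_size / round_trip_time,
    # no matter how big the circuit is.
    window_bytes = 64 * 1024                 # assumed fixed 64 KB window
    for rtt_ms in (5, 50, 150):
        mbps = window_bytes * 8 / (rtt_ms / 1000) / 1e6
        print(f"RTT {rtt_ms:3d} ms -> at most ~{mbps:5.1f} Mbps")
    # RTT   5 ms -> at most ~104.9 Mbps
    # RTT  50 ms -> at most ~ 10.5 Mbps
    # RTT 150 ms -> at most ~  3.5 Mbps

At 150 ms of latency, even a gigabit circuit delivers under 4 Mbps to this single transfer, which is why adding capacity makes no difference to a latency-bound flow.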

To alleviate this bottleneck, use WAN optimization that can accelerate the performance of BOTH network AND application protocols. If only one or the other is employed, latency will continue to limit the end-to-end throughput of the file transfer.

WAN capacity

Next, WAN capacity has become fully utilized and is thereby limiting end-to-end performance. To alleviate this bottleneck, use network data compression and/or deduplication to virtually expand circuit capacity. You could also upgrade to a higher-capacity WAN circuit; however, be mindful of the following common result.
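
As a toy illustration of the deduplication idea (this is not Riverbed's algorithm, which uses far more sophisticated segmentation), imagine hashing fixed-size chunks and sending short references for any chunk both ends have already seen:

    import hashlib

    seen = {}   # chunk digest -> chunk bytes; mirrored on both ends of the link

    def dedupe_stream(stream: bytes, chunk: int = 4096):
        """Yield ('ref', digest) for known chunks, ('raw', bytes) for new ones."""
        for i in range(0, len(stream), chunk):
            block = stream[i:i + chunk]
            digest = hashlib.sha256(block).digest()
            if digest in seen:
                yield ("ref", digest)    # ~32 bytes on the wire instead of 4 KB
            else:
                seen[digest] = block
                yield ("raw", block)

    payload = b"A" * 8192 + b"B" * 4096 + b"A" * 4096
    print([kind for kind, _ in dedupe_stream(payload)])   # ['raw', 'ref', 'raw', 'ref']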

Link quality

Finally, poor link quality causes end-to-end throughput to suffer. You’ve upgraded your MPLS circuit to a higher capacity Internet Broadband circuit, but surprisingly you see end-to-end performance degrade. The percent of network packets dropped during transmission has increased. (This is often due to internal congestion of the WAN itself. Unlike MPLS circuits, which come with higher SLAs and guaranteed performance, lower cost Broadband or LTE bandwidth may be oversubscribed. Essentially, you get what you pay for.)  Such dropped packets slow down the whole machinery of your data transfer. Each dropped packet must first be detected as “lost”. It then must be resent. And finally, its acknowledgment must be received. This entire process takes time and multiple roundtrips across the WAN. And all the while, it keeps the contiguous data stream from being delivered, in order, to its recipient. 

The solution to this is to employ link conditioning, or forward-error correction (FEC), techniques such as packet duplication or multi-packet parity encoding. When these techniques are used, a sender transmits extra information (alongside the data) that the recipient can use to reconstruct one or more packets that may have been lost along the way. The use of these techniques comes with one important warning: if the underlying cause of the dropped packets was network congestion in the first place, then such techniques can further exacerbate the problem, causing more congestion, more packet drops and further reducing the experienced "quality" of the circuit. (TIP: Look for solutions that automatically and dynamically turn such techniques on and off only when required, based on real-time network conditions.)
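
Here's a minimal sketch of the single-parity flavor of multi-packet parity encoding: one XOR parity packet per group lets the receiver rebuild any one lost packet without a retransmission. Real link-conditioning implementations are adaptive and considerably richer.

    def xor_parity(packets):
        """XOR equal-length packets together to produce (or consume) a parity block."""
        parity = bytes(len(packets[0]))
        for p in packets:
            parity = bytes(a ^ b for a, b in zip(parity, p))
        return parity

    group = [b"pkt1", b"pkt2", b"pkt3"]
    parity = xor_parity(group)                       # sent alongside the data

    # Suppose the packet at index 1 is dropped in transit;
    # XOR the surviving packets with the parity to rebuild it:
    recovered = xor_parity([group[0], group[2], parity])
    assert recovered == group[1]                     # b'pkt2', no retransmit needed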

As the file-transfer example illustrates, using SD-WAN to increase WAN capacity may do nothing to improve your app performance. And if you adopt lower-quality circuits, performance can get worse.

In summary

To break through any SD-WAN trade-offs between cost and performance, make sure that your SD-WAN solution provides the following capabilities. Only then will you be able to overcome each and every bottleneck that will arise.

  • Network Protocol Acceleration (e.g. TCP/UDP)
  • Application Protocol Acceleration (e.g. CIFS/NFS/HTTP)
  • Network Data Compression
  • Network Data Deduplication
  • Dynamic Circuit Conditioning (e.g. Packet Duplication, FEC)

For more information on how to go about correctly diagnosing your current bottlenecks to app performance, and for details about an SD-WAN solution that provides all of the necessary capabilities discussed in this blog entry, check out Riverbed SteelConnect EX.

]]>
Enterprise SD-WAN Trade-Offs Part 2: the Destination vs. the Journey https://www.riverbed.com/blogs/enterprise-sdwan-tradeoffs-destination-versus-journey/ Tue, 06 Oct 2020 12:30:00 +0000 /?p=15817 Preface: COVID-19 delays SD-WAN deployments in 2020

In between the first draft of this Enterprise SD-WAN Trade-Offs blog series and the present, the COVID-19 pandemic emerged, and with it a new crop of IT requirements and a shift in priorities to support work-from-home employees. In turn, many SD-WAN adoption projects have been put on hold, and analysts have forecast that SD-WAN spending will be flat YoY in 2020[1]. However, the same analysts predict a rebound of 40% YoY growth in 2021, as enterprises reimagine and reintroduce the use of on-premises locations.

The pause button we have experienced is a perfect example of a broader truth about SD-WAN adoption: it never happens all at once, which is the focus of this blog. Keep this in mind as you reanimate SD-WAN projects that may have been temporarily put on hold. Now back to our regularly scheduled blog entry…

The journey toward successful SD-WAN adoption

We all want SD-WAN. But it’s impossible to transform the old into the new all at once. This means we have to traverse an intermediate phase—the brownfield—where some sites/circuits are managed via SD-WAN and others remain managed via conventional routers.

The difference between navigating this phase unscathed and bringing your network to a screeching halt has everything to do with the ability of your SD-WAN solution to effectively interface with your existing network and cope with its topological complexities, one-off hacks and special-case router configs that have built up over time. Those hidden network demons that have been lurking unnoticed will inevitably (thanks, Murphy!) rear their ugly heads once the transformation is underway.

This blog takes a close look at multiple phases you’ll likely encounter during SD-WAN deployment and what capabilities you’ll need in place.

The high-level and intuitive takeaway here is this: if you want to ease the migration from a legacy network to SD-WAN, it’s critical that your SD-WAN solution be as fluent in legacy routing technology (on the underlay) as it is with its own SD-WAN (in the overlay). During the transition, you’re going to have one foot in the old world and one foot in the new world. You need an SD-WAN solution that is fluent in both. From the old world, this includes capabilities such as the following:

  • Full routing stack
  • IPv6 support (overlay & underlay)
  • VRF segmentation
  • Multicast support (overlay & underlay)
  • Flexible topologies (full mesh, hub and spoke, spoke-hub-hub-spoke, hub-spoke-spoke-hub)

Here’s a closer look…

Transitionary (brownfield) phases and critical capabilities you’ll need

As you consider the phases in the table below, it’s notable that the hardest cases (on the right) are actually more common. They exist and persist as you phase in the adoption of SD-WAN at remote sites. Conversely, the easier cases (on the left) are the ones that are least common—only found at the tail end of a complete transition to SD-WAN.

[Table: comparison of various SD-WAN deployment approaches]

In closing: is SD-WAN adoption more trouble than it’s worth? 

The answer to this question is simple (and hopefully now rather obvious).   

  • If your SD-WAN solution provides the capabilities needed to successfully get you from point A to point B, then YES. Go forth planning SD-WAN adoption with the confidence that your new network and your old network can co-exist seamlessly every step of the way. 
  • However, if your SD-WAN solution doesn’t provide these critical capabilities, then BEWARE. The cost, risk and effort associated with navigating the inevitable minefield of the brownfield could decimate the benefits you were seeking from SD-WAN in the first place. 

[1] Gartner: Forecast Analysis: Enterprise Network Equipment, Worldwide (24 July 2020)

]]>
3 Critical CEO Priorities Driving Post-COVID Growth https://www.riverbed.com/blogs/priorities-driving-post-covid-growth/ Fri, 02 Oct 2020 16:17:44 +0000 /?p=15903 2020 was a year of great change and uncertainty for our customers and indeed, the entire world. Seemingly overnight, organizations have had to quickly pivot to deal with the challenges of the global pandemic and at the same time, every operating model—from supply chains and service delivery to go-to-market and crisis management—has been put to the test.

But with change and uncertainty comes opportunity. Forward-thinking business and IT leaders have already begun to reevaluate their strategies, carefully balancing the need to manage expenses during the crisis with making investments that will drive post-COVID growth and position their organizations for future advantage. According to a worldwide CIO survey, there are three critical CEO priorities that have emerged:

1. Accelerating digital transformation

Prior to the pandemic, most organizations were on a steady and carefully planned digital transformation journey. They were adopting cloud services (IaaS, SaaS, PaaS) and making investments in mobile capabilities and technologies such as AI, ML, IoT, and Big Data to spur digital innovation. But in a world where the vast majority of interactions are now virtual, there is an acute and immediate need to fast-track digitization to not only survive the crisis but to ensure long-term relevance.

2. Enabling work-from-anywhere models

While the requirement to work from home will eventually be lifted, many organizations are planning to continue and even expand remote working models. Employers have realized that there are many benefits to remote work—cost savings, employee retention, talent acquisition—and that with the right set of tools and technologies, remote workers can be just as productive as their in-office counterparts. As a result, the office of the future will be increasingly hybrid, enabling employees to work and collaborate both virtually and physically anytime, anywhere.

3. Strengthening operational resilience

Operational resilience is a new imperative for organizations that struggled to uphold acceptable service levels when the pandemic hit. Times now demand a sharp focus on ensuring critical systems, applications and infrastructure are secure, accessible and performant for all end users regardless of where they are located or how they choose to connect. Redesigning operations to be more intelligent, automated and adaptive is the only way organizations can truly prepare for future waves of disruption.

Preparing for what’s next, now

CIOs and their teams play a vital role in helping their organizations address these priorities. But to ensure success, they must overcome the challenges of insufficient visibility, unpredictable network and application performance, and expanded cybersecurity risks—all while improving their ability to be agile and resilient to ever-changing conditions.

Riverbed is on a mission to help IT teams conquer these challenges. We’ve compiled more than 30 sessions and keynote replays from our Riverbed User Conference to give you the essential capabilities and how-to advice needed to maximize performance and visibility of any network for any application to all users, anywhere. Register here to access the full library of content.

Synthetic Monitoring: A Key Tool for Hybrid Enterprises https://www.riverbed.com/blogs/synthetic-monitoring-key-tool-hybrid-enterprises/ Fri, 25 Sep 2020 12:30:00 +0000 /?p=15735 While the benefits of cloud infrastructure and applications continue to draw a growing share of enterprise IT investment, the cloud is certainly not a panacea—especially when it comes to maintaining visibility across an increasingly hybrid IT landscape.

Consider the experience of Jamie Halcomb, CIO of the U.S. Patent and Trademark Office. In a Dec 31, 2019 WSJ article, “CIOs Share Their Priorities for 2020,” Halcomb shares: “Part of my mission is to stabilize mission-critical systems and take our agile and DevSecOps practices to the next level while we move assets into the cloud.”

Halcomb seeks to increase agility while maintaining stability. But distributing applications across on-prem data centers, cloud and SaaS adds new complexity, which fundamentally makes it harder to ensure the availability and performance of these apps. One reason for this is that blind spots increase as the IT landscape becomes more hybrid and complex.

Stats showing that an increase in cloud services means a major increase in visibility gaps

And so, while digital transformation has made technology a critical part of an organization’s success, increasing service disruptions can have a profound impact on a company’s user experience, brand value and financials.

Statistics on how synthetic monitoring can help reduce service disruptions

In order to maintain a high-performing, reliable and secure network, you need a broad and complete view across IT domains—on-premises and in the cloud.  

Achieving a holistic view of your critical hybrid IT environment requires integrating multiple monitoring methods. There are two primary approaches to help you ensure availability and measure end-user experience:

  • Real user monitoring (RUM)
  • Synthetic testing/monitoring

What is real user monitoring?

Real user monitoring (RUM) measures one of the most critical metrics: actual user experience as and when users interact with their apps. RUM constantly observes the system in the background—tracking availability, functionality, responsiveness and other metrics. This approach leverages real user traffic to gauge performance.

What is synthetic testing / synthetic monitoring?

Synthetic monitoring and testing is a method used to monitor applications or infrastructure running in the cloud or in an on-premises data center by simulating users. It is an active testing method, very useful for measuring the availability and response time of critical websites, system transactions and applications. It works whether you have user traffic or not.

How does synthetic monitoring/testing work?

Synthetic monitoring, or synthetic testing, uses distributed test engines to proactively evaluate the availability and performance of your applications and web properties—even when there is no real user traffic. With synthetic monitoring, scripts or agents are deployed across the globe at key user locations to simulate the path an end user takes when accessing on-prem or cloud applications. The applications can reside anywhere—in the data center, in an IaaS cloud or as a SaaS application. As long as there is a path to the application from the testing location, synthetic testing can be used.
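To make the mechanics concrete, here is a minimal sketch of the kind of probe a synthetic test agent runs, using only the Python standard library. It is illustrative rather than Riverbed code, and the URLs and slow-response threshold are placeholders you would replace with your own applications and SLA targets.

    # Minimal synthetic HTTP probe (illustrative sketch, not Riverbed code).
    # Measures availability and response time the way a distributed test agent
    # might, and flags results that breach a simple response-time threshold.
    import time
    import urllib.request

    def probe(url, timeout=10, slow_threshold=2.0):
        start = time.monotonic()
        try:
            with urllib.request.urlopen(url, timeout=timeout) as resp:
                elapsed = time.monotonic() - start
                status = resp.status
        except Exception as exc:
            return {"url": url, "available": False, "error": str(exc)}
        return {
            "url": url,
            "available": 200 <= status < 400,
            "status": status,
            "response_time_s": round(elapsed, 3),
            "slow": elapsed > slow_threshold,
        }

    if __name__ == "__main__":
        # Hypothetical endpoints; substitute your own applications.
        for target in ["https://example.com", "https://example.org/login"]:
            print(probe(target))

Run the same probe on a schedule from agents in each key user location and you have the essence of a synthetic monitoring service; commercial platforms add multi-step transaction scripting, alerting and global test infrastructure on top.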

Benefits of synthetic monitoring for hybrid applications

  • Proactively identify issues before your users notice
  • Keep a pulse on availability and performance round the clock
  • Take monitoring where your applications go
  • Monitor complex interactions live or pre-release
  • Baseline and objectively measure application SLAs

Riverbed’s solution

Riverbed Unified NPM provides both synthetic and real user monitoring, giving you a complete view of performance from the end-user perspective. Riverbed’s synthetic testing can simulate searching (database), adding items to a cart (web application), logging in (identity validation), and more, in order to measure the performance of holistic application interactions. Riverbed NetIM, part of the NPM suite, offers a variety of synthetic tests, including Ping, DNS, TCP, LDAP, database and HTTP tests, plus external scripts for creating your own. It uses SNMP, CLI, traps, syslogs and API polling as well as synthetic testing to capture availability and performance information for network devices, servers and applications.
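For the external-script option, a custom check can be as small as a script that probes a service and signals pass or fail. The sketch below assumes a common exit-code convention (0 = pass, non-zero = fail) and a hypothetical database host; consult the NetIM documentation for the exact contract it expects.

    # Sketch of a custom external check: verify a TCP service answers within a
    # time budget. Exit code 0 = pass, 1 = fail (a common convention for
    # external test scripts; the host and port below are hypothetical).
    import socket
    import sys
    import time

    HOST, PORT, TIMEOUT = "db.example.internal", 5432, 3.0

    start = time.monotonic()
    try:
        with socket.create_connection((HOST, PORT), timeout=TIMEOUT):
            print(f"OK: {HOST}:{PORT} answered in {time.monotonic() - start:.3f}s")
            sys.exit(0)
    except OSError as exc:
        print(f"FAIL: {HOST}:{PORT} unreachable ({exc})")
        sys.exit(1)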

Are you using synthetic monitoring in your environment? How are you using it? Share your experiences in the comments below.

 


  1. Digital Enterprise Journal, “19 Key Areas Shaping IT Performance Markets in 2020,” April 22, 2020
  2. Gartner, Market Guide for Network Performance Monitoring and Diagnostics, March 5, 2020, ID G00463582
  3. Digital Enterprise Journal, March 2020
  4. Uptime Institute, Annual Outage Analysis 2020, March 2020

5 Key Benefits of Synthetic Monitoring for Modern Apps https://www.riverbed.com/blogs/5-key-benefits-of-synthetic-monitoring/ Wed, 09 Sep 2020 20:40:00 +0000 /?p=15673 Before we get into the benefits of synthetic monitoring, let’s start by defining it. Synthetic monitoring is a method used to monitor applications by simulating users. It’s different from real user monitoring, which requires user traffic and measures the actual user experience. Real user monitoring reactively identifies problems or issues after they occur, whereas synthetic monitoring proactively measures application health using synthetically generated traffic.

There are many benefits of synthetic monitoring, but here are the top five:

1. Monitor Proactively

Synthetic monitoring does not require real users in order to monitor the performance and communication health of an application. You can determine how packets flow between potential users and on-premises or cloud-hosted applications. EMA’s survey* found that 39% of all network problems are reported by end users before network operations is aware. Synthetic monitoring is the holy grail for NetOps, DevOps and SecOps—being proactive and identifying issues to fix before users notice.

2. Know Global User Satisfaction 24×7

Modern applications are spread across cloud data centers such as Azure, AWS, GCP and others. Add to this mix the unabated growth of SaaS applications such as Office 365, Workday, Zendesk, Zoom, SFDC and more. How do you ensure your users will get the performance you want to provide them? By having synthetic agents distributed across the globe, you can know whether your users will be satisfied 24×7. You can run continuous, simultaneous tests and always know the state of your user experience.

Proactively know the experience of your remote employees

3. Supercharge Business Agility

Deploy your application infrastructure to meet seasonal or unplanned demand, roll out an app as a competitive response, or respond to an event such as a pandemic. Roll out your apps at the pace your business demands and NetOps will be right there in lockstep. Synthetic testing gives tremendous flexibility with lightweight infrastructure that can be turned on instantaneously. It can go anywhere your application goes.

4. Monitor Complex Application Interactions

Synthetic monitoring allows you to emulate business processes and user transactions between different business applications. You can understand critical infrastructure performance. You can test business-to-business web services that use SOAP, REST or other web services technologies to validate and baseline interactions. Synthetic testing can simulate searching (database), adding items to cart (web application), logging in (identity validation), etc. in order to measure performance of holistic application interactions.

5. Baseline and Objectively Measure Application SLAs

With synthetic testing, you can baseline around-the-clock network behavior. Baseline and benchmark data to analyze trends and the variance between peak and off-peak hours, and to plan for capacity. Managing SLAs is very important today, as so many companies rely on third-party vendors to host all or parts of their applications. Synthetic testing affords you the ability to monitor the performance of any third-party application at the frequency you want and from the locations you choose, at any time. It can be used to ensure quality service delivery, accelerate problem identification, protect customer experiences and report on the compliance of internal or external providers.
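As a rough illustration of the baselining idea, the sketch below derives peak and off-peak 95th-percentile response times from synthetic samples; the numbers are invented for the example, and a real SLA policy would use far more data.

    # Illustrative baselining sketch: derive an SLA threshold from synthetic
    # response-time samples by comparing peak vs. off-peak percentiles.
    from statistics import quantiles

    peak = [0.42, 0.51, 0.48, 0.66, 0.58, 0.49, 0.72, 0.55]       # seconds
    off_peak = [0.31, 0.28, 0.35, 0.30, 0.33, 0.29, 0.27, 0.32]

    def p95(samples):
        # quantiles(n=20) returns 19 cut points; index 18 approximates p95
        return quantiles(samples, n=20)[18]

    print(f"peak p95: {p95(peak):.2f}s, off-peak p95: {p95(off_peak):.2f}s")
    # A simple SLA rule could then be: alert when the rolling p95 exceeds
    # the agreed baseline by some margin, say 25%.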

If proactive monitoring is the direction you want to take your IT organization, synthetic monitoring is a key capability you cannot afford to overlook. Synthetic monitoring enables NetOps to move from a reactive firefighting mode to proactive and around-the-clock visibility without depending on actual users.

Riverbed NetIM is a comprehensive solution for mapping, monitoring and troubleshooting your infrastructure components. It leverages multiple approaches such as synthetic testing, SNMP, CLI, WMI and more. Learn how Riverbed can help you expand infrastructure monitoring and deliver the benefits of a Unified NPM approach across packets, flows and devices.

 

*Enterprise Management Associates: Network Management Mega Trends 2020

C-Level Perspectives: Preparing for the Office of the Future https://www.riverbed.com/blogs/office-of-the-future-cxo-panel-discussion/ Thu, 03 Sep 2020 20:20:58 +0000 /?p=15671 I think it’s safe to say that nearly every organization in the world is currently thinking about workplace transformation. And, it’s not just about redesigning office space. The COVID-19 crisis has and will fundamentally change how and where work gets done and it’s incumbent on business and IT leaders to prepare their organizations for what’s next.

But, what is next? What does the Office of the Future look like? If you’re like many executives right now, you’re actively seeking answers to these questions. And, that’s why I’m looking forward to moderating an upcoming C-level panel discussion on the Office of the Future and how organizations can ensure digital performance and productivity in an evolving workplace.

I hope you will join me on September 17 as I tap into the minds of prominent CXOs from Ellie Mae, Sophos, Kofax, and Conga to explore the lessons they’ve learned as their organizations shifted to large-scale remote work. We’ll talk about how their priorities and investments have changed as a result of COVID-19 and the technologies and cultural factors that will determine whether work-from-home, and eventually, work-from-anywhere models succeed or fail. And, with the spotlight shining bright on digital capabilities these days, it will be interesting to hear their perspectives on what the future holds for the IT profession.

As the Chief Digital Officer for Riverbed, I remember the early days of COVID-19 and the amount of pressure my organization faced as our entire company began working from home. Fortunately, we were already leveraging cloud-based collaboration tools like Zoom, Office 365 and Slack, as well as our own application acceleration and network optimization solutions to provide our employees with the same experience, if not better, as working in the office.

But, there’s planning and work to be done. The pandemic has set a course for long-term remote/hybrid working models, where employees will expect to be able to work when and where they choose and where teams can collaborate both physically and virtually. This means reexamining models of redundancy, resiliency and security based on new ways of working and it means a renewed focus on IT visibility and performance to drive the best employee experience and business outcomes.

I’m optimistic about what’s next and the elevated role IT will play in shaping the Office of the Future. You’ll have to register to attend the panel discussion to see if my fellow CXOs feel the same way.

SD-WAN or WAN Optimization? https://www.riverbed.com/blogs/sd-wan-or-wan-optimization/ Wed, 22 Jul 2020 21:56:00 +0000 https://live-riverbed-blog.pantheonsite.io?p=15405 SD-WAN or WAN Optimization? I love that question. And, in order to answer it correctly, let me first dispel a common misperception. The question assumes SD-WAN and WAN Optimization are different solutions for the same set of problems. They are not. There may be some overlap between the two, but a lot less than you might think.

As with most questions, the answer depends on the problem and situation. Here’s a quick “decoder ring” that works 100% of the time to give you the correct answer:

Problem #1: Conventional Routers

Situation: “My fleet of conventional branch routers is too hard to manage, especially now that I have more apps in the cloud and different types of WAN circuits at remote sites.”

Solution: This one is easy. Get rid of your old routers. Invest in SD-WAN. Just make sure it’s an SD-WAN solution that’s equipped with an enterprise-class routing stack.

Problem #2: Latency 

Situation: “My app is running too slow even though there’s unused WAN capacity.”

Solution: This one is also easy. SD-WAN won’t help. More WAN capacity won’t help. The poor app performance — response time and/or end-to-end throughput — is likely being dictated by latency’s effect on underlying network and application protocols. Use WAN Optimization. Specifically, use one that accelerates BOTH networking AND application protocols over long distance.
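A quick back-of-the-envelope calculation shows why more bandwidth can’t fix this: a single TCP flow can never move faster than its window size divided by the round-trip time, no matter how big the pipe is. Assuming a classic 64 KB window and 100 ms of latency:

    # Why latency, not bandwidth, can cap throughput: a single TCP flow
    # cannot exceed window / RTT, regardless of link size.
    window_bytes = 64 * 1024      # classic 64 KB receive window
    rtt_seconds = 0.100           # 100 ms round trip

    max_throughput_mbps = (window_bytes * 8) / rtt_seconds / 1_000_000
    print(f"Max per-flow throughput: {max_throughput_mbps:.1f} Mbps")
    # ~5.2 Mbps, even on a 1 Gbps circuit. Larger windows and fewer round
    # trips move this number; extra bandwidth does not.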

Like traffic on a highway, distance, capacity, and congestion impact how quickly and efficiently apps reach their destination.

Problem #3: Bandwidth 

Situation: “My app is running too slow and I’ve run out of WAN capacity.”

Solution: There are actually three distinct scenarios you could be running into here. We’ll cover the scenario and solution for each one, and then show you a bulletproof way to know which scenario you’re actually in.

1) Scenario A: You’re out of bandwidth. But, it’s a red herring. The performance of the app in question is actually being dictated by latency, in which case adding more bandwidth will just add more cost.

Solution A: Use WAN Optimization. SD-WAN won’t help.

2) Scenario B: You’re out of bandwidth. And, the lack of bandwidth is the true bottleneck of app performance. However, there’s no good option to increase raw WAN capacity. No carrier provides a larger circuit for that location and/or it’ll take too long to procure and/or it’ll be too costly once it’s there.

Solution B: SD-WAN can’t help. Use WAN Optimization. Look for one that provides byte-level deduplication AND compression. With both techniques, you can virtually expand capacity by 4x, 5x, even 10x and more.

3) Scenario C: You’re out of bandwidth. And, the lack of bandwidth is the true bottleneck of app performance and procuring more WAN capacity is a cost-effective and timely option.

Solution C: Use SD-WAN. With one BIG caveat. MAKE SURE YOU’RE NOT IN SCENARIO A (i.e., make sure latency isn’t your real problem).

Finding the Root Cause

There are tools that can analyze packet captures from your network and tell you if your bottleneck is bandwidth or latency. Riverbed Transaction Analyzer is one of them. It can even help you determine if the problem isn’t in the network at all (e.g., it’s a client-side problem or a server-side problem).
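If you don’t have such a tool handy, a crude first-pass triage (not Transaction Analyzer’s actual method, just a rule of thumb) is to compare the throughput you’re achieving against both the link rate and the window/RTT ceiling from the earlier calculation:

    # Rough triage heuristic: is the bottleneck bandwidth, latency, or neither?
    def likely_bottleneck(achieved_mbps, link_mbps, rtt_s, window_bytes=64 * 1024):
        latency_ceiling = (window_bytes * 8) / rtt_s / 1e6
        if achieved_mbps >= 0.9 * link_mbps:
            return "bandwidth"    # the pipe is essentially full
        if achieved_mbps >= 0.8 * latency_ceiling:
            return "latency"      # the flow is pinned at its window/RTT limit
        return "elsewhere (client, server, or packet loss)"

    print(likely_bottleneck(achieved_mbps=5.0, link_mbps=100, rtt_s=0.1))  # -> latency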

In a nutshell, make sure you know which problem you’re facing before you ask “Do I need WAN Optimization or SD-WAN?” Because what you really need is the flexibility to use either, or both in combination, whenever it makes sense.

The real problem you might be facing is that there aren’t many solutions out there that deliver both. Here’s an SD-WAN solution that combines enterprise-class routing, advanced SD-WAN, industry-leading WAN Optimization and Application Acceleration, and next-generation security. Like I said, I love that question.

Webinar: Microsoft and Riverbed on Work-From-Anywhere Challenges and Exciting New Cloud Innovations https://www.riverbed.com/blogs/webinar-microsoft-and-riverbed-on-work-from-anywhere-and-new-cloud-innovations/ Sat, 18 Jul 2020 01:51:09 +0000 https://live-riverbed-blog.pantheonsite.io?p=15413 Are you using collaboration apps, joining video events and streaming more video? Of course you are — we all are!

With the onset of COVID-19, businesses responded almost immediately with policies to protect their workforces and maintain business continuity. Remote workers, mobile workers and traditional office workers all became work-from-home employees — almost overnight. In the process, global enterprises quickly realized that their teams could remain productive provided they had the right tools and technology in place to connect their teams and business workflows. As a result, demand for collaboration and communication services exploded. The already popular Office 365 has grown to 258M monthly active users, and Microsoft Teams ballooned to 2.7B (yes, billion) daily meeting minutes in March, up 200% from just the month prior.[i]

As regulations have begun to ease, organizations are coming to grips with what the new norm looks like. Most are still working through the details, but it’s clear that they won’t be returning to business as usual. The global crash-course in remote work has taught us all that we really can work from anywhere — and still be productive.

What happens when you can’t?

72% of companies report that network performance is a key concern.[ii] And, it’s no surprise. With millions of work-from-anywhere teammates sharing files and joining video meetings, the networks that support them continue to grow in complexity, and the sheer volume of data traversing them is taxing, too. All of these factors impact the performance of applications like Office 365, Teams, and Stream, lowering the ROI enterprises expect from these modern application investments.

Teammates can have the best collaboration apps, but their work-from-anywhere networks are often not up to the task. In fact, Riverbed’s recent Future of Work survey (July 2020) found that 37% of business leaders feel remote performance issues result in weaker employee performance and productivity. The most common network inhibitors impacting application end-user experience and productivity are:

  • Latency – which creates network bottlenecks, increases load time, and is multiplied significantly with chatty cloud and SaaS applications
  • Congestion – from the massive amounts of data from heavy file sharing and live and on-demand video causing delay, packet loss, and blocking
  • Unpredictable Last Mile Performance – which IT doesn’t control but must accommodate when delivering applications to remote employees

How can you ensure work-from-anywhere productivity?

The reality is that your remote teams don’t need to be impacted by these factors. Your work-from-anywhere teams can turn on 10X faster O365 experiences, 33X faster file sharing and up to 99% reduction in bandwidth — in a matter of minutes — with application acceleration solutions from Riverbed.

Want to know more? View this on-demand webinar with David Totten, CTO, US One Commercial Partner at Microsoft, and Dante Malagrinò, Chief Digital Officer at Riverbed, as they discuss:

  • Work-from-anywhere networking challenges
  • The rise of video and SaaS collaboration apps
  • Exciting application acceleration solutions — with live demos — for enhanced networking that help maximize the value of enterprise investments in Office 365, Microsoft Teams and Stream

It’s a virtual event you won’t want to miss!

 

___________________

[i] Microsoft Work Trend Index, April 2020

[ii] Tech Target, February 2020

The Next Norm: Improving Network Resiliency and Security to Support Work-From-Anywhere (Part 4) https://www.riverbed.com/blogs/next-norm-work-from-anywhere-network-resiliency-and-security-part-4/ Wed, 15 Jul 2020 19:57:09 +0000 https://live-riverbed-blog.pantheonsite.io?p=15394 In Part 1 of this blog series, The Next Norm: Prepare to Work-from-Anywhere, we looked at the recent, explosive growth in work from home and the transition to the new norm, work from anywhere. Part 2 of the series, Next Norm: Work-from-Anywhere Performance Management, reviewed the need for Network Performance Management (NPM) and key considerations for evaluating NPM solutions. In Part 3, The Next Norm: Work-from-Anywhere Application Delivery for Productivity we discussed the challenges and opportunities in ensuring fast, consistent application delivery to your work-from-anywhere teams. In this, the final blog of the series, we’ll offer a leader’s perspective and guidelines to improve network resiliency and security to support work-from-anywhere models.

The threat: work-from-anywhere means potential threats from everywhere

Just as quickly as the enterprise went home to work, so did the cybercriminals. Cybercriminals are constantly looking for new ways to beat your defenses. You build them; they find the chinks in the armor. Recently, security experts have reported an increase in phishing and compromised VPNs. In fact, per Google, phishing attacks increased 350% from Jan 2020 to Mar 2020.

And, it’s not just phishing. COVID-related DDoS attacks are up, leading to inaccessible apps for your end users. DDoS attacks take down websites and VPNs, which means your customers can’t do business with you and your users are unproductive. Time down equates to lost revenue, and without proper visibility, security breaches go undetected for longer periods and are more difficult to mitigate.

DDoS can also hide other more insidious attacks. While you are busy trying to recover from the DDoS attack, the cybercriminal may be launching a second more dangerous attack hidden in the noise. This attack may be designed to exfiltrate data, passwords, or just stay hidden until needed.

With a work-from-anywhere workforce, data breach concerns are also heightened – and rightfully so. With IBM reporting a mean time for breach detection of 197 days (and another 69 days to contain a breach), it’s no wonder that 75% of security experts are not satisfied with the speed and capabilities they have when responding to incidents.[i]

The challenge: securing the complex, work-from-anywhere network

As the workforce has become more mobile and applications have expanded to SaaS and cloud, enterprise networks have grown increasingly complex. Digital businesses need secure, reliable networks to support their distributed employees wherever they work while minimizing risk to the business. Organizations also need to manage a mix of legacy infrastructure and application models in conjunction with modern applications distributed across on-premises data centers as well as in multiple public clouds.

Add to this the increased dependency on unpredictable last-mile networks for remote workers and the challenge becomes even more painfully obvious: detecting and responding to the rise in cyberattacks, across an attack surface broadened by the growing number of remote endpoints, is increasingly difficult.

As a result, most organizations rely on three to six tools to monitor their networks, but the multiple, disjointed data streams often add their own analysis complexity instead of providing advanced insight and quicker mitigation. And looking to the cloud for the latest in protection only helps so much, as the point solutions provided by cloud vendors are insufficient: they only provide insight into the cloud elements of the network, not hybrid or multi-cloud networks.

So, what should enterprise security teams do?

The right approach: unified visibility to see everything and intelligence to take appropriate actions

At Riverbed, we agree with EMA on this: “Integrated platforms are more effective at performance monitoring than standalone, best-of-breed tools.”[ii] Why take our word for it? Well, for starters, we’ve been recognized by Gartner as a Leader in every Magic Quadrant for Network Performance Monitoring and Diagnostics (NPMD) since 2012 – and we deliver the only unified NPM solution in the market.

Securing your work-from-anywhere network is no place to cut corners. Make sure that your organization is fully prepared by confirming that your solution does the following before you buy:

  • Improve overall network performance by 59%.[iii] Provides comprehensive visibility across hybrid networks, applications and infrastructure in a single solution to support modern, work-from-anywhere teams.
  • Reduce MTTR by 65%.[iii] Leverages full-fidelity data: captures all packets, flows, and infrastructure metrics, 100% of the time to identify and respond to threats due to data exfiltration, password brute force attempts, blacklisted sites, DDoS attacks, etc.
  • Reduce network and application blind spots by 53%.[iii] Applies machine learning and AI to network flow, packet and device data to detect anomalies, respond to network security threats faster, mitigate risks, and avoid exposure by identifying unknown threats that lurk in your environment using network threat intelligence.
  • Improve IT collaboration by 41%.[iii] Delivers integrated end-user experience, application, network and infrastructure performance into a single dashboard as well as role-based views to improve visibility of hybrid environments.
  • Improve user experience by 59%.[iii] Provides insights into device and interface health, configuration monitoring, and path analysis to ensure high-performing apps.

Riverbed’s unified NPM solution does all of this and more. If you’re looking to improve your network resiliency and security to support your work-from-anywhere workforce, the safe bet is Riverbed!

________________

[i]  Forbes, The Speed Of Business: How Automation Improves Operations And Security, June 2019

[ii] Enterprise Management Associates, Network Performance Management for Today’s Digital Enterprise, Shamus McGillicuddy, May 2019

[iii] The Benefits of Riverbed Unified NPM, TechValidate, July 2020

Future of Work Survey: How Companies Are Planning for a ‘Work from Anywhere’ World https://www.riverbed.com/blogs/future-of-work-global-survey-2020/ Tue, 14 Jul 2020 16:59:00 +0000 https://live-riverbed-blog.pantheonsite.io?p=15384 As the “new normal” becomes just normal, companies are preparing for a large-scale, long-term shift to remote work, where increasingly employees will ‘work from anywhere.’

Although we all wish the impetus for widespread remote working were different, the new way of working—one that’s distributed, technology-enabled, and aligned with meaningful digital transformation goals—should have long-term positive effects for business and people. Today, we’re releasing the Riverbed Future of Work Global Survey 2020 and the results paint a clear picture of where companies are, and where they intend to go.

The abrupt shift to remote work caused some major initial challenges

It’s no surprise to anyone that at the very beginning of the pandemic, many companies were caught flat-footed. Although 95 percent of leaders were comfortable with the idea of remote work, 69 percent said they were not completely prepared for such a jarring transition. That sudden shift produced some substantial problems.

For instance, 40 percent flagged increased technical difficulties as a major disruptor while 37 percent cited weaker employee performance and productivity. Another 36 percent indicated stress and anxiety were big issues for employees. These are all predictable outcomes for a pandemic that upended both personal and professional norms. Fortunately, all these issues are surmountable with the right technology.

Business leaders have a better sense of performance barriers

The sudden shift to remote work gave business leaders a better sense of the biggest barriers to success for ensuring the performance of a remote workforce. According to the 700 global respondents, the biggest barriers are: technology to optimize or improve remote performance (39% globally, 50% in the U.S.), spotty or unreliable home Wi-Fi (38%), and the need for better visibility into network and application performance (37%).

Riverbed Future of Work Global Survey 2020 reveals current barriers to remote workforce performance

The office of the future will be different

Forward-thinking organizations are investing for performance in this remote work reality. Of those surveyed, 61 percent of leaders will be making additional technology investments in the next 12 months, with 31 percent describing this expansion as significant. Anecdotally, we’ve heard this same theme from our customers, who are deeply interested in taking a more proactive posture.

There’s no question that hybrid work environments are on the roadmap for many businesses across a wide variety of industries. In fact, the survey found that on average globally, businesses expect 25 percent of employees will work remotely after COVID-19, nearly a 50 percent increase versus prior to the pandemic. Employees will increasingly “work from anywhere” (#WFA)—and technology will be the enabler that breaks down barriers to performance and security.

Conclusion

Tools that maximize the performance and reliability of apps and remote workers or that drive enhanced network visibility regardless of location will be absolutely fundamental to high-functioning organizations in this new paradigm. This is an area Riverbed is very focused on with our customers—with solutions such as Client Accelerator, SaaS Accelerator, and unified Network Performance Management.

See what else business leaders globally had to say about the future of work and what they’re doing to help their people navigate this new working normal here. Learn more about our remote workforce productivity solutions and join us in the conversation around the #FutureofWork #WFA #remotework.

The Next Norm: Work-from-Anywhere Application Delivery for Productivity (Part 3) https://www.riverbed.com/blogs/next-norm-work-from-anywhere-application-delivery-part-3/ Thu, 09 Jul 2020 17:51:58 +0000 https://live-riverbed-blog.pantheonsite.io?p=15374 In Part 1 of this blog series, The Next Norm: Prepare to Work-from-Anywhere, we looked at the recent, explosive growth in work-from-home and the transition to the new norm, work-from-anywhere. In Part 2 of the series, Next Norm: Work-from-Anywhere Performance Management, we reviewed the critical need for Network Performance Management (NPM) and the top considerations for evaluating NPM solutions. In Part 3 of the series, we’re going to discuss the challenges and opportunities in ensuring fast, consistent application delivery to your work-from-anywhere teams.

Keep your work-from-anywhere teams engaged and productive with fast and consistent application delivery

The Goal: Ensuring fast, consistent application delivery to your work-from-anywhere teams

For years, enterprise IT buyers, and more recently LOB leaders, have been looking for the best communication and collaboration applications to keep their teams as productive as possible. The quest to find the best application available never seems to stop – or let up. The recent surge in work-from-home and the post-pandemic evolution to work-from-anywhere, have only increased this demand and hastened its immediacy. In fact, 74% of companies plan to permanently shift to more remote work even after the COVID-19 restrictions subside.[i] As a result, companies are increasingly looking to SaaS and cloud offerings to provide quick, cost-effective services to keep their teams productive, regardless of where they work.

Investing in collaboration tools that connect team members and business workflows is clearly a top priority, as can be seen in the growth of popular services like Microsoft Office 365, which has grown to over 258M monthly active users, and Microsoft Teams (for video collaboration), which has ballooned from 32M to 75M daily active users since just March of 2020.

But, selecting the right applications isn’t always enough.

The Problem: Application delivery for remote working is still a challenge

While IT (and some LOB leaders) continue to introduce new communication and collaboration tools to the enterprise, 54% of HR leaders say poor technology and/or infrastructure for remote working is the biggest barrier to effective outcomes.[ii] Despite other advantages, the shift to SaaS and cloud hasn’t proven to be a panacea: 42% of enterprises report that at least half of their distributed/international workers suffer consistently poor experiences with the SaaS apps they use to get their jobs done.[iii]

The Challenge: Unpredictable network performance

Although network ubiquity has largely been solved and bandwidth is generally plentiful (albeit not always cheap, depending on location), the quality and consistency of network performance is every bit as challenging as it has ever been.

With the rise of hybrid networks, SaaS, cloud, and on-prem/off-prem deployments, the network has actually grown more complex, and in many cases, less reliable. Companies often experience network-related SaaS slowdowns on a regular basis – even for their most critical business applications. In fact, a full 72% of companies report that network performance is a key concern with Office 365,[iv] impacting end-user experience and productivity directly.

And the increase in remote work only adds challenges for IT teams trying to meet SLAs and deliver applications with high productivity value and the desired end-user experience. Remote users too often experience unacceptable performance due to consumer-grade Wi-Fi, bandwidth saturation and contention (oversubscribed connections from heavy usage of collaboration and enterprise applications), and disruptive latency when connecting back to corporate networks, the cloud, and SaaS applications. ESG’s recent study during the COVID-19 crisis revealed that 40% of remote workers in North America still struggle with subpar internet connectivity.[v]

Unfortunately, IT has limited control of the remote network. Connectivity to the data center or cloud must be optimized to account for unreliable, last-mile access over public Wi-Fi, cellular data networks, and home DSL/cable modems.

The Opportunity: Innovations in application acceleration can make a world of difference

Despite all the challenges, IT must ensure that users can reliably and securely access high-performing applications and tools. They need to find efficient ways to rapidly connect those users back to the corporate network and their apps. Fortunately, there are recent innovations in the market that do just that!

Riverbed Acceleration Services address the unpredictability and poor performance of business-critical applications. Riverbed Client Accelerator and SaaS Accelerator optimize application traffic for work-from-anywhere models, which leads to productivity benefits of an additional 7 hours/year per employee as shown in a recent ESG technical validation study. These solutions can be deployed quickly leading to rapid time to value and:

  • Drastically reduce traffic by up to 99% with byte-level deduplication methods that work across all of your applications, and extend optimization to staff working outside the office (on their laptops) to ensure they can be equally productive, regardless of location
  • Optimize networks and deliver best performance (10X faster) for the most popular enterprise SaaS applications (Office 365, Salesforce, ServiceNow, etc.) to users anywhere
  • Intelligently accelerate the TCP conversations across the WAN by prioritizing the way data is sent over distance
  • Reduce the number of application round trips across the WAN, which directly applies to minimizing the impact of latency on application performance

Conclusion

There’s no doubt that work-from-anywhere will be the next norm. And, in order to ensure business resiliency and growth in the months and years ahead, IT teams need to consistently deliver performance and visibility across networks and applications regardless of how complex and distributed their IT environment. Riverbed offers solutions that can help you optimize remote user connectivity, accelerate business-critical application performance, and improve network resiliency and security. Learn more about our work-from-anywhere solutions.

_____________________

[i] Gartner, COVID-19 Bulletin: Executive Pulse, 3 April 2020

[ii] Gartner, Coronavirus in Mind: Make Remote Work Successful, 5 March 2020

[iii] ESG, The Impact of Poor SaaS Performance on Globally Distributed Enterprises, March 2019

[iv] TechTarget, Office 365 Survey, February 2020

[v] ESG, The Impact of the COVID-19 Pandemic on Remote Work, ​2020 IT Spending and Future Tech Strategies, May 2020

Top 4 Reasons to Optimize Your SD-WAN https://www.riverbed.com/blogs/top-4-reasons-to-optimize-your-sd-wan/ Mon, 29 Jun 2020 12:30:00 +0000 https://live-riverbed-blog.pantheonsite.io?p=15320 I often get the question “When should I enable WAN optimization with my SD-WAN?” It’s a good question, especially since it is a common mistake to either conflate the two or view them as mutually exclusive. They really address different challenges. And the best results come when you use the two together in the right way.

Here is a list of the top four situations when enabling WAN optimization/application acceleration with SD-WAN will help you achieve the best results:

1. SD-WAN assures operational agility and optimization assures app performance

No amount of bandwidth can address the negative effects of latency on app performance. Organizations are adding Internet broadband to the branches to meet capacity demands cost-effectively. Often branches have multiple paths across MPLS, Internet broadband and LTE. SD-WAN brings tremendous network agility with application intelligence to solve problems such as multi-link utilization, path selection, zero-touch provisioning and policy-based management. However, SD-WAN doesn’t help mitigate the negative effects of latency that often exist between users and their apps. Once the packets are on the wire, SD-WAN’s job is essentially done. WAN optimization is the necessary ingredient to dramatically reduce the number of round-trips required to transfer data or complete a transaction. Make sure, however, that the WAN optimization solution addresses the behavior of network and application protocols over long distances. Solving just half of the equation won’t assure end-user performance.
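The arithmetic behind that claim is simple but striking: for a chatty transaction, completion time is roughly the number of application round trips multiplied by the RTT, so cutting round trips dwarfs anything path selection alone can do. The counts below are illustrative:

    # Transaction time for a chatty app is roughly round_trips * RTT once
    # bandwidth is adequate; the round-trip counts here are illustrative.
    rtt_s = 0.080                    # 80 ms branch-to-app round trip
    for round_trips in (200, 20):    # before vs. after optimization
        print(f"{round_trips} round trips -> {round_trips * rtt_s:.1f}s")
    # 200 round trips take 16s; trim them to 20 and the same transaction
    # completes in 1.6s over exactly the same SD-WAN path.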

SD-WAN selects the best path and optimization makes the app perform better over that path

2. Migration to the cloud adds latency

The migration of applications to the cloud often increases the distance between users and their apps. It does not matter whether the traffic is backhauled or sent over direct internet access (DIA). This additional latency degrades the performance of the apps and negatively impacts user experience. Look for a WAN optimization solution that is capable of accelerating apps hosted in SaaS and cloud environments. A common misstep is to assume steering packets directly from a branch to the Internet will guarantee exceptional performance. Only when you layer in WAN optimization and SaaS/Cloud app acceleration will you see performance boosts of up to 3x, 5x, even 10x and more.

3. Data reduction can save big in the cloud

With increasing data in the cloud and traffic to and from multi-cloud infrastructure, the egress charges from the cloud providers can quickly add up. For example, egress charges for 25TB of cloud data can cost over $2,000. Classic WAN optimization data reduction techniques offer significant savings for organizations by lowering egress charges. Make sure your WAN optimization is capable of securely intercepting and optimizing SSL/TLS/HTTPS protocols as the vast majority of the traffic to and from the cloud is encrypted.
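That figure is easy to sanity-check, assuming a representative egress rate of $0.08/GB (actual pricing varies by provider, region and volume tier):

    # Egress cost sanity check; the $0.08/GB rate is a representative
    # assumption, not any specific provider's price list.
    tb = 25
    rate_per_gb = 0.08
    print(f"${tb * 1024 * rate_per_gb:,.0f}")   # -> $2,048 for 25 TB
    # Cut the bytes on the wire by 4x with deduplication and compression and
    # the same workload egresses roughly $500 worth of data instead.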

4. Many business-critical apps continue to be hosted in on-prem data centers (DCs)

Apps will continue to be served from the DC for the foreseeable future (read the blog “MPLS is obsolete”). These are applications like file sharing (CIFS, SMB, NFS, etc.), video streaming (live and on-demand), storage replication, on-prem web applications, and so on. Organizations may be reducing MPLS bandwidth as they adopt DIA from the branches. This situation makes it even more critical to optimize traffic on constricted WAN links.

Networks need application acceleration technologies in today’s cloud-first world to address the impact of increased distance between users and their applications. Therefore, it’s critical that organizations choose an SD-WAN solution that offers application acceleration capabilities. SD-WAN and WAN optimization are complementary solutions solving distinct problems. You get the best of both worlds—the best WAN path to route the traffic and the best app performance over the chosen path.

Learn how you can combine SD-WAN and application acceleration with Riverbed Software-Defined WAN.

The Next Norm: Work-from-Anywhere Performance Management (Part 2) https://www.riverbed.com/blogs/next-norm-work-from-anywhere-performance-management-part-2/ Fri, 26 Jun 2020 15:16:24 +0000 https://live-riverbed-blog.pantheonsite.io?p=15336 In Part 1 of this blog series, The Next Norm: Prepare to Work-from-Anywhere, we reviewed the recent, explosive growth in work-from-home and the transition to the new norm, work-from-anywhere. We discussed how enterprises are proactively addressing the productivity challenges of work-from-anywhere. In Part 2 of the series, we’re going to focus on one area that can positively impact all of the above, Network Performance Management (NPM).

NPM is a timely discussion. Per a recent Gartner survey, 50% of network operations teams feel that they will be required to rearchitect their network monitoring stack by 2024. This is a significant increase from just 20% in 2019. What’s driving the spike in demand? Well, it’s a number of things, but more than anything, it’s the complexity of hybrid networks.

As organizations continue to invest heavily in technologies and services that fuel their digital strategies, the supporting network has grown more complex. Adopting cloud services, supporting mobile workers, leveraging AI, IoT and Big Data have put tremendous strain on enterprise networks—and on the teams who manage them.

What can IT do? Get the upper hand on what’s happening across your network—and what’s going to happen! Three core areas where you should be engaging right now are: 1) ensuring that you have cross-domain visibility of the expanded network, 2) leveraging new technologies that can help you in the process, and 3) guarding your flank with integrated security.

1. Greater cross-domain visibility is a must!

As discussed, network demands have evolved. No longer do they simply serve to connect corporate-owned facilities and a limited number of road warriors accessing services via the VPN. They are hybrid and complex, combining on- and off-premises infrastructure, connected by private and public transport types. They connect a high percentage of the modern work-from-anywhere workforce and are accountable for ensuring high productivity across the full range of applications that are distributed, dynamic, increasingly delivered as a service, and run in data centers and clouds.

To be able to fully monitor what is happening and troubleshoot any anomalies on the network, you need cross-domain visibility. Your NPM solution should be collecting and analyzing all the data, whether its source is on-prem or in SaaS or cloud extensions. It’s not enough to have point products that are tapping into a few spots or just sampling data. There are better options. To ensure complete visibility, you should seek network performance management solutions that collect and analyze all the packets across your many applications, all flows across the complete hybrid network, and telemetry from all the devices in play. Choosing an integrated platform provides peace of mind from knowing there are no gaps in information or dropped handoffs between standalone components. In fact, research from Enterprise Management Associates’ May 2019 report, Network Performance Management for Today’s Digital Enterprise, shows that “integrated platforms are more effective at performance monitoring than standalone, best-of-breed tools.”

2. Leverage AI and machine learning technology

Once you have collected all the data, you have a treasure trove for mining in times of need. However, with the complexity of modern networks, the volume of data they produce is almost unmanageable. When your teams are reporting slow application response times or the inability to participate in critical video meetings, how quickly can you root-cause the issue and respond? There is just no way to analyze it all manually in any acceptable window of time.

As the demands on your network expand and user expectations rise, your modern NPM solution should be leveraging advanced technologies to deliver insights much faster than human analysis. Network performance management solutions should leverage AI and machine learning to track trends, surface anomalies and identify the root cause of potential problems before they impact your users.
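To make the idea tangible, here is a toy anomaly detector that flags response-time samples sitting several standard deviations above recent history. Real NPM platforms use far richer models; this sketch only shows the shape of the technique.

    # Toy anomaly detection on a response-time series: flag samples that sit
    # more than z_threshold deviations above the trailing window's mean.
    from statistics import mean, stdev

    def anomalies(samples, window=20, z_threshold=3.0):
        flagged = []
        for i in range(window, len(samples)):
            history = samples[i - window:i]
            mu, sigma = mean(history), stdev(history)
            if sigma > 0 and (samples[i] - mu) / sigma > z_threshold:
                flagged.append((i, samples[i]))
        return flagged

    # Synthetic series: steady ~100 ms with one spike injected for the demo.
    series = [0.10 + 0.005 * (i % 3) for i in range(40)]
    series[30] = 0.45
    print(anomalies(series))   # -> [(30, 0.45)]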

A perfect, real-world example is OneMain Financial. By capturing and analyzing data down to the packet level, OneMain is able to quickly pin slow performance directly to the network or application, eliminate finger pointing, slash troubleshooting from days to just minutes, and fix problems before users across their 44-state network ever notice.

3. Integrate NPM and security to guard your flank

With cross-domain visibility and eyes on all the data, it’s no wonder that network performance management and network security solutions have become inextricably linked. In light of the latest increase in cyberattacks, the partnership has become even more important. With the recent surge in the number of endpoints tied to remote work due to the pandemic, cybercriminal activity has seen explosive growth with “phishing and counterfeit web pages increasing by more than 265% daily from January 2020 to March 2020,” per the Bolster analysis of over 1 billion websites.

Choosing a network performance management solution with advanced security capabilities, one that works in conjunction with your VPNs and leverages every network flow, is critical for performing forensic investigation, cyber threat hunting, threat intelligence and DDoS detection to keep up your guard.

Riverbed’s unified NPM measures every packet, every flow and all device metrics, all the time. This gives organizations the control and the insight needed to enable work-from-anywhere models and to proactively identify and quickly troubleshoot network and application performance and security problems. 

The Next Norm: Prepare to Work-from-Anywhere (Part 1) https://www.riverbed.com/blogs/next-norm-work-from-anywhere-part-1/ Thu, 18 Jun 2020 20:13:55 +0000 https://live-riverbed-blog.pantheonsite.io?p=15308 With the onset of the recent pandemic, countries across the globe reacted in unprecedented fashion to ‘flatten the curve’ by implementing shelter-in-place guidelines. Businesses responded almost immediately with new or expanded policies to protect their workforces and society at large. Remote workers, mobile workers and traditional office workers all became work-from-home employees – effectively overnight. With nearly 30 million employees in the Fortune 500 alone, the impact and scale of this movement quickly became evident.

While it was bumpy at first, many organizations quickly realized that their teams could remain highly productive provided they had the right tools and technology in place to connect their teams and business workflows. And, along the way there were many benefits recognized for both employees and the business by having a larger remote workforce.

As regulations have begun to ease in certain countries and regions, organizations across the globe are coming to grips with what the new norm looks like for them. Most are still working through the details, but it’s clear that they won’t be returning to business as usual. In fact, 74% of companies plan to increase the number of remote workers and nearly a quarter will move 20% of their workforce to permanent remote work.[i] As individuals become more comfortable in the post-pandemic world and begin to move about, work-from-home will undoubtedly become work-from-anywhere (WFA). And, many leading brands, including Twitter, Facebook, Square and Nationwide, are already paving the way by expanding their remote work policies and/or extending them “forever.”

But unlike flipping the switch to work-from-home, the shift to WFA is being made with more time, planning and consideration regarding the technology and processes to empower the new norm workforce. Three focal areas you should be considering as you prepare your work-from-anywhere future are your technology, security and people.

What investments are needed to support the work-from-anywhere model?

Connecting people and business workflows is always a challenge, but even more so when teams are geographically dispersed. In fact, 54% of HR leaders say poor technology and/or infrastructure for remote working is the biggest barrier to effective remote work.

As a result, many of them are increasingly looking to SaaS and cloud offerings to provide quick, cost-effective services to keep their teams productive. Investing in collaboration tools that connect team members and business workflows is clearly a top priority, as can be seen in the growth of popular services like Microsoft Office 365, which has grown to over 258M monthly active users, and Microsoft Teams (for video collaboration), which has ballooned from 32M to 75M daily active users since March 2020.

To provide the best end-user experience and ensure high productivity despite the extended challenges of serving work-from-anywhere teams, IT leaders are investing in innovative acceleration technologies that are proven to overcome latency and increase network capacity. Ensuring these investments pay off, and that business-critical applications and networks perform as expected, is also driving more organizations to deploy network performance management solutions that provide the visibility, analysis and insights needed across geographically dispersed teams and hybrid networks. As the old adage goes, “You can’t manage what you don’t measure!”

Increased vigilance to manage increased security threats

As the surge to work-from-home took shape, IT teams were faced with massive overnight challenges: get teams the gear they need, get them onto the network with access to services from home, and get them secure.

Of course, bad actors didn’t wait while IT worked feverishly to put new systems in place. In fact, they went into overtime mode as well, resulting in a 667% increase in phishing attacks in just the first month of work-from-home. While 34% were brand impersonation attacks, thousands were financial scams and business email compromise (BEC) attempts. Organizations need to stay wary of this and put the right safeguards in place to protect customer data, corporate data and brand reputation.

Collaboration between network and security teams to reduce time from breach to detection and mitigate data exfiltration is critical to a speedy response. Investing in the right visibility solutions allows you to transform network data into cybersecurity intelligence, providing essential visibility and the forensics needed for broad threat detection, investigation, and mitigation.

New approaches to manage work-from-anywhere teams

Just as there are many technology concerns to address in support of the new norm, the changes that impact our work-from-anywhere team members and managers need to be considered as well.

Organizations should be identifying best practices, benchmarking and putting processes in place to measure and optimize work-from-anywhere engagement. To keep your best team members – and keep them engaged and productive – managers will need to be flexible and share their discretion for remote work with team members. Mutual trust is the foundation of distance relationships and a requirement for work-from-home success between employees and employers.

Policies must be developed regarding who is needed in the office (or specifically not), when and why, and who should work remotely. Similarly, there will likely be policy changes for compensation (often impacted by geography), work-related expenses, expected hours of operation, flexibility for external environmental situations, etc.

Shelter-in-place and work-from-home came in a flash, empowered by best-effort heroics from IT. Hopefully these constraints will soon be gone. Work-from-anywhere is right on the horizon and it is expected to last. Riverbed can provide you with industry-leading application and network visibility and performance to ensure work-from-anywhere success. Learn more about our remote work solutions.

How to Solve Performance Issues with SSL Encrypted Traffic https://www.riverbed.com/blogs/solve-performance-issues-with-ssl-encrypted-traffic/ Thu, 11 Jun 2020 21:08:27 +0000 https://live-riverbed-blog.pantheonsite.io?p=15173 With the security concerns we face these days it’s ever so important for organizations to use encryption to secure their data in transit. And since the HTTP protocol is so widely used as a means to transfer various types of data, like MAPI over HTTP, a mechanism is needed to secure it. That mechanism is SSL or TLS. There are several reasons you might experience performance issues when using HTTPS sessions between two hosts. In this article, I’ll show you how to address these performance issues using Riverbed SteelHead technology and SSL optimization. Before getting into the nuts and bolts of SteelHead, let’s talk briefly about SSL. This will aid in understanding the configuration requirements once we get to that point.

SSL Overview

SSL, or really TLS these days, uses both symmetric and asymmetric encryption. Symmetric encryption is commonly used for real-time data transfer. Its keys are smaller than those of asymmetric encryption, and the same key is used for both encryption and decryption. Asymmetric encryption uses two keys, a public key and a private key. A sender uses the recipient’s public key to encrypt and send a message; the recipient uses its private key to decrypt the message, and within this exchange a symmetric session key is calculated. Asymmetric encryption isn’t often used for real-time data as the key size is much larger, often 2048 or 4096 bits. As mentioned, asymmetric encryption is used to send a message from which the symmetric key is then calculated. The symmetric key is random and is only used for the current conversation. This key is known as the session key. Once the session key is established, both parties encrypt and decrypt using it.

For a moment, let’s look at the SSL negotiation process.

SSL Process

As you can see in the figure, the process begins with the client sending a hello to the server. In response, the server sends its certificate, which contains its public key. The client then sends the random material that will be used to create the session key, encrypted with the server’s public key so that it can only be decrypted by the server using its private key. The server generates the session keys and responds to the client with the “Change Cipher Spec” message, switching further communication to the generated session keys.
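You can observe the outcome of this negotiation yourself with Python’s standard ssl module, which reports the protocol version, cipher suite and peer certificate that the handshake settled on (the hostname below is a placeholder):

    # Inspect the negotiated result of a TLS handshake using the standard
    # library; swap in any HTTPS host you want to examine.
    import socket
    import ssl

    context = ssl.create_default_context()
    with socket.create_connection(("example.com", 443)) as sock:
        with context.wrap_socket(sock, server_hostname="example.com") as tls:
            print("protocol:", tls.version())    # e.g., TLSv1.3
            print("cipher:  ", tls.cipher())     # (name, protocol, bits)
            print("subject: ", tls.getpeercert()["subject"])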

So, now that we’ve reviewed the SSL process, let’s talk about what we need to do to configure our SteelHead environment to optimize SSL traffic. I do want to note here that we can certainly optimize ALL SSL traffic, since it’s really just a TCP session. But what we really want to get at are the different types of traffic inside, so we can apply additional optimization techniques as needed.

Optimization of SSL

So here’s how the overall process of SSL optimization works:

1. Server-side SSL Certificates and Private Keys are copied to the SteelHead appliances.
2. The SteelHead appliances use their own identity certificates to establish a secure connection between one another proactively or on-demand.
3. When the client sends the initial “hello,” it is intercepted by the server-side SteelHead appliance.
4. The server-side SteelHead establishes a connection with the server.
5. The server-side SteelHead then establishes an SSL connection with the client. This comes in the form of the server-hello.
6. A temporary session key is migrated from the server-side SteelHead to the client-side SteelHead. This moves the SSL session between the client and the client-side SteelHead.
7. Transfers over the WAN are now accelerated and optimized between the client-side SteelHead and the server-side SteelHead using all of the Riverbed RiOS mechanisms.

For all this to happen, there must be a trust between the two SteelHeads. The client must trust the server-side SteelHead and the server-side SteelHead must trust the certificate it receives from the server.

So let’s configure SSL optimization. I’ll take you through each step, but I also recommend you watch the video where I walk through each of these steps.

To begin, here is the topology I’ll be using in this configuration.

SSL Optimization Topology
SSL Optimization Topology

Our first step is to obtain and install SSL licenses on the client- and server-side SteelHeads. The license is free and should be included. You can verify that you have it by navigating to Maintenance>License. You can see what you're looking for in the image below. You'll want to make sure that both the client- and server-side SteelHeads have the license. If not, you'll need to contact Riverbed Support.

Verify License
Verify License

Your next step is to enable SSL optimization on both SteelHeads. You’ll find this checkbox in the SSL Main Settings. When you enable SSL optimization you must save and restart services on the SteelHead.

Enable SSL
Enable SSL

Now, recall that the server-side SteelHead intercepts the initial request from the client, and it's the server-side SteelHead that then creates its own SSL session to the server. The server-side SteelHead then uses the server's private key and certificate to create a session with the client. In other words, the server-side SteelHead responds to the client's SSL request as if it were the server. For this reason, you need to get the server's certificate and private key onto the server-side SteelHead. You also need the CA certificate so that the SteelHead can read and trust the imported server certificate. First, import the CA certificate under SSL>Certificates.

CA Certificate Import
CA Certificate Import

Then, import the server’s key and certificate back in SSL>Main Settings. Import both the key and the certificate (a quick sanity check follows the image below).

Import Server Cert and Key
Import Server Cert and Key
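Before importing, it's worth sanity-checking that the key and certificate actually belong together and that the certificate chains to your CA. These are generic OpenSSL checks; server.crt, server.key, and ca.crt are placeholders for your own files:

openssl x509 -noout -modulus -in server.crt | openssl md5
openssl rsa -noout -modulus -in server.key | openssl md5
openssl verify -CAfile ca.crt server.crt

The two modulus hashes must match, and the verify command should return OK; if either check fails, the server-side SteelHead won't be able to answer for the server correctly.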

Now, on the client-side SteelHead create an in-path rule to allow optimization of the desired SSL servers. In the image below, I am looking for ANY IPv4 traffic headed specifically to the server. Also, make sure this rule is added above the default rules or it won’t be matched and the traffic will bypass optimization and be passed through.

In-Path Rule for SSL
In-Path Rule for SSL

At this point, you can send traffic. When you do this, you’re going to notice that traffic will be matched and it will have some of the RiOS techniques applied to it. But also notice the red triangle in the image below. What’s that all about?

SSL No Peer
SSL No Peer

By expanding to see the details, you will note that the inner channel is not secure. Why not? Well, the client-side and server-side SteelHead don't yet trust each other. By navigating to SSL>Secure Peering (SSL) you'll find an entry on the gray list. Use the Actions drop-down to move the SteelHead to the white list by selecting Trust.

White List Peer
White List Peer

Once the peering is established, we can try a download again and we'll see that everything is in order and that the full power of the RiOS optimization techniques can now be applied to SSL traffic.

SSL Optimized
SSL Optimized

Wrap Up

Well, in this short post we've covered the need for SSL optimization, an overview of how SSL works, and how to configure both the client-side and server-side SteelHeads to handle the optimization of this traffic. By giving attention to these kinds of technical details in an enterprise network, we can enhance the user experience by eliminating many common performance issues. SSL optimization is just one of many capabilities in the Riverbed WAN Optimization arsenal. Head on over to the WAN Optimization solutions page and learn more about what Riverbed can do for your organization.

]]>
MPLS is Obsolete https://www.riverbed.com/blogs/mpls-is-obsolete/ Fri, 05 Jun 2020 18:40:00 +0000 https://live-riverbed-blog.pantheonsite.io?p=15220 Is it? Is MPLS fast approaching its demise, as it is portrayed in many industry articles and blogs? I beg to differ. For the foreseeable future, I do not anticipate the end of MPLS in enterprises. Jokingly I say, at least not until I retire. As networks go through modernization with SD-WAN, MPLS will be an integral part of that transition. The managed MPLS market is not shrinking. Instead, it is growing at a CAGR of 6.5% from 2020 to 2025, according to a report from Research and Markets.1

MPLS continues to be the predominantly used WAN technology today and into the foreseeable future
MPLS continues to be the predominantly used WAN technology today and into the foreseeable future

4 Reasons Enterprises Will Continue To Utilize MPLS

1. Decreasing price differential between MPLS and broadband

Often, the price differential between MPLS circuits and Internet broadband has been proposed as the catalyst for MPLS decline. A few years back, the differential between MPLS and broadband was considerable, on the order of 100x or more. Within the last few years, however, MPLS prices have come down sharply: the average differential between Internet broadband and MPLS is now 20-30x. Widespread availability of Internet broadband has given enterprises considerable leverage in negotiating MPLS prices during contract renewals.

2. Many applications are deeply intertwined with business processes

There is tremendous momentum to move applications to cloud infrastructure and SaaS applications. A wide gamut of applications is moving to the cloud: productivity apps, collaboration apps, HR apps, monitoring tools, security services, etc. Yet myriad business-critical applications remain hosted in on-premises data centers. Think of IT/OT applications used in manufacturing plants or assembly lines. Redesigning these applications, migrating their data, and establishing new business processes takes multiple years, all while the business must continue to drive revenue.

3. Businesses are highly risk averse

With mobile phones we have traded quality for convenience. How often did we ask “Can you hear me now?” when using a landline? Fixed-line phones operated as a utility: dependable and always available when needed. MPLS circuits provide the same level of connectivity, with guaranteed application services across different tiers of QoS. Businesses, especially large global corporations, are inherently too risk-averse to depend on the best-effort connectivity of the Internet for mission-critical applications.

4. Performance of legacy applications and latency

Home-grown applications weren’t designed for the Internet age. These applications were written years ago for platforms like DB2 and mainframes using legacy programming languages. The architecture of these applications, the protocols used, the chatty handshakes all assumed a highly reliable underlying network with low latency. No amount of bandwidth can overcome the inherent latency introduced over Internet connectivity.

 

Hybrid WANs are the Future

Internet broadband, cloud technologies, and SaaS applications deliver benefits too great for enterprises to ignore. Corporations will invest in cloud infrastructure and Internet connectivity. However, MPLS is not finished. It is not going away anytime soon. By 2023, 30% of enterprise locations will use Internet-only WAN connectivity, up from less than 10% in 2019, to reduce bandwidth cost.2 Conversely, 70% of enterprise locations will continue to rely on other WAN technologies, of which MPLS has the lion's share.

Corporations with a complete dependency on Internet-only connectivity across all locations will be exceptions. Hybrid WANs will be the norm. Although enterprises have been slower to adopt it than their mid-market brethren, SD-WAN will take them through the next wave of network modernization. MPLS vs. SD-WAN, which is it? It is both. You will see MPLS alongside Internet broadband implementing SD-WAN overlay networks. Enterprise SD-WAN with WAN Optimization and Application Acceleration technologies will catapult enterprises forward as they continue their cloud journey.

 

[1] https://www.researchandmarkets.com/reports/4557775/managed-mpls-market-growth-trends-and

[2] Source: Gartner Report, Forecast Analysis: Enterprise Networking Connectivity Growth Trends, Worldwide, 2019. By Gaspar Valdivia, Lisa Unden-Farboud, To Chee Eng, Grigory Betskov, Susanna Silvennoinen, 20 September 2019

 

]]>
Top 5 Traps That Can Ruin Any SD-WAN ROI Analysis https://www.riverbed.com/blogs/5-traps-that-ruin-an-sd-wan-roi-analysis/ Fri, 22 May 2020 17:00:00 +0000 https://live-riverbed-blog.pantheonsite.io?p=15109 The dynamic application workloads of today’s organizations are aggressively moving from on-premises data centers to “cloud data centers.” This migration demands a highly agile underlying infrastructure, and SD-WAN is becoming crucial to support these web-scale hybrid applications. Network organizations fall into key traps when performing SD-WAN ROI analysis that can be detrimental to choosing the right solution. Beware of these five traps that ruin SD-WAN ROI analysis:

1. Desiring to justify with MPLS cost reduction

The global average cost of 1 Mbps of MPLS can range from 20-30X the cost of 1 Mbps of Internet broadband. This high cost differential can lead IT organizations to justify SD-WAN projects with the projected savings. But to support IaaS and SaaS migration, organizations need higher capacity, and they can be disappointed when the cost of the increased Internet broadband capacity offsets the savings from the MPLS circuits. A better approach is to justify the increased capacity as an enabler of cloud migration, with circuit savings as an offset rather than the goal.

SD-WAN ROI of WAN Circuits
Cost reduction of MPLS vs cost management with increased Internet capacity

2. Failing to quantify uptime benefits

A key benefit of SD-WAN is its level of automation and policy-based access management, with a network-centric rather than device-centric view. When managing hundreds of locations and controlling optimal application traffic flow, automation can eliminate a significant amount of downtime caused by manual operational workflows. Various industry reports estimate that the cost of downtime to a company can range from $300,000 to about $4,000,000 an hour. Do not overlook the benefits of uptime improvements in your SD-WAN ROI analysis.
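As a back-of-the-envelope sketch, the uptime benefit is simple to compute; the hours-avoided and cost-per-hour figures here are placeholder assumptions, not measurements, and you would substitute your own estimates:

awk 'BEGIN { hours_avoided = 4; cost_per_hour = 300000; printf "Annual uptime benefit: $%d\n", hours_avoided * cost_per_hour }'

Even at the conservative end of the industry estimates, avoiding a handful of outage-hours per year adds well over a million dollars to the benefits side of the analysis.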

3. Ignoring benefits of high-impact IT initiatives

The operational efficiency gains of software-defined WANs over traditional router-based networks free high-value engineers from mundane operational tasks to drive high-impact corporate initiatives. The quantifiable benefits will be at least equal to the run rate of these senior IT staff and can have a multiplying effect on the overall benefits to the company.

SD-WAN ROI must account value of critical projects
Benefits of freeing up staff to work on high-value projects should not be overlooked

4. Expecting 1-to-1 cost replacement

IT organizations often fall into the trap of taking the simplistic approach of comparing hardware costs. Replacing a traditional router with an SD-WAN appliance can give a false sense of a 1-to-1 hardware cost comparison. SD-WAN brings with it a slew of benefits: automation, operational gains, hybrid infrastructure enablement, application-level intelligence, integrated application acceleration, flexibility of integrated or service-chained security, and branch IT sprawl reduction, to name a few. A hardware-only cost comparison fails to capture the value of these benefits in an ROI analysis.

5. Overlooking user productivity gains

One of the core advantages of SD-WAN is the ability to steer application traffic intelligently across multiple underlying technologies. With workloads spread across on-premises data centers, private cloud, public cloud, and SaaS applications, this policy-based application access is paramount to the digital journey. SD-WAN is a key enabling infrastructure for successfully adopting applications such as Office 365, Salesforce, Workday, and others. As a result, the pace of user adoption of cloud and the associated productivity gains should not be overlooked in the ROI analysis.

Download the ROI guide by Enterprise Management Associates on Riverbed Steel Connect EX.

Are there other traps that you have come across in an SD-WAN ROI analysis? Share your thoughts in the comments below.

]]>
Enterprise SD-WAN Trade-Offs Part 1: Is SD-WAN a Piece of Cake? https://www.riverbed.com/blogs/is-sd-wan-a-piece-of-cake/ Fri, 15 May 2020 16:32:00 +0000 https://live-riverbed-blog.pantheonsite.io?p=14984 This blog is the first in a 4-part series that takes a detailed look at the SD-WAN trade-offs that commonly emerge during a network transformation project–and more importantly, how to avoid pitfalls.

To encourage you to read on, or take a detour for important background information, here are two things we won’t be covering (with quick links for more information):

  • This is not a What is SD-WAN? technology primer.
  • It’s also not an enumeration of SD-WAN benefits.

Indeed, at this point, it’s a foregone conclusion that the branch router we’ve known and loved (or loved/hated, perhaps) has outlived its primacy. It’s also generally understood that a Software-Defined WAN (SD-WAN) is much more apt to take you, your network, and your company where you need to go in the next decade.

Is SD-WAN all unicorns and rainbows?

SD-WAN sounds great in theory. But is there a catch? According to Gartner, only 20% of enterprises have successfully adopted SD-WAN in at least some of their remote sites to date. Why not more enterprises? Why not more sites? Doesn’t SD-WAN equate to network nirvana? And isn’t it supposed to be easy?

To cut the suspense, the answer to this question is an emphatic, “No!” SD-WAN alone won’t take you to network nirvana. There are major pitfalls, the most common of which come in the form of unfortunate trade-offs that all-too-often emerge and can reduce or even decimate the benefits you were seeking to gain with SD-WAN in the first place.

Here are the three most common trade-offs that you will undoubtedly face:

Enterprise SD-WAN trade-off #1: destination vs. journey

Is transitioning to SD-WAN more trouble than it’s worth?

The minefield of brownfield SD-WAN integration
The minefield of brownfield SD-WAN integration

We all want SD-WAN. But it’s impossible to transform the old into the new all at once. And so, you have to traverse an intermediate phase–the brownfield–where some sites are connected via SD-WAN and others remain connected via conventional routers. The difference between navigating this phase unscathed and bringing your network to a screeching halt has everything to do with the ability of the SD-WAN solution to interface with your existing network and cope with its topological complexities, one-off hacks, and special-case router configs that have built up over time. Those hidden network demons that have been lurking unnoticed will inevitably (thanks, Murphy!) rear their ugly heads once the transformation is underway.

Part 2 in this blog series will share important information about best practices and critical SD-WAN features that will increase your chances of success as you navigate the minefield of the brownfield.

Enterprise SD-WAN trade-off #2: cost vs. performance

Is it possible to maintain WAN capacity and increase app performance?

Some of you might be thinking, “Wait! I thought more network capacity equated to better app performance.” Well, like most things in life–it depends. Sometimes more capacity absolutely leads to better application performance. Sometimes more capacity does absolutely nothing to improve application performance. And sometimes, adding capacity actually reduces application performance!  Woah, not good.

Adding bandwidth doesn't always equate to better app performance
Adding bandwidth doesn’t always equate to better app performance

Part 3 in this blog series takes this topic head-on and will offer fresh insights into the following:

  • How can I tell if and when app performance will improve by adding more bandwidth?
  • Why on earth could adding more bandwidth actually reduce application performance?
  • If bandwidth isn’t bottlenecking app performance, what is? Latency? Link quality? How can I tell?
  • Is app performance being dictated by the behavior of networking protocols, or application protocols, or both?
  • And, most importantly, once I understand the true causes and conditions of insufficient app performance, what are the best tools, techniques and technologies available that can improve the situation?

 

 

Enterprise SD-WAN trade-off #3: user experience vs. security

Is it possible to meet user expectations and maintain network security?

One benefit of SD-WAN is that it makes it easy to steer certain traffic from remote sites toward your on-premises data centers and steer other traffic from remote sites directly to the Internet. Once selective traffic steering is made easy, there’s less of a reason to backhaul Internet-bound traffic from remote sites through your data center. Doing so only adds latency between users and their Internet-hosted apps and adds unnecessary traffic on your network. Instead, steer Internet-bound traffic directly from the branch to the Internet. Less latency. Less overall network traffic. Better performance.

Avoid trading network security for user experience
Avoid trading network security for user experience

The problem, of course, is that steering traffic directly from the branch to the Internet comes with the cost of an increased threat perimeter for your network. You've traded network security for app performance.

Part 4 in this blog series will investigate remedies for this situation, including some nuances that might not be so obvious:

  • What are the best ways to effectively protect the edges of my network without breaking the bank?
  • And what if I have to continue backhauling Internet-bound traffic due to regulatory compliance or corporate policy? Is there a way to overcome the negative effects of higher latency?

Summary

Let’s close out by returning to the title of this blog, “Is SD-WAN a piece of cake?” The answer, as you might expect, is yes … and no … and yes!

  • Yes – relative to managing conventional routers, SD-WAN is a quantum leap in the direction of simplicity and agility. However…
  • No – the benefits of SD-WAN do not appear magically on their own. Without careful planning and attention to the pitfalls that can arise during this transformation of your network, your project will not feel anything like “a piece of cake.”  And so…
  • Yes! – if you are mindful of the trade-offs, you can have your cake and eat it too. This is when you’re on the true path of wisdom that will ultimately lead to SD-WAN success.
Have your cake and eat it too!
Have your cake and eat it too!

We hope you enjoy this series and that it helps you tackle your SD-WAN project with greater confidence, even ease. For my part, I’m going to find a delicious piece of cake. And I’m going to eat it!

Nothing could be simpler.

]]>
Sales Leadership During a Pandemic: What Does It Look Like? https://www.riverbed.com/blogs/sales-leadership-during-pandemic/ Thu, 14 May 2020 00:58:02 +0000 https://live-riverbed-blog.pantheonsite.io?p=15030 Whether it’s with your team, family, company or friends, it feels like there’s just one conversation in the world right now, with good reason. COVID-19 has taken all of our plans–both personal and professional–and chucked them right out the window.

Call it pivoting, regrouping, recalibrating or whatever you like. The fact is, we are all in the same boat: rethinking our once well-designed plans against a fluid landscape that changes not by the day but seemingly by the hour. At times it feels chaotic, but it’s also true that the challenges posed by this pandemic are not insurmountable. The reality is that many companies will emerge on the other side of this, perhaps not unscathed but definitely unbroken.

For companies with sales staff now working at home–and customers that it’s no longer possible to visit–there’s one major question: What does sales leadership look like in a pandemic environment?

Here’s what I’m telling my staff.

1. Focus on what you can control

In a crisis, people seek order and stability. With so much that’s not remotely in your power to change, it’s reassuring–and productive!–to focus on the elements within your sphere of influence. For sales teams, that should really center on developing their pipeline, positioning for the future, and driving real-time results right now. All are doable.

2. Solve for the problems of today

There is no business as usual now. Your salespeople should never waste their customers’ time – or their own – having conversations about things that will have no impact at this moment or in the foreseeable short term. They should already understand that big projects requiring substantial investment will get back-burnered as CapEx and OpEx thin. The nice-to-haves that might once have enticed customers are now out of the question.

Instead, turn to identifying and solving customers’ immediate needs. For us, that’s helping companies ensure their work at home workforces have the network visibility and app acceleration they need to be successful. For you, it will be something else unique to your company and offering. Have candid conversations with your customers now. Where’s their pain? And how can you help them stop hurting?

3. Remember your ability to connect is paramount

Truly excellent salespeople can influence a customer no matter the medium. For these individuals, doing their jobs remotely is a complete non-event. They know how to leverage their ecosystem for support. They’re excellent writers, able to get in touch and connect with customers over email. They know how to ask the right questions and listen to what’s said (and what’s unsaid) on a call. They can pull together a useful webinar, proof of concept or trial solution. They’ve got virtual demonstrations down cold.

But even more critically, a great salesperson can deftly cultivate trust to forge genuine connection. They are credible because they know how to connect customer pain to their company’s unique value proposition. Doing that well is more important than ever because customers are, frankly, facing quite a lot of challenges.

4. Enable your people in this new environment

Regardless of function, everyone is being asked to be more flexible. But we need to equip our sales teams to pivot more quickly than most because our usual go-tos could be off the table. There are no more lunches, golf outings, happy hours, onsite customer visits, or networking events. That's the reality right now.

The victorious teams will be the ones who quickly adjust to this new normal and move to help their sales executives with enablement designed for this virtual era. Are you quickly ramping up to provide them the tools to demo or present at a distance? How about teaching them how to have a productive customer conversation when you never actually sit face to face with them in real life? As a sales leader, have you mastered having the executive conversation in our new environment? You should.

5. Watch your email messages for tone and relevance

Open your email inbox and there are probably more earnest emails on “Our response to the COVID-19 crisis” than you can count, all from companies you maybe did business with one time (if ever).

If you’re relying on email as one tool in your inside sales arsenal, that’s fine. But make sure you’re crafting a message that is sticky, specific and solves the problems of today. I do open inbound emails, sometimes from genuine interest and occasionally from morbid curiosity. Marketing messages with generic, tone deaf subject lines like, “CAN WE HELP YOU MAKE BETTER CONNECTIONS WITH CUSTOMERS?” have a one-way ticket to the trash bin.

It’s clear to me that, as with so many things, this crisis should change how we measure the sales organization. If your team can’t sell a technology that’s clearly hyper-relevant for this time, it means you don’t have the right sales talent on your bench and your messaging isn’t hitting the mark. But if your organization excels at selling in this new remote paradigm, just imagine how powerful they’ll be once the crisis diminishes. Because whether at home or in the office, you’ll know they’re capable of creating authentic relationships and delivering messaging that works.

That is a gamechanger.

]]>
Riverbed and Gigamon: A Great Network Visibility & Analytics Partnership https://www.riverbed.com/blogs/riverbed-and-gigamon-partnership/ https://www.riverbed.com/blogs/riverbed-and-gigamon-partnership/#comments Mon, 11 May 2020 20:28:00 +0000 https://live-riverbed-blog.pantheonsite.io?p=14961 Great partnerships make a difference! Just think of Michael Jordan and Scottie Pippen, Elton John and Bernie Taupin, Bill Gates and Paul Allen. These relationships are known for their amazing success in sports, music, and business. Each partner brought a different skill set to the relationship. Without Pippen’s defense, Michael Jordan would not have six NBA championships. Without Bernie Taupin, Elton John wouldn’t have sold 300 million records. If Paul Allen hadn’t negotiated the deal to purchase the QDOS operating system, Microsoft would not have changed the PC industry forever.

Just as Bill Gates and Paul Allen make a great partnership, so do Riverbed and Gigamon. The basis of a good partner is that they complement each other.
Just as Bill Gates and Paul Allen make a great partnership, so do Riverbed and Gigamon. The basis of a good partner is that they complement each other.

A great partnership consists of many things. Top of mind would be shared vision, mutual contribution, and solid relationships. As the leader of a sales organization, I am always looking for a great partner that will help me enable my sellers and, more importantly, help our customers be successful in their digital journey. Gigamon is a partner that brings these characteristics to Riverbed, our partners, and our customers. Together we are able to meet the needs of our customers. Together we empower our customers to maximize their investments in their digital infrastructure. We do this by assembling data across the hybrid infrastructure to ensure the critical services the business depends upon are performing to their maximum potential.

Here’s how it works: The Gigamon Visibility and Analytics Fabric captures all network data, processes it and sends it to Riverbed Network Performance Management solutions. Digital teams can leverage advanced capabilities for optimizing network loads, analyzing applications, and detecting and responding to threats. Together our solutions can scale to the needs of the largest networks in the world and quickly pinpoint gaps in IT performance that could disrupt business performance.

Additionally we share a common partner ecosystem where our customers can engage the experts for their industry who share the Riverbed and Gigamon joint vision to maximize digital performance and business performance impact.

We would welcome the opportunity to come together to discuss your goals and determine how our partnership can help ensure that you achieve them.

We also invite you to our upcoming joint webinar on “Network Resiliency and Security Tips for a Remote Workforce”​on May 21 at 2:00 PM ET to learn more about this great partnership and how we can help you ensure remote workforce productivity.

]]>
https://www.riverbed.com/blogs/riverbed-and-gigamon-partnership/feed/ 1
Riverbed Application Acceleration for AWS FSx https://www.riverbed.com/blogs/riverbed-application-acceleration-for-aws-fsx/ Fri, 08 May 2020 18:29:00 +0000 https://live-riverbed-blog.pantheonsite.io?p=14799 Amazon Web Services (AWS) continuously adds new services and features to enhance the cloud experience. Amazon FSx delivers that experience for Windows file shares so it’s critical that applications accessing FSx perform well. In this post, I will cover both the features and benefits of using Riverbed’s Application Acceleration solutions to enhance the user experience for AWS FSx. 

What is Amazon FSx?

Amazon FSx for Windows File Server is a fully managed, native Microsoft Windows file system. Built on Windows Server, FSx provides administrative features such as Microsoft Active Directory (AD) integration, user quotas, and end-user file restore, and is accessible via SMB3. Windows-based applications that require file storage in AWS can access this file server, which is cost-optimized for short-term workloads.

Accessing Windows files via SMB3 on Amazon FSx can be challenging when branch offices are spread across continents. Because SMB3 is a chatty protocol, transferring data over an Internet link can take a long time. For example, copying a 2.6 MB AutoCAD folder with design files takes 1 minute and 33 seconds from Mumbai to AWS in California. Typical AutoCAD file sets are in the range of a few GBs, which may take hours and sometimes even days to copy, resulting in lost productivity. My measurements show that average speeds at work range from 5 Mbps to 10 Mbps; at home, average speeds are 700 Kbps to 900 Kbps (see the quick throughput check after the measurements below).

Mumbai to California (AWS) measurements
Latency: 236 ms
Bandwidth: 121.9 Mbps (uplink), 29.3 Mbps (downlink)
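A quick check of the effective throughput of that 2.6 MB AutoCAD copy shows just how costly the chatty handshakes are; this uses only the figures quoted above:

awk 'BEGIN { mb = 2.6; secs = 93; printf "Effective throughput: %.2f Mbps\n", (mb * 8) / secs }'

That works out to roughly 0.22 Mbps, a tiny fraction of the 29.3 Mbps downlink, which confirms that latency and protocol turns, not bandwidth, are the bottleneck.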

With many employees working from home due to the Coronavirus–and potentially staying at home as remote work becomes more popular–enterprises need to ensure consistent performance of SaaS, cloud, and on-premises applications to any user, regardless of location or network type.

Riverbed delivers remote work solutions built for today’s dynamic and distributed workforce. Through a combination of WAN optimization and application acceleration offerings, Riverbed ensures end-to-end acceleration with the help of the following:

Riverbed WAN Optimization and Application Acceleration

Application acceleration for Amazon FSx

Riverbed accelerates Amazon FSx for remote/mobile users, branch office users, and data center applications using a combination of Riverbed products such as SteelHead, Client Accelerator, and Cloud Accelerator. Client Accelerator brings SteelHead benefits to mobile and remote workers on laptops, optimizing applications across branches, data centers, and cloud services. Client Accelerator is configured by SteelCentral Controller for SteelHead Mobile (SCSM) using centralized policies deployed by IT administrators.

Cloud Accelerator runs in Infrastructure-as-a-Service (IaaS) environments on leading platforms such as Microsoft Azure, AWS, and Oracle Cloud. User productivity is enhanced because Cloud Accelerator optimizes and accelerates applications to deliver maximum cloud value to the business.

To accelerate Amazon FSx, deploy Cloud Accelerator for AWS in the same VPC that hosts the FSx server. To deploy FSx, please refer to the AWS deployment guide at https://docs.aws.amazon.com/fsx/latest/WindowsGuide/getting-started.html.
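As an aside, the FSx file system itself can also be stood up from the AWS CLI; this is a minimal sketch, and the storage size, throughput, subnet ID, and directory ID below are placeholder assumptions you would replace per the AWS guide above:

aws fsx create-file-system --file-system-type WINDOWS --storage-capacity 300 --subnet-ids subnet-0123456789abcdef0 --windows-configuration ActiveDirectoryId=d-0123456789,ThroughputCapacity=32

Once the file system is available, its DNS name can be mapped as an SMB share from any domain-joined client.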

The FSx server joins the enterprise's Active Directory domain so that users and applications can use the FSx server with their existing credentials.

How to install Riverbed Cloud Accelerator

There are three ways to install Cloud Accelerator (Cloud SteelHead virtual appliance), as described below.

1) Riverbed Community Cookbook

You can use the Riverbed Community Cookbook to install Cloud Accelerator on AWS; it offers a single-click launch with minimal configuration and is easy to set up. It can be configured in two modes, described below; a scripted equivalent is sketched after the list.

  • To deploy into an existing VPC, you input details such as the VPC ID, security group, subnet details, and more

    Deploying Cloud Accelerator in an Existing VPC
  • To create a new VPC and deploy into it, you input VPC details such as the zone, CIDR blocks, an EC2 key pair to enable SSH, an IAM role, and more. Cloud Accelerator is then created in the new VPC.
Create VPC and deploy Cloud Accelerator
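If you'd rather script this launch than click through it, a CloudFormation-style deployment would look roughly like the sketch below; note this is an assumption about the mechanism, and the stack name, template URL, and parameter keys are illustrative placeholders, not the Cookbook's actual values:

aws cloudformation create-stack --stack-name cloud-accelerator --template-url https://example.s3.amazonaws.com/cloud-accelerator.yaml --parameters ParameterKey=VpcId,ParameterValue=vpc-0123456789abcdef0 ParameterKey=SubnetId,ParameterValue=subnet-0123456789abcdef0

Check the stack's events in the CloudFormation console to confirm the Cloud Accelerator instance comes up cleanly.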

2) Manual deployment (requires a Riverbed support login account)

Here are the steps required to create a Cloud Accelerator for AWS:

Launch the AMI that Riverbed Support shared with you

Configure instance details – advanced details – user data

Configure Instance Details
ds=/dev/xvdq
passwd=<Your preferred password>
appname=<your org Name ManuallyDeployedSteelHead>
lshost=cloudportal.riverbed.com
rvbd_dshost=cloudportal.riverbed.com
lott=XXXXXXXX-XXXX-XXXX-XXXX-XXXXXXXXXXXX

where:

  • ds – The device node in which the Cloud Accelerator expects the data store EBS volume to appear. Due to changes in EC2 architecture, set this to /dev/xvdq.
  • passwd – The password hash for the admin user.
  • appname – Name of the Cloud Accelerator.
  • lshost – The fully qualified domain name of the licensing server; this is usually the Riverbed Cloud Portal.
  • rvbd_dshost – The fully qualified domain name of the discovery server; this is also usually the Riverbed Cloud Portal.
  • lott – A token obtained from the Cloud SteelHead license on the Riverbed Cloud Portal, used to redeem the license.

Add storage

  • Add and configure two volumes in addition to the root volume. One volume stores the Cloud Accelerator software and serves as the configuration and management services disk; the other serves as the data storage disk.
  • Click Add a New Volume
  • Under the Device column, select /dev/sdk for the configuration and management services disk, and select /dev/sdm for the datastore disk.
  • Under the Size (GiB) column for each drive, specify a size based on the Cloud Accelerator model. See Cloud Accelerator models and required virtual machine resources.
  • Under Volume Type, you can choose Magnetic unless the Cloud Accelerator model you are deploying requires a solid-state drive (SSD).

    Add a New Volume

Configure security group

  • Choose a security group for the virtual appliance.
  • To connect the Cloud Accelerator, the Discovery Agent, and the client-side SteelHead, configure the security group to allow the following (scriptable via the AWS CLI, as shown after the image below):
    • UDP port 7801, for incoming connections from the Discovery Agent.
    • TCP incoming ports 7800 and 7810-7850, for connections from the client-side SteelHead.
    • TCP incoming ports 22, 80, and 443, for CLI and UI access.
  • Click Review and Launch.

        Select security group
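If you prefer to script these rules rather than click through the console, the same ports can be opened with the AWS CLI; the security group ID and source CIDR below are placeholders for your own values:

aws ec2 authorize-security-group-ingress --group-id sg-0123456789abcdef0 --protocol udp --port 7801 --cidr 10.0.0.0/8
aws ec2 authorize-security-group-ingress --group-id sg-0123456789abcdef0 --protocol tcp --port 7800 --cidr 10.0.0.0/8
aws ec2 authorize-security-group-ingress --group-id sg-0123456789abcdef0 --protocol tcp --port 7810-7850 --cidr 10.0.0.0/8

Repeat the last form for TCP ports 22, 80, and 443 to allow CLI and UI access.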

 

3)  Riverbed Cloud Portal deployment (requires a Riverbed support login account)

Cloud Accelerator needs to be configured with Active Directory domain services so that it joins the same domain as FSx. The Active Directory can be an external AD or an AWS-managed AD. Client Accelerator is managed and configured by SteelCentral Controller for SteelHead Mobile. Client Accelerator automatically connects to the cloud service, so connections to the Amazon FSx server are accelerated. See Riverbed Cloud Portal deployment (requires a Riverbed support login account).

Typical FSX use case setup

Testing methodology

Performance tests focus on transaction response time, compared under three conditions (when possible):

  • Baseline transaction – without application acceleration setup
  • Cold transaction – with application acceleration setup (the first transaction)
  • Warm transaction – with application acceleration setup and a non-empty SteelHead cache (second and subsequent transactions).

For our test, we set up a standard set of reference MS Office files (Word and PowerPoint), PDF files, and AutoCAD design files in different sizes for the Windows file sharing test. The test ran in a setup similar to the graphic above (from Mumbai to AWS, California).

We observed significant benefits with Riverbed application acceleration of FSx. The measurements below are given in seconds, with improvement factors expressed as a multiple (X).

The optimization ratio highlights the benefit of Riverbed SteelHead on user experience: it is the factor by which application acceleration divides application response time.

Each transaction was played twice under each of the three conditions so as to avoid any artifact effects. We took the best case of the baseline values (lowest transaction time) and the worst case of the cold and warm transactions (highest transaction time). The optimization ratios were computed per the formulas below (a worked check follows):

  • Cold Transaction Improvement over baseline = Baseline value/Cold Transaction value
  • Warm Transaction Improvement over baseline = Baseline value/Warm Transaction value
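As a quick worked check using the 100MB PDF copy from the results below, the ratios fall straight out of these formulas:

awk 'BEGIN { baseline = 37.43; cold = 30.14; warm = 7.98; printf "Cold: %.2fX  Warm: %.2fX\n", baseline/cold, baseline/warm }'

This prints Cold: 1.24X and Warm: 4.69X, matching the first table below (to rounding).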

Test results

Windows File Sharing

 Copy PDF file: 100MB
Baseline value 37.43 Seconds
Cold Transaction value 30.14 Seconds
Cold Transaction  Improvement over baseline 1.241X
Warm Transaction value 7.98 Seconds
Warm Transaction Improvement over baseline 4.69X

 

 Copy AutoCAD folder structure: 1.95 GB (1992 files)
Baseline value 11340.12 Seconds
Cold Transaction value 2411.47 Seconds
Cold Transaction  Improvement over baseline 4.70X
Warm Transaction value 1583.78 Seconds
Warm Transaction  Improvement over baseline 7.16X

 

 Copy of Word file: 99.5 MB
Baseline value 54.62 Seconds
Cold Transaction value 30.31 Seconds
Cold Transaction  Improvement over baseline 1.80X
Warm Transaction value 7.76 Seconds
Warm Transaction  Improvement over baseline 7.038X

For the following transaction, a cold cache measurement was not taken since the file had already been transferred and was being worked on.

Save of Word file: 99.5 MB
Baseline value 20.69 Seconds
Warm Transaction value 16.69 Seconds
Warm Transaction  Improvement over baseline 1.239X

For the following transaction, a cold cache measurement was not taken since the file had already been transferred and was being worked on.

 Open Word file: 99.5 MB
Baseline value 19.64 Seconds
Warm Transaction value 13.59 Seconds
Warm Transaction  Improvement over baseline 1.445X

LAN vs. WAN peak rate ratio (218 Mbps vs. 14.6 Mbps, ~15X), and an excellent average ratio (8.7 Mbps vs. 1.4 Mbps, ~6X), on encrypted SMB3 connections:

LAN Vs. WAN Peak rate ratio (218 Mbps Vs. 14.6 Mbps )

66% data reduction on encrypted SMB3 connection for the above operation on the cold transaction:

66% data reduction on Warm Transaction

93% data reduction on SMB3 encrypted connection over Warm Transaction on FSx:

93% Data Reduction

106.7 times capacity increase (LAN throughput of 981.5 MB translated to 9.2 MB of WAN throughput):

106.7 times capacity increase

Conclusion

Riverbed application acceleration provides tremendous benefits to the workforce, phenomenally improving user productivity. It lowers bandwidth requirements and reduces egress traffic costs in AWS by eliminating gigabytes of transferred data. The user experience is dramatically enhanced.

]]>
SaaS Accelerator Configuration Walkthrough https://www.riverbed.com/blogs/saas-accelerator-configuration-walkthrough/ Thu, 07 May 2020 00:40:06 +0000 https://live-riverbed-blog.pantheonsite.io?p=14839 Today’s work environment is certainly not what anyone had anticipated it would be at the beginning of this year. Today, most of the world is forced to work from home (WFH) unless deemed an essential employee. This can add several challenges to an organization. While organizations have scrambled to implement company-wide VPN and issue laptops to employees that normally sit in an office, one aspect that’s proving to be a challenge is dealing with the latency seen in home broadband networks. Perhaps your organization has migrated to cloud apps such as Microsoft Office 365; still, one thing rings true: from the time your application traffic leaves the laptop until it reaches your cloud provider, IT organizations have little control over the latency that users will experience. So in this article, I’m going to walk you through the process of configuring Riverbed SaaS Accelerator, Cloud Controller, and Client Accelerator so that end users can immediately benefit from our application acceleration and optimization techniques. Let’s get started.

Solution components

There are three components to the solution I’m covering in this article.

  1. Riverbed SaaS Accelerator
  2. SteelCentral Controller for SteelHead Mobile
  3. Client Accelerator

I’m going to cover the configuration of each of these, but let me give you a brief overview of the three first, beginning with SaaS Accelerator. SaaS Accelerator is a Riverbed-hosted SaaS offering. You’re given access to SaaS Accelerator Manager, which acts as your cloud-hosted management interface to configure and monitor SaaS acceleration. SteelCentral Controller for SteelHead Mobile is a virtual appliance you can deploy in your data center or in the cloud, for example in Azure.

Azure Marketplace
Azure Marketplace

Finally, Client Accelerator, formerly known as SteelHead Mobile, is a client application for Windows or macOS that performs client-side application optimization between the user and a cluster of Riverbed Cloud SteelHeads. Each of these three components are required to accelerate application traffic. With that said, let’s get into the configuration.

Configuration walkthrough

If you prefer to watch the configuration, I invite you to view the following video where I walk and talk you through each of the following steps. If you’re the reading type, you can skip it and continue on.

Now, if you’re still with me, let’s start by logging into SaaS Accelerator Manager. You can see this in the image below. What you’re looking at is the first step in the configuration: a Root Certificate needs to be generated. The only requirement is the Common Name, which I’ve set to TE-Lab.

Create Cert
Create Cert

The next step is to enable acceleration for an application. In the image below, there are no applications being accelerated right now. By clicking the “Accelerate Application” button, you’ll get a configuration page that requires our attention.

Enable Service
Enable Service

To enable the service, we need to select which application we want to accelerate. We support Office 365, Salesforce, and Box to name a few. Next, we select the Region. Finally, we select the number of users that we will be accelerating for at any given time. These steps are important because of what needs to happen in the background after you click submit. So what’s that? Well, in short, a cluster of SteelHeads along with load balancers and additional network plumbing is brought up in the cloud, right in front of the Office 365 tenant in the United States (based on the region we selected). This lets us accelerate traffic from end-to-end, dropping your accelerated traffic off at the front door of the service.

Selection Options
Selection Options

You can see that process happening in the background in the image below. The service status will provide updates until the service is up and ready. This process usually takes less than 10 minutes.

Automation Runs
Automation Runs

While we wait for the service to be ready, we now jump over to the SteelCentral Controller for SteelHead Mobile. Our first action is to tie the SaaS Accelerator Manager (SAM) to the controller and vice-versa. To do this, we need to provide the FQDN of SAM and the Registration token that you retrieve from SAM under the Client Appliances page.

Enter FQDN
Enter FQDN

This immediately puts us on the Gray List.

On Gray List
On Gray List

To get us off the gray list, we need to head back to SAM. We now see the serial number entry for the controller in SAM and we can click it to move it to the white list.

Change Gray List
Change Gray List

In the following image, you can see that we have moved the controller to the white list. At this point, the two will communicate with one another.

Set White List
Set White List

Moving back to the controller, we enable acceleration. When we apply the change, you will see a list of applications and the service endpoint for acceleration. This comes from SAM, and the information is based on the cluster that was created when we enabled the service.

Enable Service
Enable Service

Now we need to create a policy on the controller to tell our client to accelerate Office 365 traffic. We’re going to apply this policy to an MSI package later on.

Create Policy
Create Policy

Once the policy is created, we need to create the rules. Here I’m creating one of four rules that are required for Office 365 traffic. Most of the rule is left at its default values, however, we set the Destination Subnet or Host Label to SaaS Application and then select the app from the dropdown.

Add Rules
Add Rules

After this has been done for all four of the Office 365 traffic types, we can see each of the rules.

Review Rules
Review Rules

In addition to the In-Path rules, we need to enable MAPI over HTTP optimization.

Enable MAPI
Enable MAPI

Then we need to enable SSL Optimization.

Enable SSL
Enable SSL

And finally, we enable SaaS Acceleration. Clicking “Update Policy” finalizes the policy creation.

Enable SaaS
Enable SaaS

Next, we create a package that ties the policy to the client. Here I am creating a package called TE-LAB. The group is TE-LAB and the endpoint settings will come from the TE-LAB policy where we created the in-path rules. You can also grab the MSI from here. On a side note, the controller requires you to save the configuration. You can see the save button in the menu bar in the image below. Make sure you click it!

Create Package
Create Package

At this point, the service is ready to rock, but we need to throw some hosts at it. We’ve downloaded the client installer to the machine you see in the following image. Let’s run that file and work through the installation wizard. This is pretty basic stuff, so I’ll spare you the multiple screenshots. However, one thing I should mention is that you probably wouldn’t install this manually on all your clients; you can use your software management tool of choice to push to multiple clients at once (see the example after the image below).

Start Install
Start Install
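For a scripted push, the installer is a standard MSI, so the usual silent-install switches generally apply; the package filename here is a placeholder for the MSI you downloaded from the controller:

msiexec /i ClientAccelerator.msi /qn /norestart

The /qn switch suppresses the UI and /norestart defers any reboot, which makes the command suitable for distribution tools such as SCCM or Intune.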

After the install completes and the client is running, it’s time to send some traffic. In the image below, I’ve logged into my SharePoint site and I’m downloading a 500MB file. The first time I did this without Client Accelerator installed, it took me about two minutes to download.

Begin Download
Begin Download

After Client Accelerator was installed, it took me about 30 seconds.

Download Speed
Download Speed

Why such a difference? Well, have a look at the image below. Here we are looking at Client Accelerator, and you’ll notice the Performance Statistics showing 98% data reduction. What’s happening here? Well, this is just one of our techniques used to accelerate traffic. A local cache is created on the client; the default size of the cache is 10 GB. As files are transferred, this information is cached, so when a file is retrieved we don’t have far to go. Making changes to files means that only the changed data needs to be transferred, not the entire file.

Review Client
Review Client

There are, of course, other techniques in use. For example, we also reduce the number of application turns by handling them locally between the client and the agent (Client Accelerator) rather than sending all that protocol data over the network and waiting for responses from the service side, when most of that back-and-forth is unnecessary and inefficient.

Wrap up

As you can tell from the walkthrough, the setup is not too complex. There are a few areas you need to interact with: SAM, the controller, and the Client Accelerator agent. Still, the benefits are immediate and quantifiable. I hope you found this article interesting, and I look forward to your comments.

 

]]>
Does Your SD-WAN Pass the Enterprise-Grade Litmus Test? https://www.riverbed.com/blogs/does-your-sd-wan-pass-the-enterprise-grade-litmus-test/ Mon, 04 May 2020 13:00:00 +0000 https://live-riverbed-blog.pantheonsite.io?p=14862 Are all enterprise SD-WAN solutions created equal? Although a rhetorical question, the answer is a definitive, “No.”  That begs the question … “What makes one SD-WAN different from another?” And in particular, since SD-WAN adoption to date has occurred predominantly amongst smaller- and medium-sized businesses … “What makes an SD-WAN solution truly fit for larger organizations and global enterprises?”

Analyst firm, IDC, surveyed enterprises to understand what they demand from an SD-WAN solution to scale from pilots to full-scale rollouts.

You can read some of their findings in the IDC Technology Spotlight sponsored by Riverbed: Crossing the Chasm: What Makes SD-WAN Enterprise Grade.

 

IDC paper on key components of an enterprise SD-WAN
IDC paper on key components of an enterprise SD-WAN

Here are 3 key messages about enterprise SD-WAN from IDC’s Technology Spotlight:

 

1. Look beyond connectivity

“Some advancements that would make SD-WAN solutions more appealing to a broader swath of large enterprises are as follows:

Core network services, such as enterprise-class routing, quality of service (QoS), dual-stack IPv4/IPv6, and segmentation, that seamlessly integrate with existing networks (Such services are critical during brownfield phases of SD-WAN deployment.)”

2. Importance of cloud-based application acceleration

“IDC survey respondents ranked the ability to connect to SaaS and IaaS providers and the ability to improve the performance of those connections as the top 2 use case criteria for adopting SD-WAN solutions.”

Use cases for enterprise sd wan

3. Value of network visibility and analytics

“It’s one thing to have optimized connections across the network; it’s another thing to have the tools in place to monitor those connections and ensure they’re operating the way they’ve been programed to.

SD-WAN solutions that have integrated performance visibility and analytics not only can help ensure performance but also can be a security benefit. Solutions that retain a rich history of packet-, flow-, and device-centric telemetry can help identify the root cause of attacks.”

Addressing the growing demand for enterprise SD-WAN

Application workloads are increasingly distributed across on-premises data centers and cloud data centers. The complexity and cost of WAN, coupled with insatiable demand for bandwidth are driving enterprises to consider SD-WANs.

Emergence of hybrid WAN to meet the dynamic application workloads
Complex connectivity needs require a new approach to networks

It’s one thing to act as an SD-WAN overlay network served by an underlay of conventional physical routers.  It’s another thing altogether to communicate with (and even replace!) conventional routers to bridge between the old and the new. Such capabilities are essential when navigating the complexities of brownfield network integration.

Simply steering packets over Internet broadband circuits to reach cloud applications does nothing to overcome the fundamental laws of physics. Network latency will ultimately dictate user experience. To truly boost performance and user experience across high-latency hybrid WANs, application acceleration is key.

In addition to reading IDC’s Technology Spotlight, read this blog about Riverbed’s SteelConnect EX SD-WAN architecture, which has raised the bar and set a new standard for what an Enterprise SD-WAN should be.

Does your SD-WAN pass the enterprise-grade litmus test?

]]>
Onboarding SteelConnect EX Branches with a Staging Script https://www.riverbed.com/blogs/branch-onboarding-using-a-staging-script/ Wed, 29 Apr 2020 13:00:00 +0000 https://live-riverbed-blog.pantheonsite.io?p=14710 https://www.youtube.com/watch?v=9zGajxUImvM

After you’ve deployed a SteelConnect EX SD-WAN headend, common next steps are to onboard branches to the headend, place them into the SD-WAN fabric, and allow them to be managed via overlay by the Director. In this article, we are going to walk through the process of using a staging script to onboard an Internet-only SD-WAN site. To do this, we have to complete a bit of pre-work first. We need to update the topology to add Branch D (I’m using the same environment that I used in the above-referenced article and video), configure some templates, and then use the script to onboard the device. To begin, let’s have a look at the topology. I’m using GNS3 because it makes it simple to add and delete sites, links, etc.

Topology Start
Topology Start

In the above topology you can see that the headend is already deployed and that I have three sites that are MPLS only. We will migrate those in other articles but in this one, we are going to add a brand new branch called Branch D and connect it to the Internet cloud. You can see this in the image below.

Topology Complete
Topology Complete

Now that the topology is ready, I want to start the VM so that it’s ready when I get to the point of onboarding it. In the meantime, I need to provision things in the Director interface. Let’s start there.

Creating a workflow template for branch onboarding

When you onboard a branch you do so using a template. There are several types of templates available to you in the SteelConnect EX SD-WAN solution, however what we want to start with is a Workflow Template.

  1. First, navigate to Workflows>Templates>Templates
  2. Then click the + button to add a template.
Adding Template
Adding Template

We need to populate a few values here. These include providing a name, selecting the type of template we are using, the organization, controllers, subscription information, and analytics cluster. Some of these values are required to move on. You’ll note that this is a multi-tabbed template so we have a few pages that will require us to provide configuration data. You can see the first page below.

Basic Config
Basic Config

After clicking continue, you now provide the Interfaces configuration. Take note of how the device is physically wired and also that there are no device-specific values here. This is a template after all. Multiple devices can be deployed using this template. Below is what my interface configuration looks like for this deployment.

Interface Config
Interface Config

One thing to note here is that the interfaces I’m using are port 1 and 2. This is because port 0 is reserved for management. Therefore, port 1 is mapped to vni-0/0 (which is the WAN interface) and port 2 is mapped to vni-0/1 (which is our LAN side interface).

Interface Definitions
Interface Definitions

The next tab is the Split Tunnels page, where you map your VRF to a WAN VR and define that we will use DIA (Direct Internet Access) for devices onboarded with this template. DIA ensures that Internet-bound traffic is sent directly to the underlay and not backhauled to the data center. There’s actually a lot that goes on behind the scenes here. Not only is NAT configured, but BGP connectivity is established between the WAN VR and the LAN VRF so that routing between the two can take place.

Split Tunnels
Split Tunnels

Be sure to click the + button or the DIA will not be applied.

On the next page, we need to configure our DHCP values for the LAN side. This will allow onboarded branches to allocate DHCP addressing to devices in that branch.

Services
Services

On the last page, you can click the Create button and your template will be committed to the Director. Once this is done, it’s time to add a device. Before adding the device, ensure that the template was committed successfully. You can check this by clicking the refresh icon to see if the template shows up in the list.

Verify Deploy1
Verify Deploy1

Next, we add the device and attach it to a device group. The device group references the template that was created earlier. You can add the Device Group as part of the Add Device workflow. We do this under Workflows, by clicking Devices and then the + symbol.

In the form fields, you need to provide all the required values. This includes providing a name, selecting an organization, providing a serial number that you create (I use the device name here for simplicity), and then clicking the +Device Group Link.

Basic Device Info

When the Create Device Group page appears we can provide a name, select the organization, select the template you previously created as the Post Staging Template and then click OK. You can see this in the image below.

Create Device Group

We then proceed to the Location Information page. On this page, the only mandatory field is the Country field. However, make sure you click the Get Coordinates button and that the latitude and longitude populate; otherwise, you will need to enter these manually.

Latitude and Longitude

The next page we need to work with is the Bind Data Page. This is where we tie the variables in the template to the actual device.

Bind Data

We can click on the Serial Number link to see the variables on one page. From here, we can enter the required values and then deploy the device.

Populating Bind Data

As we did with the template, we should verify that it shows up in the device list now.

Device Verify

Now comes the part we’ve all been waiting for: staging the device using ZTP. We do this from the CLI of the SteelConnect EX that we put in our topology at the outset of this article.

Stage the device using ZTP

Once on the CLI of the SteelConnect EX appliance, we need to navigate to the scripts directory.

cd /opt/versa/scripts/

Once there we run the deployment script. Here is an example of the staging script that I ran in the topology I’m working in. Essentially, we set the local and remote IKE identities, define the serial number, set the controller IP address, and then use DHCP on WAN 0. Now, a few things to clear up here. First, the controller IP is a “Public” address. In my lab that’s the “192.168.122.x” address space. Because this is an Internet-only branch, I need to onboard through the firewall at the edge of the data center. I’ve already configured static NAT and access rules on the firewall to allow this to happen. The second thing to clear up is that I said to use WAN 0 in the script, but I’m really plugged into eth1. That’s because eth0 is dedicated to management, so the SteelConnect EX software counts the first data port (eth1) as port 0. This maps to vni-0/0.

sudo ./staging.py -l SDWAN-Branch@Riverbed.com -r Controller-1-staging@Riverbed.com -n SC-EX-BRD-1 -c 192.168.122.25 -d -w 0
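
For reference, here is how each flag in that command maps to the values described above (meanings inferred from the surrounding text; check the script’s help output for the authoritative list):

-l  local IKE identity (the branch side)
-r  remote IKE identity (the Controller’s staging identity)
-n  device serial number, matching the one defined in the Director
-c  Controller “public” IP address
-d  obtain the WAN address via DHCP
-w  WAN port index (0 maps to vni-0/0)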

Once the command is executed, the FlexVNF instance will initiate an IPSec connection to the controller. You will also see the following output on the command line.

=> Setting up staging config
=> Checking if all required services are up
=> Checking if there is any existing config
=> Generating staging config
=> Config file saved staging.cfg
=> Saving serial number
=> Loading generated config into CDB

After you run the script, things sort of happen in the background. We can go to the CLI of the SteelConnect EX and use the show interfaces brief command to confirm:

[admin@versa-flexvnf: scripts] $ cli

admin connected from 127.0.0.1 using console on versa-flexvnf
admin@versa-flexvnf-cli> show interfaces brief | tab
NAME       MAC                OPER  ADMIN  TENANT  VRF     IP                  
-------------------------------------------------------------------------------
eth-0/0    0c:cb:e5:f9:c1:00  up    up     0       global                      
tvi-0/1    n/a                up    up     -       -                           
tvi-0/1.0  n/a                up    up     1       mgmt    10.254.33.3/24      
vni-0/0    0c:cb:e5:f9:c1:01  up    up     -       -                           
vni-0/0.0  0c:cb:e5:f9:c1:01  up    up     1       grt     192.168.122.157/24  
vni-0/1    0c:cb:e5:f9:c1:02  down  down   -       -                           
vni-0/2    0c:cb:e5:f9:c1:03  down  down   -       -                           
vni-0/3    0c:cb:e5:f9:c1:04  down  down   -       -                           
vni-0/4    0c:cb:e5:f9:c1:05  down  down   -       -

What happens next is expected: while we are checking the tunnel, the device is being provisioned, so we see messages that a commit was performed via ssh using netconf. This is the Director provisioning the device with the values we defined when we deployed the device in the Director GUI. Once provisioned, the device will reboot. You should see the following:

admin@versa-flexvnf-cli> 
System message at 2019-08-04 22:24:34...
Commit performed by admin via ssh using netconf.
admin@versa-flexvnf-cli> 
System message at 2019-08-04 22:24:40...
Commit performed by admin via ssh using netconf.
admin@versa-flexvnf-cli> 
Broadcast message from root@versa-flexvnf
        (unknown) at 22:24 ...

The system is going down for reboot NOW!

System message at 2019-08-04 22:24:50...
    Subsystem stopped: eventd

System message at 2019-08-04 22:24:50...
    Subsystem stopped: acctmgrd
admin@versa-flexvnf-cli> admin@versa-flexvnf-cli> 
System message at 2019-08-04 22:24:50...
    Subsystem stopped: versa-vmod
admin@versa-flexvnf-cli> 
versa-flexvnf login:

Digging a bit deeper into the provisioning output, note the tunnel interfaces that were created.

Once successfully connected, the FlexVNF appliance will automatically reboot and load the correct configuration. After the reboot, you should see the different virtual routers. To verify, we go back into the CLI of the SteelConnect EX at Branch D and issue the command to view the interfaces:

admin@SC-EX-BRD-1-cli> show interfaces brief | tab
NAME          MAC                OPER   ADMIN  TENANT  VRF                    IP
--------------------------------------------------------------------------------------------------
eth-0/0       0c:cb:e5:32:9c:00  up     up     0       global
ptvi1         n/a                up     up     2       Riverbed-Control-VR    10.254.24.1/32
tvi-0/2       n/a                up     up     -       -
tvi-0/2.0     n/a                up     up     2       Riverbed-Control-VR    10.254.17.44/32
tvi-0/2602    n/a                up     up     -       - 
tvi-0/2602.0  n/a                up     up     2       Internet-Transport-VR  169.254.7.210/31
tvi-0/2603    n/a                up     up     -       -
tvi-0/2603.0  n/a                up     up     2       global                 169.254.7.211/31
tvi-0/3       n/a                up     up     -       -
tvi-0/3.0     n/a                up     up     2       Riverbed-Control-VR    10.254.25.44/32
vni-0/0       0c:cb:e5:32:9c:01  up     up     -       -
vni-0/0.0     0c:cb:e5:32:9c:01  up     up     2       Internet-Transport-VR  192.168.122.149/24
vni-0/1       0c:cb:e5:32:9c:02  up     up     -       -
vni-0/1.0     0c:cb:e5:32:9c:02  up     up     2       Riverbed-LAN-VR        10.0.13.254/24
vni-0/2       0c:cb:e5:32:9c:03  down   down   -       -
vni-0/3       0c:cb:e5:32:9c:04  down   down   -       -
vni-0/4       0c:cb:e5:32:9c:05  down   down   -       -

We should also verify that we can reach addresses on the Internet from the CLI:

admin@SC-EX-BRD-1-cli> ping 1.1.1.1 routing-instance Internet-Transport-VR
Bind /etc/netns/Internet-Transport-VR/resolv.conf.augsave -> /etc/resolv.conf.augsave failed: No such file or directory
PING 1.1.1.1 (1.1.1.1) 56(84) bytes of data.
64 bytes from 1.1.1.1: icmp_seq=1 ttl=50 time=5.35 ms
64 bytes from 1.1.1.1: icmp_seq=2 ttl=50 time=2.23 ms

And finally, from the Ubuntu client that we placed in the branch, we want to see whether it received a DHCP address.

Check DHCP

Since we have an IP address, we should try a ping; when we do, we can see that DIA is working as expected.

Check DIA

Wrap up

Well, this is just one example of how to onboard branches in the SteelConnect EX SD-WAN solution. There’s also the URL-ZTP method, but we can save that for another article. Either way you choose, the result should be the same: the device will become part of the SD-WAN fabric, establish an overlay to the controller, and then build overlays to other sites as they are onboarded as well.

Enterprise-Grade SD-WAN: SteelConnect EX Advanced Routing Capabilities https://www.riverbed.com/blogs/steelconnect-ex-advanced-routing/ Mon, 27 Apr 2020 08:00:00 +0000 https://live-riverbed-blog.pantheonsite.io?p=14736 Advanced network routing is one of the most powerful features of Riverbed’s enterprise-grade SD-WAN solution SteelConnect EX – definitely one of my favorites. While other vendors took a different path offering the minimum feature set, SteelConnect EX implements all the advanced routing capabilities Enterprise Network Architects need to get full control of their infrastructure, at scale.

In previous posts, I gave an architecture overview of SteelConnect EX as well as provided general principles to integrate SteelConnect EX in a data center. In this blog, I will provide a deep dive into the routing and SD-WAN mechanisms of SteelConnect EX. I will not detail how to configure static routing, BGP, or OSPF, but will focus on the internal mechanisms of Riverbed’s SD-WAN solution.

So buckle up and let’s proceed.

Virtual Routers

When you consider a SteelConnect EX branch appliance, it’s not simply an SD-WAN router; it’s a system that runs multiple virtual routers (VRs). Why multiple routers? That’s what we are going to address right now. Trust me, it makes our solution one of the most elegant and powerful SD-WAN solutions for attaining maximum control.

So what is a virtual router in the first place?

By virtual router, I don’t mean a virtual appliance that you would deploy on a hypervisor. The architecture we are going to review is the same on any type of SteelConnect EX appliance: hardware, virtual and cloud images.

Virtual routing instances allow administrators to divide a device into multiple independent virtual routers, each with its own routing table. Splitting a device into many virtual routing instances isolates traffic traveling across the network without requiring multiple devices to segment the network.

Virtual routing and forwarding (VRF) is often used in conjunction with Layer 3 sub-interfaces, allowing traffic on a single physical interface to be differentiated and associated with multiple virtual routers. Each logical Layer 3 sub-interface can belong to only one routing instance.

Besides the global routing instance, which is the main one and used for management, there are three types of instances:

  • Transport VR: each circuit has a separate VR with its own routing table and routing protocols. You can create a Transport VR for MPLS, one for Internet, another one for 4G/LTE. The Transport VR is part of the underlay network; it interacts with the rest of the network and it owns a network interface (or sub-interface if you use VLANs). The system allows up to 16 uplinks.
  • The Control VR is tied to an organization (tenant). It has no physical interface attached to it. It is the entry point to the SD-WAN overlay. It forms tunnels with remote sites and with the Controller. It forwards “user” traffic through the overlay to other SD-WAN equipped sites. Several LAN VRFs can be attached to one Control VR.
  • The LAN VRF is also tied to an organization because it is paired with a Control VR (and only one). Multiple LAN VRFs can be created to segment the traffic.
SteelConnect EX – Virtual Routers

What is the benefit of having three types of instances? Let’s have a look at how we are using those VRs for SD-WAN.

Roles of the Routing Instances for SD-WAN

A simple way to summarize the role of each instance would be the following:

  • Transport VR is the underlay
  • Control VR is the overlay
  • LAN VRF is the LAN traffic
Routing instances roles

Let’s consider connecting to a server hosted in another site across the WAN. This site is also equipped with a SteelConnect EX gateway.

Our workstation will send traffic to its default gateway, and that traffic will eventually hit the LAN VRF. The first thing the appliance will do is a route lookup. Since the other site is also part of the SD-WAN overlay, the Control VR will have advertised the server subnet to the LAN VRF. Thus the packets will be routed to the Control VR, which is going to encapsulate them in the overlay tunnel.

The tunnel runs over the transport circuits. Depending on the SD-WAN policies, the uplinks will be bonded (by default), or App-SLA-based path selection rules will kick in and steer the traffic onto a particular uplink.

The overlay is a tunnel built on several layers of encapsulation:

  • On top of each transport domain (Internet, 4G/LTE, MPLS, etc.), a stateless VXLAN tunnel will be created between gateways.
  • Between the Control VRs of two gateways, one (and only one) stateful IPsec (over GRE) tunnel is formed; it is transported on the VXLAN tunnels formed on the underlay (remember, the Control VR has no physical interfaces).
SteelConnect EX overlay tunnels
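
Putting those layers together, the encapsulation stack for user traffic between two sites looks roughly like this, from the inside out (a simplified view of the description above; the VPN label is covered a little further below):

original IP packet (from the LAN VRF)
  + MPLS (VPN) label identifying the LAN VRF
  + IPsec over GRE on untrusted transports, plain GRE on trusted ones (one tunnel per pair of Control VRs)
  + VXLAN (one stateless tunnel per transport domain)
  + underlay IP header (Internet, MPLS, 4G/LTE, ...)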

Wait! Why do we have so much encapsulation happening? What is the impact on performance? I know these questions popped up in your head as you were reading the previous section.

Overlay Efficiency

Let’s rewind a bit and discuss the VXLAN piece first. Within a transport domain, by default and unless specified otherwise (for example, when creating hub-and-spoke topologies), all gateways will automatically form VXLAN tunnels with each other. As a result, two sites with an MPLS-A uplink will have a VXLAN tunnel between each other. If one site is Internet-only and the other MPLS-only, they won’t form tunnels; the only way for those two sites to communicate with each other will be to go through a hub connected to both transport domains.

VXLAN is a well-known data center technology that builds Layer 2 networks on top of Layer 3. It uses flow-based forwarding and is known to be more efficient than traditional Layer 3 routing, which routes packets separately. Furthermore, VXLAN can scale much better than other tunneling technologies like IPsec, with an address space that can exceed 16M entries.

On top of VXLAN, various IP transport tunnels can be implemented. In the case of SteelConnect EX, the Control VR will build IPsec over GRE for untrusted networks (by default) or simply GRE for the trusted ones.

Other SD-WAN solutions on the market form IPsec tunnels on each uplink; most of them are always-on and rarely on-demand, otherwise performance is penalized during switchovers. In a full-mesh network, the complexity is O(n^2), or more precisely O(n^2 x L^2), where n is the number of sites and L the number of uplinks, which very quickly becomes resource-hungry on a system.

Since Control VRs create only one IPsec tunnel with each remote site, no matter how many uplinks there are, we have a much more efficient system that can very quickly fail over in case of a WAN outage while consuming fewer resources.
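
To make the scaling argument concrete, here is a quick back-of-the-envelope comparison as a Python sketch (figures are illustrative only, derived from the complexity formulas above; they are not product limits):

# Tunnel counts for a full mesh of n sites, per the formulas quoted above.
def per_uplink_ipsec_tunnels(n_sites, n_uplinks):
    # One always-on IPsec tunnel per pair of uplinks for every site pair: O(n^2 x L^2)
    site_pairs = n_sites * (n_sites - 1) // 2
    return site_pairs * n_uplinks * n_uplinks

def control_vr_ipsec_tunnels(n_sites):
    # One (and only one) stateful IPsec tunnel per site pair: O(n^2)
    return n_sites * (n_sites - 1) // 2

print(per_uplink_ipsec_tunnels(100, 3))  # 44550 tunnels to maintain
print(control_vr_ipsec_tunnels(100))     # 4950 tunnels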

Overlay encapsulation

All the encapsulation happens in the Control VR.

As you can see, an MPLS (VPN) label is attached to each LAN VRF. MPLS? Yes! We are leveraging MPLS technologies, too: Control VRs are forming a Virtual MPLS Core network.

In total, the overhead is 58 bytes for encrypted traffic; hence, with a standard 1500-byte link MTU, the effective MTU would be 1442 bytes by default.

To be exact, each path resiliency feature you enable (like FEC, packet replication, or packet striping) adds another 12 bytes of overhead.
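
As a worked example, here is that arithmetic as a small Python sketch (assuming a standard 1500-byte link MTU; purely illustrative):

# Effective overlay MTU from the overhead figures quoted above.
def effective_mtu(link_mtu=1500, resiliency_features=0):
    overhead = 58 + 12 * resiliency_features  # 58-byte base, +12 per feature enabled
    return link_mtu - overhead

print(effective_mtu())                       # 1442
print(effective_mtu(resiliency_features=2))  # 1418 (e.g., FEC + packet replication)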

Split Tunnels

Now that we have a better understanding of the system architecture and the overlay mechanism, let’s have a look at the routing between VRs. Split Tunnels refers to the menu used to pre-configure the inter-VR routing using Workflows on the Director.

When I teach a class on SteelConnect EX, I usually ask engineers in the room what they would need to do to have a packet routed between LAN and WAN with the following diagram:

Routing – Primer

The first thing we need to do is interconnect the routers with a cable. We also need to set an IP address on each of the routers’ interfaces. Finally, we need some sort of routing: static routes or a dynamic protocol like BGP.

It may sound obvious, but bear with me: this approach is super helpful for picturing how the system works. On SteelConnect EX, the creation of all of those items is automatic and the configuration is pushed from the Director:

  • IP addresses will be automatically set on the VRs for internal use (LAN and WAN interfaces will need to be configured though)
  • The “virtual wire” is a tunnel to interconnect the routers that the system builds for us
  • BGP peering is configured to exchange routes

By default, a tunnel is created between the LAN VRF and the Control VR. BGP peering is established on the routing instances. The LAN VRF advertises its directly connected subnets to the Control VR so they are visible on the SD-WAN overlay. The Control VR advertises all subnets from the SD-WAN fabric to the LAN VRF. When you leave the split tunnel configuration empty, this is what happens.
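
Schematically, the default route exchange can be summarized as follows:

LAN VRF --(BGP: directly connected LAN subnets)--> Control VR
LAN VRF <--(BGP: all subnets of the SD-WAN fabric)-- Control VR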

“Passthrough”

During the template creation using Workflows, when the split tunnel is configured between the LAN-VRF and the Transport VR (say MPLS) with no options ticked, this is what we call the passthrough mode.

Split tunnel configuration – Passthrough

What happens when we implement that?

A tunnel is created between the LAN VRF and the Transport VR (here MPLS) to directly interconnect them. BGP peering is established between the two routing instances, which makes the LAN VRF aware of underlay subnets and allows the LAN VRF subnets to be advertised on the MPLS network. This is helpful in a hybrid deployment where SD-WAN and traditional routers will coexist.

Routing – Passthrough
DIA: Direct Internet Access

Again, leveraging the power of automation, when we select the option DIA in the split tunnel configuration, many things happen in the background to achieve your goal, which is to put in place direct Internet breakout.

In addition to the routes exchanged between the LAN VRF and the Control VR, a tunnel is created between the LAN VRF and the Transport VR (here Internet) to directly interconnect them. BGP peering is established between the two routing instances, which allows the LAN VRF to advertise its directly connected subnets to the Internet Transport VR. The latter will advertise a default route to the LAN VRF. Finally, CG-NAT is configured for all outbound traffic on the Internet.

Routing – Direct Internet Access
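
Schematically, on top of the default LAN VRF/Control VR exchange, DIA adds the following:

LAN VRF --(BGP: directly connected LAN subnets)--> Internet Transport VR
LAN VRF <--(BGP: default route)-- Internet Transport VR
CG-NAT applied to all outbound Internet traffic
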
Gateway

Finally, the last option is to select “Gateway.”

Split tunnel configuration – Gateway

In this case, the subnets from the overlay will leak into the underlay (here MPLS) and vice-versa; subnets learned from the underlay will be advertised into the SD-WAN.

Routing – Gateway

This feature allows you to implement transit use cases between the SD-WAN fabric and underlay networks, as well as disjoint networks.

Conclusion

Today, we have learned that SteelConnect EX grants full control and flexibility to build the SD-WAN fabric on top of the traditional network.

There are three types of routing instances with different roles:

  • Transport VR is the underlay
  • Control VR is the overlay
  • LAN VRF is the LAN traffic

What we did not cover here is the multi-tenancy capability of the solution and this will be addressed in the next blog.

A question, a remark, some concerns? Please don’t hesitate to engage us directly on Riverbed Community.

Riverbed Introduces LTE/Wi-Fi Enabled Enterprise SD-WAN https://www.riverbed.com/blogs/riverbed-introduces-lte-wi-fi-enabled-enterprise-sd-wan/ Fri, 24 Apr 2020 15:30:00 +0000 https://live-riverbed-blog.pantheonsite.io?p=14770 The SteelConnect EX portfolio has expanded its power and flexibility with the addition of three new enterprise SD-WAN appliances: EX-385, EX-485, and EX-685. Ideal for Internet-only or hybrid WAN connectivity to small- and medium-sized branches, these xx85 platforms have LTE for cellular backup, the latest Wi-Fi 6 for branch user WLAN access, and Power over Ethernet (PoE) for video cameras and VoIP phones. These platforms can support a multitude of WAN transport technologies such as MPLS, private and public Internet broadband, and LTE.

Salient Capabilities:

  1. Enterprise-class SD-WAN
  2. Industry-leading Application Acceleration
  3. Advanced network and SD-WAN security

SteelConnect EX-685 SD-WAN Device for Branches: Front View

SteelConnect EX-685 SD-WAN Device for Branches: Connectivity Ports

Platform Details

Enterprise-class SD-WAN:

The SteelConnect EX xx85 platforms support the complete and power-packed enterprise SD-WAN feature set. These features include enterprise routing for overlay and underlay network communication, dynamic traffic conditioning for Internet links, advanced path resiliency, policy-based routing, and much more. Riverbed delivers IT agility with next-generation SD-WAN network architecture, moving from traditional packet-based networks to application-centric networks. Visit the What is SD-WAN? FAQ page to learn more about SD-WAN.

Industry-Leading Application Acceleration:

An industry leader in application performance over WANs, Riverbed extends its superior WAN optimization and application acceleration capabilities to enterprise SD-WAN. All three platforms seamlessly interface with SteelHead appliances for on-premises, cloud, and SaaS application acceleration. And SteelConnect EX-685 supports Universal Customer Premises Equipment (uCPE) with virtual SteelHead to accelerate applications in a 1-box configuration.

Advanced Network and SD-WAN Security:

All SteelConnect EX xx85 models deliver Next-Generation Firewall (NGFW) services. The SteelConnect EX-485 and EX-685 support additional advanced security functions including Next-Gen IPS, Malware Protection, Antivirus and Unified Threat Management (UTM) functionality. All models also provide flexible service-chaining to interface with any third-party security solution of the customer’s choosing.

Building the right Enterprise SD-WAN for your needs:

Read our latest blog on how to design the SD-WAN headend to connect these LTE/Wi-Fi enabled SD-WAN branch appliances to the datacenter or regional hubs. Download the SteelConnect EX Specification Sheet for SD-WAN models and technical details. 

Availability:

The xx85 platforms are currently orderable and generally available. Contact Riverbed for more information.

15 Surprising Stats on the Shift to Remote Work due to COVID-19 https://www.riverbed.com/blogs/15-surprising-stats-on-remote-work-due-to-covid-19/ Thu, 23 Apr 2020 12:30:00 +0000 https://live-riverbed-blog.pantheonsite.io?p=14757 As a result of the COVID-19 pandemic, “work from home” has rapidly escalated from one of many remote work options to “the remote work option.” For IT professionals, this means enabling employees with the basics (laptops and connectivity) and optimizing application delivery despite unpredictable network performance due to bandwidth contention and latency. Here are 15 stats to help you as you prepare for the new normal of working from home.

15 surprising stats on the shift to remote work

  1. There has been a massive shift to work from home. 88% of organizations have encouraged or required their employees to work from home and 91% of teams in Asia Pacific have implemented ‘work from home’ arrangements since the outbreak.[i]
  2. Coronavirus has been a catalyst for remote work. 31% of people said that Coronavirus (COVID-19) was the trigger to begin allowing remote work at their company.[ii]
  3. Organizations are mobilizing, using crisis response teams to coordinate their response. 81% of companies now have a crisis response team in place. [iii]
  4. Business continuity tops C-level concerns. 71% of executives are worried about continuity and remote work productivity during the pandemic.[iv]
  5. Cloud investment will continue. Software is expected to post positive growth of just under 2% overall this year, largely due to cloud/SaaS investments.[v]
  6. SaaS usage soars to meet collaboration needs. Use of video conferencing is exploding as Zoom reached 200 million daily participants (paid and free) up from just 10 million in December.[vi]
  7. Microsoft 365 video usage jumps. People are turning on video in Teams meetings two times more than before and total video calls in Teams grew by over 1,000 percent in the month of March.[vii]
  8. Cybercriminals are taking advantage of the crisis. Over a 24-hour period, Microsoft detected a massive phishing campaign using 2,300 different web pages attached to messages and disguised as COVID-19 financial compensation information that actually led to a fake Office 365 sign-in page.[viii]
  9. Technology and infrastructure are some of the biggest barriers to connectivity and remote workforce productivity. 54% of HR leaders indicated that poor technology and/or infrastructure for remote working is the biggest barrier to effective remote working in their organization.[ix]
  10. Poor SaaS performance hampers remote worker productivity. 42% of enterprises report that at least half of their distributed/int’l workers suffer consistently poor experience with the SaaS apps they use to get their jobs done.[x]
  11. New tools can improve SaaS performance. 81% report that it is important to adopt tools and techniques to address network latency issues that impact Microsoft 365 performance.[xi]
  12. Remote work has a positive impact on workforce retention. To retain staff in recovering from the COVID-19 pandemic response later in 2020, organizations should expect that 75% of their staff will ask to expand their remote work hours by 35%.[xii]
  13. Remote work boosts productivity. Remote workers are 35% to 40% more productive than people who work in corporate offices.[xiii]
  14. Lower OpEx is an important benefit of work-from-home. 77% of executives say allowing employees to work remotely may lead to lower operating costs.[xiv]
  15. Remote work is here to stay. 74% of companies plan to permanently shift to more remote work post COVID.[xv]

Learn new short- and long-term strategies to enable your remote workforce and improve remote work productivity

The Gartner Report, Coronavirus (COVID-19) Outbreak: Short- and Long-Term Strategies for CIOs can provide you with insightful recommendations on how to respond, recover, and thrive. For more resources on how to optimize performance for your work-from-home employees, check out our solutions page www.riverbed.com/remote-workforce-productivity.

[i] Gartner, Coronavirus in Mind: Make Remote Work Successful!, 5 March 2020

[ii] https://www.owllabs.com/blog/coronavirus-work-from-home-statistics

[iii] Gartner, COVID-19 Bulletin: Executive Pulse, 3 April 2020

[iv] Gartner, COVID-19 Bulletin: Executive Pulse, 3 April 2020

[v] https://www.idc.com/getdoc.jsp?containerId=prUS46186120

[vi] https://blog.zoom.us/wordpress/2020/04/01/a-message-to-our-users/

[vii] https://www.microsoft.com/en-us/microsoft-365/blog/2020/04/09/remote-work-trend-report-meetings/

[viii] https://www.darkreading.com/threat-intelligence/after-adopting-covid-19-lures-sophisticated-groups-target-remote-workers/d/d-id/1337523

[ix] Gartner, Coronavirus in Mind: Make Remote Work Successful, 5 March 2020

[x] ESG, March 2019

[xi] TechTarget, Feb 2020

[xii] Gartner, What CIOs Need to Know About Managing Remote and Virtual Teams through the COVID-19 Crisis

[xiii] Gartner, How to Cultivate Effective “Remote Work” Programs, 2019

[xiv] https://www.flexjobs.com/blog/post/remote-work-statistics/

[xv] Gartner, COVID-19 Bulletin: Executive Pulse, 3 April 2020

The Golden Age of Spear Phishing https://www.riverbed.com/blogs/the-golden-age-of-spear-phishing/ Thu, 16 Apr 2020 14:30:00 +0000 https://live-riverbed-blog.pantheonsite.io?p=14701 I get it, everybody is working from home, and it is changing things on the network. The limits of VPNs have been pushed, stretched, and exceeded. Video conferencing systems have shown some “growing pains.” And, online SaaS applications have seen a lot of “resource unavailable” errors. These are examples of some of the effects we can easily see. What is less easy to see, however, has me much more worried.

Expect more Spear Phishing attacks

With face-to-face interactions removed, spear phishing has become a bit easier. Follow me through this thought exercise: you forget to lock your video conferencing room, a malicious actor joins (without video this time) and learns a detail or two of the goings-on in the business. Next, this hacker crafts a spear phishing email: “Attached is a link to the document I promised you during our 3:00 PM call. Ping me if you have further questions.” The link leads to the malware, which now installs on the worker’s computer.

This malware has a signature that the corporate firewall might have blocked. The command & control (C2) communications perhaps go to a well-known C2 server, which the IDS (intrusion detection system) could have spotted. But because the VPN is struggling to keep up with demand, most workers have enabled split-tunneling1 so requests for resources outside the corporate network go directly to the Internet. The firewall and IDS are not seeing the malware. Even if this particular scenario does not apply to your network, it does not stretch the imagination much to see how the current WFH environment has ushered in the Golden Age of Spear Phishing.

Data theft now easier than ever

In a similar vein, performance degradations and access to a company’s sensitive resources have become much harder to understand. It is as if we have all picked up and started working from the coffee shop. To enable access to resources, IT security teams are punching holes faster than a prize fighter. Which ones will get closed when people return to their offices?

Data theft is also much harder to control with so many employees working from home.

Which data accesses are benign and which ones are malicious? What does data theft look like in these WFH times? Time will tell, but one thing is certain: what once appeared to be highly abnormal is now the new norm. It is going to take time to figure out what changed, how it changed, and how to tell right from wrong.

So, the new reality is that we do not know today what we will need to be looking at tomorrow. Especially if we work under the assumption that attack vectors have now moved outside of most corporate security visibility and that more system compromises are taking place where we are unable to directly detect them. Our best hope may be to detect the knock-on behaviors that result from these compromises: brute-force attempts at corporate resources, large data movements, scanning and reconnaissance behavior, etc. These “Network Behavioral Anomaly Detection” techniques have at times been accused of inconclusive alerting, yet a notification of an odd or changing behavior may be the only indicator the cyber defender is going to get these days.

Full fidelity visibility is the last line of defense

In fact, the best preparation we have is to simply record network data such as packets, flows, and logs – and store it for future forensic analysis. This, incidentally, separates the field of available visibility solutions. There are those that record everything they see vs. those that only record graphs and derived metrics. Full fidelity, or “forensically accurate” visibility, may seem like a last line of defense in normal times. But in changing times, it certainly shines at the front line.

In conclusion, even during the best of times threats are evolving. Investing in telemetry collection and storage can help any organization prepare for an unexpected reality, whatever that may be. Just remember: packets don’t lie!

1 “Split-tunneling” is a VPN trick where only the traffic destined for the corporate network goes into the tunnel and all other traffic goes out the normal path to the Internet to reduce VPN congestion and delays.

The Key to Telework Productivity: Accelerated Network and Application Performance https://www.riverbed.com/blogs/the-key-to-telework-productivity-accelerated-network-and-app-performance/ Tue, 14 Apr 2020 15:00:00 +0000 https://live-riverbed-blog.pantheonsite.io?p=14597 As the world adjusts to our new normal, we find ourselves at a crossroads with complications and difficulties never faced before. As government agencies pivot their workforces to maximum telework per OMB guidance, IT departments are working overtime to enable networks to handle increased traffic and bridge latency issues to meet 24/7 uptime and productivity expectations. Telework productivity is a critical component of continuity of operations–mission operations and citizen services depend on it.

Telework introduces many complexities for IT departments. They experience a significant loss of visibility and control of networks and applications as the number of users with remote connections rapidly expands, thus creating unpredictable challenges and outcomes that can negatively impact employee productivity and ultimately the mission. To provide organizations guidance, Riverbed recently hosted a webinar (now available on-demand) focused on empowering remote employees, and I wanted to share some key takeaways as they relate to the federal workforce.

When teleworking, users struggle to remain productive when their connections and applications aren’t up to the task of the demands placed on them and thus, fail to perform. Users compete with spouses, children, roommates, and neighbors for bandwidth as network traffic explodes to support video streaming and gaming in addition to the collaboration and office applications that allow them to get their jobs done. Add latency to this–the time and speed of the traffic as it traverses from the user to the server and back–and productivity takes a hit. There is a misperception that bandwidth equates to max speed. In reality, latency can be the performance killer.

Agencies need a few key things to have optimal application performance:

  • Access to connect to all needed applications and services
  • Speed to provide agility and the ability to work efficiently
  • Availability to ensure dependability and to minimize risk

Agencies can achieve these through deployment of network optimization, application acceleration, and network visibility solutions, from each app location through to each remote user’s computer. Such solutions can enhance user experience by:

  • Avoiding duplication of data to reduce the amount of data being sent across the network
  • Modifying and prioritizing the transport of traffic over the network
  • Reducing application round trips across the network
  • Providing detailed visibility into bottlenecks that affect user experience

The right solutions can deliver up to 75-times faster downloads, 40-times faster collaboration, 10-times faster SaaS (including Office 365, Salesforce, ServiceNow and other applications), and nearly 99 percent data reduction on any network. All of these things work together to make applications fast, consistent and available at home or anywhere.

We’re more vulnerable when we telework–typically relying on one connection and no backup, along with a standardized ISP that is already struggling to keep up with increased traffic and network connectivity. You can improve the telework experience through deployment of application acceleration and network optimization solutions that provide accelerated access to on-prem, IaaS, or SaaS-based applications, even in less than ideal conditions. While other things feel uncertain right now, our at-home office experience doesn’t have to.

Riverbed understands the challenges private and public sectors organizations are facing right now. We are in this with you and are ready to help you maximize application and network performance to keep your workforce productive. We are offering FREE 90-day licenses to Riverbed Client Accelerator (formerly SteelHead Mobile), as well as FREE webinars to help you improve telework productivity during these challenging times. Please check back often for updates and new training materials.

Your Workforce is Working at Home. What Now? https://www.riverbed.com/blogs/serving-your-at-home-workforce/ Wed, 08 Apr 2020 17:28:39 +0000 https://live-riverbed-blog.pantheonsite.io?p=14604 Organizations around the world are grappling with exactly how to slow the spread of COVID-19. As they implement stringent measures designed to combat the virus and protect citizens, organizations are likewise taking decisive steps to safeguard employees and their communities, while prioritizing remote workforce productivity to support business continuity. Here’s how we think about it.

People come first. The word “unprecedented” has been used over and over again to define a situation that strains our collective understanding. Companies, governments, and people are united in that we are all carefully navigating new and uncertain terrain. No one alive has been through this. At a company level, asking “How does this put people first?” can serve as a good guidepost on doing right by employees, customers and partners.

Focus on serving your mobile and remote workforce. For organizations that were already adept at serving a mobile and modern workforce, this is the chance to outline best practices for your teams and serve as an example for others. Share what business applications your teams rely on – and how you weather variable network connectivity to keep them up and running. For organizations that haven’t yet fully embraced a distributed remote model, this is the time to step up and enable your workforce with the technology, tips and tools they need to be productive. Make sure to listen to the challenges they’re encountering and be as responsive as you can.

Performance is paramount. Collaboration tools have gone from useful to non-negotiable in short order. Everyone has to use them. IT organizations in companies large and small universally recognize that at-home and business networks will be under tremendous strain at a time when performance of apps like these and other tools is more important than ever. Companies will have to keep employees well connected to corporate networks, the cloud, and business-critical SaaS applications.

Beyond the focus on IT performance to ensure productivity, every company has to identify fresh markers of performance. In this new paradigm, plans and projects are being reshaped, reimagined, or sometimes scrapped entirely. It will take time but we all must laser in on what’s critical and communicate those priorities effectively to employees.

Visibility is vital. Companies need to be able to take real-time stock of network and application performance. This is more than just anecdotal evidence – although calls and emails to internal customer service channels are important signals worth elevating. This means reviewing the technology tools in place that provide that performance visibility.

Are you able to quickly diagnose and fix network issues? That’s important when surges arising from sudden demand are common. Can you easily optimize experience and workforce productivity regardless of location? That is also essential as companies attempt to deliver consistent, reliable uptime to workers.

For more insight, check out our work from home webinar with tips and demos on boosting at-home network performance.

What’s on your mind as you consider how best to support your workers at home?

Using SD-WAN Templates for Simplicity, Scale, and Cost Effectiveness https://www.riverbed.com/blogs/using-sd-wan-templates-for-simplicity-scale-and-cost-effectiveness/ Wed, 08 Apr 2020 06:00:00 +0000 https://live-riverbed-blog.pantheonsite.io?p=14573 Changing market dynamics require businesses to embrace digital transformation and to adopt new technologies that improve productivity and customer experience and reduce costs. Enterprises are rapidly adopting cloud services such as Software as a Service (SaaS), Infrastructure as a Service (IaaS), and Platform as a Service (PaaS) across multiple clouds. As a result, network administrators are struggling with never-ending changes to their networks; and with constant mergers and acquisitions, it’s difficult to integrate new networks into a single network.

When implementing complex network changes, it is always useful to rely on a set of guided templates. An SD-WAN template is a framework to create or modify a specific device’s configuration for global and local deployments. Using templates, network administrators can group branches with similar business roles together. And, they can avoid the need to repeat common configurations across multiple branch offices and data centers.

SD-WAN templates also help create standardization, thereby avoiding mistakes in network deployments. Templates solve problems of scale, cost, and agility and also provide role-based access control to different administrators. For example, a highly skilled IT administrator can design templates used for complex deployments that a commissioning engineer can deploy at a branch office. SD-WAN templates can help IT teams:

  • Build in scale
  • Reduce network deployment and management costs
  • Avoid configuration errors
  • Reduce complexity

SteelConnect EX Templates

Riverbed’s enterprise-grade SD-WAN solution, SteelConnect EX, offers both device and service templates.

Device Templates

Using device templates, network administrators can automate most of the device-specific configurations for branch devices. This feature helps to configure WAN and LAN interfaces (Static or DHCP), Routing, NAT, DHCP, and other device-specific parameters. Each branch type can have multiple device templates such as:

  • MPLS and Internet WAN uplinks
  • Dual Internet WAN
  • DHCP LAN
  • Cloud services, such as AWS or Azure

There are two types of device templates: staging and post staging. Staging templates require the minimum setup needed for the branch to reach the SD-WAN controller. When staging is done at a different location (DC or NOC), the device is shipped with pre-configured information.

Select type SDWAN Staging, give the template a name, and select parent organization
Create a new WAN Network
Name the WAN Network and select a transport domain
Select Interface Addressing type

Post staging templates are typically used to create final branch configurations. Organization details, bandwidth subscription, routing, NAT (Network Address Translation), DIA (Direct Internet Access), DHCP, NTP, and other management details are entered.

Create template, select controllers, organization, bandwidth
Assign LAN and WAN ports
Configure BGP, OSPF and static routes
DIA (Direct Internet Access) configurations
NAT, DHCP, Relay configuration and management details

Network administrators can then add a Device Group and associate a staging or post staging template.

Select Devices/Device Groups

Service Templates

Service templates help configure services such as:

  • Stateful Firewall
  • NextGen Firewall
  • Quality of Service (QOS)
  • General
  • Application
  • Service Chain
Service Template Types

Let’s use the NextGen Firewall service template as an example. It defines various policies and profiles that enforce rules with appropriate actions for:

  • DDOS
  • Authentication
  • Decryption
  • Security

A DDOS attack floods the target with a huge volume of traffic, making the machine or network inaccessible. With service templates, network administrators can configure profiles and set thresholds for various events as described in the graphic below:

Configure DDOS profile

A Kerberos, LDAP, or SAML authentication profile can be used. An authentication timeout based on IP or cache modes can also be configured, as shown in the graphic below:

Authentication profile

SSL decryption profiles can be defined based on the configuration of each server certificate, as shown below. Network administrators can set the minimum key length supported when decrypting content. For expired or untrusted certificates, various actions can be set: allow packets, drop packet, drop session, reject, and alert. Similar actions can be configured for unsupported ciphers and key lengths.

SSL profile setting for the branch

The following graphic shows the configuration of various security aspects such as URL filtering, IP filtering, anti-virus, and predefined vulnerability profiles.

Security profile

SteelConnect EX Workflows

The configuration of Controllers, Organizations, Templates, and Devices can be simplified by the use of workflows. To create a branch device, workflows are used to create templates (staging/post staging) and device groups, and to bind device data.

To onboard Branch/DC devices using a workflow, enter branch-specific information for the templates used by the branch. An existing Device Group is selected or created; device groups contain information about which templates to use for the branch. This makes automating and deploying sites, or groups of sites, easier, enabling scale at lower costs.

Add a device

What Have We Learned?

Overall, SteelConnect EX templates offer an advantage in managing complex network deployments, so network administrators can adapt networks to changing business dynamics at minimal cost.

5 Ways to Reduce Hybrid Cloud Complexity https://www.riverbed.com/blogs/5-ways-to-reduce-hybrid-cloud-complexity/ Tue, 07 Apr 2020 05:30:00 +0000 https://live-riverbed-blog.pantheonsite.io?p=14523 How can you manage multi-cloud complexity in today’s distributed environments? A common approach–using disparate cloud vendor tools–is often complex. Each vendor tool was designed for a specific public cloud. If you use multiple clouds (and 84% do according to the 2019 RightScale State of the Cloud Report), you’ll have multiple tools to manage. Pulling together information across all of them is time consuming and highly manual, which means it will take longer to isolate problems. Worst of all, you’ll have less time to focus on the really important stuff like supporting your mobile workforce.

With so much to manage, how can you get ahead in a hybrid and multi-cloud world? Here are five ways that you as an IT professional can reduce hybrid and multi-cloud complexity:

5 ways to manage multi-cloud complexity

  1. Provide full transparency across private clouds, multiple public clouds, and the networks that support them. 91% of enterprises have adopted public cloud and 72% private cloud, according to RightScale, so it’s critical to provide visibility across both as well as the networks that underpin them. And, with today’s work-from-home focus, the investment in cloud computing and cloud performance is only increasing.
  2. Deliver visibility into legacy and emerging apps. This requirement will be more important for more established organizations as they transition to the modern apps but still have a large percentage of legacy application environments (physical and virtual).
  3. Automate the discovery of application environments. Automation is a highly effective way to manage the complexity of distributed environments. One way to leverage automation is to auto-discover application environments to show where applications, services and workloads are located and how they are connected. This capability is critical to isolate issues quickly, resolve intermittent problems and brownouts, and ensure your digital workforce can get the job done.
  4. Expand your data set. You should have the ability to collect data from each cloud service and integrate it with your existing visibility solution to provide holistic insights. For example, you should be able to use AWS metrics with your APM and NPM tools to connect the dots between apps, workloads, networks, and locations.
  5. Get granular. Since modern application services spin up and down in seconds, your data needs to be highly granular, ideally with 1-second monitoring intervals. It’s also a best practice to combine detailed data from network packets with flow data and device telemetry to provide a complete picture.

Learn more about reducing multi-cloud complexity

To learn more about how to reduce hybrid and multi-cloud complexity and how you can ensure performance for your digital workforce everywhere and anywhere, see the ESG Report: Reducing Hybrid and Multi-cloud Complexity: The Importance of Visibility in Multi-cloud Environments.

Multi-cloud complexity

Building an SD-WAN Headend https://www.riverbed.com/blogs/building-an-sd-wan-headend/ Tue, 24 Mar 2020 12:30:07 +0000 https://live-riverbed-blog.pantheonsite.io?p=14177 When working with SteelConnect EX, an important concept to understand is that of an SD-WAN headend. You simply cannot operate a SteelConnect EX SD-WAN network without the headend. We covered the high-level overview of the headend in our Lightboard video, which you can view on the Riverbed YouTube Channel. However, for the sake of being thorough, let’s review what an SD-WAN headend is and the components involved.

Components of a Headend

There are three main components of a headend. These include:

  • SteelConnect Analytics
  • SteelConnect Director
  • Controller

These three entities are responsible for the management and control plane of the SteelConnect EX solution.

SteelConnect EX Director (Director)

SteelConnect Director is the management interface that you work in. As you configure templates and settings here, the Director uses NETCONF over SSH to provision the SteelConnect EX devices via the Controller.

SteelConnect EX Controller (Controller)

The controller establishes secure management tunnels to each SteelConnect EX. The Controller acts as a BGP route-reflector, reflecting overlay prefixes to each site to establish reachability between sites using the SD-WAN overlay.

SteelConnect EX Analytics (Analytics)

SteelConnect Analytics receives all telemetry information from the SteelConnect EX sites and provides you with that data by means of dashboards and log files.

Installing the SD-WAN Headend

There are several steps that must be followed to install a headend. One item to note is that the headend may live in the data center, but it is not part of the data plane. To establish connectivity from the SD-WAN overlay to the data center, a SteelConnect EX FlexVNF must be installed in the data center. Let’s walk through the configuration of the headend.

Step 1: Add headend components to topology

For the purpose of this article, I’m going to make the assumption that the network infrastructure is already configured to support the addition of our three new devices: Director, Controller, and Analytics. We will add them according to the following diagram.

Riverbed SD-WAN Headend

To elaborate a bit further on the diagram, all devices define ethernet0 as the management interface, and thus the Director, Controller, and Analytics are connected to the management network on ethernet0. From the point of view of the SteelConnect Director, the GUI management performed by an admin is done via the Northbound interface. This is also where API calls happen. We use the Director’s southbound interface, in this case ethernet1, as the control network.
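
In short, the Director’s interface roles in this topology are:

eth0 (Northbound)  – management: admin GUI access and API calls
eth1 (Southbound)  – control network toward the Controller and the rest of the headend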

Step 2: Perform the initial setup of the SteelConnect Director

To implement the Director, we must begin by following these steps:

  1. Open up the Director CLI
  2. Log in to the Director using the default credentials
  3. Run the initial setup script.

Below is a CLI output of the Director. As you’ll note, upon initial login with the Administrator account we are automatically prompted to enter the setup. Answering yes to this prompt begins the setup script.

Ubuntu 14.04.6 LTS Director-1 ttyS0

Director-1 login: Administrator
Password: 
===============================================================================
WARNING!
This is a Proprietary System
You have accessed a Proprietary System

If you are not authorized to use this computer system, you MUST log off now.
Unauthorized use of this computer system, including unauthorized attempts or
acts to deny service on, upload information to, download information from,
change information on, or access a non-public site from, this computer system,
are strictly prohibited and may be punishable under federal and state criminal
and civil laws. All data contained on this computer systems may be monitored,
intercepted, recorded, read, copied, or captured in any manner by authorized
System personnel. System personnel may use or transfer this data as required or
permitted under applicable law. Without limiting the previous sentence, system
personnel may give to law enforcement officials any potential evidence of crime
found on this computer system. Use of this system by any user, authorized or
unauthorized, constitutes EXPRESS CONSENT to this monitoring, interception,
recording, reading, copying, or capturing, and use and transfer. Please verify
if this is the current version of the banner when deploying to the system.
===============================================================================

  ____  _                _              _                                
 |  _ \(_)_   _____ _ __| |__   ___  __| |                               
 | |_) | \ \ / / _ \ '__| '_ \ / _ \/ _` |                               
 |  _ <| |\ V /  __/ |  | |_) |  __/ (_| |                               
 |_| \_\_| \_/ \___|_|  |_.__/ \___|\__,_|                               
  ____  _            _  ____                            _     _______  __
 / ___|| |_ ___  ___| |/ ___|___  _ __  _ __   ___  ___| |_  | ____\ \/ /
 \___ \| __/ _ \/ _ \ | |   / _ \| '_ \| '_ \ / _ \/ __| __| |  _|  \  / 
  ___) | ||  __/  __/ | |__| (_) | | | | | | |  __/ (__| |_  | |___ /  \ 
 |____/ \__\___|\___|_|\____\___/|_| |_|_| |_|\___|\___|\__| |_____/_/\_\
 Release     : 16.1R2
Release date: 20191101
Package ID  : 117dde1
------------------------------------
VERSA DIRECTOR SETUP
-bash: /var/log/vnms/setup.log: Permission denied
------------------------------------
Do you want to enter setup? (y/n)? y
[sudo] password for Administrator: 
------------------------------------
Running /opt/versa/vnms/scripts/vnms-startup.sh ...
------------------------------------
Do you want to setup hostname for system? (y/n)? y
Enter hostname: Director-1
Added new hostname entry to /etc/hosts
Added new hostname entry to /etc/hostname
Restarting network service ...
Do you want to setup network interface configuration? (y/n)? y
------------------------------------
Setup Network Interfaces
------------------------------------
Enter interface name [eg. eth0]: eth0
Existing IP for eth0 is 192.168.122.174
Configuration present for eth0, do you want to re-configure? (y/n)? 192.168.122.174
Answer not understood
Configuration present for eth0, do you want to re-configure? (y/n)? y
Re-configuring interface eth0
Enter IP Address: 192.168.122.174
Enter Netmask Address: 255.255.255.0
Configure Gateway Address? (y/n)? y
Enter Gateway Address: 192.168.122.1
------------------------------------
Adding default route - route add default gw 192.168.122.1
Added interface eth0
Configure another interface? (y/n)? y
Enter interface name [eg. eth0]: eth1
Existing IP for eth1 is 10.100.3.10
Configuration present for eth1, do you want to re-configure? (y/n)? y
Re-configuring interface eth1
Enter IP Address: 10.100.3.10
Enter Netmask Address: 255.255.255.0
------------------------------------
Added interface eth1
Configure another interface? (y/n)? n
Configure North-Bound interface (If not configured, default 0.0.0.0 will be accepted) (y/n)? y
------------------------------------
Select North-Bound Interface 
------------------------------------
Enter interface name [eg. eth0]: eth0
------------------------------------
Select South-Bound Interface(s) 
------------------------------------
Enter interface name [eg. eth0]: eth1
Configure another South-Bound interface? (y/n)? n
Restarting network service ...
Enable secure mode for Director HA ports? (y/n)? n
 => Clearing VNMSHA iptables rules
 => Persist iptable rules and reload..
 => Done.
Secure Director HA communication? (y/n)? n
 => Clearing strongSwan ipsec configuration..
 => Restarting ipsec service..
 => Done.
Prompt to set new password at first time UI login? (y/n)? n
Restarting versa director services, please standby ...
------------------------------------
Stopping VNMS service
------------------------------------
Stopping VNMS:TOMCAT.............[Stopped]
Stopping VNMS:KARAF..............[Stopped]
Stopping VNMS:REDIS..............[Stopped]
Stopping VNMS:POSTGRE............[Stopped]
Stopping VNMS:SPRING-BOOT........[Stopped]
Stopping VNMS:SPACKMGR...........[Stopped]
Stopping VNMS:NCS................[Stopped]
 * Stopping daemon monitor monit
   ...done.
  ____  _                _              _                                
 |  _ \(_)_   _____ _ __| |__   ___  __| |                               
 | |_) | \ \ / / _ \ '__| '_ \ / _ \/ _` |                               
 |  _ <| |\ V /  __/ |  | |_) |  __/ (_| |                               
 |_| \_\_| \_/ \___|_|  |_.__/ \___|\__,_|                               
  ____  _            _  ____                            _     _______  __
 / ___|| |_ ___  ___| |/ ___|___  _ __  _ __   ___  ___| |_  | ____\ \/ /
 \___ \| __/ _ \/ _ \ | |   / _ \| '_ \| '_ \ / _ \/ __| __| |  _|  \  / 
  ___) | ||  __/  __/ | |__| (_) | | | | | | |  __/ (__| |_  | |___ /  \ 
 |____/ \__\___|\___|_|\____\___/|_| |_|_| |_|\___|\___|\__| |_____/_/\_\
 Starting VNMS service
------------------------------------
Starting VNMS:NCS................[Started]
Starting VNMS:POSTGRE............[Started]
Starting VNMS:SPRING-BOOT........[Started]
Starting VNMS:REDIS..............[Started]
Starting VNMS:KARAF..............[Started]
Starting VNMS:TOMCAT.............[Started]
------------------------------------
Completed Setup
------------------------------------
Press ENTER to continue
------------------------------------
To run setup manually: /opt/versa/vnms/scripts/vnms-startup.sh
------------------------------------

Once you’ve finished the script you’ll need to reboot the server.  I’ll do that in the following output.

Ubuntu 14.04.6 LTS Director-1 ttyS0

Director-1 login: Administrator
Password: 
Last login: Mon Mar 16 22:23:09 UTC 2020 on ttyS0
===============================================================================
WARNING!
This is a Proprietary System
You have accessed a Proprietary System

If you are not authorized to use this computer system, you MUST log off now.
Unauthorized use of this computer system, including unauthorized attempts or
acts to deny service on, upload information to, download information from,
change information on, or access a non-public site from, this computer system,
are strictly prohibited and may be punishable under federal and state criminal
and civil laws. All data contained on this computer systems may be monitored,
intercepted, recorded, read, copied, or captured in any manner by authorized
System personnel. System personnel may use or transfer this data as required or
permitted under applicable law. Without limiting the previous sentence, system
personnel may give to law enforcement officials any potential evidence of crime
found on this computer system. Use of this system by any user, authorized or
unauthorized, constitutes EXPRESS CONSENT to this monitoring, interception,
recording, reading, copying, or capturing, and use and transfer. Please verify
if this is the current version of the banner when deploying to the system.
===============================================================================

  ____  _                _              _                                
 |  _ \(_)_   _____ _ __| |__   ___  __| |                               
 | |_) | \ \ / / _ \ '__| '_ \ / _ \/ _` |                               
 |  _ <| |\ V /  __/ |  | |_) |  __/ (_| |                               
 |_| \_\_| \_/ \___|_|  |_.__/ \___|\__,_|                               
  ____  _            _  ____                            _     _______  __
 / ___|| |_ ___  ___| |/ ___|___  _ __  _ __   ___  ___| |_  | ____\ \/ /
 \___ \| __/ _ \/ _ \ | |   / _ \| '_ \| '_ \ / _ \/ __| __| |  _|  \  / 
  ___) | ||  __/  __/ | |__| (_) | | | | | | |  __/ (__| |_  | |___ /  \ 
 |____/ \__\___|\___|_|\____\___/|_| |_|_| |_|\___|\___|\__| |_____/_/\_\
 Release     : 16.1R2
Release date: 20191101
Package ID  : 117dde1
[Administrator@Director-1: ~] $ sudo reboot
[sudo] password for Administrator: 

Broadcast message from Administrator@Director-1
        (/dev/ttyS0) at 22:29 ...

The system is going down for reboot NOW!
[Administrator@Director-1: ~] $ 
Ubuntu 14.04.6 LTS Director-1 ttyS0

Director-1 login:

Step 3: Perform the initial setup of Analytics

The next step in bringing up a headend is to configure the Analytics server. Analytics and Director will need to communicate securely, so we are going to set up the network configuration first and then sync certificates between the two. Perform the following tasks to implement the Analytics server.

  1. Double click on the Analytics icon to open up the CLI
  2. Log into Analytics with the credentials "versa/versa123".
  3. Edit the /etc/network/interfaces file with static IP addressing.

Use sudo nano /etc/network/interfaces for task 3 above.

 GNU nano 2.2.6         File: /etc/network/interfaces                Modified  

# This file describes the network interfaces available on your system
# and how to activate them. For more information, see interfaces(5).

# The loopback network interface
auto lo
iface lo inet loopback

# The primary network interface
auto eth0
iface eth0 inet static 
address 192.168.122.175
netmask 255.255.255.0
gateway 192.168.122.1

auto eth1
iface eth1 inet static
address 10.100.3.11
netmask 255.255.255.0

Next, bounce each interface.

[versa@versa-analytics: ~] $ sudo ifdown eth0
[versa@versa-analytics: ~] $ sudo ifdown eth1                
ifdown: interface eth1 not configured
[versa@versa-analytics: ~] $ sudo ifup eth0
[versa@versa-analytics: ~] $ sudo ifup eth1                  
[versa@versa-analytics: ~] $

Once the interfaces have been bounced we need to confirm the IP addressing and ping the Director. I’ll do that in the following output.

[versa@versa-analytics: ~] $ ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    link/ether 0c:5d:40:dd:78:00 brd ff:ff:ff:ff:ff:ff
    inet 192.168.122.175/24 brd 192.168.122.255 scope global eth0
       valid_lft forever preferred_lft forever
    inet6 fe80::e5d:40ff:fedd:7800/64 scope link 
       valid_lft forever preferred_lft forever
3: eth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    link/ether 0c:5d:40:dd:78:01 brd ff:ff:ff:ff:ff:ff
    inet 10.100.3.11/24 brd 10.100.3.255 scope global eth1
       valid_lft forever preferred_lft forever
    inet6 fe80::e5d:40ff:fedd:7801/64 scope link 
       valid_lft forever preferred_lft forever
4: eth2: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN group default qlen 1000
    link/ether 0c:5d:40:dd:78:02 brd ff:ff:ff:ff:ff:ff
[versa@versa-analytics: ~] $ ping 192.168.122.174
PING 192.168.122.174 (192.168.122.174) 56(84) bytes of data.
64 bytes from 192.168.122.174: icmp_seq=1 ttl=64 time=1.38 ms
64 bytes from 192.168.122.174: icmp_seq=2 ttl=64 time=0.895 ms
^C
--- 192.168.122.174 ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 1001ms
rtt min/avg/max/mdev = 0.895/1.140/1.385/0.245 ms
[versa@versa-analytics: ~] $

Now that I have basic connectivity from Analytics, I need to add name resolution for Director-1. This step is important because later on I need to register the Director with the Analytics server in the GUI, and that registration is done by name. The name must be resolvable.

[versa@versa-analytics: ~] $ sudo nano /etc/hosts
  1 127.0.0.1   localhost
  2 127.0.1.1   versa-analytics
  3 192.168.122.174 Director-1
  4 
  5 # The following lines are desirable for IPv6 capable hosts
  6 ::1     localhost ip6-localhost ip6-loopback
  7 ff02::1 ip6-allnodes
  8 ff02::2 ip6-allrouters
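
A quick sanity check that the new entry resolves (a minimal example; it should answer from 192.168.122.174):

[versa@versa-analytics: ~] $ ping -c 2 Director-1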

Now we need to navigate to the scripts directory so we can run the vansetup script.

[versa@versa-analytics: ~] cd /opt/versa/scripts/van-scripts

Now that I'm in the van-scripts directory, I can execute the vansetup Python script.

[versa@versa-analytics: van-scripts] $ sudo ./vansetup.py 
[sudo] password for versa: 
/usr/local/lib/python2.7/dist-packages/cassandra_driver-2.1.3.post-py2.7-linux-x86_64.egg/cassandra/util.py:360: UserWarning: The blist library is not available, so a pure python list-based set will be used in place of blist.sortedset for set collection values. You can find the blist library here: https://pypi.python.org/pypi/blist/
VAN Setup configuration start
<-- output omitted -->

Update config files

As the script runs you will be asked to delete the database. We want to do this so that it’s rebuilt from scratch with no existing data. Basically, we want a fresh start.

Delete the database? (y/N) y

Proceeding to delete the database in 5 seconds

Next, we will reboot when prompted to do so.

Reboot the node(recommended)? (y/N) y

After the reboot, we want to verify that the database restarted successfully after we deleted it. To perform this task, scroll back up through the output text and find the statement that identifies a successful restart of the Cassandra database. You can see an example of the output you're looking for below.

DSE daemon starting with Solr enabled (edit /etc/default/dse to disable)
   ...done.
Waiting for host 127.0.0.1 to come up 
0
UN  127.0.0.1  53.6 KB    ?       fa7139b0-77c1-4b0f-a967-6d754ea7aa28  -3572760821973264000                     RAC1


We can also check the state of the database after the reboot by logging back in and using the nodetool status command. Specifically, look for the UN flag, which indicates the database is Up and Normal. This is the same output that you would have found by scrolling back up through the script output.

[versa@versa-analytics: ~] $ nodetool status
Datacenter: Search-Analytics
============================
Status=Up/Down
|/ State=Normal/Leaving/Joining/Moving
--  Address    Load       Owns    Host ID                               Token                                    Rack
UN  127.0.0.1  287.27 KB  ?       fa7139b0-77c1-4b0f-a967-6d754ea7aa28  -3572760821973264000                     RAC1

Note: Non-system keyspaces don't have the same replication settings, effective ownership information is meaningless
[versa@versa-analytics: ~] $

Now we are going to enter the CLI of Analytics. You access the CLI by entering the command cli. You can see this below.

[versa@versa-analytics: ~] $ cli

versa connected from 127.0.0.1 using console on versa-analytics
versa@versa-analytics>

Next, we will enter the configuration mode using the configure command.

versa@versa-analytics> configure
Entering configuration mode private
[ok][2019-07-14 15:42:01]

[edit]
versa@versa-analytics%

Now that we are in configuration mode, we want to point the local log collector at the southbound interface IP address. This also includes defining the port to use for communication, the storage directory, and the format. Here's the information we need:

  • Use port 1234
  • Set the storage directory to /var/tmp/log
  • Use the syslog format

versa@versa-analytics% set log-collector-exporter local collectors VAN address 10.100.3.11 port 1234 storage directory /var/tmp/log format syslog
[ok][2019-07-14 15:48:24]

Now we need to commit the changes and exit. You can see that in the following output.

versa@versa-analytics% commit
Commit complete.
[ok][2019-07-14 15:49:24]

[edit]
versa@versa-analytics% exit
[ok][2019-07-14 15:49:26]
versa@versa-analytics> exit
[versa@versa-analytics: ~] $
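
If you want to double-check the change after the fact, you can hop back into the CLI and echo it back. On this Juniper-style CLI a command along these lines should work, though exact syntax and output can vary by release:

versa@versa-analytics> show configuration log-collector-exporter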

Step 4: Connect to the Director Web Interface

My next step will be to connect to the Director GUI. We browse to the northbound interface IP address, which is the address we set on eth0 earlier. The screenshot below is not using the same IP address that we configured, but hopefully you get the point. It's an HTTPS connection, and we will be warned about the self-signed certificate. Once you accept the certificate you can log in with the Administrator credentials.

Accept the certificate

Next, log in to the Director with the default credentials.

Director Login
SteelConnect Director Login

You'll be asked to reset the password for the GUI. Follow those instructions and click Change.

Director Password Reset
SteelConnect Director Password Reset

Now we need to log in a second time with the new credentials.

The new password is only used for the GUI.

Second Login

Step 5: Define the Analytics Cluster

After we've logged into the Director GUI we need to define our analytics cluster. To do so, navigate to Administration>Connectors>Analytics Cluster and click the + button to add a new Analytics Cluster. The northbound IP of our analytics cluster is 192.168.122.175 and the southbound IP is 10.100.3.11 (yes, there is a typo in my screenshot, which shows 10.100.3.110).

Add Analytics Cluster

You'll also notice that we give the cluster a name, in this case Analytics, and we also name the northbound IP Analytics-1. The connector port is left at the default value of 8080; we will use this port to connect to the Analytics GUI later on.

Analytics Cluster details
Analytics Cluster Details

Step 6: Generate and Sync certificates between Director and Analytics

Now that the Analytics cluster has been defined in the Director GUI, we need to sync certificates between the two. To do so, we will generate the certificate from the CLI of the Director, as seen in the following output.

Director-1 login: Administrator 
Password: 
Last login: Mon Mar 16 22:28:56 UTC 2020 on ttyS0
===============================================================================
WARNING!
This is a Proprietary System
You have accessed a Proprietary System

If you are not authorized to use this computer system, you MUST log off now.
Unauthorized use of this computer system, including unauthorized attempts or
acts to deny service on, upload information to, download information from,
change information on, or access a non-public site from, this computer system,
are strictly prohibited and may be punishable under federal and state criminal
and civil laws. All data contained on this computer systems may be monitored,
intercepted, recorded, read, copied, or captured in any manner by authorized
System personnel. System personnel may use or transfer this data as required or
permitted under applicable law. Without limiting the previous sentence, system
personnel may give to law enforcement officials any potential evidence of crime
found on this computer system. Use of this system by any user, authorized or
unauthorized, constitutes EXPRESS CONSENT to this monitoring, interception,
recording, reading, copying, or capturing, and use and transfer. Please verify
if this is the current version of the banner when deploying to the system.
===============================================================================

  ____  _                _              _                                
 |  _ \(_)_   _____ _ __| |__   ___  __| |                               
 | |_) | \ \ / / _ \ '__| '_ \ / _ \/ _` |                               
 |  _ <| |\ V /  __/ |  | |_) |  __/ (_| |                               
 |_| \_\_| \_/ \___|_|  |_.__/ \___|\__,_|                               
  ____  _            _  ____                            _     _______  __
 / ___|| |_ ___  ___| |/ ___|___  _ __  _ __   ___  ___| |_  | ____\ \/ /
 \___ \| __/ _ \/ _ \ | |   / _ \| '_ \| '_ \ / _ \/ __| __| |  _|  \  / 
  ___) | ||  __/  __/ | |__| (_) | | | | | | |  __/ (__| |_  | |___ /  \ 
 |____/ \__\___|\___|_|\____\___/|_| |_|_| |_|\___|\___|\__| |_____/_/\_\
 Release     : 16.1R2
Release date: 20191101
Package ID  : 117dde1
[Administrator@Director-1: ~] $ cd /opt/versa/vnms/scripts/
[Administrator@Director-1: scripts] $ sudo su versa
[sudo] password for Administrator: 
versa@Director-1:/opt/versa/vnms/scripts$ ./vnms-certgen.sh --cn Director-1 --storepass versa123 --overwrite
 => Generating certificate for domain: Director-1
 => Generating ca_config.cnf
 => Generated CA key and CA cert files
 => Generating SSO certificates
 => Generating websockify certificates
 => Saving storepass and keypass

This must be done from the versa user account. After generating the certificate, be sure to exit this user and return to Administrator.

Next, we will sync the certificate with Analytics. This is done using the vnms-cert-sync.sh script, which copies the certificate to the correct location on Analytics over SFTP.
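
In the run below you'll see the script first attempt key-based authentication and then fall back to passwords. If you'd rather have the key path succeed, a standard SSH key exchange beforehand would do it; a minimal sketch, assuming the lab usernames and the Analytics IP from earlier:

[Administrator@Director-1: ~] $ ssh-keygen -t rsa
[Administrator@Director-1: ~] $ ssh-copy-id versa@192.168.122.175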

versa@Director-1:/opt/versa/vnms/scripts$ exit
exit
[Administrator@Director-1: scripts] $ ./vnms-cert-sync.sh --sync
Syncing Director certificates to VAN CLuster
Enter VAN Cluster Name:
Analytics
VAN Clusters IPs: 192.168.122.175 
Attempting Key Based Auth..
Can we pick Private Key from ~/.ssh/id_rsa[y/n]y    
Enter password for Versa User for sudo:
Password: 
[Errno 2] No such file or directory: '/home/Administrator/.ssh/id_rsa'
Looks like SSH Key exchange not setup, falling back to password
Please Enter password for User - versa: 
Password: 
/usr/lib/python2.7/dist-packages/Crypto/Cipher/blockalgo.py:141: FutureWarning: CTR mode needs counter parameter, not IV
  self._cipher = factory.new(key, *args, **kwargs)
Connected to 192.168.122.175
[sudo] password for versa: rm: cannot remove '/opt/versa/var/van-app/certificates/versa_director_client.cer': No such file or directory

[sudo] password for versa: rm: cannot remove '/opt/versa/var/van-app/certificates/versa_director_truststore.ts': No such file or directory

DEleted Existing Certificate
SFTPed certificate File
Locate keytool utility:

/usr/lib/jvm/jre1.8.0_231/bin/keytool

Copy certificate:

Certificate: /opt/versa/var/van-app/certificates/versa_director_client.cer

 * Stopping versa-confd
 * Stopping versa-lced
 * -n  ... waiting for versa-lced to exit
 * Stopping versa-analytics-app
 * -n  ... waiting for versa-analytics-app to exit
 * Stopping daemon monitor monit
   ...done.
 * Versa Analytics Stopped
   ...done.
   ...done.
 * Restarting daemon monitor monit
   ...done.
 * Starting versa-analytics-app
 * Versa Analytics Started

             .---.,
            (      ``.
       _     \        )    __      ________ _____   _____
      (  `.   \      /     \ \    / /  ____|  __ \ / ____|  /\
       \    `. )    /       \ \  / /| |__  | |__) | (___   /  \
        \     |    /         \ \/ / |  __| |  _  / \___ \ / /\ \
         \    |   /           \  /  | |____| | \ \ ____) / ____ \
          \   |  /             \/   |______|_|  \_\_____/_/    \_\
           \  | /
            \_|/                   _   _  _   _   _ __   _______ ___ ___ ___
                                  /_\ | \| | /_\ | |\ \ / /_   _|_ _/ __/ __|
                                 / _ \| .` |/ _ \| |_\ V /  | |  | | (__\__ \
                                /_/ \_\_|\_/_/ \_\____|_|   |_| |___\___|___/

[sudo] password for versa: cp: '/opt/versa/var/van-app/certificates/versa_director_client.cer' and '/opt/versa/var/van-app/certificates/versa_director_client.cer' are the same file

Certificate was added to keystore
Certificate Installed

Next, we need to reboot the server.

[Administrator@Director-1: scripts] $ sudo reboot

Broadcast message from Administrator@Director-1
        (/dev/ttyS0) at 22:50 ...

The system is going down for reboot NOW!

Ubuntu 14.04.6 LTS Director-1 ttyS0

Director-1 login: 

Step 7: Log in to the Analytics GUI

Now we log into the Analytics GUI using the northbound interface and port 8080.
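
If the login page doesn't come up, a quick reachability test from a machine on the management network can rule out basic connectivity problems (lab addressing shown; swap in https:// if your build serves TLS on this port):

$ curl -I http://192.168.122.175:8080/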

Analytics GUI login
SteelConnect Analytics GUI Login

Step 8: Add the Director hostname

After logging into the Analytics GUI we need to add the Director hostname. Recall that earlier, when we set up Analytics from the CLI, we created a hosts entry for the Director hostname. To complete this step we need to navigate to Admin>Authentication and add the Director hostname.

This will match the entry placed in /etc/hosts.

Register the Director

To finish this step, don’t forget to click Register.

Step 9: Add the first organization

Now we are going to return to the Director GUI and add our first organization. We need a top-level “Parent” organization before we can add any controllers.

  1. Return to the Director GUI.
  2. Navigate to Administration>Organizations and click the + button.
  3. Provide the following values:

Name      Subscription Profile
Riverbed  Default-All-Services-Plan

  4. Click on the Analytics Cluster tab.
  5. Add the Analytics Cluster as seen below.
Add an Organization

After the analytics cluster has been added we need to navigate to the Supported User Roles tab and add all roles for the parent organization.

  1. Click on the Supported User Roles tab.
  2. Click Add All.
Update User Roles
Finish up by clicking OK.


Step 10: Configure the Controller IP

Well, we're getting close to having a functional headend. If you're still following along, you may be thinking that this is a lot of work. In reality, what we've done here is not that significant: we've brought up two of the three devices in our headend, and the process has taken less than an hour. On top of that, this is something you will only do once. After the headend is up and running, you'll mostly work with templates to apply configurations to onboarded branches. We'll cover that in another article. However, I digress. Let's return to the process.

The next step is to deploy the controller. To do so, we need to enable the eth0 interface on the controller itself. Remember that the controller runs SteelConnect EX software, which is the same software as what you will run in the branch. The difference is that it’s defined as a controller in the initial setup. So, let’s follow these steps to bring the controller into the headend deployment.

  1. Connect to the console of the Controller.
  2. Log in to the controller using the username and password admin/versa123.
  3. Edit the /etc/network/interfaces file.
[admin@versa-flexvnf: ~] $ sudo nano /etc/network/interfaces
[sudo] password for admin:

In the interfaces file, set the IP address for the controller based on the table below.

IP address       Netmask        Gateway
192.168.122.176  255.255.255.0  192.168.122.1

You can see an example of the configuration file below.

# This file describes the network interfaces available on your system
# and how to activate them. For more information, see interfaces(5).

# The loopback network interface
auto lo
iface lo inet loopback

# The primary network interface
auto eth0
iface eth0 inet static
address 192.168.122.176
netmask 255.255.255.0
gateway 192.168.122.1

Now we need to bounce the interface.

[admin@versa-flexvnf: ~] $ sudo ifdown eth0 
RTNETLINK answers: No such process
[admin@versa-flexvnf: ~] $ sudo ifup eth0   
[admin@versa-flexvnf: ~] $

And of course, we want to ping the Director to make sure we have connectivity. Once this is done we can move on to deploy the controller in the Director GUI.

[admin@versa-flexvnf: ~] $ ping 192.168.122.174
PING 192.168.122.174 (192.168.122.174) 56(84) bytes of data.
64 bytes from 192.168.122.174: icmp_seq=1 ttl=64 time=1.28 ms
64 bytes from 192.168.122.174: icmp_seq=2 ttl=64 time=0.782 ms
^C
--- 192.168.122.174 ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 1000ms
rtt min/avg/max/mdev = 0.782/1.034/1.286/0.252 ms
[admin@versa-flexvnf: ~] $ 

Step 11: Deploy the Controller in the Director GUI

The next step is to deploy the controller in the Director GUI. We’re going to deploy the controller in the Riverbed organization. Remember that this was our parent organization. We can use this organization as our only organization or we can deploy multiple tenants with SteelConnect EX. For our examples in this blog series, we will use a single-tenant, Riverbed. Follow these steps to deploy the controller.

  1. Return to the Director GUI.
  2. Navigate to Workflows>Infrastructure>Controllers
  3. Click the + button to add a workflow.
  4. Provide the following elements to the General page.
  5. Name.
  6. Provider Organization.
  7. Check Staging Controller.
  8. Enter the IP address that you applied in the previous step.
  9. Select the Analytics cluster.
  10. Click Continue
Controller General Settings

When you enter the IP address of the controller it will test connectivity. You will see this in the window in the form of a spinning image, although it may be brief.

Next, enter location information. This requires City, State, and Country; then click Get Coordinates, after which the Latitude and Longitude will be populated. Then you can click Continue.

Controller Location Settings

On the next tab, you need to enter the Control Network information, which includes the Network Name, interface, and IP address, as seen in the image below. Click Continue when the values have been entered.

Controller Control Network Settings

On the controller, eth0 is connected to the management network. eth1 is connected to the control network, but within the CLI of the controller it is identified as vni-0/0. This means eth2 will be identified in the controller CLI as vni-0/1 and is connected to the MPLS network via the MPLS_SWITCH.
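
To keep the mapping straight, here is how the physical interfaces line up with the controller CLI names in our lab topology (derived from the interface descriptions in this article):

Physical  Connected network       Controller CLI name
eth0      Management              (not mapped)
eth1      Control                 vni-0/0
eth2      MPLS (via MPLS_SWITCH)  vni-0/1
eth3      Internet                vni-0/2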

Next, configure the WAN interfaces. This task has multiple substeps as seen below. You need to repeat the following process for MPLS.

  1. Click on the +WAN Interface link on the top right side of the interface.
  2. Create an interface named Internet and select Internet as the Transport Domain.
  3. Click OK.

Create a WAN Interface

Now select the VNI interfaces that connect to Internet and MPLS.

Selecting VNI Interfaces

In our topology, vni-0/1 is eth2 and vni-0/2 is eth3. This is important because eth0 is connected to the management network, the northbound side of Director, and eth1 is connected to the control network, the southbound side of Director.

Select the appropriate network names, and provide the IP address and gateway for each. The table below shows the values I used.

Addressing VNI interfaces


Network Name  IP address   Mask  Gateway      Public IP
MPLS          10.100.21.3  /24   10.100.21.1  -
Internet      10.100.19.2  /30   10.100.19.1  192.168.122.25

Also, an important step for us here: we need to advertise a public IP address for the Internet-only branches to reach the controller. If we fail to add the public IP address here, Internet-only branches will not be able to reach the controller when they are onboarded. That being said, we also need to make sure that static NAT and access rules are configured on the perimeter firewall (I'm not showing that in this article, but the sketch below gives the general idea).
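
Purely as an illustration of what that perimeter firewall rule accomplishes (this is not the actual firewall configuration; a Linux iptables box stands in for whatever firewall you run), a 1:1 static NAT between the advertised public IP and the controller's Internet uplink from the table above could look like:

# map the advertised public IP to the controller's Internet uplink (DNAT inbound, SNAT outbound)
iptables -t nat -A PREROUTING -d 192.168.122.25 -j DNAT --to-destination 10.100.19.2
iptables -t nat -A POSTROUTING -s 10.100.19.2 -j SNAT --to-source 192.168.122.25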

To finish things up, you need to click Deploy. When you do, you should see a popup asking you to create the overlay addressing scheme. Be careful here to allocate this addressing based on the sizing of your organization: using a /24 would limit you to roughly 256 branch sites, as this space is used to address each site in the SD-WAN fabric.

In the following output, I have entered the IPv4-Prefix for the overlay addressing pool, as well as the maximum number of organizations, as seen below. This /16 gives us roughly 65K overlay addresses to carve up across those organizations, far more branch sites than we would ever need.

IPv4-Prefix    Maximum Organizations
10.254.0.0/16  16

Create Overlay Addressing Scheme
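
For sizing purposes, a quick back-of-the-envelope (assuming the pool is split evenly across organizations, which simplifies the actual allocation scheme):

  10.254.0.0/16             = 65,536 overlay addresses
  65,536 / 16 organizations = 4,096 addresses per organization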

I’ll wrap this up by clicking Update.

Note at the bottom of the Director GUI that the controller workflow is immediately deployed.

View Progress

Step 12: View the progress of the controller deployment

There is a tasks view that we can open to see the progress. You can access it in the Director GUI by clicking the Tasks icon, the checklist-style icon on the top right-hand side of the interface. This opens a list of tasks that you can expand, as seen below. In the following output you can see that the controller was deployed, and in the running messages you can see what happened behind the scenes at each step of the deployment.

Progress Details

Step 13: Log in to the Controller CLI and confirm the deployment

Next, we are going to connect to the command line of the controller and have a look at how to verify the deployment there.

In the following output, you can see that I have accessed the CLI.

[admin@Controller-1: ~] $ cli


             .---.,
            (      ``.
       _     \        )    __      ________ _____   _____
      (  `.   \      /     \ \    / /  ____|  __ \ / ____|  /\
       \    `. )    /       \ \  / /| |__  | |__) | (___   /  \
        \     |    /         \ \/ / |  __| |  _  / \___ \ / /\ \
         \    |   /           \  /  | |____| | \ \ ____) / ____ \
          \   |  /             \/   |______|_|  \_\_____/_/    \_\
           \  | /
            \_|/                   _  _ ___ _______      _____  ___ _  _____
                                  | \| | __|_   _\ \    / / _ \| _ \ |/ / __|
                                  | .` | _|  | |  \ \/\/ / (_) |   / ' <\__ \
                                  |_|\_|___| |_|   \_/\_/ \___/|_|_\_|\_\___/



admin connected from 127.0.0.1 using console on Controller-1
admin@Controller-1-cli>

Once I’m in the CLI, I can use the show interfaces brief | tab command to view the interfaces that have been configured. You can see a sample of that output below. Let’s dig into what we’re seeing here.

admin@Controller-1-cli> show interfaces brief | tab
NAME         MAC                OPER  ADMIN  TENANT  VRF                    IP                  
------------------------------------------------------------------------------------------------
eth-0/0      0c:5d:40:be:eb:00  up    up     0       global                 192.168.122.176/24  
tvi-0/2      n/a                up    up     -       -                                          
tvi-0/2.0    n/a                up    up     1       Riverbed-Control-VR    10.254.16.1/32      
tvi-0/3      n/a                up    up     -       -                                          
tvi-0/3.0    n/a                up    up     1       Riverbed-Control-VR    10.254.24.1/32      
tvi-0/602    n/a                up    up     -       -                                          
tvi-0/602.0  n/a                up    up     1       Riverbed-Control-VR    169.254.0.2/31      
tvi-0/603    n/a                up    up     -       -                                          
tvi-0/603.0  n/a                up    up     1       Analytics-VR           169.254.0.3/31      
vni-0/0      0c:5d:40:be:eb:01  up    up     -       -                                          
vni-0/0.0    0c:5d:40:be:eb:01  up    up     1       Riverbed-Control-VR    10.100.3.12/24      
vni-0/1      0c:5d:40:be:eb:02  up    up     -       -                                          
vni-0/1.0    0c:5d:40:be:eb:02  up    up     1       MPLS-Transport-VR      10.100.21.3/24      
vni-0/2      0c:5d:40:be:eb:03  up    up     -       -                                          
vni-0/2.0    0c:5d:40:be:eb:03  up    up     1       Internet-Transport-VR  10.100.19.2/30      
vni-0/3      0c:5d:40:be:eb:04  down  down   -       -                                          
vni-0/4      0c:5d:40:be:eb:05  down  down   -       -                                          

[ok][2020-03-16 17:06:45]
admin@Controller-1-cli>

In the above output, the IP address 10.254.16.1 assigned to the Riverbed-Control-VR (on tvi-0/2.0) is from the subnet that we defined as the overlay network when we deployed the controller (remember the popup?). The IP address applied to the MPLS-Transport-VR is 10.100.21.3. This was the IP address that you applied to the MPLS interface vni-0/1. The IP address applied to the Internet-Transport-VR is 10.100.19.2. This is the IP address that you assigned to vni-0/2 when you deployed the controller in the Director interface.

Now, this output brings up a very good question. We know what the VNIs are; we assigned IP addresses to them when we onboarded the controller. VNI stands for Virtual Network Interface, and they are virtual in the sense that the controller software maps them to a physical interface on the hardware. For example, since eth0 is used for management, the SteelConnect EX software maps eth1 to vni-0/0, which is the control network; eth2 gets mapped to vni-0/1, and eth3 gets mapped to vni-0/2. But what are these TVIs? We will save a deeper discussion of that topic for another article. For now, so that we understand what we are looking at here: a TVI is a Tunnel Virtual Interface, and it is not mapped to a physical interface. There are two of each TVI because SteelConnect EX sets up both an unencrypted channel and an encrypted channel.

tvi-0/2      n/a                up    up     -       -                                      
tvi-0/2.0    n/a                up    up     1       Riverbed-Control-VR    10.254.16.1/32  
tvi-0/3      n/a                up    up     -       -                                      
tvi-0/3.0    n/a                up    up     1       Riverbed-Control-VR    10.254.24.1/32  
tvi-0/602    n/a                up    up     -       -                                      
tvi-0/602.0  n/a                up    up     1       Riverbed-Control-VR    169.254.0.2/31  
tvi-0/603    n/a                up    up     -       -                                      
tvi-0/603.0  n/a                up    up     1       Analytics-VR           169.254.0.3/31

Step 14: Configure a static route for Director

We are so close! This is the final step of my headend deployment, and it is important! We now have to tell the Director how to reach the SteelConnect EX Control-VRs, or we will not be able to onboard our branches. Recall that the Director has two interfaces: Management and Control. The default route points out the management interface, but the 10.254.0.0/16 overlay network is reachable on the control, or southbound, side. This is how the Director connects to the branches via SSH and delivers NETCONF commands. If you miss this step, it just doesn't work. So, let's wrap this up. Follow these steps:

  1. From the director command line edit the /etc/network/interfaces file.
  2. Add the following line under eth1, the Southbound/Control network.

SSH to Director

Enter the following line in the interfaces file. You can see an example in the image below.

post-up route add -net 10.254.0.0 netmask 255.255.0.0 gw 10.100.3.12
Add route to overlay
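
For clarity, here is a minimal sketch of what the finished eth1 stanza would look like with the route added (the addresses are the lab values from earlier; 10.100.3.12 is the controller's vni-0/0 address on the control network):

auto eth1
iface eth1 inet static
address 10.100.3.10
netmask 255.255.255.0
post-up route add -net 10.254.0.0 netmask 255.255.0.0 gw 10.100.3.12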
After you save the interfaces file with the route added, you need to bounce the eth1 interface.


sudo ifdown eth1
sudo ifup eth1

Next, we make sure that the route has been applied.

admin@Director-1:~$ netstat -rn
Kernel IP routing table
Destination     Gateway            Genmask         Flags   MSS Window  irtt Iface
0.0.0.0         192.168.122.1      0.0.0.0         UG        0 0          0 eth0
10.100.2.0      0.0.0.0            255.255.255.0   U         0 0          0 eth0
10.100.3.0      0.0.0.0            255.255.255.0   U         0 0          0 eth1
10.254.0.0      10.100.3.12        255.255.0.0     UG        0 0          0 eth1
admin@Director-1:~$

And just like that, we have a headend ready to onboard. Let’s take a minute to review what we’ve done here.

Wrap up

We’ve covered a lot of ground in this article. The good news is that this is the most difficult part of the deployment (and it wasn’t even that difficult). But here is what we’re left with at the end of this article:

  • The Director has been configured.
  • Analytics has been configured.
  • We have GUI access to the Director and Analytics.
  • The Controller has been configured.
  • VNIs and TVIs are up on the controller.

Please stay tuned for more articles in this series as we onboard branches, configure routing and traffic steering, and explore the many, many technical features of SteelConnect EX.

SD-WAN Data Center Integration https://www.riverbed.com/blogs/sd-wan-data-center-integration/ Wed, 18 Mar 2020 12:30:00 +0000 https://live-riverbed-blog.pantheonsite.io?p=14448
We continue our learning journey on SteelConnect EX, Riverbed’s Enterprise SD-WAN offering. This time, we are going to address one of the hottest and most complex topics when leading a major transformation like SD-WAN: the integration of the solution in your data center.
Unfortunately, a blog post would not be long enough to detail all the possible options, and anyway it would be foolish of me to try to address this topic exhaustively: there are as many data centers as there are enterprise customers. As a result, I am going to focus on the main principles that an architect should follow when integrating SteelConnect EX in their network, and some good questions to ask yourself.

Data center = Head-End

In a previous post, we reviewed the components of the solution:

  • Director is the component responsible for the management plane of the SD-WAN fabric;
  • Analytics is offering visibility on the network by collecting metrics and events via IPFIX and Syslog from branch gateways;
  • Controller is in charge of the control plane for the SD-WAN Fabric;
  • Branch Gateways – also known as SteelConnect EX appliances

The Director, Analytics and Controller form what we call the Head-End. Although they can be hosted in a traditional data center, they—and specifically the controller—are not part of the data plane; therefore a "branch" gateway will be required in the data center to join this particular site to the SD-WAN fabric.

Starter or dessert, that is the question

In any case, the first brick to deploy should always be the Head-End: whether it is hosted in your data center, in the Cloud, in a dedicated site or a managed service/hosted service.

Then, shall we start the rollout of the SD-WAN infrastructure with the data center or keep it for the end? This is a question that pops up all the time, and the best answer to give is: it depends.

Data centers are traditionally more complex networks, so my preference is to start there; the rest of the rollout will then be easier and incremental. Additionally, since the data center terminates most of the connections from branch offices that are consuming apps, you can quickly benefit from offloading traffic from MPLS to the Internet uplink and leverage path resiliency features (FEC, Packet Racing, load balancing…) along with Application SLAs to enhance the user experience. Furthermore, as we deploy SD-WAN gateways in remote sites, we can track the performance of the data center appliances and validate the initial assumptions made for the sizing.

Nevertheless, there are cases where it can make sense to conclude the rollout with the data center. It really depends on your drivers for adopting SD-WAN, your constraints (say a network freeze for a given period in the data center) and how you will be able to get immediate value. For example, should Direct Internet Breakout be a requirement for you to offload your MPLS and enhance the performance for SaaS or Cloud based applications, deploying gateways in the remote sites first will certainly deliver value. There is no need for the data center to be ready in that case. Another example could be routers’ end of life. Should you need to replace your routers in the branches, a SteelConnect EX appliance can be installed as a plain router first. SD-WAN features can be enabled at a later stage.

There are no good or bad answers here. Review your drivers for adopting SD-WAN and plan accordingly.

The golden rules

Deploying SteelConnect EX in your data center should be hassle free as long as you follow these few rules:

  • It is a router! As long as you are using standard routing protocols like BGP and OSPF, you can deploy the gateway the way you want. As opposed to most of the other solutions on the market, with SteelConnect EX you will benefit from all the bells and whistles of the routing protocols, so you have full control and a lot of flexibility.
  • The controller must be on the WAN side of the gateway. Should you deploy the Head-End in the data center, you need to make sure that the only way for the appliance to form overlay tunnels with the controller is from its WAN interfaces.
  • The data center gateway can't sit between the controller and remote-site gateways. This is the corollary of the previous rule. Should you deploy the Head-End in the data center and, for example, replace the MPLS CE router with the SD-WAN gateway, you need to make sure that the controller has a different connection to MPLS or, if that's not possible, the controller should only be reachable via the Internet.
  • The data center gateway can't get access to the control network. It is a best practice to keep the control network that interconnects the Head-End components together (see our previous post about the architecture) isolated. As a result, should you deploy the Head-End in the data center, make sure the control network subnet does not leak into the LAN. Use firewalls, access lists or routing redistribution policies to avoid that behavior; a minimal sketch follows this list.
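
To make that last rule concrete, here is a minimal sketch of the kind of filter you might place on a device between the Head-End and the LAN. The control subnet shown (10.100.3.0/24) is the one from our lab build, the LAN interface name is a placeholder, and a Linux iptables box merely stands in for whatever firewall or router you actually use:

# drop anything sourced from the Head-End control subnet before it leaks into the LAN
iptables -A FORWARD -s 10.100.3.0/24 -o eth-lan -j DROP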

Examples and counter-examples

In the following example, the Head-End is hosted in a different site or Cloud hosted or a managed service. The data center appliances are inserted between the aggregation layers and the CE routers.

Note that it is not always possible to grant direct access to all WANs to the controllers, in particular for a Cloud-hosted setup. As long as there is network connectivity between all the SD-WAN gateways and the controllers, this is fine. This will be the topic of a future blog post.

We could easily replace the CE routers as well. At the moment, though, our appliances only offer Ethernet (copper or fiber) NICs.

For risk-averse organizations that want to adopt SD-WAN with minimal disruption, it is also completely fine to deploy the following architecture. Here, the data center gateways are out-of-path and rely on route attraction, and conversely route retraction if a route disappears from the SD-WAN overlay network.

In the above example, there is only one connection depicted between the SteelConnect EX gateway and the WAN distribution router. In reality, we would need one per uplink (in this example, three connections: MPLS A, MPLS B and Internet) plus one for the LAN side. However, we could also rely on VLANs and have trunk connection(s) to transport LAN and WAN traffic.

We can achieve high-scalability and high-throughput by horizontally scaling the number of gateways. This deployment is called Hub Cluster and can be seen in the following topology example.

In the previous examples, the Head-End was not hosted in the data center. For organizations requiring all components to be deployed on-premises, solutions exist. Simply follow the golden rules. The following setup is not supported because the controllers sit on the LAN side of the gateways.


A potential solution to comply with that rule is depicted as follows:

Note that in order for the gateways to communicate with the controllers via the Internet uplink, they will need to use the controllers' public IP addresses. Indeed, when the Director pushes the configuration down to the appliance, if a public IP address is set up on the controller's Internet uplink, that public IP address will be part of the configuration, not the private IP address. Therefore, the firewall should be configured to allow that communication.

It may happen that there are no LAN interfaces left on the CE routers; in this case, you could have the controllers connected only to the Internet. However, you would need to make sure that all SD-WAN sites have network reachability to the controllers, either with a direct Internet connection or an Internet gateway within the MPLS cloud.

Should you keep your WAN distribution routers, having data center gateways and controllers at the same level will work too.

Checklist for a successful implementation

All data center networks are different. There are questions to ask yourself when you are approaching a design. Here is a list which does not pretend to be exhaustive:

What are our goals and drivers? The answer to that question should remain at the center of all decisions and answers to the following questions.

  • How many remote sites?
  • What are our throughput requirements today and in the coming months?
  • What are our requirements in terms of service resiliency and SLAs?
  • What are the routing protocols in use? Can we use BGP or OSPF?
  • Are we replacing the CE routers with the SD-WAN gateways or not?
  • Can we integrate with WAN distribution routers?
  • Do we need hardware appliances or will we go virtual?
  • What are the interface type and speed requirements?
  • Is there a WAN optimization solution in place?
  • Can we allocate public IP addresses to the controller?
  • How will we deploy the controller?
  • Are we using the data center as a hub or transit site?
  • Are there firewalls in the network path?

What have we learned today?

A data center is just “another branch” that requires its own SD-WAN gateway appliance—even if you host the Head-End here.

Please note that in the upcoming version 20.2, it will be possible to use a gateway appliance as a controller too; it will assume both roles at the same time. However, we will always need at least one dedicated primary controller. More details to come in a future post.

The SteelConnect EX is a router. Leverage all your routing knowledge to deploy it in your data center.

A question, a remark, some concerns? Please don’t hesitate to engage us directly on Riverbed Community.


The Unpredictability of Office 365 Performance in a Work-from-Home Culture https://www.riverbed.com/blogs/unpredictability-office-365-performance-in-work-from-home-culture/ Tue, 25 Feb 2020 13:30:00 +0000 https://live-riverbed-blog.pantheonsite.io?p=14265 I often talk about modern workforces and how we have evolved from the 9 to 5 culture of going to work at some office or branch, and 8 hours later we come home. Sort of funny to even think that those days ever existed when today we're expected to be able to respond 24 x 7, no matter where we are. We work from airports, coffee shops, planes and trains… I work from a ferry… we work from client sites far away from home and we work, of course… from home. And we work from home a lot. The expectation is that we are responsive so that we are never the bottleneck between a happy customer, a growing pipeline, a new design coming to market or a social media campaign being launched.

And to be responsive, we need our collaboration and file sharing apps to respond too—these days, often apps that sit in the cloud like Office 365 (O365). In other words, we need these apps to perform on demand.

Technically speaking, latency plays a significant role in whether or not an app performs as we expect, when we need it. Often the misconception is that network bandwidth does the trick—but in reality, while adding bandwidth to a network can help streamline traffic and ultimately allow us to get the most bang for our network spend, it doesn't do much for the experience a user has of an application. That requires a change in latency, and perhaps a boost that comes from tools purpose-built to accelerate apps regardless of latency.

An easy way to think about this: a drive from San Francisco to New York takes 44 hours. Even if I add more lanes to the freeway to make space for more cars, the drive is still going to take roughly 44 hours. Apps behave similarly. Unless the latency changes between the starting point and the end point, my app response time will remain as is.

SF to NYC
Courtesy of Google Maps
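
To put rough numbers on the same idea: the throughput of a single classic TCP connection is bounded by window size divided by round-trip time. Assuming a common 64 KB window (purely for illustration):

  64 KB / 50 ms RTT  ≈ 10.5 Mbps
  64 KB / 200 ms RTT ≈ 2.6 Mbps

Quadruple the latency and effective throughput drops to roughly a quarter, no matter how much bandwidth the pipe has.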

Interestingly, I was running a test while working from my home office yesterday. Pretty straightforward stuff. Sitting in Marin County just north of San Francisco, one would think that I am relatively close to an O365 cloud PoP, and my latency low.

At Riverbed these days we often talk about the incredible IT complexity of navigating today's hybrid enterprise networks and apps. We talk about how unpredictable application performance can be as a result. But it wasn't until yesterday that I realized just how significant that statement is. We absolutely NEED applications to perform to get our jobs done, and done well, for our company. But as network conditions seem to change like the wind, the performance of our business apps follows suit.

So I was in my home office yesterday, uploading a very large file with embedded video and graphics to SharePoint as part of my recent endeavor to better understand the impact of application performance on business outcomes. In this case I was looking at the difference between a cold upload with the Riverbed SaaS Accelerator cloud service versus an upload that was NOT enabled with Riverbed SaaS Accelerator.

In this particular case, while the cold file upload to O365 using SaaS Accelerator performed 45% faster than the upload of the same file without SaaS Accelerator, it took a little longer than I expected. Mind you, I was working from a home network (for me that's Comcast Xfinity), but we all do that, so it's a reasonable test.

After the upload, I decided to ping the service to see what my average latency was telling me. FROM THE SF BAY AREA, where you would expect latencies to always be low for O365, my latency average at this particular time was 196ms. You would think I was on the other side of the world! Comcast is getting a call from me!

Ping!
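
For reference, the kind of check I mean is a plain ping from the client and a look at the average RTT in the summary line; the hostname below is illustrative, since O365 fronts many endpoints:

$ ping -c 10 outlook.office365.com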

Later in the day, I did a warm upload of the same large file, also using SaaS Accelerator. First of all, the warm upload performed over 4000% faster than the original cold upload (4413.51% to be exact, going from several minutes to just a few seconds), in large part a testament to Riverbed application acceleration. I also checked the latency at my home office. Now it was 91ms. Not as low as I would expect, but improved since earlier, for whatever reason.

Ping!

So again, we talk about our modern workforces accessing varying networks, and the unpredictability of application performance because of ever-changing conditions. As we work from home and other places such as airports, client sites, and coffee shops, IT teams may have no control over the conditions employees encounter as they use the applications that help them execute their jobs.

So the moral of this story for enterprises with many employees who are making things happen at any hour of the day:

  • Application performance is incredibly unpredictable in today’s digital climate
  • The biggest impact on app performance—latency—will change for reasons that are out of IT control
  • Riverbed SaaS Accelerator can make sure business apps like O365 perform—no matter what the conditions may be.

Incidentally, I recently noted on LinkedIn that a common misconception in a global enterprise is that SD-WAN alone will eliminate application performance concerns. But as we have discussed in this blog, ensuring the applications we invest in always perform at their best requires us to take on latency—and, perhaps even more so than network management, latency is incredibly unpredictable in today's digital world! Here's a short video from my partner Brandon Carroll, CCIE #23837, introducing a way to address both.

24×7 Enterprise Apps: Office 365 on Planes, Trains, Automobiles and Home Offices Part 2 https://www.riverbed.com/blogs/24x7-enterprise-apps-o365-part-2/ Tue, 18 Feb 2020 13:30:00 +0000 https://live-riverbed-blog.pantheonsite.io?p=14243 How would you feel about 150% faster time to revenue?

Revenue growth

If you read Part 1 of this story, you know that my personal experience and observations in the first few weeks of having Riverbed SaaS Accelerator running to boost the performance of my Office 365 (O365) has been noticeably better.

But just how much would you believe me if I didn’t CLOCK IT?

So that’s what I set out to do. Mind you—I’m really not a naturally technical person, but I’ve been around the networking space for a little more than 20 years, so I suppose I have learned a thing or two…

These days when I speak with sellers, partners and enterprise customers, we often find ourselves considering the many ways users access the apps they use to do their jobs.  It’s become what I call ‘the Planes, Trains and Automobiles’ talk. I mean—in reality, in these digital times we try our best to be available wherever we happen to be. And more often than not, that can also mean a lot of logging in from home and local coffee shops! So that’s what I decided to test first: my home and my local Equator Coffee in Larkspur, California.

THE GOAL: To prove that the experience I shared on stage and in the previous blog wasn’t just a fluke—and so that I could share proof with YOU!

Coffee shop

Working at Equator Coffee, Larkspur
Equator Coffee, Larkspur, CA

In this initial test, I decided to VPN in from my local coffee shop to simulate a real-world backhaul scenario. First, I would clock an upload of a large file to OneDrive, enabled WITH Riverbed SaaS Accelerator. Then I would clock the same file upload to Dropbox, NOT enabled with SaaS Accelerator. I used the stopwatch on my phone for this test and hit START at the same time as I hit the UPLOAD button. In a future blog I'll show you what happens when I do this without the VPN too. All of the tests here are cold uploads. In another blog I'll get into the distinctions between first-effort cold and subsequent warm uploads. Here's what happened:

OneDrive (with SaaS Accelerator enabled for O365 and Client Accelerator enabled on my laptop)

  • 129MB ppt upload
  • Avg latency 91.614ms
  • 1 min 55 sec upload
  • VPN active


DropBox (no SaaS Accelerator)

  • 129MB ppt upload
  • Avg latency 31.675ms
  • 5:04+++

It’s important to note what I mean by adding the ‘+++’ after the 5:04. That just means that the file was not finished uploading and I got frustrated and shut it down before the upload completed. I mean—has this happened to you? You’re working outside the office and doing some sort of file share to an enterprise SaaS platform, and the upload or download takes so long that you get distracted and walk away, putting off what you were focused on for another time? I wonder how much work time we all waste on this sort of frustration?

Anyway, the conclusion here was that the OneDrive upload through the VPN with SaaS Accelerator was more than 225% faster—and since the upload done without SaaS Accelerator never completed, who knows how long it would have taken.

Incidentally, I looked at the latency from the coffee shop, and as you can see from the averages noted above, the latency in this case was manageable. Imagine if my latencies were even higher—as they can often be for employees who are often on the road and mobile.

Now let’s take a look at my home office.

Home office

And so I went home and ventured to test from there. After all, many of us work from home regularly—whether it’s logging in at night to get an urgent something out to a customer, to meet a deadline, or working some days of the week from a home office, working from home is hardly unusual behavior in 2020. In fact, this morning I was reviewing a survey of 104 executives at large enterprises done by one of our teams. This was focused on the use of enterprise SaaS applications, and 78% of those surveyed noted ‘home’ as a place where they regularly access O365.

For this test, I decided to use a slightly larger file, and also go direct to the Internet. It was a relatively arbitrary choice, but in this case, I looked at this without the VPN. Here’s what resulted:

OneDrive (SaaS Accelerator enabled)

  • 173MB ppt upload
  • Avg latency 73ms
  • 39-second upload


Stopwatch on my cell phone

DropBox (no SaaS Accelerator)

  • 173MB ppt upload
  • Avg latency 21ms
  • 2:37.47 minutes
  • What I noticed: a lot of hanging, wondering when the file was going to finish uploading; risk of losing patience as I did in the coffee shop

With Riverbed SaaS Accelerator, the upload took 75% less time (39 seconds versus 2:37.47), roughly 4x faster

Now by no means is this meant as a negative on either SaaS application. Whether it’s O365 or Dropbox or Box or Salesforce or otherwise, these modern tools have given us new roads into collaboration and sharing on a global scale that we really were unable to achieve by way of old-school data center-based application approaches.

However, the question now becomes this: are we getting the most out of these applications, into which we invest hundreds of thousands, even millions, of budget dollars on behalf of our companies?

What happens when we apply the concept of file sharing to and from a SaaS cloud such as O365 across 100, 200, 1,000 or more revenue-generating employees uploading and downloading files every business day in order to get a new product to market, execute a time-sensitive mission, collaborate on a big R&D project or automotive design, process orders, connect with a customer, or complete any other business-critical transaction?

And taking just the two examples I have noted here, uploads completing at least 2.6x and roughly 4x faster respectively, how would your business be impacted if every SharePoint and OneDrive action completed even 2.5x faster?
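For transparency, here is the simple arithmetic behind the figures above, in a few lines of Python. The coffee shop Dropbox time is a lower bound, since I cancelled that upload at 5:04:

  def compare(accelerated_s, unaccelerated_s):
      # Speed multiple and fraction of time saved for a pair of timings.
      multiple = unaccelerated_s / accelerated_s
      saved = 1 - accelerated_s / unaccelerated_s
      return f"{multiple:.1f}x faster, {saved:.0%} less time"

  print("Coffee shop:", compare(115, 304))    # 1:55 vs 5:04+ -> at least 2.6x
  print("Home office:", compare(39, 157.47))  # 0:39 vs 2:37.47 -> about 4.0x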

]]>
SteelConnect EX SD-WAN Architecture Overview https://www.riverbed.com/blogs/steelconnect-ex-sdwan-architecture-overview/ Thu, 13 Feb 2020 13:30:00 +0000 https://live-riverbed-blog.pantheonsite.io?p=14296 The holiday season is just over, and while I was watching my kids take apart their brand-new toys (and telling them they probably should not), I remembered that I was exactly the same years ago. I wanted to understand how things were built and how this cool new 1/18-scale racing car was able to reproduce the sound of an actual engine and have working lights.

The truth is, as a grown-up I still enjoy that: drilling down and getting my hands dirty. I like to understand how things work under the hood. It helps me anticipate the capabilities and limitations of a product, beyond the shiny marketing announcements.

If you are like me and interested in SD-WAN, you are in the right spot: we are going to explore the world of SteelConnect EX, Riverbed’s Enterprise SD-WAN offering.

In this first episode of the series, we are going to discuss the overall architecture of the Riverbed SD-WAN solution.

Components

Following SDN’s disaggregation principles, SteelConnect EX enterprise SD-WAN solution is comprised of several stacks:

  • Director is the component responsible for the management plane of the SD-WAN fabric;
  • Analytics offers visibility into the network by collecting metrics and events from branch gateways via IPFIX and syslog;
  • Controller is in charge of the control plane of the SD-WAN fabric;
  • Branch Gateways—also known as SteelConnect-EX appliances—are the SD-WAN appliances deployed in the various sites. They are available in several form factors: hardware, virtual and cloud (for IaaS platforms like AWS, Azure…). Gateways implement the data plane and are deployed in all SD-WAN sites: data centers, hubs, cloud (IaaS) and offices.
SteelConnect EX architecture

Each of the components can be deployed in High-Availability mode.

Each of those components is multi-tenant. All of them. Even the Branch Gateways! This will be the topic of a dedicated upcoming blog post.
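Before we drill into each component, here is the division of labor in miniature, written out as a small Python structure. This is just a paraphrase of the list above, not a product configuration format:

  # Roles of the SteelConnect EX components, per the list above.
  COMPONENTS = {
      "Director":       {"role": "management plane", "where": "head-end"},
      "Analytics":      {"role": "visibility (IPFIX/syslog)", "where": "head-end"},
      "Controller":     {"role": "control plane", "where": "head-end"},
      "Branch Gateway": {"role": "data plane", "where": "every SD-WAN site"},
  }

  for name, c in COMPONENTS.items():
      print(f"{name}: {c['role']}, deployed at {c['where']}")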

Head-Ends

Director, Analytics and Controller are the three components that we call Head-Ends. They can be deployed in a data center, in the Cloud (Azure, AWS…) or hosted and operated by a Telco Service Provider on their network.

SteelConnect-EX Head-Ends

Director

The Director is a management system for the provisioning, management and monitoring of the SD-WAN infrastructure. With it, we can:

  • Create configuration templates for networking, SD-WAN policies (overlays, path selection, path resiliency features…), security and so on
  • Manage the gateways' full lifecycle (on-boarding, configuration, firmware upgrades, RMA…)
  • Monitor the network and receive alerts

Director can be configured via a web GUI, RESTful APIs or even the CLI. Director pushes configurations to the Branch Gateways via NETCONF; the NETCONF commands are routed via the Controller.
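Because Director exposes RESTful APIs, day-to-day tasks can be scripted. The sketch below shows the general pattern only: the hostname, route and credentials are placeholders made up for illustration, not the actual Director API schema, so check the product documentation for the real endpoints:

  import requests

  DIRECTOR = "https://director.example.net"  # placeholder hostname

  # Hypothetical route, for illustration only.
  resp = requests.get(
      f"{DIRECTOR}/api/example/appliances",
      auth=("admin", "********"),          # use real credential management
      verify="/path/to/ca-bundle.pem",     # validate the Director certificate
      timeout=10,
  )
  resp.raise_for_status()
  for appliance in resp.json():
      print(appliance)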

Director

Director offers Role-Based Access Control (RBAC), which means that one can delegate the management of a portion of the network to different individuals or teams.

Director can also integrate with third-party solutions and orchestrate the deployment of virtual SteelConnect-EX gateways on private and public clouds.

Visibility and monitoring with Analytics

SteelConnect Analytics is a big data solution that provides real-time and historical visibility, baselining, correlation, prediction and closed-loop feedback for SteelConnect EX software-defined solutions.
The key features include:

  • Policy-driven data logging framework
  • Reporting for multiple networks and security services
  • Real-time and historical traffic usage and anomaly detection
  • Multi-organizational reporting

Analytics collects IPFIX and syslog from the gateways via the Controller.
SteelConnect Analytics

Analytics is an optional component of the solution but highly recommended to get visibility into the SD-WAN fabric.

Controller

From a software point of view, a Controller runs the exact same code (i.e., the same firmware) as a Branch Gateway. When on-boarded on the Director, that particular appliance is given a role, the controller role, and takes charge of the control plane.

The Controller is in charge of on-boarding SD-WAN gateways into the network. It uses IKE and PKI certificates to authenticate branch SteelConnect-EX appliances.

From a routing point of view, a Controller acts as a route reflector for the SD-WAN branches. When one branch gateway advertises a route to the Controller, it is "reflected" to all other SD-WAN gateways (within a given Transport Domain, a concept we will discuss in a following article). In addition to route information, the Controller also reflects Security Association (SA) information so that branches in the same VPN can establish secure data channels with each other.

The Controller enables IPsec connectivity between SD-WAN sites without the overhead of maintaining a full mesh of IKE keys among all branches. This optimization removes the complexity of maintaining on the order of N² links and keys: the control plane between the Controller and the SteelConnect-EX appliances distributes the IPsec keys to the branch nodes.

The Controller never routes user traffic (data plane). The tunnels formed with branch appliances are used only for the control plane: routing (MP-BGP), security key information, NETCONF via SSH, IPFIX, probing and so on. It means that, should you deploy the Head-Ends in your data center, you will need an SD-WAN gateway there too in order to send traffic across the SD-WAN fabric.

The Controller routes control traffic between the Head-Ends and the SD-WAN gateways via the overlay network.
A single Controller can handle up to 2,500 sites; should we need to go higher, we can scale horizontally and add more Controllers.
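To put rough numbers on both claims, the full-mesh key problem and the 2,500-site capacity, here is a back-of-the-envelope sketch:

  import math

  def full_mesh_links(n):
      # Tunnels/IKE pairings a full mesh of n branches would need: n(n-1)/2.
      return n * (n - 1) // 2

  def controllers_needed(n, per_controller=2500):
      # Controllers required at the documented 2,500-site capacity.
      return math.ceil(n / per_controller)

  for n in (100, 1000, 5000):
      print(f"{n} sites: full mesh = {full_mesh_links(n):,} pairings, "
            f"route reflection = {n} peerings, "
            f"controllers = {controllers_needed(n)}")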

Network topology

Director, Analytics and Controller are colocated and interconnected by a Control Network (southbound for Analytics and Director). All communications between Analytics, Director and the SD-WAN gateways happen over the Control Network and are routed by the Controller.

This Control Network is neither routed nor advertised on the rest of the network.

Head-End Network Topology

A Management Network is also configured to expose GUI and APIs to the administrators as well as third-party tools.

To conclude

In this first episode of the series about SteelConnect-EX, we highlighted the roles of the four main components of the solution: the three Head-End components (Director, Analytics and Controller) and the SteelConnect-EX gateways deployed in all SD-WAN sites.

In the following post, we are going to have a look at the routing principles of the SteelConnect-EX gateways.

If you enjoyed it or if you have questions, feel free to leave a comment or engage with us on the Riverbed Community website or on Twitter.


]]>
24×7 Enterprise Apps: Office 365 Performance on Planes, Trains, Automobiles and Home Offices, Part 1 https://www.riverbed.com/blogs/o365-performance-acceleration-saas-acceleration-part-1/ https://www.riverbed.com/blogs/o365-performance-acceleration-saas-acceleration-part-1/#comments Tue, 11 Feb 2020 13:30:00 +0000 https://live-riverbed-blog.pantheonsite.io?p=14221 Alison Conigliaro-Hubbard presenting at Riverbed SKO

I was standing on a stage in front of an audience of sellers last week, sharing a personal experience I recently had using some of Riverbed's modern application performance technology. I was fortunate to be one of the early internal users of Riverbed SaaS Accelerator for Office 365 (O365) and Client Accelerator as IT rolls these out companywide (sometimes, when you know people in the right places, it works out quite nicely). Before the holidays I started using these on my system; my goal was to experience them for a few weeks myself before our annual Sales Kickoff (SKO), with the hope of being able to share my excitement!

In my role I do a lot of file sharing on OneDrive and SharePoint, and in the lead-up to this very important annual event for our global sales teams, as someone responsible for the content, I share some really big files all day, every day as we aim to hit deadlines. I also do not work from the office in San Francisco every day. Some days I work from my home office in Marin County. Sometimes I work from a coffee shop. Sometimes an airport or a client site. Like many of us these days, work doesn't go away when I leave the office. In order to stay customer-focused, the reality is that I am generally available wherever I might be.

App experience can vary dramatically

Unfortunately, depending on where I am, the experience I have of the apps I need to collaborate and get my work done—and in my case O365 apps such as SharePoint and OneDrive—can vary dramatically. Technically speaking, networks change all the time depending on where I log in, and so like most of us, I end up with fairly unpredictable performance—sometimes slow, sometimes fast, sometimes not at all. Often inconsistent. Not exactly reliable.  And in the enterprise when time equals money for my company, consistent, reliable, and fast apps are a difference maker!

So leading up to Riverbed SKO, I am working with some very heavy files—ones you might equate to the mega design files of a manufacturing company or an AEC firm. These can be 900MB+ files! And I am uploading and downloading to and from OneDrive several times a day.

For a few weeks I had been working with Riverbed SaaS Accelerator in the background as I spent the holidays in a hotel in Southern California, and worked in a variety of locations. Riverbed SaaS Accelerator is cloud-based software that maximizes performance for enterprise SaaS apps, and in my case it was specifically assigned by my IT organization as an insurance plan to make sure O365 apps perform as expected. I also have Riverbed Client Accelerator installed on my laptop. (If you're reading this and have used Riverbed for WAN Optimization over the years, what you may not know is that today Riverbed makes it super easy to accelerate performance of critical SaaS applications like O365 and others, so that no matter where we may be working at any given moment, we are always set up to make things happen!)

Is this thing going to upload?

Anyway, it’s a couple days before SKO and I had to upload this 940MB file to OneDrive to share with my colleagues for final review. I’m working from home on this day and things are minute to minute, deadline-driven as we are only days ahead of the most important internal event of the year. I was a little nervous before pressing the upload button—almost wishing I could transport myself to the office by snapping my fingers just to access the network there! Is this thing going to upload???

Riverbed SaaS Accelerator for O365 - Uploading a Large File

Not only did it upload—but it was FAST! Now, I didn't clock it (I just hit upload), but from my personal experience it took no time at all, nothing close to my low expectations. And this is when I checked in with myself and noticed something really interesting… it's like I had this AH HA MOMENT!

I had SaaS Accelerator running in the background for a few weeks, and out of nowhere I felt like my entire experience of O365 had changed. I trusted O365 to just do what it was supposed to do—I was getting things done as soon as I wanted them to be done. It was just WORKING! It was fast and reliable, and it was consistent no matter where I was or how big a file I threw at it.

But wait, there’s more!

After the 940MB file upload to OneDrive, I ALSO had to upload this same file to an external Dropbox folder, because I needed to get the file to show organizers who did not have access to our O365. Unfortunately, Dropbox was NOT enabled with a SaaS Accelerator license. So for this file upload I needed to walk away and do other things, because the upload just hung there. And hung there. Ultimately it took well over an hour.

And so this is the anecdotal story I shared on stage with the Riverbed sellers in my session last week. And if you like that… just wait until you read what happened when I got home from SKO and decided to CLOCK IT! 

]]>
https://www.riverbed.com/blogs/o365-performance-acceleration-saas-acceleration-part-1/feed/ 1
Protecting End Users in an SD-WAN World https://www.riverbed.com/blogs/protecting-end-users-in-an-sdwan-world/ Mon, 03 Feb 2020 13:30:19 +0000 https://live-riverbed-blog.pantheonsite.io?p=14157 When it comes to an SD-WAN deployment, we tend to spend a lot of time thinking about connectivity, reachability, protocols, traffic steering and so on. One area we sometimes overlook is SD-WAN security. It's easy to do. Take, for example, a network deployment with several MPLS branches. All traffic is backhauled to the data center and then pushed through our high-end firewalls. The security group handles the firewalls. The infrastructure group handles the WAN and routing. Everyone has their own lane to stay in. Life is okay. But now the infrastructure group is talking about SD-WAN and how it's going to help save money. The plan is to replace our WAN-edge routers with Riverbed's SteelConnect EX SD-WAN solution. From that replacement, we gain the ability to move to multiple lower-cost Internet circuits and to perform application identification and path-quality-based path selection. Our routing protocols are compatible. All the bases seem to be covered, or are they?

Does SD-WAN deployment require backhaul?

Once an SD-WAN deployment is in place and Internet circuits are in use we look at how we can improve performance for our end-users. Backhauling user data over an Internet-based VPN can add latency and cause the end-user to experience delays.

Backhaul

This obviously impacts the human experience, and we need to avoid that. Addressing it happens to be one of the benefits of an SD-WAN deployment. With Internet circuits deployed at each branch, we can shave off some of that latency by sending select traffic directly to the Internet. Examples of the traffic normally sent "direct-to-net," as it's referred to, include Microsoft Office 365, Salesforce and Workday bound traffic.

Direct-to-Net

What this translates to is the WAN-edge device now being required to perform Network Address Translation (NAT) and, at minimum, a stateful firewall service. This allows outbound sessions to be tracked in a state table; inbound traffic is checked against that table to determine whether it is a valid reply to an existing outbound connection. If it is, the traffic can pass. If it is not, the traffic is discarded. The good news is that the Riverbed SteelConnect EX SD-WAN solution provides this capability, and a whole lot more.
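Conceptually, the branch edge is now making a per-flow steering decision like the one sketched below. This is illustrative pseudologic only, not SteelConnect EX's actual policy engine, and the trusted-SaaS list is an assumption for the example:

  # Hypothetical direct-to-net steering decision at the branch edge.
  TRUSTED_SAAS = {"office365.com", "salesforce.com", "workday.com"}  # example list

  def next_hop(destination_domain: str) -> str:
      if any(destination_domain.endswith(d) for d in TRUSTED_SAAS):
          # Local breakout: NAT + stateful firewall are applied at the branch.
          return "direct-to-net"
      # Everything else rides the overlay back to the data center.
      return "backhaul"

  print(next_hop("outlook.office365.com"))  # direct-to-net
  print(next_hop("intranet.corp.example"))  # backhaul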

SteelConnect EX SD-WAN security capabilities

The SteelConnect EX offers a rich security feature set that's license-based. There are three license levels:

  1. Secure SD-WAN Essentials, which includes Stateful and Next-Generation Firewall (NGFW) capabilities
  2. Secure SD-WAN Standard, which also includes Stateful and NGFW capabilities
  3. Secure SD-WAN Advanced, which includes Stateful, NGFW and Unified Threat Management (UTM) features

We will discuss these capabilities in the following sections.

Stateful Firewall

The stateful firewall provides full visibility into the traffic that traverses the firewall and enforces fine-grained access control on it. To begin making use of this capability you must classify traffic: the process of identifying and separating traffic in a manner that makes it identifiable to the firewall service. To classify the traffic, the stateful firewall verifies its destination port and then tracks the state of the traffic. SteelConnect EX monitors every interaction of each connection until the session is closed.

stateful firewall

The stateful firewall grants or rejects access based not only on port and protocol but also on the history of the packet in the state table. When the SteelConnect EX stateful firewall receives a packet, it first checks the state table for an established connection or for a request for the incoming packet from an internal host. For example, when an internal host establishes an HTTP session to an external server, it begins by establishing a TCP session via the three-way handshake of SYN, SYN-ACK, ACK. Until that handshake completes, the flow of packets is not considered a "session." Therefore, when a TCP SYN is sent outbound from an internal host, it is entered into the state table, and the returning SYN-ACK is verified against that information. If nothing is found, the packet's access is subject to the access policy rules.

An access policy rule gives us a way to decide whether traffic can pass even if it does not match an entry in the state table. An example would be ICMP traffic: ICMP is not a stateful protocol, so we could say in an access policy rule that all inbound ICMP traffic is allowed, regardless of whether a state entry exists. Most of the time, though, an access policy is used to allow inbound access to services such as web and FTP servers, which is not very common for a branch office.
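To make that lookup order concrete, here is a toy model of the logic described in the last two paragraphs: check the state table first, then fall back to the access policy rules. It is a teaching sketch, not how SteelConnect EX is actually implemented:

  # Toy stateful firewall: outbound packets create state; inbound packets
  # must match recorded state or an access policy rule to pass.
  state_table = set()
  access_policy = [lambda pkt: pkt["proto"] == "icmp"]  # example: allow ICMP in

  def flow_key(pkt):
      return (pkt["src"], pkt["sport"], pkt["dst"], pkt["dport"], pkt["proto"])

  def outbound(pkt):
      state_table.add(flow_key(pkt))  # e.g., the TCP SYN of a new session

  def inbound(pkt):
      # A valid reply matches the reversed 5-tuple recorded on the way out.
      reply = (pkt["dst"], pkt["dport"], pkt["src"], pkt["sport"], pkt["proto"])
      if reply in state_table:
          return "pass (matches state)"
      if any(rule(pkt) for rule in access_policy):
          return "pass (access policy)"
      return "drop"

  outbound({"src": "10.0.0.5", "sport": 50123, "dst": "198.51.100.7", "dport": 443, "proto": "tcp"})
  print(inbound({"src": "198.51.100.7", "sport": 443, "dst": "10.0.0.5", "dport": 50123, "proto": "tcp"}))  # pass
  print(inbound({"src": "203.0.113.9", "sport": 0, "dst": "10.0.0.5", "dport": 0, "proto": "icmp"}))        # pass
  print(inbound({"src": "203.0.113.9", "sport": 4444, "dst": "10.0.0.5", "dport": 22, "proto": "tcp"}))     # drop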

NGFW

The Next-Generation Firewall (NGFW) is a robust security module with the intelligence to distinguish different types of traffic. Recall that the stateful firewall made use of ports, protocols and IP addresses to identify traffic and create an entry in the state table. The NGFW provides network protection beyond ports, protocols and IP addresses. In addition to traditional firewall capabilities, it includes filtering functions such as an application firewall, an intrusion prevention system (IPS), TLS/SSL encrypted traffic inspection, website filtering, and QoS/bandwidth management.

next generation firewall

These features can all be enabled, based on your license, and applied to a group of devices. Expect some performance impact when implementing them; it should be nominal, but weigh the need for each feature against its impact before rolling it out to a large number of sites. The way I like to look at these features is like a toolbox filled with specialty tools: not every situation requires a hammer, so figure out which tool your situation needs and implement it accordingly.

Unified Threat Management

SteelConnect EX includes Unified Threat Management (UTM) capabilities, which are turned on by configuring threat profiles in the NGFW policy rules. This means that UTM requires the NGFW first, as the configuration sketch after the following list illustrates.

The following threat profiles are supported:

  • Antivirus
  • Vulnerability (IDS/IPS)
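In configuration terms, enabling UTM amounts to attaching one of these profiles to an NGFW policy rule. The structure below is a made-up illustration of that relationship, not Director's actual configuration schema:

  # Hypothetical shape of an NGFW rule carrying UTM threat profiles.
  ngfw_rule = {
      "name": "branch-web-out",
      "match": {"from_zone": "lan", "to_zone": "internet", "apps": ["web-browsing"]},
      "action": "allow",
      "threat_profiles": {             # UTM rides on top of the NGFW rule
          "antivirus": "default-av",   # profile names defined elsewhere
          "vulnerability": "default-ips",
      },
  }

  # UTM only runs where a matching, allowing NGFW rule attaches a profile.
  if ngfw_rule["action"] == "allow" and ngfw_rule["threat_profiles"]:
      print(f"Rule '{ngfw_rule['name']}' scans allowed traffic with:",
            ", ".join(ngfw_rule["threat_profiles"].values()))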

SteelConnect EX has a built-in antivirus engine that scans live traffic looking for threats. To accomplish this, the antivirus engine waits until the last byte of a file is received before processing the entire file at runtime. You will need to configure at least one antivirus profile to enable the scanning of files for viruses.

built-in antivirus engine

To enable and enforce an antivirus profile, an NGFW policy rule must be configured. When configured, the antivirus profile applies to all traffic that matches the security policy rule. Taking things a step further, what you tell the antivirus profile to do is extract files from certain types of traffic. This could include HTTP, FTP and common email protocols. As you might have guessed, the protocols the antivirus engine extracts files from are those commonly used to transmit these types of threats.

When a file is extracted from one of these protocols it is buffered, forwarded to the destination (with the exception of the last packet), and scanned. If a virus is found, the profile action is applied; otherwise the last packet is sent. The sketch after the following list models this sequence.

An antivirus profile supports the following enforcement actions:

  • Alert—Alerts the user when a virus is found. Virus information is stored in a log file.
  • Allow—The antivirus profile does not scan the file; it simply allows it.
  • Deny—The antivirus profile aborts the flow on which the virus file is received.
  • Reject—Both the client and server connections are reset.
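Here is the buffer, scan and release sequence from the last two sections as a toy sketch, with the four enforcement actions applied at the decision point. Again, this is a conceptual model of the behavior described above, not the engine's actual code:

  # Toy model of the AV flow: buffer the file, hold the last packet,
  # scan, then apply the profile action (alert/allow/deny/reject).
  def av_process(packets, scan, action="deny"):
      if action == "allow":                # an Allow profile skips scanning
          return list(packets), "delivered unscanned"
      *body, last = packets
      forwarded = list(body)               # all but the last packet go through
      if not scan(b"".join(packets)):      # scan the reassembled file
          forwarded.append(last)           # clean: release the held packet
          return forwarded, "delivered"
      if action == "alert":
          forwarded.append(last)           # deliver, but log the detection
          return forwarded, "delivered + alert logged"
      if action == "reject":
          return forwarded, "client and server connections reset"
      return forwarded, "flow aborted"     # deny

  suspicious = lambda data: b"EICAR" in data  # stand-in signature check
  print(av_process([b"clean-", b"file"], suspicious))
  print(av_process([b"EICAR", b"-test"], suspicious, action="alert"))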

Final thoughts

In this article, we've discussed the three levels of SD-WAN security capability featured in the Riverbed SteelConnect EX SD-WAN solution. Knowing that these features are available can help determine how branch traffic is handled. If the decision is to backhaul all Internet-bound traffic to the data center, there won't be much need for these advanced security features beyond basic protection for the device. If the decision is to enhance the user experience by sending specific traffic "direct-to-net," then these features should certainly be discussed, and the degree to which they are implemented will need to be determined. All in all, the SteelConnect EX solution provides a proper degree of protection for branch traffic when Internet uplinks are made available.

But what about performance? A decision to backhaul provides some benefits, but more is involved in ensuring the user experience is the best it can be. For example, Microsoft services are regional, and users may still experience less-than-ideal performance; the same is true for other SaaS offerings, largely due to the location of the services. For this, I urge you to have a look at the Riverbed SaaS Accelerator service. SaaS Accelerator combined with SteelConnect EX provides the highest level of WAN connectivity, branch security and end-user performance, focused on enhancing user productivity.

]]>
Are You Digitally Competent? https://www.riverbed.com/blogs/are-you-digitally-competent/ Tue, 28 Jan 2020 13:30:41 +0000 https://live-riverbed-blog.pantheonsite.io?p=14144 The role of digital competence in digital transformation

Everybody talks about digital transformation, but how can you be sure it's working for your company? In other words, how do you ensure you're getting the business performance you expect from your digital investments?

This is where digital competency comes into play. This is how you translate the vague promises of digital transformation into on-the-ground, bottom-line digital performance, which in turn drives business outcomes that can make a real difference to your enterprise.

What are digital competencies? They are a whole spectrum of technology skills and processes you need to master to compete in the new economy. Digital competence includes everything from IT infrastructure automation and modernization to digital product and service innovation to digital talent management and much more.

From digital competence to business performance

Hundreds of respondents to a recent Economist Intelligence Unit survey said that 80% of the digital competencies surveyed matter for the business, and two-thirds said that those competencies are producing positive business outcomes—such as faster speed to market, greater agility and innovation, more revenue, bigger margins and, perhaps most important, a better customer experience.

In fact, delivering a great experience for users—including both customers and employees—has become a reliable predictor of great business performance. For example, one study showed that improving UX by as little as 1% can lead to a 100X boost in business growth. A poor UX, on the other hand, can have the opposite effect. For example, the Aberdeen Group found that just a one-second delay in page load times leads to 11% fewer page views, 16% lower customer satisfaction, and a 7% loss in customer conversion.

Across cultures, there is a common desire for a simple and streamlined user experience. That’s why many companies are setting up internal app stores where employees can go and get what they need to be productive at work. I believe that more enterprises should start asking their employees, in effect, “how would you like to work?” and then try to deliver that experience.

Closing the gap between IT and other teams to improve digital competence

Still, building a great user experience—and developing other digital competencies—is harder than you think. The Economist survey, for example, revealed that misunderstandings between the IT department—which often plays a leading role in developing digital competencies—and other parts of the organization remain a stumbling block. In many cases, IT tends to overestimate the readiness of non-IT folks, while business leaders tend to assume that IT understands the business perspective. In the survey, nearly two-thirds of respondents said that poor communication between IT and other departments limits their organizations’ digital competencies. About 61% of IT people said their non-IT leaders do not understand the technical complexity of digital systems.

Business outcomes

I believe IT leaders should take the lead in closing the communications gap. CIOs can start by forging a closer partnership with the CEO and helping to define their company's business and technology strategies. In my opinion, the CIO should think and act more like the CEO. This will require tech leaders to learn another competency: translating the technology aspects of digital transformation into the business language that CEOs and board members can relate to.

That may explain the trend towards appointing chief digital officers, or CDOs, who are not only responsible for overseeing back-office IT tasks, but who also set the vision for and lead the company’s digital transformation. That’s one of the most important digital competencies you can have.

]]>