AI Adoption in Networking

By Greg Bryan, Feb 05, 2026

This podcast conversation covers:

Host Greg Bryan welcomes Jason Gintert back to the show. As the incoming President of the U.S. Networking User Association (USNUA), Jason dives into the current state of AIOps (AI for IT Operations) in network management. This includes:
  • Low Adoption and Key Challenges: Despite being a popular topic, AIOps adoption remains low due to concerns about trusting AI output (hallucinations) and the non-deterministic nature of large language models.

  • Data Integration and Security: The importance of integrating private data using methods like the Model Context Protocol (MCP) to make AI answers more deterministic. For security, Jason strongly recommends starting with a read-only, zero-trust mindset when exposing data to AI tools.

  • Practical Use Cases: AI is most valuable for root cause analysis of common network problems (e.g., Wi-Fi authentication issues, circuit errors) and providing automated network summaries or predictive analysis.

  • The Human Element: AI is seen as a powerful tool to increase the productivity of network engineers by handling low-level tasks, but it will not replace humans. Engineers remain crucial for exercising judgment and taking responsibility for service-impacting changes. 

Key Takeaways

The current state of AIOps

Despite the media frenzy surrounding Large Language Models (LLMs), actual adoption of AIOps in network management remains nascent. Recent surveys suggest that only about 15% of organizations have deployed AIOps tools.

Jason points out that the hesitation stems largely from trust issues. Engineers are wary of "hallucinations," where an AI might confidently provide false information, leading troubleshooters down the wrong path. Furthermore, data quality remains a significant hurdle. Many organizations possess years of unformatted legacy data that must be "massaged" before it can be effectively utilized by AI models.
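As a concrete illustration of that "massaging," the sketch below shows one way to turn unstructured legacy log lines into structured records an AI pipeline can index. The Cisco-style syslog line, the regex, and the field names are hypothetical examples, not any vendor's documented format.

```python
import re
from datetime import datetime

# Hypothetical legacy syslog line format; real exports vary widely by vendor.
LINE_RE = re.compile(
    r"(?P<ts>\w{3}\s+\d+\s[\d:]+)\s+(?P<device>\S+)\s+"
    r"%(?P<facility>[\w-]+)-(?P<severity>\d)-(?P<mnemonic>\w+):\s(?P<msg>.*)"
)

def normalize(line: str, year: int = 2025) -> dict | None:
    """Turn one raw syslog line into a structured record an AI pipeline can index."""
    m = LINE_RE.match(line.strip())
    if not m:
        return None  # park unparsed lines for manual review instead of guessing
    ts = datetime.strptime(f"{year} {m['ts']}", "%Y %b %d %H:%M:%S")
    return {
        "timestamp": ts.isoformat(),
        "device": m["device"],
        "severity": int(m["severity"]),
        "event": m["mnemonic"],
        "message": m["msg"],
    }

raw = "Jan 14 03:22:08 edge-rtr-01 %LINEPROTO-5-UPDOWN: Line protocol on Interface Gi0/1, changed state to down"
print(normalize(raw))
```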

How to implement AIOps

For network managers looking to dip their toes into AIOps, the advice is straightforward: start with the tools you already have. Many vendors, such as Juniper (Mist) and HPE (Aruba Central), have been integrating AI capabilities into their platforms for years.

For those looking to integrate their own internal data with LLMs, Jason recommends exploring the Model Context Protocol (MCP). MCP acts as a translator, allowing LLMs to securely query databases via API calls or SQL without needing to ingest the data permanently.
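To make that concrete, here is a minimal sketch of an MCP server exposing a single read-only query tool. It assumes the official MCP Python SDK's FastMCP helper is installed; the database file, table schema, server name, and tool name are all hypothetical.

```python
import sqlite3
from mcp.server.fastmcp import FastMCP  # assumes the official MCP Python SDK (pip install mcp)

mcp = FastMCP("network-inventory")  # hypothetical server name

@mcp.tool()
def list_down_interfaces(site: str) -> list[dict]:
    """Return interfaces currently reported down at a site (hypothetical schema)."""
    # Open the database read-only so the LLM can query it but never modify it.
    conn = sqlite3.connect("file:network.db?mode=ro", uri=True)
    try:
        rows = conn.execute(
            "SELECT device, interface, last_change FROM interface_status "
            "WHERE site = ? AND oper_status = 'down'",
            (site,),
        ).fetchall()
        return [{"device": d, "interface": i, "last_change": c} for d, i, c in rows]
    finally:
        conn.close()

if __name__ == "__main__":
    mcp.run()  # serves the tool over stdio for an MCP-capable client
```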

However, security is paramount. When connecting AI to network data, engineers should adopt a "Zero Trust" mindset. This includes giving AI agents read-only access to prevent accidental data deletion or unauthorized configuration changes.
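One simple way to bake that read-only posture into the tooling itself is to hand the AI agent a client that physically cannot issue write calls. The sketch below is an illustrative pattern rather than any vendor's API; the controller URL, endpoint, and token are hypothetical.

```python
import requests  # assumes the requests library is available

class ReadOnlyClient:
    """Wrapper that lets an AI agent call a network controller's REST API
    using GET requests only, so it can observe state but never change it."""

    def __init__(self, base_url: str, token: str):
        self.base_url = base_url.rstrip("/")
        self.session = requests.Session()
        self.session.headers["Authorization"] = f"Bearer {token}"

    def get(self, path: str, **params) -> dict:
        resp = self.session.get(
            f"{self.base_url}/{path.lstrip('/')}", params=params, timeout=10
        )
        resp.raise_for_status()
        return resp.json()

    # Deliberately no post/put/delete methods: configuration changes stay with humans.

# Example usage against a hypothetical controller:
# client = ReadOnlyClient("https://controller.example.net/api/v1", token="...")
# alarms = client.get("alarms", severity="critical")
```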

The human element: context and intent

The most compelling use cases for AIOps currently involve root cause analysis and routine troubleshooting. Instead of combing through logs for hours, an engineer might ask, "Why can't Sally connect to the Wi-Fi?" and receive an immediate diagnosis regarding password failures or signal strength. AI agents can also generate morning summaries, alerting engineers to overnight circuit flaps or anomalies.
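A morning summary of that kind can be assembled with very little code. The sketch below reuses the hypothetical record shape from the earlier parsing example, counts overnight interface flaps, and builds a prompt to hand to whichever LLM or AIOps assistant the team has approved.

```python
from collections import Counter
from datetime import datetime, timedelta

def overnight_flap_summary(events: list[dict], now: datetime) -> str:
    """Build a morning-summary prompt from structured interface events.

    `events` uses the hypothetical record shape from the earlier normalize() sketch:
    {"timestamp": ISO-8601 str, "device": str, "event": str, "message": str}.
    """
    since = now - timedelta(hours=12)
    flaps = [
        e for e in events
        if e["event"] == "UPDOWN" and datetime.fromisoformat(e["timestamp"]) >= since
    ]
    per_device = Counter(e["device"] for e in flaps)
    lines = [f"{dev}: {count} interface state changes" for dev, count in per_device.most_common()]
    # Hand this prompt to whatever LLM the team has approved for operational data.
    return (
        "Summarize the following overnight network events for the on-call engineer, "
        "flagging anything that looks like a flapping circuit:\n" + "\n".join(lines)
    )
```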

However, AI currently lacks the ability to understand "intent" and organizational context. An AI might flag a maxed-out circuit as a critical failure, unaware that the office is closed or undergoing scheduled maintenance. Because AI cannot make judgment calls based on nuance, a "human in the loop" remains essential to authorize changes and interpret data.
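A human-in-the-loop gate can be as simple as requiring explicit confirmation before any AI-proposed change is pushed. The sketch below illustrates the pattern only; `ProposedChange` and `apply_fn` are hypothetical stand-ins for whatever change-deployment mechanism a team already trusts.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class ProposedChange:
    """A change suggested by an AI assistant; nothing is applied without sign-off."""
    device: str
    description: str
    commands: list[str]

def apply_with_approval(change: ProposedChange,
                        apply_fn: Callable[[ProposedChange], None]) -> bool:
    """Show the proposed change to a human and only apply it on explicit confirmation."""
    print(f"AI-proposed change for {change.device}: {change.description}")
    for cmd in change.commands:
        print(f"  {cmd}")
    answer = input("Apply this change? Type 'yes' to proceed: ").strip().lower()
    if answer != "yes":
        print("Change rejected; nothing was pushed to the device.")
        return False
    apply_fn(change)  # apply_fn is whatever push mechanism the team already uses
    return True
```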

A new way of working

By automating Tier 1 support tasks and rote data analysis, AI allows network engineers to escape the mundane and focus on complex, high-level problem solving. As the industry evolves, the most successful engineers will be those who learn to wield these new tools effectively.