Telecom companies face the ongoing challenge of meeting service level agreements (SLAs) for network quality while resolving complex network issues quickly. When those issues linger, the result is prolonged network downtime that affects both operational efficiency and customer experience.
Infosys has developed a generative AI solution using NVIDIA NIM and NVIDIA NeMo Retriever to address these issues. This solution aims to streamline network operations centers (NOCs) by automating network troubleshooting, minimizing downtime, and optimizing performance.
Building Smart Network Operations Centers with Generative AI
Infosys, a global leader in digital services, has built a smart NOC using a generative AI customer engagement platform. The platform assists NOC operators, network administrators, and IT support staff by providing essential vendor-agnostic router commands for diagnostics and monitoring, and its intelligent chatbot reduces mean time to resolution (MTTR) and improves customer service.
Challenges with Vector Embeddings and Document Retrieval
Infosys encountered several challenges in developing the chatbot, including balancing high accuracy with low latency, handling network-specific taxonomy, and parsing complex device documentation. The time-consuming nature of vector embedding on CPUs and the inference latency of LLMs were also significant hurdles.
Data Collection and Preparation
To overcome these challenges, Infosys built a vector database of network device manuals and knowledge artifacts, focusing initially on devices from Cisco and Juniper Networks. The database was populated using custom embedding models and fine-tuned chunking and embedding parameters to ensure accurate, contextual responses to user queries.
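As a rough illustration of this preparation step, the sketch below chunks plain-text device manuals, embeds each chunk, and stores the vectors in a FAISS index. The embed_passages helper, endpoint URL, model name, and manuals/ directory are assumptions for illustration, not details disclosed by Infosys; the sketch assumes an OpenAI-compatible embeddings endpoint such as the one a self-hosted NeMo Retriever embedding microservice can expose.

```python
import glob

import faiss
import numpy as np
import requests

EMBED_URL = "http://localhost:8000/v1/embeddings"  # hypothetical self-hosted embedding service
EMBED_MODEL = "NV-Embed-QA"                         # placeholder model name


def chunk_text(text, size=800, overlap=100):
    """Split a manual into overlapping character chunks."""
    return [text[start:start + size] for start in range(0, len(text), size - overlap)]


def embed_passages(passages):
    """Call an OpenAI-compatible embeddings endpoint and return a float32 matrix.

    Some retriever embedding services also expect an input_type field
    ("passage" vs. "query"); that detail is omitted here.
    """
    resp = requests.post(EMBED_URL, json={"model": EMBED_MODEL, "input": passages})
    resp.raise_for_status()
    vectors = [item["embedding"] for item in resp.json()["data"]]
    return np.asarray(vectors, dtype="float32")


# Gather chunks from device manuals exported as plain text (placeholder path).
chunks = []
for path in glob.glob("manuals/*.txt"):
    with open(path, encoding="utf-8") as f:
        chunks.extend(chunk_text(f.read()))

# Build a cosine-similarity FAISS index over the normalized embeddings.
embeddings = embed_passages(chunks)
faiss.normalize_L2(embeddings)
index = faiss.IndexFlatIP(embeddings.shape[1])
index.add(embeddings)
faiss.write_index(index, "device_manuals.faiss")
```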
Solution Architecture
Infosys’s solution architecture included several key components:
- User Interface and Chatbot: Developed an intuitive interface using React for customized chatbots and advanced query scripting.
- Data Configuration Management: Provided flexible settings for chunking and embedding using NVIDIA NeMo Retriever.
- Vector Database Options: Implemented options like FAISS for high-speed data retrieval.
- Backend Services and Integration: Created robust backend services, including a RESTful API for integration with external systems.
- Integration with NIM: Used NIM microservices to improve accuracy and performance.
- Configuration: Utilized 10 NVIDIA A100 80-GB GPUs, 128 CPU cores, and 1 TB storage.
- Guardrails: Employed NVIDIA NeMo Guardrails for added security and reliability (see the sketch after this list).
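The snippet below is a minimal sketch of how NeMo Guardrails can wrap the chatbot's generation step. The config directory path and the sample user message are placeholders; the actual rails Infosys defined are not public.

```python
from nemoguardrails import LLMRails, RailsConfig

# Load rail definitions (Colang flows plus a YAML model config) from a local directory.
# "guardrails_config" is a placeholder path, not Infosys's actual configuration.
config = RailsConfig.from_path("./guardrails_config")
rails = LLMRails(config)

# Chatbot traffic is routed through the rails so that off-topic or unsafe
# requests can be filtered before and after the LLM call.
response = rails.generate(messages=[{
    "role": "user",
    "content": "Show me the command to check interface status on a Cisco router.",
}])
print(response["content"])
```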
AI Workflow with NVIDIA NIM and NeMo Guardrails
Infosys used a self-hosted instance of NVIDIA NIM and NeMo to fine-tune and deploy foundation LLMs. NeMo Retriever powered the vector database retrieval and reranking workflows, enabling enterprises to connect custom models to business data and deliver accurate responses. For more information, see NVIDIA’s blog.
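A simplified view of this retrieval-augmented workflow is sketched below: the user question is embedded, the nearest manual chunks are pulled from the FAISS index built in the earlier sketch, and the context is sent to an LLM served through an OpenAI-compatible chat endpoint such as a self-hosted NIM provides. The URLs, model name, and helpers (embed_passages and chunks are reused from the indexing sketch) are assumptions for illustration, and the reranking step is elided.

```python
import faiss
import requests

LLM_URL = "http://localhost:9000/v1/chat/completions"  # hypothetical self-hosted LLM endpoint
LLM_MODEL = "meta/llama3-8b-instruct"                   # placeholder model name


def answer_query(question, index, chunks, top_k=4):
    """Retrieve the most relevant manual chunks and ask the served LLM to answer."""
    # Reuse the embed_passages helper from the indexing sketch for the query vector.
    query_vec = embed_passages([question])
    faiss.normalize_L2(query_vec)
    _, ids = index.search(query_vec, top_k)
    context = "\n\n".join(chunks[i] for i in ids[0])

    payload = {
        "model": LLM_MODEL,
        "messages": [
            {"role": "system",
             "content": "You are a NOC assistant. Answer using only this context:\n" + context},
            {"role": "user", "content": question},
        ],
        "temperature": 0.2,
    }
    resp = requests.post(LLM_URL, json=payload)
    resp.raise_for_status()
    return resp.json()["choices"][0]["message"]["content"]


index = faiss.read_index("device_manuals.faiss")
print(answer_query("How do I display BGP neighbor status on a Juniper router?", index, chunks))
```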
Using NeMo Retriever with the NV-Embed-QA-Mistral-7B text embedding model, Infosys achieved over 90% accuracy. The model performs well across a variety of embedding tasks, improving both accuracy and performance.
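One common way to quantify retrieval accuracy for an embedding model is hit rate at top-k against a labeled set of question-to-chunk pairs. The sketch below is a generic illustration of that metric, not Infosys's evaluation harness; eval_pairs is a hypothetical dataset, and embed_passages comes from the indexing sketch.

```python
import faiss


def hit_rate_at_k(eval_pairs, index, k=5):
    """Fraction of questions whose ground-truth chunk appears in the top-k results.

    eval_pairs is a hypothetical list of (question, relevant_chunk_id) tuples.
    """
    hits = 0
    for question, relevant_id in eval_pairs:
        query_vec = embed_passages([question])  # helper from the indexing sketch
        faiss.normalize_L2(query_vec)
        _, ids = index.search(query_vec, k)
        if relevant_id in ids[0]:
            hits += 1
    return hits / len(eval_pairs)


# Example: print(f"hit@5 = {hit_rate_at_k(eval_pairs, index):.2%}")
```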
Results
Infosys measured LLM latency and accuracy with and without using NVIDIA NIM. Without NIM, LLM latency was 2.3 seconds, while using NIM reduced it to 0.9 seconds—a 61% improvement. Accuracy improved from 70% to 92% with the integration of NeMo Retriever embedding and reranking microservices.
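For context, latency comparisons like this can be made by timing the full question-to-answer round trip. The sketch below shows one simple way to do so, reusing the answer_query helper from the workflow sketch; it is not the measurement setup Infosys used.

```python
import statistics
import time


def measure_latency(question, index, chunks, runs=20):
    """Median wall-clock seconds for a full retrieve-and-generate round trip."""
    timings = []
    for _ in range(runs):
        start = time.perf_counter()
        answer_query(question, index, chunks)  # helper from the workflow sketch
        timings.append(time.perf_counter() - start)
    return statistics.median(timings)


# Example: print(f"median latency: {measure_latency('Check OSPF adjacency state', index, chunks):.2f}s")
```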
Conclusion
By integrating NVIDIA NIM and NeMo Retriever, Infosys significantly improved the performance and accuracy of its smart NOC. These enhancements streamline network troubleshooting, reduce downtime, and optimize overall network performance.
Learn more about how Infosys eliminates network downtime through automated workflows, powered by NVIDIA, on NVIDIA’s official blog.