Perplexity AI, a leading AI-powered search engine, efficiently handles more than 435 million search queries every month, thanks to NVIDIA's advanced inference stack. The platform has integrated NVIDIA H100 Tensor Core GPUs, the Triton Inference Server, and TensorRT-LLM to deploy large language models (LLMs) efficiently, according to NVIDIA's official blog.
Serving Multiple AI Models
To meet diverse user demands, Perplexity AI operates more than 20 AI models simultaneously, including variants of the open-source Llama 3.1 models. Each user request is matched with the most suitable model using smaller classifier models that determine user intent. These models are deployed across GPU pods, each managed by an NVIDIA Triton Inference Server, ensuring efficiency under strict service-level agreements (SLAs).
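In outline, such intent-based routing might resemble the following Python sketch; the intent labels, model names, and Triton endpoints are illustrative assumptions rather than details from Perplexity's deployment.

```python
# Hypothetical sketch of intent-based model routing; names and endpoints
# are illustrative assumptions, not Perplexity's actual configuration.
from dataclasses import dataclass

@dataclass
class ModelPod:
    name: str        # deployed model, e.g. a Llama 3.1 variant
    triton_url: str  # Triton Inference Server endpoint for this pod

# Map classifier-predicted intents to the most suitable model pod.
ROUTING_TABLE = {
    "quick_lookup":      ModelPod("llama-3.1-8b",   "triton-pod-a:8001"),
    "summarization":     ModelPod("llama-3.1-70b",  "triton-pod-b:8001"),
    "complex_reasoning": ModelPod("llama-3.1-405b", "triton-pod-c:8001"),
}

def route_request(query: str, classify_intent) -> ModelPod:
    """Pick a model pod using a smaller intent-classifier model."""
    intent = classify_intent(query)  # small classifier predicts user intent
    return ROUTING_TABLE.get(intent, ROUTING_TABLE["quick_lookup"])
```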
The pods are hosted inside a Kubernetes cluster, with an in-house front-end scheduler that directs traffic based on load and usage. This ensures consistent SLA adherence while optimizing performance and resource utilization.
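A simplified sketch of how such a load-aware scheduler might select a pod follows; the load metric and selection policy here are assumptions for illustration only.

```python
# Simplified sketch of a front-end scheduler picking the least-loaded pod;
# the load metric and policy are illustrative assumptions.
def pick_pod(pods: list[dict]) -> dict:
    """Select the pod with the most spare capacity.

    Each pod dict is assumed to carry live utilization stats, e.g.
    {"url": "triton-pod-a:8001", "inflight": 12, "capacity": 32}.
    """
    return min(pods, key=lambda p: p["inflight"] / p["capacity"])
```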
Optimizing Performance and Costs
Perplexity AI employs a comprehensive A/B testing strategy to define SLAs for different use cases. The process aims to maximize GPU utilization while maintaining target SLAs, minimizing inference serving costs. Smaller models focus on minimizing latency, while larger, user-facing models such as Llama 8B, 70B, and 405B undergo detailed performance analysis to balance cost and user experience.
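The selection logic behind such A/B testing can be illustrated with a short sketch: among measured serving configurations, choose the cheapest one that still meets the latency SLA. All configuration names and figures below are hypothetical.

```python
# Illustrative sketch: pick the serving configuration that meets a latency
# SLA at the lowest cost, based on hypothetical A/B test measurements.
AB_RESULTS = [
    # config name, p90 latency (s), cost per 1M tokens ($) -- made-up numbers
    {"config": "tp1-batch8",  "p90_latency": 1.8, "cost": 0.60},
    {"config": "tp2-batch16", "p90_latency": 1.1, "cost": 0.45},
    {"config": "tp4-batch32", "p90_latency": 0.7, "cost": 0.50},
]

def cheapest_within_sla(results, sla_seconds: float):
    """Return the lowest-cost configuration meeting the latency target."""
    candidates = [r for r in results if r["p90_latency"] <= sla_seconds]
    return min(candidates, key=lambda r: r["cost"]) if candidates else None

print(cheapest_within_sla(AB_RESULTS, sla_seconds=1.2))  # -> tp2-batch16
```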
Performance is further improved by parallelizing model deployment across multiple GPUs, increasing tensor parallelism to achieve lower serving costs for latency-sensitive requests. This strategic approach has enabled Perplexity to save approximately $1 million annually by hosting models on cloud-based NVIDIA GPUs, undercutting the cost of third-party LLM API services.
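The underlying arithmetic is straightforward, as the following sketch with hypothetical numbers shows: if higher tensor parallelism raises aggregate throughput enough, the cost per token can fall even though more GPUs are billed per hour.

```python
# Back-of-the-envelope sketch of the tensor-parallelism trade-off described
# above; all throughput and price figures are hypothetical.
def cost_per_million_tokens(gpus: int, gpu_hourly_usd: float,
                            tokens_per_second: float) -> float:
    """Hourly GPU spend divided by tokens produced per hour."""
    tokens_per_hour = tokens_per_second * 3600
    return (gpus * gpu_hourly_usd) / tokens_per_hour * 1_000_000

# One GPU vs. four GPUs with higher tensor parallelism: the 4-GPU setup
# costs more per hour but, with good scaling, less per token (~$17 vs ~$22).
print(cost_per_million_tokens(gpus=1, gpu_hourly_usd=4.0, tokens_per_second=50))
print(cost_per_million_tokens(gpus=4, gpu_hourly_usd=4.0, tokens_per_second=260))
```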
Innovative Techniques for Enhanced Throughput
Perplexity AI is collaborating with NVIDIA to implement "disaggregated serving," a technique that separates the phases of inference onto different GPUs, significantly boosting throughput while adhering to SLAs. This flexibility allows Perplexity to use a variety of NVIDIA GPU products to optimize performance and cost-efficiency.
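Conceptually, disaggregated serving splits the compute-bound prompt-processing (prefill) phase from the memory-bound token-generation (decode) phase so each can run on hardware suited to it. The sketch below illustrates the hand-off pattern; the queue-based structure is an assumption for illustration, not Perplexity's implementation.

```python
# Conceptual sketch of disaggregated serving: prefill and decode run on
# separate GPU pools, linked by a hand-off queue. Illustrative only.
import queue

decode_queue: "queue.Queue[dict]" = queue.Queue()  # KV caches awaiting decode

def prefill_worker(prompt: str) -> None:
    """Runs on compute-bound prefill GPUs: process the full prompt once."""
    kv_cache = {"prompt": prompt, "state": "..."}  # placeholder KV cache
    decode_queue.put(kv_cache)                     # hand off to decode pool

def decode_worker() -> str:
    """Runs on memory-bound decode GPUs: generate tokens step by step."""
    kv_cache = decode_queue.get()
    return f"response generated from {kv_cache['prompt']!r}"
```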
Further gains are expected with the upcoming NVIDIA Blackwell platform, which promises substantial performance improvements through technological innovations, including a second-generation Transformer Engine and advanced NVLink capabilities.
Perplexity's strategic use of NVIDIA's inference stack underscores the potential for AI-powered platforms to handle massive query volumes efficiently, delivering high-quality user experiences while maintaining cost-effectiveness.
Image source: Shutterstock