Introduction

What is WoolyAI?

WoolyAI is GPU runtime software that operates like a GPU hypervisor for ML platforms, helping companies simplify GPU management, maximize hardware utilization, and reduce costs.

  • Dynamic GPU Compute Core Scheduling & Allocation: Measures and fractionally allocates GPU compute cores at runtime across multiple simultaneous kernel executions, based on job priority and actual consumption.
  • VRAM Oversubscription and Swap Policy: Supports VRAM overcommit at scheduling time to increase GPU packing, using an idleness-aware VRAM swapping policy to keep actively working jobs resident.
  • Model Memory Deduplication for Higher Density: Deduplicates model weights in VRAM (e.g., shared base-model weights across many LoRA adapters) to pack more inference stacks per GPU.
  • Works with Existing ML CUDA Pods: No special container images needed; only the Wooly client libraries must be installed inside existing NVIDIA/CUDA ML pods (PyTorch, vLLM, etc.).
  • Ecosystem-Friendly: Works with your existing Kubernetes setup for GPU nodes; requires installation of the WoolyAI GPU operator.
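To make the first bullet concrete, here is a minimal sketch of priority-weighted, demand-capped fractional core allocation. This is an illustration of the general technique only, not WoolyAI's actual scheduler: the function name `allocate_cores`, the job fields, and the water-filling loop are all assumptions for the example.

```python
# Illustrative sketch (NOT WoolyAI's actual algorithm): allocate a GPU's
# compute cores (SMs) across jobs in proportion to priority, capped at
# each job's measured demand, redistributing slack to still-hungry jobs.

def allocate_cores(total_cores, jobs):
    """jobs: list of dicts with 'name', 'priority' (weight), and
    'demand' (cores the job can actually use right now)."""
    alloc = {j["name"]: 0.0 for j in jobs}
    remaining = float(total_cores)
    active = list(jobs)
    while remaining > 1e-9 and active:
        total_w = sum(j["priority"] for j in active)
        given = 0.0
        still_hungry = []
        for j in active:
            share = remaining * j["priority"] / total_w
            need = j["demand"] - alloc[j["name"]]
            grant = min(share, need)
            alloc[j["name"]] += grant
            given += grant
            if alloc[j["name"]] < j["demand"] - 1e-9:
                still_hungry.append(j)
        if given < 1e-9:
            break  # everyone is satisfied; leftover cores stay free
        remaining -= given
        active = still_hungry
    return alloc

jobs = [
    {"name": "train", "priority": 3, "demand": 80},
    {"name": "infer", "priority": 1, "demand": 30},
]
# On 108 SMs: "train" is capped at its demand of 80, and the slack
# flows to "infer", which ends up with 28 instead of its raw 27 share.
print(allocate_cores(108, jobs))
```

A real runtime would rerun this kind of computation continuously as measured consumption changes, which is what distinguishes dynamic allocation from fixed quotas.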

The Problem

GPU hardware is expensive and often underutilized. Instead of statically assigning fixed GPU resources upfront, WoolyAI makes allocation decisions in real time based on actual usage, with no coarse-grained time-slicing or static partitioning.

In short: WoolyAI is like "virtualization" for GPU resources. Rather than giving each user a dedicated GPU (expensive and wasteful), WoolyAI measures what each workload actually needs and intelligently shares GPU cores and memory across many users in real time.

Advantages

  • Lower Infrastructure Costs: Maximizes utilization per GPU, letting less hardware do more.
  • True GPU Concurrency: Runs multiple kernel executions in a single GPU context without time-slicing overhead, unlike traditional static partitioning (MIG/MPS), which creates rigid, underutilized segments.
  • Dynamic Resource Allocation: Redistributes GPU cores and VRAM in real time based on active kernel processes, priority levels, and actual usage patterns, not fixed quotas.
  • Maximized GPU Utilization: Eliminates idle cycles by continuously analyzing and optimizing resource distribution so no GPU compute sits unused.
  • Memory Sharing: Deduplicates identical model weights in VRAM across multiple workloads to save expensive GPU memory.
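The memory-sharing idea can be sketched with content-addressed storage: identical weights hash to the same key, so only the first loader pays the VRAM cost. This is a hypothetical illustration of the deduplication concept, not WoolyAI's API; the `WeightStore` class and its methods are invented for the example.

```python
# Illustrative sketch (hypothetical, NOT WoolyAI's API): content-addressed
# weight storage so identical model weights are resident only once,
# e.g. one base-model copy shared by many LoRA inference stacks.
import hashlib

class WeightStore:
    def __init__(self):
        self._blobs = {}      # digest -> weight bytes (one resident copy)
        self._refcount = {}   # digest -> number of workloads sharing it

    def load(self, weights: bytes) -> str:
        digest = hashlib.sha256(weights).hexdigest()
        if digest not in self._blobs:      # first loader pays the cost
            self._blobs[digest] = weights
        self._refcount[digest] = self._refcount.get(digest, 0) + 1
        return digest

    def bytes_resident(self) -> int:
        return sum(len(b) for b in self._blobs.values())

store = WeightStore()
base = b"\x00" * 1024    # stand-in for shared base-model weights
lora_a = b"\x01" * 16    # small per-adapter weights
lora_b = b"\x02" * 16
for w in (base, lora_a, base, lora_b):  # two stacks load the same base
    store.load(w)
print(store.bytes_resident())  # 1056 bytes resident, not 2080
```

The second load of `base` adds a reference but no memory, which is why density scales with the number of adapters rather than the number of full model copies.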

See Deployment Options for the main ways to deploy WoolyAI in your organization.