
Introduction

What is WoolyAI?

WoolyAI is a software suite that helps companies simplify GPU management, maximize GPU hardware utilization, and reduce costs. While the analogy is not exact, you can think of WoolyAI as a GPU hypervisor for ML platforms.

  • Dynamic scheduling & allocation: Measures and allocates GPU cores and VRAM at runtime across multiple simultaneous requests. Includes deterministic scheduling options.
  • Memory efficiency: VRAM dedup (e.g., shared base model weights across many LoRA adapters) to pack more models per GPU.
  • Ecosystem-friendly: Works with your existing PyTorch scripts and models - no code rewrites or porting required (see the sketch after this list).
  • Works with existing pods: No special container images needed.
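
To make the "Ecosystem-friendly" point above concrete, here is an ordinary PyTorch training step with nothing WoolyAI-specific in it; the model and data are throwaway placeholders. Per the bullets above, code like this is expected to run as-is, with GPU core and VRAM allocation handled transparently at runtime.

```python
# An ordinary PyTorch training step -- nothing WoolyAI-specific in it.
# Per the docs, scripts like this run unchanged; GPU scheduling and
# VRAM allocation are handled transparently at runtime.
import torch
import torch.nn as nn

device = "cuda" if torch.cuda.is_available() else "cpu"

model = nn.Sequential(nn.Linear(784, 256), nn.ReLU(), nn.Linear(256, 10)).to(device)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

# Placeholder batch; in practice this comes from your own DataLoader.
inputs = torch.randn(64, 784, device=device)
targets = torch.randint(0, 10, (64,), device=device)

optimizer.zero_grad()
loss = loss_fn(model(inputs), targets)
loss.backward()
optimizer.step()
print(f"loss: {loss.item():.4f}")
```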

The Problem

GPU hardware is expensive and often underutilized. Instead of statically assigning fixed GPU resources up front, WoolyAI makes allocation decisions in real time based on actual usage, with no coarse-grained time-slicing.

In short: WoolyAI is the "traffic controller" for GPU resources. Rather than giving each user their own dedicated GPU (expensive and wasteful), WoolyAI measures what each workload actually needs and intelligently shares GPU cores and memory across many users in real time, letting teams of 20-50 researchers share a small GPU pool.

Advantages

  • Lower Infrastructure Costs: Maximize utilization per GPU and reduce costs by letting fewer GPUs handle more work.
  • True GPU Concurrency: Runs multiple kernel executions in a single GPU context without time-slicing overhead, unlike traditional static partitioning (MIG/MPS), which creates rigid, underutilized segments.
  • Dynamic Resource Allocation: Real-time redistribution of GPU cores and VRAM based on active kernel processes, priority levels, and actual usage patterns - not fixed quotas.
  • Maximized GPU Utilization: Eliminates idle cycles by continuously analyzing and optimizing resource distribution, ensuring no GPU compute sits unused.
  • Memory Sharing: Deduplicates VRAM across multiple clients, so identical models are stored once in VRAM and shared across workloads, saving expensive GPU memory (see the sketch after this list).
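
To picture the memory-sharing scenario, the sketch below shows the workload pattern it targets: several independent workers (for example, separate containers) each load the same base model onto the GPU. The checkpoint path and layer sizes are placeholders, and nothing in the code is WoolyAI-specific; per the points above, the deduplication happens below the framework.

```python
# Sketch of the workload pattern that benefits from VRAM dedup: several
# independent workers each load the same base model onto the GPU. Without
# dedup, every worker pays full VRAM for the weights; per the docs, WoolyAI
# stores identical weights once. The checkpoint path is a placeholder.
import torch
import torch.nn as nn

CHECKPOINT = "shared_base_model.pt"  # hypothetical path, same file for every worker

def load_worker_model() -> nn.Module:
    model = nn.Sequential(nn.Linear(4096, 4096), nn.GELU(), nn.Linear(4096, 4096))
    model.load_state_dict(torch.load(CHECKPOINT, map_location="cpu"))
    return model.to("cuda").eval()

# Each worker process runs this independently; no coordination is needed
# on the application side -- sharing happens below the framework.
model = load_worker_model()
with torch.no_grad():
    out = model(torch.randn(1, 4096, device="cuda"))
```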

Deployment Options

WoolyAI can be deployed and used as a service in your organization, supporting multiple teams and models. There are three main ways to deploy WoolyAI:

  1. WoolyAI GPU Operator (useful for small, medium, and large scale deployments with Kubernetes available)
  2. WoolyAI Controller (useful for small, medium, and large scale deployments without Kubernetes available)
  3. Direct to WoolyAI Server (useful for small scale deployments with one GPU node)

WoolyAI GPU Operator

The WoolyAI GPU Operator handles deploying the WoolyAI Server on all GPU nodes in your cluster as well as injecting the WoolyAI libraries into the pods where you want to run your ML workloads.

  1. Setup Guide for WoolyAI GPU Operator
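
As a rough sketch of what this looks like from a user's point of view, the snippet below creates a workload pod with the official Kubernetes Python client. The woolyai.io/inject label is a hypothetical placeholder for the operator's real opt-in mechanism, which the setup guide covers; the point is that the container itself is just your standard PyTorch image.

```python
# Sketch of a workload pod from the user's point of view, created with the
# official Kubernetes Python client. The "woolyai.io/inject" label is a
# hypothetical placeholder for the operator's real opt-in mechanism (see the
# setup guide); the container itself is a standard PyTorch image.
from kubernetes import client, config

config.load_kube_config()
api = client.CoreV1Api()

pod = client.V1Pod(
    metadata=client.V1ObjectMeta(
        name="train-job",
        labels={"woolyai.io/inject": "true"},  # placeholder label
    ),
    spec=client.V1PodSpec(
        restart_policy="Never",
        containers=[
            client.V1Container(
                name="trainer",
                image="my-org/pytorch-train:latest",  # your existing image
                command=["python", "train.py"],
            )
        ],
    ),
)
api.create_namespaced_pod(namespace="ml-team", body=pod)
```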

WoolyAI Controller

The WoolyAI Controller is a web interface and router that directs WoolyAI client (ML container) execution requests to the GPU nodes running the WoolyAI Server, based on real-time GPU utilization. It does not rely on Kubernetes.

  1. Setup Guide for WoolyAI Controller
  2. Setup Guide for WoolyAI Server
  3. Setup Guide for WoolyAI Client
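
The routing idea is easier to see in miniature. The sketch below is a conceptual illustration of utilization-based routing, not WoolyAI's actual algorithm: each incoming client request goes to a GPU node with enough free VRAM and the lowest current core utilization.

```python
# Conceptual sketch of utilization-based routing (not WoolyAI's actual
# implementation): pick the GPU node with the most headroom for each
# incoming client execution request.
from dataclasses import dataclass

@dataclass
class GpuNode:
    name: str
    core_utilization: float  # 0.0 - 1.0, sampled in real time
    vram_free_gb: float

def route_request(nodes: list[GpuNode], vram_needed_gb: float) -> GpuNode:
    """Return the least-loaded node that can fit the request's VRAM."""
    candidates = [n for n in nodes if n.vram_free_gb >= vram_needed_gb]
    if not candidates:
        raise RuntimeError("no GPU node has enough free VRAM")
    return min(candidates, key=lambda n: n.core_utilization)

nodes = [
    GpuNode("gpu-node-1", core_utilization=0.82, vram_free_gb=6.0),
    GpuNode("gpu-node-2", core_utilization=0.35, vram_free_gb=18.0),
]
print(route_request(nodes, vram_needed_gb=12.0).name)  # -> gpu-node-2
```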

Direct to WoolyAI Server

Direct to WoolyAI Server is a simple way to deploy WoolyAI on a single GPU node: you run the WoolyAI Server container and your ML container on the same machine, with no Kubernetes or Controller required.

  1. Setup Guide for Direct to WoolyAI Server
  2. Setup Guide for Direct to WoolyAI Server Client
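
A minimal sketch of that single-node layout, written with the Docker SDK for Python, is shown below. The image names, container names, port, and the WOOLY_SERVER_URL environment variable are hypothetical placeholders; the setup guides above have the real container names and options.

```python
# Minimal single-node sketch using the Docker SDK for Python. Image names,
# container names, and the WOOLY_SERVER_URL variable are placeholders --
# see the setup guides above for the real values.
import docker

client = docker.from_env()

# 1. WoolyAI Server container, with access to the node's GPUs.
client.containers.run(
    "woolyai/server:latest",           # placeholder image name
    detach=True,
    name="woolyai-server",
    network_mode="host",
    device_requests=[docker.types.DeviceRequest(count=-1, capabilities=[["gpu"]])],
)

# 2. Your existing ML container on the same machine, pointed at the server.
client.containers.run(
    "my-org/pytorch-workload:latest",  # your existing ML image
    detach=True,
    name="ml-workload",
    network_mode="host",
    environment={"WOOLY_SERVER_URL": "http://localhost:9999"},  # placeholder
)
```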