
Introducing TensorPool Object Storage

Globally distributed S3-compatible object storage with intelligent caching that follows your GPUs. Up to 20x lower cross-region latency than Cloudflare R2, with up to 3.5x higher throughput.

Tycho Svoboda, Cofounder & CEO
March 31, 2026
5 min read

Training across regions shouldn't mean waiting on data. Today we're launching TensorPool Object Storage: S3-compatible, globally distributed storage that caches your data in the same region as your GPUs.

The problem

If you've ever had your data in one region and your GPUs in another, you already know. Your dataset lives in us-east-1, but the only available B200s are in the EU. Your training loop stalls on cross-Atlantic latency, and your $6/hr GPUs are twiddling their thumbs waiting for data.

The workarounds are all bad -- manually syncing data across regions, maintaining duplicate buckets, pre-fetching data. You end up managing infrastructure instead of training models.

How it works

TensorPool Object Storage, built in collaboration with Accelerated Cloud Storage, puts your data where your GPUs are. Behind the scenes, our system eagerly distributes your data across all TensorPool cluster regions with eventual consistency.

This means that:

  • Guaranteed baseline latency: data is always served from the same region as your cluster, with strong consistency within a region.
  • 99.99% of objects are globally available within 15 minutes. Small files propagate in milliseconds.
  • All with no ingress/egress fees and unlimited storage capacity!

Wherever you launch a cluster, your data is already nearby.
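Because cross-region replication is eventually consistent, a reader in another region can briefly see a 404 right after an upload. A minimal poll-until-visible sketch (the helper name and retry policy are ours, not part of the TensorPool API; in practice `head` would wrap a `boto3` `head_object` call like the one in the Getting Started section):

```python
import time

def wait_until_available(head, retries=30, delay=0.5):
    """Poll a head-object callable until the object is readable.

    `head` should raise an exception (e.g. a 404 ClientError) while the
    object is still propagating and return normally once it is visible.
    Returns True once visible, False if retries are exhausted.
    """
    for _ in range(retries):
        try:
            head()
            return True
        except Exception:
            time.sleep(delay)
    return False

# Hypothetical usage with a boto3 client `s3` and the bucket from below:
# wait_until_available(lambda: s3.head_object(Bucket="dataset", Key="dataset.tar"))
```

For objects that matter at startup (configs, tokenizers), a check like this at the top of a training job is cheap insurance; large datasets uploaded well in advance will already have propagated.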

Benchmarks: TensorPool Object Storage vs. Cloudflare R2

We benchmarked TensorPool Object Storage against Cloudflare R2 from a TensorPool H100 cluster in us-east, both against an R2 bucket in the same region (Eastern North America, ENAM) and against one cross-region (Asia-Pacific, APAC).

Small objects (1KB-100KB)

Metric                   TensorPool Object Storage   R2 same-region (ENAM)   R2 cross-region (APAC)
Upload latency (P50)     44-64ms                     126-450ms               850-1,012ms
Download latency (P50)   52-67ms                     58-124ms                552-586ms

TensorPool uploads are 2.9-7x faster than R2 in the same region. Downloads are 1.1-2.2x faster. In cross-region scenarios, the gap widens to 19x faster uploads and 11x faster downloads.

When you're loading thousands of small files per epoch, this latency adds up fast. We've seen the biggest wins from teams working with biological, geospatial, and graph datasets where that's the norm.
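To put the small-object numbers in perspective, a rough back-of-the-envelope: assume 10,000 sequential reads per epoch (a figure we picked for illustration, not a benchmark result) and count only the low end of each P50 download latency from the table. Real data loaders pipeline requests, so treat these as upper bounds on latency overhead per epoch.

```python
FILES_PER_EPOCH = 10_000  # illustrative assumption, not a measured figure

def epoch_latency_seconds(p50_ms, files=FILES_PER_EPOCH):
    """Time spent on request latency alone, reading files one at a time."""
    return files * p50_ms / 1000

tensorpool      = epoch_latency_seconds(52)   # TensorPool, low end of P50 range
r2_same_region  = epoch_latency_seconds(58)   # R2 ENAM, low end
r2_cross_region = epoch_latency_seconds(552)  # R2 APAC, low end
```

Even at the low ends, sequential cross-region reads add well over an hour of pure latency per epoch, which is why moving the cache to the GPU's region matters more for small-file workloads than raw throughput does.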

Large files (5-10GB)

Metric                TensorPool Object Storage   R2 same-region (ENAM)   R2 cross-region (APAC)
Upload throughput     433-451 MB/s                129-200 MB/s            106-117 MB/s
Download throughput   801-873 MB/s                323-440 MB/s            283-306 MB/s

Uploads run at 2.2-3.5x the throughput of same-region R2. Downloads are 2.0-2.5x faster.

A 10GB model checkpoint download that takes ~23 seconds from R2 in the same region finishes in ~12 seconds on TensorPool Object Storage.
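That checkpoint figure is just size divided by throughput. A quick sanity check (the 833 and 435 MB/s inputs are rough midpoints of the measured download ranges above, and we convert at 1 GB = 1,000 MB to match the tables' units):

```python
def transfer_seconds(size_gb, throughput_mb_per_s):
    """Idealized transfer time: size over sustained throughput."""
    # 1 GB treated as 1,000 MB, matching the MB/s units in the tables
    return size_gb * 1_000 / throughput_mb_per_s

tensorpool = transfer_seconds(10, 833)  # ~12 s at TensorPool's measured rate
r2_enam    = transfer_seconds(10, 435)  # ~23 s at same-region R2's rate
```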

Consistency

Throughput is one thing. Consistency is what actually matters for distributed training -- unpredictable I/O means unpredictable training times and wasted GPU cycles.

Metric                        TensorPool Object Storage   R2 same-region (ENAM)   R2 cross-region (APAC)
Upload std. deviation (σ)     1.1-1.2s                    9.1-18.0s               4.3-4.5s
Download std. deviation (σ)   0.006-0.03s                 1.3-3.6s                0.4-0.9s

TensorPool uploads are 8-15x more consistent than same-region R2. Downloads are 43-600x more consistent. Predictable I/O means predictable training times and GPUs that aren't sitting around waiting on storage.
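For the curious, the "43-600x" download figure falls straight out of the σ values above: it is the ratio of same-region R2's spread to TensorPool's, taken at both ends of the ranges.

```python
tp_sigma = (0.006, 0.03)  # TensorPool download sigma range, seconds
r2_sigma = (1.3, 3.6)     # same-region R2 download sigma range, seconds

low  = r2_sigma[0] / tp_sigma[1]  # best R2 vs worst TensorPool -> ~43x
high = r2_sigma[1] / tp_sigma[0]  # worst R2 vs best TensorPool -> ~600x
```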

Getting started

All TensorPool Organizations can get started today!

# Enable object storage for your organization
tp object-storage enable

# Create a bucket
tp object-storage bucket create dataset

# Get your S3-compatible credentials
tp object-storage credentials

Then use any S3-compatible tools to upload data:

import boto3

s3 = boto3.client(
    "s3",
    endpoint_url="<your-endpoint>",
    aws_access_key_id="<your-access-key>",
    aws_secret_access_key="<your-secret-key>",
    region_name="global"
)

# Upload your dataset
s3.upload_file("dataset.tar", "dataset", "dataset.tar")

For the full usage guide, check out our Object Storage documentation.

When you spin up a TensorPool Cluster in any region, your data is already cached nearby. No extra steps.

Stop choosing GPUs based on where your data lives.

Ready to Get Started?

Sign up for TensorPool and start building on powerful GPU infrastructure.