
Choosing your Rental

Understanding Your Compute Needs

Before renting a GPU, consider the type of workload you’re running. Different tasks have different GPU requirements, and selecting the wrong hardware can lead to wasted resources or suboptimal performance.

Common GPU Use Cases:

| Workload Type | Recommended GPU Type | Key Factors |
| --- | --- | --- |
| AI Training (Deep Learning, LLMs) | A100, H100, RTX 4090 | High VRAM, Tensor Cores, multi-GPU support |
| AI Inference (Model Deployment, Fine-Tuning) | RTX 3090, RTX 4090, A100 | Lower VRAM needs, but high-precision compute required |
| 3D Rendering (Blender, Unreal, AI-Generated Art) | RTX 3090, RTX 4090, A100 | High CUDA core count, VRAM, and TensorRT support |
| Data Science (Simulations, High-Performance Computing) | RTX 4090, A100, RTX 3090 | High FP32/FP64 performance, large dataset processing |
| Video Processing (Encoding, AI Upscaling) | RTX 3090, RTX 4090, A6000 | Fast memory bandwidth, CUDA acceleration |
| Blockchain & Cryptographic Tasks (ZK-Proofs, Computation) | RTX 4090, A100 | High core count, memory bandwidth, parallel processing |

Each of these tasks requires different levels of compute power, memory, and bandwidth, so choosing the right GPU can significantly impact execution time and costs.
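To translate the table above into a concrete VRAM target, a common back-of-the-envelope rule is roughly 16 bytes per parameter for mixed-precision training with Adam, and roughly 2 bytes per parameter for fp16 inference. This is a general rule of thumb, not a Nebula AI formula, and it ignores activations, batch size, and KV cache:

```python
def training_vram_gb(params_billion: float, bytes_per_param: float = 16.0) -> float:
    """Rough VRAM to *train* a model with Adam in mixed precision:
    ~16 bytes/parameter (fp16 weights + gradients, fp32 master weights,
    two fp32 optimizer states) -- before activations and batch size."""
    return params_billion * bytes_per_param

def inference_vram_gb(params_billion: float, bytes_per_param: float = 2.0) -> float:
    """Rough VRAM to *serve* a model in fp16: ~2 bytes/parameter,
    before the KV cache."""
    return params_billion * bytes_per_param

# A 7B-parameter model: ~112 GB to train (multi-GPU A100/H100 territory),
# but only ~14 GB for fp16 inference (fits a single RTX 3090/4090).
print(training_vram_gb(7))   # 112.0
print(inference_vram_gb(7))  # 14.0
```

This is why the same model can appear in both the "AI Training" and "AI Inference" rows with very different hardware recommendations.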


Browsing GPUs in the Marketplace

Once you understand your workload, you can browse available GPUs in Nebula AI’s marketplace. The platform provides detailed specifications for each listed GPU, including:

  • GPU Model & Generation – Determines compute power & efficiency (e.g., RTX 4090 vs. RTX 3090).

  • VRAM Size – Important for deep learning, rendering, and large dataset processing.

  • Performance Benchmarks – FLOPs, Tensor core efficiency, and past rental performance.

  • Availability & Location – GPUs are hosted in specific geographic regions; choosing one close to you reduces latency.

  • Rental Price & Type – Choose between Spot vs. On-Demand pricing models.

Clicking on a listing provides a deeper breakdown, helping you make a data-driven decision.
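Filtering listings by these specifications can be sketched as below. The field names (`vram_gb`, `price_per_hr`, etc.) and sample values are illustrative only, not the marketplace's real schema:

```python
# Hypothetical listing records; fields and values are illustrative.
listings = [
    {"model": "RTX 4090", "vram_gb": 24, "price_per_hr": 0.45, "region": "us-east", "uptime_pct": 99.2},
    {"model": "A100",     "vram_gb": 80, "price_per_hr": 1.60, "region": "eu-west", "uptime_pct": 98.7},
    {"model": "RTX 3090", "vram_gb": 24, "price_per_hr": 0.25, "region": "us-east", "uptime_pct": 95.1},
]

def shortlist(listings, min_vram_gb, max_price_per_hr):
    """Keep only listings meeting the workload's needs, cheapest first."""
    ok = [l for l in listings
          if l["vram_gb"] >= min_vram_gb and l["price_per_hr"] <= max_price_per_hr]
    return sorted(ok, key=lambda l: l["price_per_hr"])

for l in shortlist(listings, min_vram_gb=24, max_price_per_hr=0.50):
    print(l["model"], l["price_per_hr"])
# RTX 3090 0.25
# RTX 4090 0.45
```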


Comparing Spot vs. On-Demand Rentals

Nebula AI provides two rental options:

On-Demand Rentals (Guaranteed, Higher Cost)

  • Guaranteed access to the GPU for the selected duration.

  • Fixed pricing, ensuring uninterrupted compute time.

  • Best for long AI training, stable inference workloads, and critical projects.

  • More expensive than Spot Rentals but ensures job completion without interruptions.

Spot Rentals (Lower Cost, Flexible Availability)

  • Rent at discounted prices, but can be outbid by other users.

  • Ideal for non-time-sensitive tasks, such as batch processing or exploratory workloads.

  • Can be interrupted if another user bids a higher price, requiring workload resumption.

If your workload requires consistent uptime, go for On-Demand Rentals. If you’re flexible and want to minimize costs, Spot Rentals can save up to 40%.
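The trade-off can be estimated numerically. The sketch below assumes a 40% spot discount (the maximum quoted above) and models each interruption as a fixed amount of re-done compute; actual discounts and interruption rates vary by listing:

```python
def on_demand_cost(rate_per_hr: float, hours: float) -> float:
    """Fixed-rate cost: pay the full rate for every hour."""
    return rate_per_hr * hours

def expected_spot_cost(rate_per_hr: float, discount: float, hours: float,
                       interruptions: int, redo_hours_each: float) -> float:
    """Spot rate = on-demand rate * (1 - discount); each interruption
    wastes redo_hours_each hours of progress that must be re-run."""
    spot_rate = rate_per_hr * (1 - discount)
    return spot_rate * (hours + interruptions * redo_hours_each)

# 100-hour job at $1.00/hr with a 40% spot discount, expecting
# 3 interruptions that each cost 2 hours of lost progress:
print(on_demand_cost(1.00, 100))                            # 100.0
print(round(expected_spot_cost(1.00, 0.40, 100, 3, 2.0), 2))  # 63.6
```

Even with several interruptions, spot pricing wins for restartable workloads; the gap closes quickly if your job cannot checkpoint and must restart from scratch.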


Evaluating GPU Host Ratings & Reliability

Since Nebula AI is a decentralized GPU marketplace, different GPU providers (hosts) have varying levels of uptime and reliability. Before renting, check:

  • Host Uptime % – Indicates how consistently the provider keeps their GPU available.

  • Rental History – Shows previous rental completions and renter satisfaction.

  • User Ratings – High ratings suggest a trusted host with stable performance.

  • Connection Speed – Some hosts have faster internet speeds, which is important for real-time AI inference.

Prioritizing high-uptime hosts ensures you avoid disruptions and maintain stable compute performance.
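One way to weigh these signals against each other is a simple composite score. The weights and the saturation point below are illustrative choices, not Nebula AI's actual ranking:

```python
def host_score(uptime_pct: float, rating: float, completed_rentals: int) -> float:
    """Illustrative composite score in [0, 1]: weight uptime most heavily,
    then star rating (out of 5), then rental history (saturating at 50
    completed rentals so established hosts aren't over-rewarded)."""
    history = min(completed_rentals, 50) / 50
    return 0.5 * (uptime_pct / 100) + 0.3 * (rating / 5) + 0.2 * history

# A long-standing, highly rated host vs. a newer, shakier one:
print(host_score(99.5, 4.8, 120) > host_score(92.0, 4.0, 10))  # True
```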


Choosing the Right GPU for Long-Term vs. Short-Term Rentals

  • Short-Term Rentals (<24 hours) – Best for quick tests, benchmarking, and exploratory workloads.

  • Medium-Term Rentals (1-7 days) – Ideal for training small AI models, rendering, and iterative deep learning.

  • Long-Term Rentals (>7 days) – Used for extended AI training, large-scale simulations, and continuous workloads.

For longer rental durations, consider negotiating bulk pricing with hosts or taking advantage of automated pricing discounts (a planned feature).
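The effect of duration on total cost can be sketched as follows. The 15% discount for rentals over 7 days is a hypothetical negotiated rate, used only to illustrate the calculation:

```python
HOURS_PER_DAY = 24

def rental_cost(rate_per_hr: float, days: float, bulk_discount: float = 0.15) -> float:
    """Total rental cost; applies a *hypothetical* negotiated bulk
    discount for long-term rentals (> 7 days)."""
    cost = rate_per_hr * days * HOURS_PER_DAY
    if days > 7:
        cost *= 1 - bulk_discount
    return cost

# Medium-term (3 days) vs. long-term (14 days) at $0.50/hr:
print(rental_cost(0.50, 3))            # 36.0
print(round(rental_cost(0.50, 14), 2))  # 142.8
```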


Selecting Additional Features (If Needed)

Some GPU listings may provide extra services to improve the rental experience:

  • Pre-installed AI/ML Environments – PyTorch, TensorFlow, and Jupyter Notebook come pre-configured.

  • Remote Storage Options – Some hosts offer persistent storage for long-term workloads (upcoming feature).

  • High-Speed Networking – Low-latency connections for real-time applications.

  • Custom Software Configurations – Tailored setups for specific AI models or research use cases.

If you need plug-and-play GPU access, prioritize listings that include pre-configured software environments.


Finalizing Your Selection & Renting the GPU

Once you’ve identified the best GPU for your task, proceed with the rental process:

1. Confirm Pricing & Duration – Ensure you’ve selected Spot or On-Demand based on your needs.

2. Review Host Reliability – Check uptime, network speeds, and past renter feedback.

3. Check GPU Specs One Last Time – Make sure VRAM, CUDA cores, and compute power match your workload.

4. Click "Rent Now" & Confirm Payment – Approve the on-chain transaction in your connected wallet.

5. Deploy Workloads Immediately – Access the GPU via SSH, Jupyter Notebook, or API.
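The pre-payment checks above can be bundled into one last sanity check before approving the transaction. Field names are illustrative, not the platform's real schema:

```python
def ready_to_rent(listing: dict, need_vram_gb: int, budget_per_hr: float,
                  min_uptime_pct: float = 97.0):
    """Final pre-payment sanity check mirroring steps 1-3 above.
    Returns (all_passed, per-check detail); fields are hypothetical."""
    checks = {
        "specs_match":   listing["vram_gb"] >= need_vram_gb,
        "within_budget": listing["price_per_hr"] <= budget_per_hr,
        "host_reliable": listing["uptime_pct"] >= min_uptime_pct,
    }
    return all(checks.values()), checks

ok, detail = ready_to_rent(
    {"vram_gb": 24, "price_per_hr": 0.45, "uptime_pct": 99.2},
    need_vram_gb=24, budget_per_hr=0.50)
print(ok)  # True
```

If any check fails, the `detail` dict shows which requirement to revisit before paying.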
