Glossary
A
AI Inference – The process of running a trained AI model to generate predictions or outputs.
AI Training – The process of teaching an AI model using large datasets and powerful GPUs to improve its ability to recognize patterns.
ASIC (Application-Specific Integrated Circuit) – Specialized hardware designed for a specific task. Nebula AI focuses on GPU-based compute, not ASICs.
Auto-Pricing – A feature that dynamically adjusts GPU rental prices based on market demand and resource availability.
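The glossary does not specify how auto-pricing is computed, but the idea of demand-based rate adjustment can be sketched as follows. The function name, the utilization-based multiplier, and the bounds are all illustrative assumptions, not Nebula AI's actual formula.

```python
def auto_price(base_rate: float, utilization: float,
               floor: float = 0.5, ceiling: float = 3.0) -> float:
    """Illustrative demand-based pricing: scale a base hourly rate by
    current fleet utilization (0.0 = idle, 1.0 = fully booked), clamped
    to a floor and ceiling. Hypothetical sketch, not the platform's formula.
    """
    multiplier = 0.5 + 2.5 * utilization  # 0.5x when idle, 3.0x at peak demand
    return base_rate * max(floor, min(ceiling, multiplier))

print(auto_price(0.40, 0.0))  # idle fleet: discounted rate
print(auto_price(0.40, 1.0))  # peak demand: rate capped at the ceiling
```

The clamp keeps prices from collapsing to zero or spiking without bound, which is the usual design concern for any dynamic-pricing rule.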
B
Blockchain – A decentralized digital ledger that records transactions securely and transparently. Used in Nebula AI for payments and smart contracts.
Bulk Renting – The ability to rent multiple GPUs at once for larger workloads, such as multi-GPU AI training or parallel computing.
C
CUDA (Compute Unified Device Architecture) – NVIDIA’s GPU programming framework that enables parallel processing for AI, simulations, and high-performance computing (HPC).
Cryptographic Computation – Using GPUs to process zero-knowledge proofs (ZKPs), homomorphic encryption, and blockchain consensus mechanisms.
D
Decentralized Compute Marketplace – A system where GPU owners can rent out their hardware, and AI developers can lease GPU power on demand, without relying on centralized cloud providers like AWS or Google Cloud.
Deep Learning – A subset of machine learning that uses neural networks to analyze large amounts of data and make intelligent predictions.
dApp (Decentralized Application) – A blockchain-based application that runs on smart contracts instead of centralized servers. Nebula AI’s platform operates as a dApp for GPU rentals.
Distributed Training – The process of training AI models across multiple GPUs or servers to speed up learning and improve performance.
E
Ephemeral Containers – Temporary GPU computing environments that are automatically erased after rental completion, ensuring data privacy and security.
ETH (Ethereum) – A blockchain used for smart contracts and decentralized finance (DeFi). Nebula AI supports ETH for transactions but converts it to $NAI for platform payments.
F
Federated Learning – A machine learning approach where AI models are trained across multiple devices without sharing raw data, improving privacy.
FP16 / FP32 / FP64 (Floating Point Precision) – Different levels of computational accuracy in GPU processing:
FP16 (Half-Precision): Used for AI inference and deep learning to reduce memory usage.
FP32 (Single-Precision): Standard for AI training and high-performance computing.
FP64 (Double-Precision): Required for scientific simulations and advanced calculations.
G
GPU (Graphics Processing Unit) – A specialized processor that accelerates parallel computing, used for AI, deep learning, 3D rendering, and scientific simulations.
GPU Clusters – A group of multiple GPUs working together to handle large workloads.
GPU Mining – The process of using GPUs to mine cryptocurrencies like Kaspa, Ergo, and Radiant.
H
High-Performance Computing (HPC) – The use of powerful computing resources, including GPUs, to process large-scale simulations, AI training, and cryptographic calculations.
HiveOS – A popular operating system for GPU mining that allows users to configure and optimize their rigs.
I
Inference Optimization – The process of making AI models run faster and more efficiently on GPUs, improving real-time response speeds.
Instance Termination – The automatic shutdown and data wipe of a rented GPU after session completion to ensure privacy and security.
L
LLM (Large Language Model) – AI models trained on massive text datasets to generate human-like text, answer questions, and assist with AI-powered applications (e.g., GPT-4, Llama, DeepSeek).
Locked Staking – A staking mechanism where users lock up their tokens ($NAI) for a fixed period to earn higher rewards.
M
Machine Learning (ML) – A branch of AI where computers learn from data and improve their performance without explicit programming.
Max Fair Price (MFP) – A pricing mechanism that ensures GPU rentals remain competitive and fair based on demand.
N
Neural Networks – AI architectures modeled after the human brain, used in deep learning to recognize patterns in images, text, and data.
NFT Compute Access (Coming Soon) – The ability to use NFTs as keys for accessing GPU rentals, enabling subscription-based compute power.
O
On-Demand Rentals – A GPU rental type where users get guaranteed access to a GPU for a fixed period, preventing interruptions.
Open-Source AI – AI models and frameworks that are publicly available for use, modification, and research (e.g., Stable Diffusion, Llama, Falcon).
P
Parallel Processing – A method where multiple GPU cores process tasks simultaneously, increasing computational efficiency.
Proof of Holding (PoH) (Future Feature) – A potential staking mechanism that rewards users for holding $NAI in their wallets.
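Parallel processing, defined above, can be sketched with a CPU analogy: a thread pool maps one function over many inputs concurrently, while a GPU runs thousands of such lanes in hardware. This is a stdlib illustration of the concept, not GPU code.

```python
from concurrent.futures import ThreadPoolExecutor

def scale(x: float) -> float:
    """One independent 'lane' of work; a GPU executes thousands of
    these simultaneously across its cores."""
    return x * 2.0

data = list(range(8))
with ThreadPoolExecutor(max_workers=4) as pool:
    # map() distributes the elements across the worker threads
    results = list(pool.map(scale, data))
print(results)
```

The key property is that each element is processed independently, so the work divides cleanly across however many workers (or GPU cores) are available.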
R
Ray Tracing – A rendering technique that simulates light behavior in real time to create photo-realistic graphics in gaming and CGI.
Rewards Pool – A staking and earnings system where GPU hosts and renters earn additional $NAI for contributing to the platform.
S
Smart Contracts – Self-executing contracts on a blockchain that automate payments, enforce rental agreements, and ensure security.
Spot Rentals – A GPU rental type that offers lower costs but allows other users to outbid and take over the session.
Staking – The act of locking $NAI tokens to earn passive rewards and participate in platform incentives.
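The staking entry above can be made concrete with a simple non-compounding reward calculation. The formula shape (principal × rate × time) is standard, but the APR used here is hypothetical, not a rate published by Nebula AI.

```python
def staking_reward(principal: float, apr: float, lock_days: int) -> float:
    """Simple (non-compounding) staking reward: principal x annual rate,
    prorated by the lock duration. The APR is a hypothetical example."""
    return principal * apr * (lock_days / 365)

# e.g. 1,000 $NAI locked for 90 days at an assumed 12% APR
reward = staking_reward(1000, 0.12, 90)
print(round(reward, 2))
```

Locked staking typically offers a higher APR than flexible staking in exchange for the fixed lock period, which is why the duration enters the formula directly.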
T
Tensor Cores – Specialized GPU cores designed by NVIDIA for AI acceleration, improving deep learning and matrix operations.
TPU (Tensor Processing Unit) – Google's AI-specific hardware designed for neural network training and inference.
V
VRAM (Video RAM) – GPU memory that determines how much data a GPU can process at once, crucial for AI models, rendering, and large datasets.
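A rough rule of thumb ties VRAM to the precision entries earlier in this glossary: the memory needed just to hold a model's weights is the parameter count times the bytes per parameter. The sketch below is illustrative and deliberately ignores activations, KV caches, and framework overhead, which add to the real requirement.

```python
def model_vram_gb(n_params: float, bytes_per_param: int) -> float:
    """Rough weight-only VRAM footprint: parameters x bytes per parameter.
    Ignores activations, optimizer state, and framework overhead."""
    return n_params * bytes_per_param / 1e9

# A 7-billion-parameter LLM stored in FP16 (2 bytes per weight)
print(model_vram_gb(7e9, 2))  # 14.0 GB for the weights alone
```

The same model in FP32 would need twice the VRAM, which is one reason FP16 is the default for inference on rented GPUs.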
W
Web3 Integration – The ability to connect decentralized applications (dApps), wallets, and smart contracts within Nebula AI.