A space data center consists of a satellite or group of satellites that handle data processing, storage, and networking, placed in orbit around Earth or other celestial bodies. It expands terrestrial digital infrastructure into space, leveraging environmental and orbital conditions that are hard or costly to reproduce on the ground. Growing interest in space data centers is fueled by two main factors: the recent push to build larger AI data centers for training bigger large language models (LLMs) that could lead to artificial general intelligence, and continuous improvements in the spatial, temporal, and spectral resolutions of remote sensing instruments, which produce significantly larger data volumes and create bottlenecks in the space-based measurement and ground processing system.
While current AI data centers operate at around 100 megawatts (MW), future data centers will scale to 1 gigawatt (GW) or more, which is equivalent to the annual electricity consumption of about 825,000 households. This places significant strain on existing infrastructure and is not sustainable. Opposition to AI data centers has led to calls for a National Moratorium on New Data Centers. As a result, space data centers have gained popularity as alternatives to ground-based data centers because they can access virtually unlimited solar energy and natural cooling. In addition, space data centers create a space ecosystem involving space data, networks, and computer servers, which is set to introduce a new paradigm for processing space-based instrument data, potentially reducing latency in delivering actionable information to users. For example, faster prediction and monitoring of wildfires and tornadoes can significantly enhance public safety.
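The households comparison can be checked with a quick calculation. The per-household figure below is an assumption (roughly the U.S. average annual consumption; it varies by country and year), not a number from the text:

```python
# Sanity check of the 1 GW-to-households comparison.
# Assumption: an average household uses about 10,600 kWh per year.

GW_TO_KW = 1_000_000           # 1 gigawatt expressed in kilowatts
HOURS_PER_YEAR = 8_760         # 24 hours * 365 days
KWH_PER_HOUSEHOLD_YEAR = 10_600

# Energy delivered by a 1 GW load running continuously for one year.
annual_kwh = GW_TO_KW * HOURS_PER_YEAR

households = annual_kwh / KWH_PER_HOUSEHOLD_YEAR
print(f"{households:,.0f} households")  # roughly 826,000, close to the cited 825,000
```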
A space data center faces two main technical challenges compared to one on Earth: the high-radiation environment in space can destabilize computer chips, and thermal management is critical in a vacuum, where heat can only be shed by radiation. The CPUs and GPUs in space data centers are the same as those used in terrestrial data centers and are not designed for the high-radiation environment in space. Research on space computing began in 2019 with a team in China. They developed a multi-mode redundancy architecture in which multiple computing units back each other up and compare outputs in real time, using redundancy to enable commercial chips built on advanced process nodes to operate in orbit. They also implemented a hybrid active-passive cooling system for thermal management, using a fluid loop between the chips and the radiator to dissipate heat. These innovations address the challenges of high radiation and thermal management for computer chips. Starcloud, an Nvidia-supported startup, is developing a lightweight, deployable radiator with a very large area, which would be by far the largest radiator flown in space, radiating primarily toward deep space.
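The core idea behind multi-mode redundancy, several units computing the same task and comparing outputs so a radiation-induced fault is outvoted, can be sketched as a simple majority vote. This is an illustrative sketch only; the three-unit setup and function name are assumptions, not the Chinese team's actual design:

```python
from collections import Counter

def majority_vote(outputs):
    """Return the value agreed on by a strict majority of redundant
    computing units, or None if no majority exists."""
    value, count = Counter(outputs).most_common(1)[0]
    return value if count > len(outputs) // 2 else None

# Three redundant units run the same computation; one suffers a bit flip.
outputs = [42, 42, 40]          # hypothetical results from three units
print(majority_vote(outputs))   # 42: the corrupted unit is outvoted
```

With three units this is the classic triple modular redundancy pattern; adding more units tolerates more simultaneous upsets at the cost of extra power and mass.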
Initial deployment of space data centers has begun in both China and the U.S. A 12-satellite constellation called ‘Tiansuan 01’ was launched in China on May 14, 2025. Each satellite can perform up to 774 trillion operations per second, and the satellites are connected via high-speed laser links with data transfer rates of up to 100 gigabits per second. The constellation’s combined computing power is 5 peta operations per second (POPS), with 30 terabytes of onboard storage. The satellites carry an AI model with 8 billion parameters, enabling direct onboard processing of satellite instrument data. ‘Tiansuan 01’ has focused on satellite instrument data processing and space network testing, and has demonstrated that space-based processing of remote sensing data significantly reduces the latency of delivering final data products to users while saving space-to-ground downlink bandwidth.
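The bandwidth argument is easy to see with back-of-the-envelope numbers. Every figure below (scene size, product size, downlink rate) is a hypothetical assumption for illustration, not a Tiansuan 01 specification:

```python
# Why onboard processing saves downlink bandwidth: downlinking a small
# final product takes far less time than downlinking the raw scene.
# All numbers are assumed for illustration.

RAW_SCENE_GB = 50      # assumed raw remote sensing scene
PRODUCT_GB = 0.5       # assumed final data product after onboard processing
DOWNLINK_GBPS = 1.2    # assumed space-to-ground radio downlink rate

def downlink_seconds(size_gb, rate_gbps):
    # Convert gigabytes to gigabits, then divide by the link rate.
    return size_gb * 8 / rate_gbps

raw_time = downlink_seconds(RAW_SCENE_GB, DOWNLINK_GBPS)
product_time = downlink_seconds(PRODUCT_GB, DOWNLINK_GBPS)
print(f"raw: {raw_time:.0f} s, processed: {product_time:.1f} s")
```

Under these assumptions the processed product downlinks about 100 times faster, which is where both the latency reduction and the bandwidth savings come from.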
The development of U.S. space data centers has focused on the inference and training of LLMs. Starcloud recently launched the Starcloud-1 satellite, equipped with an Nvidia H100 GPU, a chip about 100 times more powerful than any previously sent to space. The satellite operated successfully and queried Google's Gemma LLM in orbit, demonstrating the feasibility of running large AI models in space.
More space data center projects are being planned in both the U.S. and China: Starcloud plans to launch its more powerful Starcloud-2 commercial satellite in 2026 and will incorporate Nvidia's Blackwell platform into its next launch scheduled for October 2026. Google has a Suncatcher project with a planned prototype launch in 2027. SpaceX, Blue Origin, and startup AetherFlux are also working on their own space data center projects.
‘Tiansuan 02’ has been planned as the next phase of the ‘Tiansuan 01’ constellation in China. The goal is to establish a true space supercomputing center by deploying a space-based Vanka superintelligent agent cluster with up to 10 EOPS of computing power in orbit. The cluster comprises three core modules: a 100 MW-class energy capsule, a 10 Tbps-class communication cabin for high-speed data transfer, and a 10 EOPS computing cabin. It supports modular assembly, deployment, and replacement. Additionally, the Astro-future Institute of Space Technology, a research institute based in Beijing, plans to build its own space computing center in low Earth orbit at altitudes of 700 to 800 kilometers within the next five years, with its first experimental satellite, “Chenguang-1,” scheduled for launch early next year. The constellation’s computing power is projected to reach 1,000 PFlops within three years and grow to 400,000 PFlops by 2030, matching the combined capacity of all existing ground data centers in China. It is expected to support applications such as 6G technology, autonomous driving, and weather forecasting.
Although the U.S. and China emphasize different aspects of space data centers, these systems are no longer just ideas on paper; they are gradually becoming part of a future space AI infrastructure that complements ground-based data centers and supports upcoming space missions.