The Dumbest Thing I’ve Seen This Week

Given that it is 2025, the dumbest thing I’ve seen this week faces some stiff competition, but “AI datacenters in space” is some impressive idiocy. I’ve seen a few breathless media reports on how AI companies are planning to launch entire datacenters into space. Some of these articles point to a single H100 GPU that was used to train an LLM while in orbit. Obviously, humanity has the capacity to launch a single GPU into orbit, and small LLMs can be trained on an H100. However, that’s a far cry from a datacenter, and so it is somewhat surprising that this is being hyped by companies who are selling the idea of massive datacenters in orbit.

To be blunt, the entire stupid idea is a giant middle finger to multiple fundamentals of physics, and the fact that it is apparently being taken seriously by our tech lords, mainstream journalism, and political leaders is not just a damning indictment of the ridiculous amount of money chasing bad ideas in the tech/LLM/hype sector that has eaten the American economy, political power centers, and people who really should know better, but yet another demonstration that the people who built their economic empires on a claim of STEM-based rigor and quantitative genius either can’t do basic physics or know that no one out there who matters is going to call them on it.

Let’s investigate why this is a bad idea.

To set the stage, let me start with a statement: the United States aerospace and technology sectors likely have the capacity to launch a GPU-based datacenter into orbit. That is: sure, you can do this. However, what I’m going to argue is that there is *no good reason* to do this, and the supposed advantages being flogged by the people behind it do not make sense. The only reason to launch a shitload of GPUs into orbit is to say you did it, because they sure as hell aren’t going to add meaningful capacity to the datacenters on Earth, will cost far more, and will have short usable lifespans. However, despite the US ceasing meaningful investment in basic science, in training the next generation of engineers, medical doctors, and researchers, in hospitals, clean energy, transportation infrastructure, or lifesaving low-cost interventions around the world, the tech sphere has hilariously stupid amounts of money that it could use to launch GPUs into space, if that’s what it so desires. After all, Meta lost *$70 billion* on a legless virtual reality space that no one wanted. GPUs in space is yet another way we can set money on fire instead of making real advancements.

To investigate the physical limitations faced by a datacenter in orbit, I will use the parameters I can infer from this whitepaper.

In particular, for my purposes, my “spherical cow” orbital datacenter will assume:

  • a datacenter that pulls 1 GW of power. (I will also run the numbers for a not-insane datacenter of 10 MW consumption, which will appear in parentheses throughout.)

  • a low-Earth, Sun-synchronous orbit.

The problems that I want to walk through are:

  • Launch costs

  • Maintenance

  • Power requirements

  • Security

  • Cooling

I’m going to spend most of my time on the cooling issue, because that’s the real “what the fuck are you thinking” issue. I’ll go through the others quickly, in part because they touch on issues I’m not as qualified to speak about. But again, if your goal is to put a GPU farm in orbit and have it turn on, all of these issues can be defeated via the power of giant piles of money. So none of this is saying you can’t build a datacenter in orbit. I just want to point out the physical limitations that make it really questionable to me as to why you’d view this as a solution to anything.

To kick things off, note that the 1 GW datacenter I’m assuming is not small. Current datacenters here on Earth appear to have power requirements between 1 MW and 100 MW, so between 1/1000th and 1/10th of our spherical cow. The ISS pulls about 100 kW of power for life support and operations, which seems to be at the upper end of publicly documented satellite designs. That’s 1/10th of the minimum datacenter usage, and 1/10,000th of our spherical cow.

Let’s Fermi Problem what this datacenter must look like. I’ll assume H100 GPUs, which cost at least $25K each. The power requirement per card is 350-700 W. Presumably we’re going to need energy for the infrastructure and the communications and all that, but I’ll split the difference and say 500 W = 0.5 kW per card and ignore the overhead power requirements. So my 1 GW datacenter is 2 million H100s (20,000). That seems like a lot.
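
If you want to check that card count yourself, the Fermi arithmetic fits in a few lines of Python (a quick sketch using my assumed 0.5 kW per card, overhead ignored):

```python
# Fermi estimate: how many H100s does a given IT power budget imply,
# assuming ~0.5 kW per card and ignoring overhead power?
WATTS_PER_CARD = 500.0

for label, power_w in [("1 GW spherical cow", 1e9), ("10 MW baby datacenter", 1e7)]:
    cards = power_w / WATTS_PER_CARD
    print(f"{label}: ~{cards:,.0f} H100s")
# -> ~2,000,000 and ~20,000 cards
```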

An individual H100 masses around 1.3 kg for the card itself. I’ll give the datacenter a break and say 1 kg. That’s just the card, so just the cards for our spherical cow is 2 million kilograms (20,000 kg) to orbit. Here on Earth, it looks like the server hardware runs about 3.5 kg for a single card, and a full 8-card server is 130 kg total (16.25 kg per GPU). So there’s potentially an entire additional order of magnitude of mass that needs to be carried up with each GPU. Again, I’ll cut the spherical cow datacenter a break and say you just need 1 kg of overhead for every kg of GPU. So now our datacenter masses $4 \times 10^6$ kg, or 4,000 metric tons (40,000 kg = 40 metric tons). The dry mass of the Space Shuttle orbiter was about 75 metric tons; our datacenter is twice the mass of the entire fully fueled Shuttle stack sitting on the launch pad. The ISS masses 450,000 kg, so this is 9 ISS’s worth of mass (1/10th of an ISS). Just for the GPUs and computing infrastructure.
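
Same deal for the mass, using the deliberately generous 1 kg per card plus 1 kg of overhead from above:

```python
# Mass Fermi estimate: 1 kg per H100 plus 1 kg of server/structure overhead
# per card (generous compared to the ~16 kg/GPU of an Earth-side server).
KG_PER_CARD = 1.0 + 1.0
ISS_MASS_KG = 450_000.0

for label, cards in [("1 GW spherical cow", 2_000_000), ("10 MW baby datacenter", 20_000)]:
    mass_kg = cards * KG_PER_CARD
    print(f"{label}: ~{mass_kg:.2e} kg (~{mass_kg / ISS_MASS_KG:.1f} ISS masses)")
# -> ~4.0e6 kg (~8.9 ISS masses) and ~4.0e4 kg (~0.1 ISS masses)
```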

Launch Costs

The orbit needs to keep the datacenter in sunlight all the time; otherwise you need to carry significant battery backup. A standard low-Earth orbit will put you in Earth’s shadow for ~35 minutes out of every ~95-minute orbit, so that’s not viable for our needs.
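
If you want to check the darkness numbers, the standard textbook formulas get you there; here’s a quick sketch (the 550 km altitude is my own assumption, and I’m taking the worst-case geometry where the orbital plane contains the Sun):

```python
# Orbital period and worst-case eclipse time for a circular low-Earth orbit.
import math

MU = 3.986e14     # Earth's gravitational parameter, m^3/s^2
R_E = 6.371e6     # mean Earth radius, m
ALT = 550e3       # assumed altitude, m

a = R_E + ALT
period_min = 2 * math.pi * math.sqrt(a**3 / MU) / 60.0
# Worst case (orbital plane contains the Sun): in shadow while within
# arcsin(R_E / a) of the anti-Sun point.
eclipse_min = (math.asin(R_E / a) / math.pi) * period_min
print(f"period ~{period_min:.0f} min, ~{eclipse_min:.0f} min of that in Earth's shadow")
# -> ~96 min orbit, ~36 min of darkness per orbit
```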

One option is to go to a high orbit and incline it a bit so your time in shadow is minimal, but that increases launch costs and latency. A dawn-dusk Sun-synchronous LEO (going over the poles above the terminator) will keep you in daylight nearly all the time while only being a few hundred km up.

However, getting into that orbit requires a lot more launch delta-v. A normal launch goes eastward, and as close to the Equator as possible, to maximize the initial velocity the rocket gets for free from the rotation of the Earth. To launch over the poles, you have to give up that free eastward velocity (and, for a Sun-synchronous orbit, fight it a little, since the orbit is slightly retrograde) while boosting the rocket into a near-polar orbit whose plane stays lined up with the terminator. You will also have to occasionally nudge the orbit to keep it there, but station-keeping is a concern for any orbit.
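
To put a rough number on that rotation penalty, here’s a sketch (my own illustrative numbers, not the whitepaper’s): a due-east launch pockets most of the Earth’s ~465 m/s of equatorial rotation speed, while a ~98-degree Sun-synchronous launch actually has to fight a small retrograde component, and that few hundred m/s of extra delta-v gets amplified into a much larger payload hit by the rocket equation.

```python
# How much of the ~7.8 km/s of LEO orbital speed does Earth's rotation donate,
# for a due-east launch vs. a slightly retrograde Sun-synchronous one?
import math

R_EQ = 6.378e6          # Earth's equatorial radius, m
SIDEREAL_DAY = 86164.0  # s
V_ORBIT = 7.8e3         # rough circular orbital speed in LEO, m/s

v_rot = 2.0 * math.pi * R_EQ / SIDEREAL_DAY  # ~465 m/s at the equator

for name, inclination_deg in [("due-east LEO", 28.5), ("dawn-dusk SSO", 97.8)]:
    # component of Earth's rotation along the orbital direction at launch
    assist = v_rot * math.cos(math.radians(inclination_deg))
    print(f"{name:>13}: rotation contributes ~{assist:+.0f} m/s, "
          f"rocket must supply ~{V_ORBIT - assist:.0f} m/s (plus losses)")
# -> the SSO launch needs roughly 470 m/s more delta-v before gravity/drag losses
```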

Looking at existing satellite missions, I think a rough estimate is that launching to this orbit is a factor of 2 more expensive. That is, looking at a few NASA mission profiles and assuming that everyone is using a rocket with the minimum viable launch profile (which is probably not a crazy assumption), I roughly estimate that a given rocket can only place about half the mass into such an orbit as it can into a standard LEO.

In 2020, a Falcon launch to LEO was about 2,700 dollars/kg. The Falcon Heavy is advertising 1,500 dollars/kg, with a payload of something like 50 metric tons to LEO. We’re going to cut that payload in half because of the bespoke orbit, doubling the effective price to 3,000 dollars/kg. So just to launch the GPUs and their modest overhead, we need 160 launches, costing roughly 12 billion dollars. That launch count is roughly the full annual launch capacity of SpaceX in 2025. (Our baby 10 MW datacenter needs just 1.6 launches, at only 120 million dollars or so.)
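
For anyone who wants to fiddle with the assumptions, here’s the launch bill as a few lines of Python, using the numbers above (advertised $1,500/kg, ~50 t per launch, and the factor-of-2 penalty for the bespoke orbit):

```python
# Launch-cost tally under the assumptions in the text.
ADVERTISED_USD_PER_KG = 1_500.0
PAYLOAD_TO_LEO_KG = 50_000.0
ORBIT_PENALTY = 2.0   # half the payload to the dawn-dusk SSO -> double the $/kg

usd_per_kg = ADVERTISED_USD_PER_KG * ORBIT_PENALTY
payload_per_launch_kg = PAYLOAD_TO_LEO_KG / ORBIT_PENALTY

for label, mass_kg in [("1 GW spherical cow", 4e6), ("10 MW baby datacenter", 4e4)]:
    launches = mass_kg / payload_per_launch_kg
    cost_usd = mass_kg * usd_per_kg
    print(f"{label}: ~{launches:.0f} launches, ~${cost_usd / 1e9:.2f}B")
# -> ~160 launches / ~$12B, and ~1.6 launches / ~$0.12B
```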

I suppose it’s not that bad though, since the actual cost of 2 million H100s is at least 50 billion dollars (500 million for the 20,000 H100s in the small datacenter).

I note that the whitepaper prices out the launch costs assuming that orbital launches drop to 10 dollars/kg, a decrease of more than two orders of magnitude from current costs. That would make spaceflight as cheap as current air freight. I would certainly believe that a proposal to build a datacenter in space doesn’t have an insane price tag if you get to assume that getting the materials to orbit costs the same as flying them around in airplanes.

Maintenance

This is somewhat far afield of my area of expertise, but folks, I don’t know if you know this, but shit breaks sometimes, man. Now, when we do actual science in space, we spend a huge amount of time and effort to build things so that they don’t break, or so that the mission can still continue if they do. That’s why it costs billions to build JWST, and why it worked. Or why we can send a rover to Mars on a planned mission of a few months that ends up lasting for years. All that artisanal work costs money, though, making each card and the infrastructure around it vastly more expensive.

LEO is a more radiation-heavy environment than the surface of the Earth, and if you’re doing your job, you should redesign and shield the chips. I think this is likely a non-trivial issue for a datacenter in orbit, but I’m going to cut this spherical cow a break, because the useful lifetime of a GPU in a datacenter on Earth is a year or so anyway. So we’re spending some 60 billion dollars to put this datacenter in orbit, and if it works, it only works for a year, or less if the radiation speeds the degradation up. I’m not sure that it will, but it sure as hell won’t make the chips last *longer.* And chips will burn out stochastically. On Earth, I gather you just have people going around constantly replacing them. In orbit, either you’re sending up a living space for engineers along with your GPUs (and you then have to pressurize the entire GPU farm so it can be worked in, or design it so that humans in spacesuits can move and work inside it, which is going to be incredibly hard), or you build and design robots that can do that job (and then the robots will break sometimes and need repair themselves), or you just accept that you’re spending on the order of 12 billion dollars a year in launch costs alone to send up a replacement set of chips.

Power Requirements

A 1 GW datacenter requires… 1 GW of power. Obviously, we’re going to do this with solar panels, because we’re in fucking space and that was the entire fucking point (ostensibly). Solar insolation in orbit is about 1.4 kW/m^2, and a panel is usually about 20% efficient, which gets you under 300 W per square meter of panel. I’ve heard of some cool ideas that might increase that, so I’ll be generous and round up: 400 W per square meter.

So you only need 2.5 million square meters of solar panel (25,000 m^2). That’s a solar panel farm about 1.6 km on a side (160 m). A solar panel on the ISS weighs around 1 metric ton for 400 m^2 of panel, meaning that our datacenter has around 6 million kg (60,000 kg) of solar panels attached. That adds another 18 billion dollars (180 million) in launch costs. But hey, at least the solar panels don’t break constantly. Hopefully.
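
Here’s the array sizing as a sketch, under the same assumptions (400 W delivered per m^2 of panel, and roughly 1 metric ton of ISS-style panel per 400 m^2):

```python
# Solar array area and mass for a given power draw.
W_PER_M2 = 400.0            # generous delivered power per m^2 of panel
KG_PER_M2 = 1000.0 / 400.0  # ~ISS-like areal mass, about 2.5 kg/m^2

for label, power_w in [("1 GW spherical cow", 1e9), ("10 MW baby datacenter", 1e7)]:
    area_m2 = power_w / W_PER_M2
    side_m = area_m2 ** 0.5          # side length if laid out as a square
    mass_kg = area_m2 * KG_PER_M2
    print(f"{label}: ~{area_m2:.2e} m^2 (~{side_m:.0f} m on a side), ~{mass_kg:.1e} kg")
# -> ~2.5e6 m^2, ~1,600 m on a side, ~6e6 kg (10 MW: ~25,000 m^2, ~160 m, ~6e4 kg)
```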

Security

So, lots of people online are pointing out that the reason to do this is to keep the datacenters out of the hands of any nosy peons or, even worse, government actors. First, anyone who has access to 160 rocket launches is working at a scale where, even if they aren’t a government actor, the government is taking an interest in their actions. Or, as we’ve seen in the US, the government has sold itself over to those people. But then *other* governments become interested in their actions.

And here’s the thing about objects in space. They aren’t something I can get to personally, but they are something any peer nation can reach out and touch. And you cannot hide in orbit, especially not a 1 GW power source that will be blazing bright. It will be on a predictable path, easy to spot, and passing over the head of any and every nation on Earth at some point. Any nation that wants to permanently compromise this thing just needs to launch a cheap suborbital rocket with a payload of kitty litter.

Is it safe from the mob with pitchforks? Sure, I guess. Is it safe when the balloon goes up and shit gets real? Hell no; it will be in the second wave of targets (right after the spy and comms satellites). Hiding and securing something in the noisy and robust background environment on Earth is far easier than keeping a box in orbit from being blown to pieces by someone who wants it badly enough. This is the same issue that comes up with space-based weapons systems. Sure, they’re capable of raining hellfire (ok, tungsten rods) down on the yokels in Hiluxes, but for anything serious, all you’ve done is spend a shitload of money to put something you want to keep safe in a known orbit, for all to see.

And note: the whitepaper’s claim that you can build this thing for costs that aren’t completely ludicrous assumes that launch costs have dropped by two orders of magnitude. That means that getting your hands on a rocket is not necessarily the sort of thing that only state actors and the very wealthiest techlords can do. Putting a datacenter in space isn’t making it inaccessible in the world that makes it possible to put a datacenter in space.

Cooling

OK, enough fucking around. Here’s the real problem with this dumb-as-shit idea.

This thing is going to melt.

Now, space is cold, sure.

It is also, notably, empty. That means that hot things can cool only through blackbody radiation. Here on Earth, you cool things through evaporation, conduction, and convection; this is why water feels colder than air at the same temperature, and why a breeze cools you even when it isn’t any cooler than the ambient air. You can build evaporative cooling in space, but that will be really expensive in coolant, since you’re definitionally throwing it all away (ok, there are capture systems, but at that point you’re doing radiative cooling with extra steps).

Radiative cooling is inefficient compared to direct heat transfer. An object at temperature $T$ (in Kelvins, of course) radiates power proportional to the fourth power of its temperature and to its surface area: \begin{equation} P = \epsilon \sigma A T^4 \end{equation} where $\sigma = 5.67\times 10^{-8}\ {\rm W/m^2/K^4}$, $A$ is the surface area, and $\epsilon$ is the emissivity (which is 1 for a perfect blackbody, and less than 1 otherwise).

Assuming a perfect blackbody, if our 1 GW spherical cow datacenter were contained in a sphere 30 m in radius (which is quite large), then in order to radiate away 1 GW it would need to sit at a temperature of roughly 1,100 K (more than 800 C, or about 1,500 F). The GPUs are going to have a hard time.

To keep the system at a more manageable 300 K (basically room temperature), we will need radiators to reject the heat. For our GW spherical cow, the surface area of those radiators can be no less than about 2 million square meters (our 10 MW version needs about 22,000 square meters), i.e. about as big again as the solar panel array. The two numbers are not exactly the same, but it isn’t a coincidence that the solar insolation is approximately the same as the energy loss rate of an object at roughly the temperature of the Earth. You can use both sides of a radiator fin, though geometry becomes a concern: if the flat areas of the fins face the Sun, the Earth, or other panels, they will absorb radiated energy and be less efficient at rejecting waste heat.
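
If you want to plug the numbers into the Stefan-Boltzmann formula yourself, here’s the whole calculation as a quick sketch (perfect blackbody, the same 30 m sphere, and single-sided radiators held at 300 K, per the assumptions above):

```python
# Equilibrium temperature of a 30 m blackbody sphere dumping P watts,
# and the radiator area needed to reject the same P at a comfortable 300 K.
import math

SIGMA = 5.67e-8   # Stefan-Boltzmann constant, W/m^2/K^4
RADIUS_M = 30.0
T_TARGET_K = 300.0

for label, p_watts in [("1 GW spherical cow", 1e9), ("10 MW baby datacenter", 1e7)]:
    sphere_area = 4.0 * math.pi * RADIUS_M**2
    t_equilibrium = (p_watts / (SIGMA * sphere_area)) ** 0.25
    radiator_area = p_watts / (SIGMA * T_TARGET_K**4)
    print(f"{label}: sphere sits at ~{t_equilibrium:.0f} K; "
          f"needs ~{radiator_area:.1e} m^2 of radiator at 300 K")
# -> ~1,100 K and ~2.2e6 m^2 (the 10 MW version: ~350 K and ~2.2e4 m^2)
```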

To make use of these radiator fins, you need to move the heat off the GPUs themselves, which requires active coolant flow, just as with an Earth-based datacenter. I’m costing out the coolant mass with the GPUs themselves (and giving the GPUs a massive discount there). But keep in mind, this datacenter needs to move the coolant from the GPUs out through kilometers of tubing in the radiator fins, and then back. Putting these things together in orbit will be non-trivial, and will add significant cost. Doing construction in space is hard.

The ISS, the largest heat rejection system in space that I know of, has internal coolant loops (the IATCS) to move heat away from the living areas. These hand the heat off to the external thermal control systems, the PVTCS and the EATCS. Each EATCS unit rejects about 35 kW of heat using $42\ {\rm m^2}$ of surface area; the ISS has four of them. Our spherical cow datacenter just needs 29,000 of them. Looking at the ISS specs, a full EATCS unit seems to mass about 1,000 kg, which would add some $3\times 10^7\ {\rm kg}$ in mass, roughly quadrupling the mass that needs to be launched (and the launch costs along with it). Keep in mind that each of these radiators needs to work continuously. If the coolant freezes or boils, or a loop breaks, and you don’t have backup rejection capacity or a way to fix it relatively quickly, your GPU cluster melts.
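
Scaled naively from those ISS figures (the ~35 kW and ~1,000 kg per unit quoted above; the straight linear scaling is my own shortcut), the bookkeeping looks like this:

```python
# Naive scaling of ISS-class external thermal control hardware to datacenter power.
KW_PER_UNIT = 35.0      # heat rejected per EATCS-class unit, from the ISS figures above
KG_PER_UNIT = 1_000.0   # rough mass per unit

for label, p_watts in [("1 GW spherical cow", 1e9), ("10 MW baby datacenter", 1e7)]:
    units = p_watts / (KW_PER_UNIT * 1e3)
    mass_kg = units * KG_PER_UNIT
    print(f"{label}: ~{units:,.0f} ISS-class units, ~{mass_kg:.1e} kg of radiator hardware")
# -> roughly 29,000 units and ~2.9e7 kg (the 10 MW version: ~290 units, ~2.9e5 kg)
```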

For What Purpose

Every one of these problems can, in principle, be solved. Humanity can do things, even now.

But the question to me is: why? For what purpose are you imagining spending many tens of billions of dollars to place a rapidly depreciating asset that needs constant maintenance into a difficult-to-get-to orbit in a high radiation environment, requiring immense solar arrays and just-as-immense heat radiators larger than anything we’ve ever placed in orbit by orders of magnitude?

You can build a gigawatt datacenter here on Earth. You can build a solar array right next to it. You still need coolant for the GPUs, but you can use standard cooling mechanisms (evaporation, transfer into a thermal sink) to dump the heat. When something breaks, the guy who lives nearby comes over to fix it, and he doesn’t need a spacesuit. If you’re worried about security, build it into a mountain and hire a guard or two. Or 1,000. All of these are cheaper, proven options.

So why are a bunch of techbros and their pet journalists hyping this?

Partly, it’s because American elites are completely innumerate. It took a bit of googling to get some of these numbers, but it isn’t hard to estimate the orders of magnitude. A datacenter is big. There are a lot of GPUs. They are going to weigh a lot. Launching to orbit is expensive. It is definitionally cheaper to build these things here on Earth. But hey, doing math is hard, thinking about thermodynamics is boring, and doing journalism is harder than doing stenography. And there’s good money for everyone here in saying that the US government needs to hand you all the money for more lift capacity. A small number of people are going to make a disgusting amount of money not building a datacenter in orbit.

In addition, the venture capital tech sector bros behind all of this have signed on with the far-right political project. Power for datacenters is a real issue (assuming you want to build these things in the first place). China is solving it by massively building out solar capacity. But in America, the techbros can’t talk about building solar power on Earth, because they’ve hitched their wagon to a political movement that believes solar panels are intrinsically evil, unlike good clean coal (note: sarcasm). It’s more palatable for everyone involved to build the datacenter in space, with the solar panels as a footnote, than to build them here on Earth like a normal person.

Finally, the Silicon Valley right-wing tech sector has a split personality. They are fundamentally not about a vision of a future that has anything in it beyond a boot, stamping on a human face, forever. What do they dream about using their LLMs for? Replacing office jobs with sweatshop work, an excuse to cut basic science funding, porn, and ways to automate scams and stock market manipulation. There is nothing here that’s interesting or exciting or motivating, just line go up and getting one over on the rubes.

But in the popular culture, the box marked “AI” is supposed to contain more than that. People in tech have some exposure to science fiction, but at this point, it’s pretty clear that most of them haven’t actually read the stuff they’re trying to cloak themselves in. There is a long speculative fiction tradition of thinking about kilo- and megascale engineering involving AI and computational substrates in space. It’s fun to imagine the ideal configuration of computronium and how to maximize the energy of a star to power it (the answer: matrioshka brains). The guys building porn-and-scams AI in league with censorious far-right theocrats know exactly what they’re doing, but it’s more fun for them and for their fanboys if they pretend that they’re building the AI space utopia. Datacenters in space sound like a step in that direction, even if none of it maths out. Plus, when called on it, they can just say that AI will solve the problem. Need a two-orders-of-magnitude reduction in launch costs? AI. Massive cooling fins that have never been built before? AI. GPU failures? AI. Meanwhile, just give them all the money, please.