We already radiate heat away just fine in space; it’s just a matter of how much radiator area you need and all the implications that has for any given satellite. I wouldn’t call it free, because you need the hardware to do it, and the extra weight reduces the payload capacity of whatever you’re sending up, but we can do it.
Starlink satellites also use laser links to talk to each other, which these satellites would use as well. How a connection gets routed can vary, but generally packets bounce around in space until they can’t, then come back down to a ground station, move over fiber to another ground station, and go back up until they can reach you. The more laser links there are, the less traffic has to come down for technical reasons, though it might still come down for bandwidth reasons. I don’t really know how likely it is that any given connection stays point to point.
Example of what could happen:
Your dish -> Starlink -> Starlink -> ground station -> Google -> ground station -> Starlink -> ground station -> Starlink -> ground station -> Starlink -> Starlink -> your dish.
Fiber is still the better option on land if you can get it there, but there are a lot of places it’s never going to be laid, and it will never cover the air or bodies of water.
Edit: Corrections on the laser links with an example.
On radiators, plugging it into this formula:
https://projectrho.com/public_html/rocket/heatrad.php
I get a circular radiator at least a kilometer wide, assuming the radiator is quite efficient, a rather modest datacenter, and very hot coolant (70 °C).
…Realistically, the coolant temperature would need to be much lower. See how it’s a power of four in the formula? That means the radiator area gets very large real quick.
I cannot emphasize how expensive a functional 1km+ radiator would be in space. It’s mind bogglingly expensive.
If a space datacenter is in LEO like Starlink, then it’s in Earth’s shadow a lot of the time, and it would have to be “part” of the Starlink network constantly zooming over the ground. If it’s geosynchronous, then laser communication (or any communication) gets real tricky, and the speed-of-light round trip alone adds roughly a quarter second of latency. I’m not saying it’s impossible, but reliable high data rates would be an expensive engineering challenge.
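To put a rough number on the “in Earth’s shadow a lot of the time” part, here’s my own back-of-the-envelope sketch (assuming a 550 km circular Starlink-like orbit, a simple cylindrical shadow model, and the worst-case orbit plane; a dawn-dusk sun-synchronous orbit could avoid eclipse almost entirely):

```python
import math

R_EARTH = 6371.0   # km, mean Earth radius
ALT = 550.0        # km, Starlink-like altitude (assumption)

# Half-angle of the orbit arc hidden behind Earth, using a cylindrical-shadow
# model with the Sun in the orbital plane (worst case for eclipse time).
rho = math.asin(R_EARTH / (R_EARTH + ALT))

# Fraction of the orbit spent in shadow = shadowed arc / full circle.
eclipse_fraction = rho / math.pi
print(f"worst-case eclipse fraction: {eclipse_fraction:.0%}")  # ~37%
```

So in the worst-case geometry, a satellite at that altitude spends over a third of each orbit in the dark, which matters a lot if you’re solar powered.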
For 100 kW? I’m not going to try and figure things out from that massive site. A pre-made calculator would have been nice if they had one.
Edit: It is going to be LEO and likely connected to Starlink with the same laser links they use.
Edit: Looking at orbits, they might use sun-synchronous orbits? They might not be in sun 100% of the time, but they are nearly always in sun.
Edit: I have no way to know if this is right, but a couple of AI responses say that for 100 kW it would be ~150-170 square meters at temperatures around 70 °C.
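That ballpark can be sanity-checked with the Stefan–Boltzmann law directly. A sketch of my own (one-sided radiator, ignoring absorbed sunlight and Earthshine; the 0.85 emissivity is an assumed value for a real coating, not from any source in this thread):

```python
SIGMA = 5.670e-8    # Stefan-Boltzmann constant, W/(m^2 K^4)

power_w = 100e3     # 100 kW of waste heat
t_k = 70 + 273.15   # 70 C coolant, in kelvin

# Required one-sided radiator area: A = P / (eps * sigma * T^4)
ideal_area = power_w / (1.00 * SIGMA * t_k**4)   # perfect black body
real_area = power_w / (0.85 * SIGMA * t_k**4)    # emissivity 0.85 (assumption)
print(f"{ideal_area:.0f} to {real_area:.0f} m^2")  # ~127 to ~150 m^2
```

So the ~150-170 m² figure is consistent with the physics once you assume a realistic emissivity below 1.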
100 kW? Nvidia DGX B200 servers are about 14 kW each, not counting the interconnect or anything else. According to nuggets I’ve read online, we’re talking 200 megawatts for an Earth-based AI datacenter these days, without something exotic like underclocked Cerebras WSEs (which would be pretty neat, actually…)
Plugging 200 megawatts into this:
https://www.calctool.org/quantum-mechanics/stefan-boltzmann-law
I get about 0.46 square kilometers, depending on the coolant temperature, and ultimate efficiency of the system (with how you orient the thing relative to solar panels, how you circulate coolant…)
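The coolant temperature is what moves that number around the most, because of the fourth-power dependence. A quick sketch of my own (black body, one-sided, ignoring solar input):

```python
SIGMA = 5.670e-8  # Stefan-Boltzmann constant, W/(m^2 K^4)

def area_km2(power_w, coolant_c):
    """Ideal one-sided black-body radiator area for a given waste heat load."""
    t_k = coolant_c + 273.15
    return power_w / (SIGMA * t_k**4) / 1e6  # m^2 -> km^2

for temp_c in (70, 20):
    print(f"200 MW at {temp_c} C: {area_km2(200e6, temp_c):.2f} km^2")
# ~0.25 km^2 at 70 C, but already ~0.48 km^2 at 20 C coolant
```

So just dropping the coolant from 70 °C to near room temperature nearly doubles the required area, which is why the estimate lands around half a square kilometer.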
I have no clue what the construction of such a huge structure would look like, but if it was a simple 0.5 inch aluminum sheet, it would weigh like 15,000 metric tons. Even much thinner, that’s still on the order of “mass of a cargo ship”
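The mass figure is easy to check (a sketch assuming solid aluminum sheet at 2700 kg/m³, which a real deployable radiator of course wouldn’t be):

```python
area_m2 = 0.46e6       # ~0.46 km^2 radiator area from the estimate above
thickness_m = 0.0127   # 0.5 inch
density = 2700.0       # kg/m^3, aluminum

mass_tonnes = area_m2 * thickness_m * density / 1000.0
print(f"{mass_tonnes:,.0f} t")  # on the order of 15,000 metric tons
```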
Why is that, though?
Well, something like the ISS doesn’t generate much heat, and hypothetical rockets that need big radiators have very hot coolant to dissipate heat quickly. But space data centers are the sinister combination of “tons of waste heat” and “needs a low coolant temperature.”
They aren’t making a datacenter like on Earth. They’re putting up a ton of satellites that will each generate about 100 kW.
Everyone keeps thinking they’re putting these massive things up there; they are not doing that.
Edit: Oh, I missed that your tool was a real calculator this time, thank you! That says 127 square meters with black body, 70 °C, and 1 (but no idea if those are good values)
That’s interesting, but what’s the point? If it’s like 2 DGX boxes in each satellite, spaced out, the interconnect between them is going to be very slow, and the individual computational power of each satellite will not be that impressive.
And if you connect them all into one big mesh and wire them together, well, you’ve made a 200 MW datacenter! The economics remain the same.
If hardware gets more power efficient, well… Then why do you need to go to space anymore?
Ya, the economics of how much total area / material the global network needs are similar, though let’s say higher due to efficiency losses from distributing it over so many satellites. But in terms of how big any individual radiator is and how much space each one takes up, the smaller sizes make it easier to manage. Trying to figure out a 150-200 m² radiator is a lot easier than trying to figure out a 1 km² one.
The individual power of each satellite having to use a mesh network to train might not be fast enough, so maybe they’ll still use land-based ones for training, but no single person needs more compute than what a satellite can provide. So from the inference / customer computation side of things, it isn’t a problem.
edit: I meant radiator, not solar panel