A solar panel reaps only a small portion of its potential due to night, weather, and seasons, and this same intermittency means that large-scale storage is required to make solar power work at scale. A perennial proposition for surmounting these impediments is that we launch solar collectors into space—where the sun always shines, clouds are impossible, and the tilt of the Earth’s axis is irrelevant. On Earth, a flat panel inclined toward the south averages about 5 full-sun-equivalent hours per day for typical locations, about a factor of five worse than what could be expected in space. More importantly, the constancy of solar flux in space reduces the need for storage—especially over seasonal timescales. I love solar power. And I am connected to the space enterprise. Surely putting the two together really floats my boat, no? No.
I’ll take a break from writing about behavioral adaptations and get back to Do the Math roots with an evaluation of solar power from space and the giant hurdles such a scheme would face. On balance, I don’t expect to see this technology escape the realm of fantasy and find a place in our world. The expense and difficulty are incommensurate with the gains.
How Much Better is Space?
First, let’s understand the ground-based alternative well enough to know what space buys us. But in comparing ground-based solar to space-based solar, I will depart from what I think may be the most practical/economic path for ground-based solar. I do this because space-based solar adds so much expense and complexity that we gain a large margin for upping the expense and complexity on the ground as well.
For example, transmission of power from space-based solar installations would likely be by microwave link to the ground. If we’re talking about sending power 36,000 km from geosynchronous orbit, I presume we would not balk at transporting it a few thousand kilometers across the surface of the Earth. This allows us to put solar collectors in hotspots, like the Desert Southwest of the U.S. or Northern Africa to supply Europe. A flat panel tilted south at latitude in the Mojave Desert of California would gather an annual average of 6.6 full-sun-equivalent hours per day, varying from 5.2 to 7.4 across the months of the year, according to the NREL redbook study.
Next, surely we would allow our fancy ground-based panels to articulate and track the sun through the sky. One-axis tracking about a north-south axis tilted to the site latitude improves our Mojave site to an annual average of 9.1 hours per day, ranging from 6.3 to 11.2 throughout the year. A step up in complexity, two-axis tracking moves the yearly average to 9.4 hours per day, ranging from 6.8 to 12.0 hours. We only gain a few percent in going from one to two axes, because the one-axis tracker is always pointing within 23.5° of the direction to the sun, and the cosine projection of this angle is never less than 92%. In other words, it is useful to know that a simple one-axis tracker does almost as well as a more sophisticated two-axis tracker. Nonetheless, we will use the full-up two-axis performance against which to benchmark the space gain.
On a yearly basis, then, getting continuous 24-hour solar illumination beats the California desert by a factor of 2.6 averaged over the year, ranging from 2.0 in the summer to 3.5 in the winter. One of my points will be that launching into space is a heck of a lot of work and expense to gain a factor of three in exposure. It seems a good bet that it’s cheaper to build three times as many panels and stick them on the ground. It’s not rocket science.
For technical accuracy, we should also correct for the atmosphere, which imposes a 21% hit on the energy available to a silicon photovoltaic (PV) panel on the ground vs. space, using the standard 1.5 airmass spectrum. Even though the 1347 W/m² solar constant in space is 35% larger than the flux on the ground, much of the atmospheric absorption is at infrared wavelengths, where silicon PV is ineffective. Taking the 21% hit into account, we’ll just put the space gain at a factor of three and call it close enough.
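The arithmetic behind that "factor of three" is compact enough to check directly. The sketch below uses the two-axis-tracking Mojave figure quoted above and treats the 21% atmospheric hit as a simple multiplicative penalty on ground-based yield; the variable names are mine, not from any standard model.

```python
# Back-of-the-envelope space-vs-ground gain for silicon PV.
SPACE_HOURS = 24.0        # continuous illumination in geosynchronous orbit
GROUND_HOURS = 9.4        # Mojave two-axis tracker, annual average (NREL)
ATMOSPHERE_FACTOR = 0.79  # fraction of space-equivalent yield surviving AM1.5

exposure_gain = SPACE_HOURS / GROUND_HOURS        # hours of sun advantage
total_gain = exposure_gain / ATMOSPHERE_FACTOR    # including atmospheric losses

print(f"exposure gain: {exposure_gain:.1f}")      # ~2.6
print(f"gain with atmosphere: {total_gain:.1f}")  # ~3.2
```

The exposure gain of about 2.6 becomes roughly 3.2 once the atmosphere is folded in, which is the "factor of three" used in the text.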
What follows can apply to straight-up PV panels as collectors, or to concentrating reflectors so that less photovoltaic material is used. Once we are comparing to two-axis tracking on the ground, concentration is on the table.
Are we indeed dealing with 24 hours of exposure in space? A common run-of-the-mill low-earth-orbit (LEO) satellite orbits at a height of about 500 km. At this height, the earth-hugging satellite spends almost half its time blocked from the Sun by the Earth. The actual number for that altitude is 38% of the time, or 15 hours per day of sun exposure. It is possible to arrange a nearly polar “sun synchronous” orbit that rides the sunrise/sunset line on Earth so that the satellite is always bathed in sunlight, with no eclipsing by Earth.
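The 38% figure follows from simple shadow geometry. In the worst-case approximation below (Sun in the orbital plane, cylindrical Earth shadow), the satellite is eclipsed whenever it lies within the half-angle arcsin(R/r) of the anti-Sun point; the numbers are mine as a sanity check, not a precise ephemeris calculation.

```python
import math

R_EARTH = 6371e3          # Earth radius, m
h = 500e3                 # orbit altitude, m
r = R_EARTH + h           # orbit radius, m

# Half-angle of the shadowed arc, as seen from Earth's center.
half_angle = math.asin(R_EARTH / r)

# Fraction of the circular orbit spent in shadow.
eclipse_fraction = half_angle / math.pi

print(f"eclipse fraction: {eclipse_fraction:.0%}")              # ~38%
print(f"sun hours per day: {24 * (1 - eclipse_fraction):.0f}")  # ~15
```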
But any LEO satellite will sweep past the ground at over 7 km/s, appearing for only 2 minutes above a 30° elevation even for a direct overhead pass (and only about 6 minutes from horizon to horizon). What’s worse, this particular satellite in a sun-synchronous orbit will not frequently generate overhead passes at the same point on the Earth, which rotates underneath the orbit.
In short, solar installations in LEO could at best provide intermittent power to any given site—and intermittency is the very problem that motivated leaving the ground in the first place. Possibly an armada of smaller installations could zip by, each squirting out energy as it passes. But besides being a colossal headache to coordinate, the sun-synchronous full-sun satellites would necessarily only pass over sites experiencing sunrise or sunset. You would get all your energy in two doses per day, which is not a very smooth packaging, and seems to defeat a primary advantage of space-based solar power in avoiding the need for storage.
Any serious talk of solar power in space is based on geosynchronous orbits. The period of a satellite around the Earth can be computed from Kepler’s Law relating the square of the period, T, to the cube of the semi-major axis, a: T² = 4π²a³/GM, where GM ≈ 3.98×10¹⁴ m³/s² is Newton’s gravitational constant times the mass of the Earth. For a 500 km-high orbit (a ≈ 6878 km), we get a 94-minute period. The period becomes 86,400 seconds (24 hours) at a ≈ 42,200 km, or about 6.6 Earth radii. For a standard-sized Earth globe, this is about a meter from the center of the globe, if you want to visualize the geometry.
A geosynchronous satellite indeed orbits the Earth, but the Earth rotates underneath it at the same rate, so that a given location on Earth always has a sight-line to the satellite, which seems to hover in the sky near the celestial equator. It is for this reason that satellite receivers are often seen tilted to the south (in the northern hemisphere) to point at the perched platform.
Being so far from the Earth, the satellite rarely enters eclipse. When it does, the duration will be something like 70 minutes. But eclipses happen at most once per day, and only during the periods when the Sun is near the equatorial plane: within about ±22 days of each equinox, twice per year. In sum, we can expect shading about 0.7% of the time. Not too bad.
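The 70-minute figure is just the time it takes the satellite to cross Earth's shadow at equinox. A quick check, treating the shadow as a cylinder of Earth's radius (my simplification, ignoring the penumbra):

```python
import math

R_EARTH = 6371e3   # m
A_GEO = 42.2e6     # m, geosynchronous orbit radius

# Angular half-width of Earth's shadow as seen from the Earth's center,
# at geosynchronous distance.
half_angle = math.asin(R_EARTH / A_GEO)

# The satellite sweeps through the full shadow arc once per 24-hour orbit.
duration_min = (2 * half_angle / (2 * math.pi)) * 24 * 60

print(f"max eclipse duration: {duration_min:.0f} minutes")  # ~70
```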
Now here’s the tricky part. Getting the power back to the ground is non-trivial. We are accustomed to using copper wire for power transmission. For the space-Earth interconnect, we must resort to electromagnetic means. Most discussions of electromagnetic power transmission center on lasers or microwaves. I’ll immediately dismiss lasers as impractical for this purpose, because clouds block transmission, because converting the power into electricity is not as direct/efficient as it can be for microwaves, and because generation of laser power tends to be inefficient (my laser pointer is about 2%, for instance, though one can do far better).
So let’s go microwave! For reasons that will become clear later, we want the highest frequency (shortest wavelength) we can get without losing too much in the atmosphere. Below is a plot generated from an interactive tool associated with the Caltech Submillimeter Observatory (where I had my first Mauna Kea observing experience). This plot corresponds to a dry sky with only 2.0 mm of precipitable water vapor. Even so, water takes its toll, absorbing/scattering the high-frequency radiation so that the fraction transmitted through the atmosphere is tiny. Only at frequencies of 100 GHz and below does the atmosphere become nearly transparent.
But if we have 25 mm of precipitable water (and thick clouds have far more than this), we get the following picture, which is already down to 75% transmission at 100 GHz. Our system is not entirely immune to clouds and weather.
But we will go with 100 GHz and see what this gets us. Note that even though microwave ovens use a much lower frequency of 2.45 GHz (λ = 122 mm), the same dielectric heating mechanism operates at 100 GHz (peaking around 10 GHz). In order to evade both water absorption and dielectric heating, we would have to drop the frequency to the radio regime.
At 100 GHz, the wavelength is about λ ≈ 3 mm. In order to transmit a microwave beam to the ground, one must contend with the diffractive nature of electromagnetic radiation. If we formed a perfectly collimated (parallel) beam of microwave energy from a dish in space with diameter Ds—where the ‘s’ subscript represents the space segment—we might naively anticipate the perfectly-formed beam to arrive at Earth still fitting in a tidy diameter Ds. But no. Diffraction imposes an angular spread of about λ/Ds radians, so that the beam spreads to a diameter at the ground, Dg ≈ rλ/Ds, where r is the distance between transmitter and receiver (about 36,000 km in our case). We can rearrange this to say that the product of the diameters of the transmitter and receiver dishes must approximately equal the product of the propagation distance and the wavelength: DsDg ≈ rλ.
So? Well, let’s first say that Ds and Dg are the same. In this case, we would require the diameter of each dish to be 330 m. These are gigantic, especially in space. Note also that we really need Dg = Ds + rλ/Ds to account for the original extent of the beam before diffraction spreads it further, so the dish on Earth would actually be 660 m across.
Launching a microwave dish this large should strike anyone as prohibitively difficult, so let’s scale back to a more imaginable Ds = 30 m (still quite impressive), in which case our ground-based receiver must be 3.6 km in diameter!
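The dish sizes above follow directly from the diffraction constraint DsDg ≈ rλ. A short sketch of the arithmetic, using round numbers of my choosing for the constants:

```python
import math

C = 3.0e8           # speed of light, m/s
f = 100e9           # beam frequency, Hz
wavelength = C / f  # ~3 mm
r = 36e6            # GEO-to-ground distance, m

# Equal-sized transmitter and receiver: D^2 ~ r * lambda.
equal_dish = math.sqrt(r * wavelength)
print(f"equal dishes: {equal_dish:.0f} m each")  # ~330 m

# A 30 m space dish pushes the spreading onto the ground receiver,
# including the original beam width: Dg = Ds + r*lambda/Ds.
D_s = 30.0
D_g = D_s + r * wavelength / D_s
print(f"ground dish for a 30 m transmitter: {D_g / 1e3:.1f} km")  # ~3.6 km
```

Note how unforgiving the trade is: shrinking the space dish by a factor of eleven inflates the ground dish by roughly the same factor.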
Now you can see why I wanted to keep the frequency high, rather than dipping into the radio, where dishes would need only get bigger in proportion to the wavelength.
Converting Back to Electrical Power
At microwave frequencies, it is straightforward to rectify the oscillating electric field directly into direct current at something like 85% efficiency. The generation of beamed microwave energy in space, the capture of that energy at the ground, and the conversion back to electrical current all take their toll, so that the end-to-end process may be expected to have something in the neighborhood of 50% efficiency.
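To see how a chain of individually decent steps lands near 50%, here is an illustrative multiplication. Only the 85% rectification figure comes from the text; the other stage values are assumptions of mine chosen to show the shape of the problem, not measured numbers.

```python
# Illustrative end-to-end efficiency chain for space-to-ground microwave power.
stages = {
    "DC-to-microwave generation in space": 0.80,  # assumed
    "beam capture at the ground":          0.75,  # assumed
    "microwave-to-DC rectification":       0.85,  # figure from the text
}

eta = 1.0
for name, e in stages.items():
    eta *= e

print(f"end-to-end efficiency: {eta:.0%}")
```

Three stages in the 75–85% range compound to roughly half the power delivered, which is the neighborhood quoted above.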