20 November 2006

Alternative Energy Back in Vogue?

WashingtonPost.com is carrying a story on alternative energy on its front page today. I should know better than to draw conclusions from a single datum, but I am somewhat surprised to see this given where oil sits on the market. When oil was hitting records, articles on alternative energy sources, biofuels, and hybrid cars were all the rage. Now that the price has dropped 20 % (which is, gasp, 2005 levels), people don't seem quite as concerned. Proof that value is a relative measure, if anyone ever doubted it.

The story is interesting in that it focuses on the employment potential of alternative energy. Unlike oil, alternative energy is fairly employment-heavy, and it doesn't export jobs to the same extent. The article also makes some key points on costs that I've never seen in the mass media before. First, the author quotes BP's spokesman as stating:

If they last as long as planned, solar panels might become competitive without government subsidies. Edwards said that every time industry capacity doubles, the cost of panels falls about 20 percent.

Capacity has doubled over the past three years, but costs haven't dropped as much as expected because of a silicon shortage. Eventually, though, Edwards said that "if we can keep driving costs lower, we will reach a point where solar is the same price as grid power."

I've historically seen this quoted as a 15 % drop per doubling of capacity, but for an industry growing at 35 % annually the end result is more or less the same. Thin film is coming, and it will hugely reduce the amount of bulk semiconductor consumed. The next step is to use growth processes that minimize the degree of vacuum required for production, since high vacuum demands expensive, time-consuming production techniques.
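The arithmetic behind the experience curve is simple enough to sketch. A minimal check, combining the 20 % (and the older 15 %) learning rate with the 35 % annual growth figure quoted above:

```python
import math

# Experience curve: cost falls by `learning_rate` for every doubling of
# cumulative industry capacity.

def doublings(annual_growth, years):
    """Capacity doublings after `years` of compound growth."""
    return years * math.log(1 + annual_growth) / math.log(2)

def relative_cost(learning_rate, n_doublings):
    """Cost relative to today after that many doublings."""
    return (1 - learning_rate) ** n_doublings

n = doublings(0.35, 10)                   # ~4.3 doublings in a decade
print(round(relative_cost(0.20, n), 2))   # ~0.38 of today's cost
print(round(relative_cost(0.15, n), 2))   # ~0.49 with the 15 % figure
```

Either learning rate cuts the cost roughly in half or better over a decade at that growth rate, which is why the two figures lead to more or less the same end result.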

03 October 2006

Back to Work

I've been a little busy lately because my apartment was sold out from under me as a condo. This is one of the 'benefits' of living in a boom economy in a fossil-fuel-rich province. Fortunately I've been able to find a new place, and hopefully I won't have to move again, which should mean a return to a semi-monthly posting schedule.

18 September 2006

Note to Other Blogger Users

Blogger is offering a new beta version of their blogging system. One of the 'features' of this system is that you cannot use your new Google account to post comments on original blogspot.com websites that haven't switched over.

12 September 2006

Solar Power Satellite

And now for a trip to Buck Rogers in the 25th Century... One oft repeated concept in science fiction is that of power being generated in space by solar cells and then beamed down to the surface. Such a system requires three basic components: a collector to gather solar radiation, a transmitter to redirect the energy to the surface, and a receiver on the surface to transform the beam into electricity. See Wikipedia for an overview.

Space has a number of advantages for solar power. For one, a satellite in a high geosynchronous orbit (35,786 km altitude) is rarely shaded by the Earth; as a result, it is in sunlight about 98 % of the time. Also, there is no atmosphere or clouds to attenuate and diffuse the incoming solar radiation (insolation). A solar module in high orbit will receive the full 1367 W/m2. This compares well to ground-based systems, which average 250 W/m2 (Arizona) to 125 W/m2 (Britain) with a peak of 1000 W/m2 at midday.
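As a quick sanity check on the advantage, a sketch using only the figures quoted above:

```python
# Average insolation ratio, GEO versus ground, using the text's figures.
SOLAR_CONSTANT = 1367.0      # W/m2 above the atmosphere
GEO_SUNLIT_FRACTION = 0.98   # shaded by the Earth ~2 % of the time

space_avg = SOLAR_CONSTANT * GEO_SUNLIT_FRACTION   # ~1340 W/m2
for site, ground_avg in [("Arizona", 250.0), ("Britain", 125.0)]:
    print(site, round(space_avg / ground_avg, 1))
```

So a GEO module averages roughly 5x the insolation of a panel in Arizona and roughly 11x one in Britain.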

Concentrating optics − typically parabolic mirrors or Fresnel lenses − can be used in space without concern about cloud cover diffusing the insolation. On the other hand, cooling the solar modules becomes quite difficult, as there is no atmosphere to advect heat away; all heat has to be dumped through radiators. A concentrating photovoltaic arrangement therefore requires a cooling system with shaded radiators, because the efficiency of most solar cells declines at higher cell temperatures. All this adds more weight to be launched into orbit.

The biggest overall drawback to any sort of space power solution is the cost of launching material into orbit. At the top end of the chain, NASA's Space Shuttle or the Titan booster cost approximately $10,000/kg to reach low earth orbit. Getting up to geosynchronous orbit requires an additional booster and increases the cost by a factor of 5-6. Programs such as SeaLaunch or the Russian Proton booster are cheaper, but by less than an order of magnitude. Realistically, for space solar to have any opportunity, launch costs would need to drop to $100/kg, which is nearly impossible for a Western company. There are all sorts of concepts for reducing launch costs − air-launch, or the big-dumb-booster concept − but none have financing, given the lack of a dependable market and high-profile busts such as Beal Aerospace.

The high capital cost of launch services has a secondary effect: it forces one to use expensive, high-efficiency cells rather than the ones with the lowest price per unit of peak power. This further hampers the ability of space solar to be cost-competitive with ground solar.

In contrast to the cost, the energy required to place a solar system into orbit is not prohibitive. For a sea-level launch from the equator, the potential and kinetic energy required is 56.5 MJ/kg. Gravity losses during launch, atmospheric drag, and engine performance loss as a function of external pressure increase the energy required, but by less than 25 %. The impact on EROEI should be marginal.
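That 56.5 MJ/kg figure can be checked from orbital mechanics. A rough sketch of the ideal energy only, with rounded constants:

```python
# Ideal specific energy from the rotating equator to GEO:
# specific orbital energy in orbit is -mu/(2a); on the surface it is
# -mu/R plus the kinetic energy of the Earth's rotation.
MU = 3.986e14        # Earth's gravitational parameter, m^3/s^2
R_EARTH = 6.378e6    # equatorial radius, m
A_GEO = 4.2164e7     # GEO orbital radius, m
V_EQUATOR = 465.0    # surface rotation speed at the equator, m/s

e_orbit = -MU / (2 * A_GEO)
e_surface = -MU / R_EARTH + 0.5 * V_EQUATOR**2
delta_mj = (e_orbit - e_surface) / 1e6
print(round(delta_mj, 1))   # ~57.7 MJ/kg, close to the 56.5 MJ/kg quoted
```

The small difference from 56.5 MJ/kg comes down to the choice of constants; either way, it is tens of MJ/kg, not a prohibitive amount of energy.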

A further problem is that satellites in geosynchronous orbit sit beyond most of the shielding the Earth's magnetic field provides, leaving them open to bombardment by charged particles. This will drastically limit their lifetime compared to ground-based systems. A satellite in geosynchronous orbit sees a flux of roughly 6·10^13 1 MeV electrons cm^-2 year^-1 (with considerable year-to-year variation depending on solar flare activity). A 1 MeV particle is highly energetic: more than enough to break bonds and eject K- and L-shell electrons in semiconductors. A solar cell in geosynchronous orbit will typically lose 5-6 % of its performance per year. Compare that to ground-based units that are guaranteed to provide 90 % power after 12.5 years, a loss of about 0.8 %/year. Even if a space solar panel receives 8× the insolation of a ground-based unit, it may well produce less energy over its much shorter useful lifetime. The Wikipedia article claims a lifetime of 20 years, but that is not realistic. The economics suffer as a result.
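The compounding effect of those degradation rates is worth seeing explicitly. A sketch using the rates quoted above:

```python
# Fraction of initial output remaining after compound annual degradation.
def remaining(annual_loss, years):
    return (1 - annual_loss) ** years

for years in (10, 20):
    print(years,
          round(remaining(0.055, years), 2),   # GEO cell, 5.5 %/year
          round(remaining(0.008, years), 2))   # ground cell, 0.8 %/year
```

After 20 years the GEO cell is down to about a third of its initial output while the ground cell still delivers about 85 %, which is why a 20-year design life in GEO is optimistic.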

Diffraction Killed the Radio Star

The single greatest problem for space solar is transmitting the power from space down to the ground where we use it. The most common proposal is to use microwaves for wireless power transmission. However, as anyone who has seen a solar power satellite proposal will know, the satellite needs an enormous antenna and the ground station an even larger rectenna. The reason is the phenomenon of diffraction:

a = 1.22 λ·L / d

where 'a' is the distance to the first minimum, given 'λ' the wavelength, 'L' the distance between the satellite and the rectenna, and 'd' the diameter of the antenna on the satellite. The big problem microwave power transmission runs into is the enormous wavelength. The standard frequency for microwave transmission is 2.45 GHz (a 12.24 cm wavelength). This frequency is not attenuated much by the atmosphere or water vapour, although the band is already crowded with cordless phones, Wi-Fi, and microwave ovens, which creates a conflict. Compared to visible light at 500 nm, the diffraction of these microwaves is far, far greater (about 250,000×).
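Plugging the numbers into the formula shows the scale of the problem. This sketch uses the GEO altitude for L; the ground-spot figures come out close to (though not identical with) the sizes discussed below, which presumably used rounded inputs:

```python
# First-minimum (Airy) radius on the ground: a = 1.22 * lambda * L / d.
C = 2.998e8        # speed of light, m/s
L_GEO = 3.5786e7   # GEO altitude, m

def airy_radius(wavelength, aperture_diameter):
    return 1.22 * wavelength * L_GEO / aperture_diameter

lam_microwave = C / 2.45e9                        # ~0.1224 m
r_microwave = airy_radius(lam_microwave, 1000.0)  # 1.0 km space antenna
r_infrared = airy_radius(2.25e-6, 1.0)            # 2250 nm laser, 1.0 m aperture

print(round(r_microwave / 1000, 1))   # ~5.3 km ground radius for microwaves
print(round(r_infrared, 0))           # ~98 m ground radius for the IR laser
```

A kilometre-scale antenna in space still spreads the main microwave lobe over a spot kilometres across on the ground, while a near-infrared laser shrinks the spot to a couple of hundred metres.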

Figure 1: Ground intensity profile (Airy pattern) of a 2.45 GHz, 1.0 km diameter geosynchronous transmitter.

As we can see from Figure 1, the required sizes of the transmitter (antenna) and receiver (rectenna) are huge. The space-based antenna needs to be at least 1 km in diameter, which makes it far larger than any satellite ever proposed by a major space agency.

The rectenna (or receiver) is quite simple but extremely large. To absorb the microwave energy as AC power, one simply strings a regular array of wires spaced about 12.24 cm apart. The problem, of course, is that this pattern needs to be replicated over an area of 78.5 km2. That's nearly 8000 hectares (almost 20,000 acres), a fairly massive continuous piece of land. I know that space solar advocates like to suggest that the land under a rectenna could still be farmed, but it is large enough to cover several farms, there is going to be a huge NIMBY problem, and the sidelobe zones outside the rectenna will receive a fairly high dose.

While microwave transmission is quite efficient at the emission and reception ends, there will be some loss in the atmosphere. The major loss comes from the microwaves that fall outside the central maximum of the Airy pattern: roughly 84 % of the total intensity is contained within the central disk, and the sidelobes are not worth capturing. Overall, microwave power transmission is usually assessed at 80 % efficiency, which is really quite good. Additional losses come from transmitting the power over long distances on the ground.

It should be very clear that any solar power satellite that proposes to use microwave power transmission is no small project. Far from it: it requires us to construct structures larger than any ever built by the human race. Needless to say, this is not something you can finance with a trip to your local bank, and it's not even clear which countries could afford such a thing on their own. It's quite the antithesis of the small, incremental solar panel.

Death Star Option

One possibility that I've been asked about before is using some sort of monochromatic emitter (i.e. a laser) in the near infrared to beam power from space to the ground. This sidesteps much of the diffraction problem: an infrared laser with an aperture of 1.0 m would only require a photovoltaic array on the surface with a diameter of about 180 m.

The problem with using light near the visual spectrum is that our means of generating it are not very efficient. One is basically forced to use a combination of laser diode emitter and photovoltaic receiver. Examining the Air Mass standards gives a good idea of which regions of the near infrared are suitable for wireless power transmission. The best match is probably a 2250 nm laser coupled to germanium solar cells; this region has a transmittance of about 75 % through the scale height of the atmosphere in good conditions.

For emission, one would want something like an array of quantum wire laser diodes. Laser diodes with quantum (very small) emission structures have become very efficient, recently reaching 70 %. Eventually we will probably see 85 % efficient quantum wire or quantum dot laser arrays. Lasers do, however, have fundamental limits on their efficiency: a laser must be pumped with a higher-energy electron or photon than the photon it emits.

Photovoltaics can be significantly more efficient than normal here because they can be closely tailored to the transmitted wavelength. Normally much of the inefficiency comes from light of the wrong wavelength losing some of its energy when a solar cell absorbs it. If the light incident on the photovoltaic panel is well matched, the efficiency increases markedly. Under one sun the theoretical efficiency limit for germanium (0.55 eV) is about 70 %, and it is 85 % under heavy concentration, as any beamed power system would be. The very best contemporary commercial cells reach about two thirds of their theoretical limits. A reasonable assumption might be that we could achieve 60 % conversion efficiency of the beamed power by a photovoltaic array.

So to sum up: after sunlight is captured in space and converted to electricity, we must convert it to lasing photons (0.85), beam it through the atmosphere (0.75), and collect it on the ground (0.85 for diffraction and 0.60 for photovoltaic conversion). The total efficiency of the system is 0.85 · 0.75 · 0.85 · 0.60 ≈ 0.33. Thus, while the infrared wavelength system is more practical from the perspective of scale, its efficiency advantage over ground-based solar is marginal at best. Poor atmospheric conditions will render ground-based solar more efficient and cheaper.
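The loss chain multiplies out as follows, using the stage efficiencies just listed:

```python
# End-to-end efficiency of the laser power-beaming chain.
stages = {
    "laser emission": 0.85,
    "atmospheric transmission": 0.75,
    "diffraction capture": 0.85,
    "photovoltaic conversion": 0.60,
}
total = 1.0
for name, eff in stages.items():
    total *= eff
print(round(total, 3))   # ~0.325: about a third of the energy survives
```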

Even if a wavelength transmits well through water vapour, it will still be diffused by contact with a cloud (which is, after all, condensed water droplets). Clouds cause diffusion because light refracts at the water-air interface of each droplet. Contrary to popular culture, clouds are not 'transparent' to near infrared light. For water droplets to be transparent to light, the wavelength needs to be larger than the dimensions of the droplets.

Simple Wireless Power Transmission

It should be clear by now that the transmission stage is the most nettlesome part of space solar power concepts. Is there a better concept available? Perhaps we should return to the KISS (Keep It Simple, Stupid) paradigm and look for a better way. Our objective is two-fold: deal with the intermittency aspect and increase the efficiency of our expensive photovoltaic devices.

One science fiction concept that might actually have an application here is the ubiquitous space mirror. A parabolic mirror in space is capable of redirecting light from the sun to the surface with a fair degree of concentration. It redirects the most power in the middle of the night, when it is on the far side of the Earth from the sun, and none at noon. The bonus, compared to a space-based photovoltaic collector, is that it can be extremely simple. Only a thin layer of aluminium on a flexible plastic substrate is necessary, so the areal mass density would be far lower than for a solar power satellite.

Prototypes of space mirrors have already been launched by the Russians (a later test in 1999 failed). An array of space mirrors could provide more even insolation over the course of the day for a large array of photovoltaic panels. For a big solar farm in Arizona or North Africa, there could be a significant benefit on intermittency as well as economic gain from the extra power generated.


Bailey, S. G. and Flood, D. J. (1998). "Space photovoltaics." Progress in Photovoltaics: Research and Applications 6(1): 1-14.

Nelson, J. (2003). The Physics of Solar Cells. London, Imperial College Press.

24 August 2006

Pluto a Dwarf Planet


21 August 2006

Water and Plug-ins Aren't Miscible

One issue hiding under the table of the plug-in concept is that your hydrocarbon fuel may end up sitting in the tank for months if you rarely drive beyond your vehicle's electric range. This bodes ill for gasoline, which can gum up as solids settle out. Summer and winter formulations of gasoline also differ (winter gas contains hydrocarbons that can evaporate at summer temperatures).

Water accumulation can cause greater problems. Gasoline can only hold about 0.02 % water by volume before the water separates. This memo from the Environmental Protection Agency details some of the issues (thanks to Robert Rapier, who tracked it down for me). If a car is left outside through many hot/cool cycles, condensation can increase the amount of water in the tank. When it gets cold in the winter, the water can freeze or simply stall the engine as it drops to the bottom of the tank. Water expands when it freezes, so you don't want freezing to occur in your fuel lines or pump.
Ethanol mixed with gasoline complicates the situation. Ethanol likes to absorb water. Anyone who knows anything about liquor probably is aware that ethanol and water are miscible, i.e. they will mix in any proportion. For a vehicle running on pure ethanol this wouldn't be a serious problem. The presence of water in the fuel reduces fuel economy because it is dead weight that needs to be vapourized and brought up to high temperature in the cylinder. However, that's all it does.
When you mix gasoline, ethanol, and water you get a ternary solution, as long as the water content is low enough. As the proportion of water increases it reaches a critical point, at which the ethanol and water separate from the gasoline and form a new solution. A water/ethanol solution is heavier than gasoline, so it will drop to the bottom of the tank.

As you can see from the graph, the tolerance drops with lower temperatures. If water manages to build up in the tank over the summer and fall, then in the winter the tolerance drops along with the temperature. Where previously you were just sipping gasohol with 0.5 % water content, you might end up with half a litre of ethanol and water at the bottom of your tank after an overnight cold snap. Then, when the driver stomps on the pedal to add the engine's power to the electric drive, the engine will turn over but then splutter and not fire.

It's generally quite difficult to separate water and ethanol. Ethanol has a higher vapour pressure than water (meaning it evaporates faster) and is itself polar, so it's not practical to use any sort of desiccant to remove water from gasohol. The best solution is generally to keep the tank full and use a better seal on the gas cap. Stations will also have to take steps to ensure that water doesn't infiltrate their storage facilities before the fuel is sold to the customer. Overall it's a fairly minor problem that can be overcome with good design.

20 August 2006

Baby Steps Along the Plug-in Path

As I mentioned in my last post on the drawbacks of hydrogen, it has problems with production, storage, and distribution. It is well known that electric vehicles thump hydrogen on production efficiency. With regards to storage, the problems for batteries are greater than hydrogen but are still technical rather than theoretical in origin. Distribution is not a problem at all for electrics and plug-ins.

Not only does the electrical grid already exist, but also progress towards a full-fledged plug-in hybrid can be made in small baby steps that do not require any massive investments of capital.
  1. The battery capacity can be tailored to the user. Ideally a plug-in hybrid should have enough electric range for the daily commute. Since battery storage is expensive, no one will want to buy more than is economical for them.
  2. As transportation energy consumption is shifted to electricity, the proportion of hydrocarbon fuel derived from biomass or synthetic sources can be increased at the expense of fossil fuels. This does little for climate change but has a major mitigating impact on the problems of peak oil.
  3. Improvements in hybrid technology that boost the tank-to-wheel efficiency of cars can be gradually introduced. This includes adding ultracapacitors to increase the power that can be recouped by braking, lightening of the body, etc.
It is easy to imagine the battery capacity of plug-ins advancing in small steps. Right now the biggest markets for high-power batteries are power tools and laptops. Widespread introduction of hybrids, with their large battery packs, should drive the price down (as long as it doesn't cause any material shortages). The Toyota Prius currently carries a 1500 Wh battery pack. Assuming it can discharge 50 % of its capacity and consumes 160 Wh/km, the electric range is already almost 5 km. The main reason the Prius doesn't have an all-electric switch (in North America) is that the battery and transmission can't provide enough power to achieve high speed. That requires a series hybrid setup, where there is no transmission: propulsion is provided solely by electric motors.
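The range figure follows directly from the pack numbers quoted above:

```python
# Electric range from pack size, usable depth of discharge, and consumption.
PACK_WH = 1500.0            # Prius battery pack
USABLE_FRACTION = 0.50      # assumed usable depth of discharge
WH_PER_KM = 160.0           # assumed consumption

range_km = PACK_WH * USABLE_FRACTION / WH_PER_KM
print(round(range_km, 1))   # ~4.7 km of electric-only range
```

Doubling the pack or halving consumption scales the range linearly, which is what makes the incremental path credible.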

Switching from a parallel to a series hybrid is a big step. It turns the engine into a generator, which means it can run at a constant speed. This will benefit diesel engines significantly on the side of pollution emissions. It also means that the internal combustion generator component of the car can shrink substantially: the engine only needs to provide enough power to keep the vehicle at highway cruising speed. For a car such as the Prius travelling at 110 km/h, that's only about 15 kW (or 20 hp) to overcome drag and rolling resistance. Note that is power needed at the tires, not what's developed at the crankshaft. All of the surge acceleration can be handled by the stored electrical energy; the torque of electric motors is extremely high.
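The ~15 kW cruise figure can be reproduced with a standard road-load model. The drag and rolling-resistance coefficients here are my assumed values for a Prius-class car, not numbers from the post:

```python
# Road-load power at the tires: aerodynamic drag + rolling resistance.
RHO_AIR = 1.2       # air density, kg/m^3
CD = 0.26           # drag coefficient (assumed)
AREA = 2.3          # frontal area, m^2 (assumed)
C_RR = 0.010        # rolling-resistance coefficient (assumed)
MASS = 1300.0       # vehicle mass, kg (assumed)
G = 9.81            # m/s^2

v = 110 / 3.6       # 110 km/h in m/s
p_drag = 0.5 * RHO_AIR * CD * AREA * v**3
p_roll = C_RR * MASS * G * v
print(round((p_drag + p_roll) / 1000, 1))   # ~14 kW at the tires
```

Drag grows with the cube of speed, so the generator sizing is quite sensitive to the assumed cruising speed.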

I exaggerate a touch but the hydrogen path benefits little from hybrid technology. One can't take all the benefits of the hybrid path and then at the end of the road decide to switch to the less efficient hydrogen vehicle. Does not compute, QED.

One reaction to the general malaise around hydrogen hype lately comes from Hydrogenics, which has joined the Plug-in Development Consortium (hat tip to Green Car Congress). This is a fairly major step down for a major Proton Exchange Membrane fuel cell equipment manufacturer. The plug-in concept marginalizes the on-board generator by attempting to use battery capacity to its maximum.

At some time in the far future we might find ourselves in a situation where electricity can be produced extremely cheaply in prodigious quantities. The most likely candidate would be solar power using thin-film technology that can be produced without high-vacuum manufacturing. In that case the efficiency of energy storage might become irrelevant, and one might assume a hydrogen vehicle could be produced for significantly less, even if it consumed 2-3 times more energy. Currently Honda's fuel cell vehicle costs about $2,000,000, so they have a long way to go. In fact the concept is so far in the future, what possible business does a commercial company have doing research on the subject? It's futurist stuff, which is why hydrogen is often talked about in the same breath as carbon nanotubes and superconductors. The path to improving the status quo clearly lies along the one Toyota chose: the hybrid vehicle. We have to keep kicking car builders in the pants so they continue to move forward.

23 July 2006

Hydrogen's Death Knell?

Update: Ben has now posted an interview with Ulf Bossel. If you're masochistic you can also listen to me try to make the same points here.

A major announcement (hat tip to theWatt.com) came out of the Lucerne Fuel Cell Forum two weeks ago. The president of the conference, Ulf Bossel, presumably with the support of the organizing committee, announced that the pre-eminent European fuel cell conference would no longer provide a forum for the discussion of hydrogen fuel cells, owing to the unsuitability of hydrogen as a fuel to power our economy.
Fuel cells are energy converters, not energy sources. They will be part of a sustainable energy solution only if they can compete with other conversion technologies. This includes system parameters, fuels and applications. Time has come for a critical assessment.


The European Fuel Cell Forum is committed to the establishment of a safe energy future. Therefore, it will continue to promote fuel cells for sustainable fuels, but discontinue supporting the development of fuel cells for hypothetical fuel supplies. Time has come for decisions. Keeping all options open is not an adequate response to mounting energy problems.

Therefore, the schedule of the European SOFC Forum will be continued in 2008 with an extended conference every second year. Beginning 2007 (July 2 to 6) sustainable energy topics will be emphasized in odd years. Despite earlier announcements the European PEFC Forum series will not be continued.

A series of technical reports on the subject by Ulf Bossel and others is available here. I discussed this on this week's theWatt.com podcast. I would like to reiterate some of the arguments here.

Just to provide a bit of background, there are basically two general categories of fuel cells: those that operate at low temperature − typically below the boiling point of water − and those that operate at a high temperature of at least several hundred degrees. The proton exchange membrane (PEM) fuel cell is the archetypal low temperature fuel cell, along with the direct methanol fuel cell. There are many more high temperature fuel cells − the solid oxide and molten carbonate types, for example. The PEM is unique in that it can burn only pure hydrogen; all of the other types can directly burn hydrocarbons of one sort or another.

The proponents of the proton exchange membrane fuel cell have promulgated this concept of the 'hydrogen economy' that I'm sure all my readers have seen references to. Basically, the hydrogen economy is the idea that we can shift from fossil fuels to hydrogen as the chemical fuel powering our economy. The transformation would begin with the transportation sector and eventually propagate to residential fuel cells providing combined heat and power.

There are three major shortcomings of the hydrogen economy concept:
  1. Production.
  2. Storage.
  3. Distribution.
Distribution is basically a chicken-and-egg problem: no one wants to buy a hydrogen car until fueling stations are available, and no corporation wants to invest in hydrogen fueling stations until there are customers on the road. Building the hydrogen economy would require an absolutely massive capital investment. For example, none of our current natural gas pipelines can handle hydrogen, because hydrogen embrittles the pipeline steel.

The storage problem is partly technological and partly the laws of physics. The basic difficulty is that hydrogen has an extremely low density: liquid hydrogen has a density of only about 71 kg/m3. To get hydrogen from a lighter-than-air gas into some usable stored form, it needs to be compressed, liquefied, or chemically bonded, and all of these consume a large fraction of the hydrogen's energy. Hydrogen is not like gasoline: you cannot pull up to a station and pump your tank full in a couple of minutes. A lot of people don't realize that filling a high-pressure compressed hydrogen tank can easily take 30-60 minutes.

The last problem, production, is the most fundamental, and it's the basis of the schism that occurred at the Lucerne Fuel Cell Forum. Unlike fossil fuels, hydrogen doesn't exist free in nature on Earth − we can't poke a hole in the ground and pump out hydrogen formed from long-dead plants. Hydrogen isn't an energy source; it's an energy currency, like electricity. Elemental hydrogen has to be produced from other compounds such as water or hydrocarbons.

When it comes to producing hydrogen from fossil fuels, the high temperature fuel cell guys rightly get out their singing voices and break into their best rendition of "Anything You Can Do (I Can Do Better)" from Annie Get Your Gun. The solid oxide fuel cell and the other types can all burn natural gas, gasified coal, and biomass at a higher efficiency than converting those feedstocks to hydrogen and using a PEM cell.

That leaves producing hydrogen by the electrolysis of water, which is the supposedly 'green' option. The reality is that the electrolysis-to-fuel-cell path is a terribly inefficient way to convert solar, wind, or nuclear energy into useful work. Consider the production of hydrogen from wind power. First you have to rectify the alternating current to direct current to power the electrolyzer, at about 90 % efficiency. The electrolyzer itself is optimistically 75 % efficient, so you lose another quarter of your energy there. Then you need to store the hydrogen, say by compressing it to high pressure, which consumes about 20 % of the hydrogen's energy content; distributing it takes perhaps another 10 %. When the hydrogen finally reaches the fuel cell, remember that the fuel cell is maybe 50 % efficient. The product of the fuel cell is direct current electricity, so in the end we've gone through a whole series of steps in a big circle. Multiplying all these factors together, the well-to-wheel (or source-to-sink) efficiency is only about 25 %.

The obvious question that Ulf Bossel and people such as myself ask is: why go to all that trouble? Why not just transmit and use the electricity directly? High-voltage direct current transmission is just as efficient as pipelining hydrogen. If we allow 90 % efficiency for rectifying and 90 % for transmission, we end up with 3.3 times more energy in the electricity economy than in the hydrogen economy. Including batteries doesn't change the math much, because the round-trip efficiency of batteries is really very high − about 90 % for lithium-ion. As Bossel states, hydrogen cannot compete with its own energy source − in this case, electrons. The poor efficiency of the hydrogen economy is not something that has a solution through improved technology: the laws of thermodynamics set the limits here. All the extra steps in the hydrogen chain produce entropy, and there is no way past certain theoretical limits on the efficiency of each stage.
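The comparison in the two preceding paragraphs reduces to a few multiplications:

```python
# Source-to-sink efficiency: hydrogen chain versus direct electricity.
hydrogen = 0.90 * 0.75 * 0.80 * 0.90 * 0.50
# rectify * electrolyze * compress * distribute * fuel cell
electric = 0.90 * 0.90
# rectify * HVDC transmission

print(round(hydrogen, 3))             # ~0.243
print(round(electric, 2))             # 0.81
print(round(electric / hydrogen, 1))  # electricity delivers ~3.3x as much
```

No single stage looks disastrous on its own; it is the chain of five multiplications that sinks the hydrogen path.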

The inefficiency of hydrogen isn't something we can afford environmentally. Would anyone consider it better to build three wind turbines rather than one, or three nuclear power plants rather than one? If you try to figure out how many power plants we would need to implement the hydrogen economy, it becomes readily apparent what a fantasyland it is. The USA uses approximately 20 million barrels of oil per day. If we were to replace every gallon of gasoline with a kilogram of hydrogen, we would require 1.4 TW of continuous power. However, only about 0.9 TW of generating capacity is currently installed, of which about two thirds is used on average, so the idea of using night-time power to produce hydrogen won't work. The existing infrastructure is incapable of powering a hydrogen economy − we're talking about 1500 large nuclear power plants. So not only would we have to replace our entire fuel distribution network, we would also have to massively ramp up electricity production. The expense of the whole idea is terrifying.
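An order-of-magnitude version of that calculation. The gasoline volume and the electricity needed per kilogram of hydrogen are my assumptions, not the post's (which evidently used somewhat different figures to reach 1.4 TW), but the conclusion is the same: terawatts, not gigawatts:

```python
# Continuous power to replace US gasoline with electrolytic hydrogen.
GALLONS_PER_DAY = 3.8e8    # ~9 million barrels/day of gasoline (assumed)
KG_H2_PER_GALLON = 1.0     # the post's substitution rule
KWH_PER_KG_H2 = 60.0       # electrolysis + compression (assumed)

kwh_per_day = GALLONS_PER_DAY * KG_H2_PER_GALLON * KWH_PER_KG_H2
tw_continuous = kwh_per_day / 24 / 1e9   # kW averaged over a day -> TW
print(round(tw_continuous, 2))           # ~0.95 TW continuous
```

Even this conservative version lands at roughly the scale of the entire installed US generating fleet, dedicated to hydrogen alone.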

This is sharply contrasted with developing plug-in vehicles, electrified rail, and the like. Because the electricity path is so much more efficient, we can dump almost all of our transportation energy needs on the existing electricity grid. Throw in some efficiency improvements in the residential and commercial sectors, and everything is peachy. The existing electrical grid may not seem as sexy as hydrogen, but it's definitely the better option.

What's happened at Lucerne is that the rest of the fuel cell community has gotten tired of the empty promises of the hydrogen economy and is fighting back. PEM fuel cells have been receiving a disproportionate share of all alternative energy funding. What Ulf Bossel is saying is that we have to refocus our efforts and money on technologies we know will actually work, rather than an idea put forth by a special interest group. Of course, the PEM researchers don't want to hear this. Careers are at stake, so I wouldn't expect abandonment of the hydrogen economy concept quite yet.

12 July 2006


I'm going on strike until the quantity and quality of comments improves, or until I get back from my vacation, whichever comes first.

10 July 2006

Scientific Thinking for Energy and the Environment

Billmon, one of the more amusing bloggers out there, recently wrote an acerbic rendition of Al Gore's quest to moderate global warming. However, the bulk of his post is not about global warming per se but is rather a grouchy old man railing about how thick-headed the people of this world are. While the gist of his argument is true, it is also not novel. If you went back to Victorian times and surveyed the intellectual blowhards of that period, I am sure they would feel that an even larger segment of the population was composed of blockheads than we would today. This is not to say that we should be satisfied with the way our education system currently works; clearly it doesn't go far enough in training the minds of young pupils. Spoon-feeding facts into someone's brain isn't useful if they lack the capacity to properly utilize them.

Many people lack the ability to differentiate science from pseudo-science and moreover could not even suggest how one might go about doing so. When I write a post about wind or biofuels and then follow it up a couple of days later with another discussion that pans my original talking points, it is not because I lack "strength of conviction" or some other absurd form of weakness. I challenge myself because I am fully aware of the fallibility of my assumptions and the need to constantly reassess my position to ensure that it is factually and quantitatively correct.

The only way to enlighten people is to slowly chip away at their preconceptions of the world. The idea of critical thinking as a standard practice needs to insidiously infiltrate the manner in which people think. This is especially important in the world of energy and the environment, which are now so interwoven as to form a Gordian knot. To me, when an environmental group such as Greenpeace boils down their energy policy to solar and wind, it is just as intellectually dishonest as CEI's "Carbon Dioxide is Life" campaign. Both policies can be easily demonstrated to be bogus.

I would like to discuss some of the basics of scientific thinking. This is adapted from a small book, "Miniature Guide on Scientific Thinking," by the Foundation for Critical Thinking. Unfortunately, it no longer appears to be available on their website.

Development of the Scientific Mind
  1. Unscientific thinker: Unaware of significant problems in thinking about scientific issues. Hence one is unable to distinguish science from pseudo-science.
  2. Challenged thinker: Begins to recognize that one often fails to think scientifically when considering scientific questions.
  3. Beginning Scientific Thinker: Tries to improve scientific thinking but lacks regular practice in it.
  4. Practicing Scientific Thinker: Recognizes the need to regularly practice scientific thinking in order to maintain proficiency.
  5. Advanced Scientific Thinker: Advances by maintaining regular practice in scientific thinking.
  6. Accomplished Scientific Thinker: Good habits of scientific thought have become second nature.
Obviously one needs to advance, step-by-painful-step from one stage to the next. As the text puts it:
Scientific thinkers routinely apply intellectual standards to the elements of scientific reasoning as they develop the traits of a scientific mind.
So what are these standards, and how do they lead us to scientific thinking?

Essential Intellectual Standards, which
  must be applied to the
Elements of Scientific Thought (including Points of View and Sources of Error)
  to develop the
Traits of a Scientific Mind (Confidence in Reason, Intellectual Empathy, Intellectual Courage)
Wikipedia is not a bad place to start if you don't understand the difference between precision and accuracy, correlation and causation, or inference and implication. If you would like some less escapist summer reading, and would like to start the process of training your mind, Carl "Billions and billions" Sagan's book "The Demon Haunted World: Science as a Candle in the Dark" is a good place to start. Better yet, give it to a friend once you've read it. For a more humourous look at the world from a skeptic's perspective, Bob Park's What's New is a weekly snarky look at events in the USA that are relevant to the world of science.

04 July 2006

Texas Power Mixer


Mix one part water, one part Gulf crude, and one part pulverized enriched uranium in a silicon barrel. Stir with a wind turbine blade and serve piping hot. Warning: may cause urges to clear brush.
In the future we can assume that electrical power will be generated by a mélange of sources:

  1. Base-load power: steady output from thermal plants powered by nuclear fission or coal.
  2. Renewable power: intermittent power captured from our environment.
  3. Load-following power: Hydroelectric or natural gas turbines that can rapidly shift their output to meet demand or compensate for fluctuations in renewable supply.
I've been meaning to write a post that details a hypothetical future power mix where a large proportion of electricity is generated by renewable sources. For whatever reason, I've now finally gotten around to it. The objective of this post is to take some demand data, generate some supply data for renewables, and mix it all together in order to get a grasp of what the general case might be.

I picked Texas as a source of electricity demand data simply because the information is published on the web. I used the data for June 15th, 2006 but then scaled it up so that the maximum demand for the day was 100 000 MW.

I then tried to meet this demand with 35 000 MW of nuclear power, 120 000 MW of wind power, 75 000 MW of solar power, and 65 000 MW of hydroelectric power. The wind data is simulated but the solar data is real, for San Antonio in 2003. In order to try to smooth the data to reflect geographic distribution of wind and solar farms I averaged the daily wind speeds and insolation for five days spread over a total of five months. I think it is a reasonable approximation to use temporal averaging to imitate geographical averaging. In spite of averaging the data over such a long period there's still a considerable bottoming out in the wind power production at 5:00 am. This is an irritating part of the nature of wind: it is unreliable and hence must be backed by spare capacity from other sources.

Once the nuclear, wind, and solar power production was all accounted for I used hydroelectric power to fill in the gap (when it existed) between supply and demand. The objective is to preserve the water in the reservoir (or natural gas supplies) whenever possible. This allows us, essentially, to trade some of the inherent power quality of hydroelectric to the low power quality renewable sources.
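The fill-in logic is simple enough to sketch in a few lines of Python. The hourly numbers below are invented stand-ins for the real demand and weather series; only the dispatch rule itself reflects the method described above:

```python
# Merit-order sketch: nuclear runs flat, wind and solar are taken as they
# come, and hydro fills any remaining gap (up to its capacity) in order to
# spare the reservoir. Surplus beyond demand is logged as excess production.
def dispatch(demand, nuclear, wind, solar, hydro_capacity):
    hydro_used, excess = [], []
    for d, n, w, s in zip(demand, nuclear, wind, solar):
        gap = d - (n + w + s)
        if gap > 0:
            hydro_used.append(min(gap, hydro_capacity))
            excess.append(0)
        else:
            hydro_used.append(0)
            excess.append(-gap)  # surplus: a candidate for storage
    return hydro_used, excess

# A made-up four-hour example, all values in MW
demand = [95_000, 100_000, 98_000, 90_000]
nuclear = [35_000] * 4
wind = [20_000, 15_000, 30_000, 50_000]
solar = [10_000, 25_000, 20_000, 10_000]
hydro, surplus = dispatch(demand, nuclear, wind, solar, hydro_capacity=65_000)
# hydro → [30000, 25000, 13000, 0], surplus → [0, 0, 0, 5000]
```

In the last hour the wind picks up and renewables overshoot demand, so the reservoir is left untouched and the surplus shows up as excess production.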
Figure 1: Simulated electricity production to meet mimicked Texas demand.

The results are fairly intuitive. Base-load power provides the solid bottom portion of the power mix. Wind is chaotic. Solar correlates mildly with the daily peak in demand, but not as well as we might want. The correlation could be tweaked by playing with time zones.

Table 1: Daily Power Demand

  Total Daily Energy Demand:  2 262 622 MWh
  Maximum Power Demand:       100 000 MW
  Minimum Power Demand:       87 038 MW
  Excess Production:          60 853 MWh

There is some excess power produced during the day, although it is a relatively small proportion of the total (2.3 %). This is a result of the unpredictable nature of wind as much as anything else. While solar can be forecast relatively well, wind cannot.

Table 2: Daily Power Generation

  Source    Total Generation    Peak Generation    Capacity     Production Share    Model Capacity Factor
  Nuclear   840 000 MWh         35 000 MW          35 000 MW    36.2 %              0.7 - 0.9
  Wind      605 860 MWh         41 509 MW          120 000 MW   26.1 %              0.2 - 0.3
  Solar     337 515 MWh         48 300 MW          75 000 MW    14.5 %              0.15 - 0.25
  Hydro     540 099 MWh         45 966 MW          65 000 MW    23.2 %              0.0 - 1.0

There are certainly a number of notable facts to take from this little simulation of a day in the life of Texas electrons. One is that the need for dispatchable, load-following power is quite high (~ 25 %). For a nation like the USA, which is poor in hydroelectric power, this implies a continuing need to burn large quantities of natural gas for electricity in the future. The other is that with the widespread deployment of renewables you are almost guaranteed to produce more power than you can use on a regular basis. It would be very nice to incorporate power storage solutions like pumped hydro or flow batteries into the grid to utilize this excess power, but my past work generally indicates that stationary arbitrage systems have poor margins. The next step past storage is deferrable demand, generally in the form of plug-in hybrid or electric vehicles. A plug-in hybrid does not need to be fully charged from the moment it is plugged into the grid because it has a backup chemical fuel supply. Instead it can wait and accept energy only when the grid is saturated, and presumably get a cheaper rate for its trouble. In the case of a plug-in hybrid, the battery is purchased not for the primary purpose of conducting arbitrage but for providing power for a vehicle. In a sense, the secondary benefit to the electricity grid would be subsidized by personal transportation.
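The deferrable-demand rule can be sketched as follows; the function name and the battery figures are invented for illustration:

```python
# A plug-in hybrid charging rule: draw grid power only when the grid has
# surplus generation, since the vehicle can always fall back on its
# chemical fuel. All names and numbers here are hypothetical.
def should_charge(surplus_mw, battery_kwh, battery_capacity_kwh):
    if battery_kwh >= battery_capacity_kwh:
        return False        # battery already full, nothing to defer
    return surplus_mw > 0   # otherwise, charge only from surplus power

# With 5000 MW of surplus on the grid and a half-full battery, charge;
# with no surplus, wait (and burn liquid fuel if the trip can't wait).
should_charge(5_000, battery_kwh=4.5, battery_capacity_kwh=9.0)   # True
should_charge(0, battery_kwh=4.5, battery_capacity_kwh=9.0)       # False
```

A real implementation would presumably respond to a price signal rather than a direct surplus measurement, but the effect on the grid is the same.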

02 July 2006

Incremental Capital Investment Advantage of Renewable Energy Sources

A comment Nick made about interest on the capital costs of a photovoltaic power plant got me thinking: maybe one of the reasons that wind and solar power are experiencing such explosive growth is their small incremental capital cost. Wind turbines typically come with a power rating of 1000 - 3000 kW, while photovoltaic systems can be built in any practical increment. On the other hand, a nuclear power plant, a hydroelectric dam, or any other centralized power plant can only be built in extremely large increments (300+ MW). The capital cost of a nuclear or coal power plant is three or four orders of magnitude higher than that of a wind turbine. Because of this, a wind power investment can track the exponential growth curve much more closely than big thermal power plants.

The financial advantage essentially works like this: assume that you have a nuclear plant and a wind farm and that both systems have the same rate of return. For my case study, I will assume that each system's net revenue is 20 % of its capital cost every year. Nominally you could build a new centralized power plant every five years if you reinvested all of your profits. Consider instead a wind farm that produces the same average power (with the same capital cost per megawatt). If each turbine cost 1/1000th that of a major nuclear plant, then a new one could be purchased every two days. As will become apparent, the law of compound interest greatly favours the source that can be built in smaller increments.

In order to test this theory I constructed a model to compare the growth of an abstract wind farm and nuclear plant investment. Both the nuclear and wind farm investor took a loan sufficient to build 1000 MW of capacity. This is assumed to be average capacity, with the capacity factor of each already included. I will also incorporate a 2 % interest rate − indexed to inflation − on all cash balances held (positive or negative). The last thing I would like to model is a delay between the allocation of funds and the power plant actually coming on-line. For the 'nuclear' the delay is two years to account for construction, for 'wind' the delay is thirty days.

The interest rate is quite small compared to the profit on the plants, so it should take approximately five years for the nuclear plant to pay off the loan, another five years to accumulate enough capital to pay for a new one, and two more years to actually build the plant. That's twelve years before the second nuclear plant can be constructed. For wind, it will take the same five years to pay off the loan, but then it will only take two days to earn enough money to build another turbine and thirty days for the turbine to be installed. One can guess that this is not going to turn out well for the nuclear option, but nothing illustrates this better than a figure.
Figure 1: Step-like growth of large capital nuclear power plants (red) versus
more continuous small capital wind turbine farm (blue). Exponential
growth curve (black) at given rate of return (0.2/annum) added for reference.

After twenty years the wind farm will have outgrown the nuclear capacity 17 850 MW to 5000 MW. The wind-based power system will provide 3.6x more power than the nuclear-based system. This doesn't let wind power off the hook with regard to its intermittent nature, but it certainly goes a long way toward explaining why solar and wind are seeing such explosive growth around the world. One could certainly play with my model to include other factors, or change the rate of return or interest rates. However, the big picture isn't going to change unless the large-capital power installations can massively outperform solar and wind with regard to the rate of return.
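A toy version of my model might look like the sketch below. The cost and delay parameters are the ones assumed above (capital cost normalized to 1 unit per MW), but the day-by-day bookkeeping is simplified, so the exact capacities will differ somewhat from the figure:

```python
# Reinvestment model: profits buy new capacity in fixed increments, which
# come on-line only after a construction delay. Cash balances (positive
# or negative) accrue 2 % annual interest. Capital cost is 1 unit per MW.
def simulate(years, increment_mw, build_delay_days,
             rate_of_return=0.20, interest=0.02, initial_mw=1000):
    cash = -float(initial_mw)   # loan taken out for the first 1000 MW
    online = initial_mw         # MW currently earning revenue
    pipeline = []               # (completion_day, MW) under construction
    for day in range(int(years * 365)):
        while pipeline and pipeline[0][0] <= day:
            online += pipeline.pop(0)[1]        # construction finishes
        cash += online * rate_of_return / 365   # daily net revenue
        cash += cash * interest / 365           # interest on the balance
        while cash >= increment_mw:             # reinvest when affordable
            cash -= increment_mw
            pipeline.append((day + build_delay_days, increment_mw))
    return online

wind_mw = simulate(20, increment_mw=1, build_delay_days=30)
nuclear_mw = simulate(20, increment_mw=1000, build_delay_days=730)
# wind_mw comfortably exceeds nuclear_mw after twenty years
```

The nuclear investor spends most of each cycle sitting on idle cash waiting to afford the next 1000 MW block, which is exactly the compounding penalty the figure illustrates.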

The growth rate advantage is not the whole story, of course. It is much easier to secure a few million in financing than a few billion. Similarly, the incremental risk of putting up a new wind turbine is quite low. The risk is also reduced by the fact that there are no fluctuating fuel costs associated with renewables versus fossil or nuclear fuels.

01 July 2006

Canada and Kyoto

For my Canada Day post, I'd like to say that Canada is not doing well at all on the greenhouse gas emissions front.

The Conservative Prime Minister Stephen Harper and his Minister of the Environment, Rona Ambrose, have taken a lot of heat for abrogating the Kyoto treaty on climate change. My feelings on this have been ambivalent. While I hardly think the Conservatives are going to govern in a pro-environment fashion, at least they are being honest with the population. In a parliamentary system, when the governing party passes legislation, everyone knows who wrote it and who passed it. There are none of the bizarre amendments that are so omnipresent in the US Congress, and you can't run away from your record very easily. In comparison, all the Liberal party was doing was offering empty platitudes to the green vote. I mean really, can anyone name a successful program instituted by the Liberal party that helped reduce greenhouse gas emissions? The "One Tonne Challenge"? Please... Liberals pander to environmentalists in order to get their votes, but in office they don't actually effect any significant policy changes. Canada has a resource-based economy, and on our current course we could actually manage to pass the USA in per capita greenhouse gas emissions in spite of our relatively green electricity production infrastructure.

Much of the criticism leveled against Kyoto is also sound. I certainly can't see the value of shipping money to Eastern Europe (whose economies, and hence emissions, collapsed in the 1990s) in order to 'offset' CO2 emissions. It would be far better to spend that money nationally on programs that actually reduce greenhouse gas emissions. The fact is, as soon as the Bush administration decided not to ratify Kyoto, the treaty was dead. The American influence on the global economy and global environment is far too massive to be sidestepped. The fact that coal-powered China also has no incentive to take another path (aside from destroying their local environment) is another nail in the coffin.

The sad fact of the matter is that Canada could probably meet its Kyoto targets without a lot of fuss:
  1. Eliminate raw methane emissions. Methane has about 22 times the global warming potential of CO2, so the benefit on a mole-by-mole basis is very big. Methane mainly comes from two sources: the oil and gas industry and waste streams (landfills and wastewater). For the oil and gas industry, the government would need to institute regulations to vastly reduce the amount of methane that is permitted to leak out of natural gas pipelines and wellheads. Reducing methane production from wastewater (sewage) and landfills is a matter [edit] of capping them and introducing anaerobic bacteria to eat the carbohydrates and cellulose in order to produce biogas, a mixture of methane and carbon dioxide. Biogas is typically burned in large diesel engines to provide electricity; in the future solid oxide fuel cells could be used. [/edit]
  2. Introduce a feebate program on cars and light trucks without any loopholes. Canadian fleet fuel economy is pathetic. Longer-term programs to improve the transportation sector would include pushing freight onto electrified rail and improving the quantity and quality of public transit. I've lived in both Victoria and Edmonton, and in both the buses are generally filled to overcapacity during rush hour. Canada by itself is not big enough to induce new technologies like the plug-in hybrid to appear on the marketplace, but we can still make a huge impact by slowly shifting the structural framework of the transportation sector to use sustainable energy and be more efficient.
  3. Further green the electricity sector so that we can better make the argument that switching to electricity is the way to go. Investing in a more robust electrical grid with more DC connections will pay off in any future. 57 % of our electricity comes from hydro and 13 % from nuclear. The obvious missing factor is wind. While Alberta has some significant wind development (and amazing katabatic winds coming off the Rocky Mountains), Canada in general lags behind every developed nation in wind. The hydro provinces (Quebec, British Columbia, and Manitoba) are uniquely well suited to use their reservoirs to handle the intermittency problems of wind power. Alberta and Saskatchewan should be pushing carbon dioxide sequestration much harder. I live in Edmonton, so I understand that it would be practically impossible to convince the province to get off coal, but pumping CO2 underground has real economic value for tertiary recovery of conventional oil and gas. The experience at Weyburn has been positive and the government could be pushing this technology far, far harder. Ontario doesn't have the thick sedimentary basin of the prairie provinces, so they appear to be stuck with nuclear power for the moment. (Oh, and which is more environmentally benign, new nuclear plants in Ontario or new hydroelectric dams in Quebec?)
  4. Continue to increase EnerGuide standards on appliances and offer programs to encourage residential and commercial building owners to retrofit their structures to use less electricity and natural gas. Consider for example how grossly excessive the lighting in most commercial buildings is. The bathrooms in new buildings on the University of Alberta campus usually have motion sensors that turn on when you enter. This sort of technology should see more common use. Putting photosensors on hall lighting to turn the lights off when the sun is shining from the outside would be another positive step to reduce wasted electricity.
A pretty simple and achievable plan in my opinion: push hard on methane, increase car fuel economy standards and push public transit and rail, invest in green electricity generation, push efficiency and conservation through government standards. None of these policies would have a significant harmful impact on the economy and in the long-run, improving energy efficiency will benefit any country.

Manly-Man Shopping Bags

I've long been fed up with the collection of plastic shopping bags I accumulate from grocery shopping, only to have them 'recycled' in some Chinese incinerator. This is not to mention the lovely elongation behaviour of low-density polyethylene when you've got a 4 L jug of milk in one bag. For some reason I don't like cutting off the circulation to my fingers whenever I go out to buy milk.

Unfortunately, the vast majority of reusable shopping bags seem to be made out of some sissified natural fibre with some smarmy hippy logo on the side. Edmonton, being a city of pimped-up pickup trucks that have never had a load in the bed, let alone been off-road, doesn't always appreciate hemp bags with a cannabis label on the side.

What I want is a basic rugged, square-bottom bag with wide handles. So do any of my readers have any recommendations (assuming I still have any -- I hope you're all RSS subscribers...)?

On a different tack, CBC has been running a 'reality' television series called Code Green. Homeowners are given $15,000 to renovate their residence in order to reduce the amount of electricity, water, and heat that they consume. After the renovations are complete, the homes are monitored for a month to determine how well they have done. Each household then competes against the other families to see who can reduce their carbon footprint the most, with the top team winning a Toyota Prius.

A colleague and I were discussing how screwed up this competition would be if you tried to run it for apartment renters rather than homeowners. Take my case: I don't pay for heat or hot water, just electricity. However, the largest source of electricity consumption in my apartment is the refrigerator, an appliance that is the responsibility of my landlord. As such, the incentives for conservation are messed up. If I reduce my hot water consumption, my landlord saves money. If my landlord replaces my fridge, I save money. The problem is obvious. I can open the window in the winter to let in fresh air and totally ignore the extra natural gas burned in the building's boiler. My colleague doesn't even pay for power.

This is an obvious area in which government regulation on the way this consumption is paid for would be beneficial. My coin-operated laundry costs $1.75 for washing and $1.25 for drying. I pay the same price for washing in hot or cold water. Ben@theWatt.com has already noted that apartment washers are basically a big cash cow for apartment owners but wouldn't it be nice to have a slightly smarter system where I could pay less for washing with cold water rather than warm or hot?

25 June 2006

Baby Boomer Legacy

The Globe and Mail ran a series on Saturday regarding the Baby Boomers now that their leading edge is on the cusp of retirement. The section includes a variety of issues, such as health, snarky commentary (my favourite type), and a documentary on Tupperware. The timeline for the Boomer generation, Gen-X (the Bust), and Generation Why? (Baby Boom Echo) is given here. I am firmly ensconced in Generation Why?

Trying to make sweeping generalizations about the nature of a generation is somewhat useless, but we can look at the defining events of the baby boomer generation. First is the Cuban Missile Crisis. One can imagine what sort of impact this sort of trauma would have on young lives. The "Duck and Cover" ads (read: propaganda films) of the time were downright sinister. This has got to be the most widespread childhood trauma of the baby boomers. It wouldn't matter where you lived or how rich your parents were: the fear of the bomb hung over your head.

In America the next major event would be the Vietnam war. However, Vietnam is not applicable to Western Europe or Canada. Furthermore, given the way people have reacted to 9/11 and the invasion of Iraq I think I can safely say that the Cuban Missile Crisis trumps Vietnam for psychological impact.

The relevance of Neil Armstrong and Buzz Aldrin landing on the moon is tough for me to analyze. I see it as posturing that accomplished nothing of significance, but I understand that it's one of the events that led to the general optimism of the baby boomer generation.

The next events that one would expect to have a major impact on the collective consciousness of a generation are the oil shocks of the early and late 1970s. Baby boomers were young adults at the time. While the oil shocks had sweeping effects on the economy, the psychological effect appears to depend on which continent you live on. In Europe and Japan attitudes changed; in North America the same generation later popularized the SUV.

The 1980s and 1990s are mostly notable for what didn't happen. Gorbachev and Sakharov did manage to bring down the Soviet Union, which only added to the general euphoria of the baby boomers. The 1990s were truly dull from my recollection. What didn't happen was any change to the unsustainable status quo. I think the overriding legacy of the baby boomers that we can quantify is the massive debts they're leaving behind as they retire. I'm not simply talking of publicly held debt, but pension and health entitlements, infrastructure investment, and the burgeoning energy and environmental crisis. They will probably be held up in history as the least sustainable generation.

The service payments on public debt consume a major chunk of the budgets of developed nations. This is a good life lesson against living off your credit card, although too many people seem not to have grasped it.

On entitlements: in Canada the national pension plan has $100 billion in assets, as opposed to a box of IOUs in the USA. Our health situation is no better, however. As the baby boomers age, one can easily envision massive pressure building on the health care system. Will we young'uns be able to handle the demand baby boomers create for health care? The more pertinent question may be: are we willing to pay for it? I would not be surprised to see the boomers force the issue of health care to the forefront due to their power as a voting bloc. However, I would also expect an eventual backlash.

Infrastructure debt seems to have maxed out in the mid-1990s. From what I've seen, that trend is slowly reversing itself. However, I would certainly take umbrage with the allocation of new capital for infrastructure. Mass transit has not even remotely kept pace with suburban development, while the university system seems to have been overdeveloped compared to the trades. The rise of MBA programs is a pox upon our lands, while the electricity grid is in frighteningly bad condition.

The depletion of energy resources and the associated problem of climate change that comes with the burning of fossil fuels is the biggest issue, at least from my perspective. One thing I kind of miss with the Blogger software is the ability to run polls. It would be nice to survey people and see how the taxonomy of peak oilers fits with the various generations. (And yes, I am wondering if Doomers are predominately Generation-Xers.)

In the new millennium, there was the tragic day of 9/11 followed by the biggest non sequitur ever, the invasion of Iraq. I do wonder how much of the reaction to 9/11 can be ascribed to a desire to counter the helplessness the baby boomers felt during the Cuban Missile Crisis. They've pretty much come full circle, and now we're right back at Vietnam.

23 June 2006


So Entropy Production was one year old as of yesterday. Such an anniversary presents a good opportunity to reflect on the general state of the blog, where it's been and the overriding themes:
  • Electricity is my favourite energy carrier with high density chemical fuels as the second choice. Overall the economy and environment would benefit from a push to offset our petrochemical consumption with electricity − in particular the transportation sector.
  • Intermittency of electricity sources such as solar and wind power is not a serious problem at present but it is likely to grow into one. I put a lot of emphasis on this issue. There is a need for some form of energy storage or flexible demand (such as electric vehicle charging). Solar power correlates reasonably well with peak demand while wind is a volatile electricity source. Some form of base-load power production is highly desirable.
  • Conservation is necessary but proceeding at a fairly limited pace. Government action could raise the standards for many applications to increase their efficiency.
So, thoughts and comments from my readers? What do you like, what don't you like, and where would you take the blog?

16 June 2006

Response to Comment of my Review

Oh, we're talking about biodiesel, if it wasn't immediately clear from the title of the post. I posted a review of a major National Renewable Energy Laboratory (NREL) study on soy-derived biodiesel. J.C. Winnie of After Gutenberg later published an extended comment on my review on his own blog that I would like to respond to.

The first statement I take umbrage with appears to stem from confusion over one problem with the energy return on energy invested accounting used within the NREL study:
The authors failed to take into account any power input other than fossil fuel, i.e., they omitted electricity, which is a predominant energy input.
This isn't what I said. I stated that "for the purposes of this study, this means the hydroelectric and nuclear power share" (of electricity). The study used a bouquet of energy sources intended to be representative of the USA as a whole. In the States, the share of nuclear and hydroelectric power (20 + 3 %) is not very big, so they are not omitting all of the electricity consumption from their energy balance equations, just a chunk of it. In a nation such as France, Canada, or the Nordic and Baltic states, this would have had a much bigger impact.

I also said that the coproduct accounting stank because the meal is worth less (on a mass basis) than the oil, whereas NREL's accounting gave equal value to not just the meal but also the water content and hulls of the beans. Winnie casts some doubt on this assertion:
In another post, a Missouri farmer indicated that the meal when used as a high-protein animal feed is worth more than the biodiesel at current prices. Yet it may well depend upon where one looks and who does the looking. Reportedly, Biodiesel production in Indiana is rapidly increasing because the price of soybeans have become relatively cheap compared when compared with markets in other grain-producing regions in the Midwest, plus the demand for fuel in Eastern states is increasing.
There is no need to rely on anecdotal evidence in this case. Entering the terms "wholesale price soy meal" into Google yields precisely the data we desire, from Cornell University. If we open up Table 9(2).xls we'll find that a bushel of soybeans yields an average of 11.33 lbs. of oil and 44.26 lbs. of meal for the year 2004/5. We can calculate from the table that soy oil has a value of $0.2321/lb. while meal has a price of $0.0915/lb. Sometimes I want to leave something as an exercise to the reader.
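For those not inclined to do the exercise, the per-bushel arithmetic from the Cornell figures works out as follows (a quick sketch using only the numbers quoted above):

```python
# Per-bushel soybean yields and wholesale prices for 2004/05 (Cornell table)
oil_lbs, meal_lbs = 11.33, 44.26          # lbs per bushel of soybeans
oil_price, meal_price = 0.2321, 0.0915    # $ per lb

oil_value = oil_lbs * oil_price       # ≈ $2.63 of oil per bushel
meal_value = meal_lbs * meal_price    # ≈ $4.05 of meal per bushel
price_ratio = oil_price / meal_price  # oil is ≈ 2.5x pricier per lb
```

Per pound, the oil is worth about two and a half times the meal, even though the much larger meal fraction carries more total value per bushel, which is presumably what the Missouri farmer was getting at.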

The last comment I would like to make regards the closing discussion:
Speaking of accounting, such a narrow perspective omits acknowledgment 1) that, whether in Europe or North America, biomass-based power generation is likely to remain more cost effective than biofuel production, or that, as Tad Patzek continues to remind us, bottom line, “There simply isn’t enough arable land available in the world to grow the crops that would be needed to fuel our oil habit.”
This is a bit of a non sequitur, in that my original post didn't really attempt to address those issues. Entropy Production is a blog, an evolving narrative. I can't write a book chapter every post, so no one post will be a self-contained document. I only consider biomass energy solutions useful if they produce a portable, calorific, liquid fuel. Biomass electricity generation is just not on my agenda; there are too many better ways to generate electricity without invoking issues like soil erosion.

With regards to Tad Patzek (and by extension David Pimental) they both like to include energy amortization charges for all the capital infrastructure used for the production of biofuels. The NREL study does not deal with this issue. It is a topic that will have to be saved for another time.

14 June 2006

Ergosphere Implodes

Of all the things that could cause the Ergosphere to collapse into its attendant black hole, I didn't think an unclosed <hr> tag was capable of such a feat.

On a more serious front, I saw an interesting example of efficiency improvements driven by higher energy prices the other day. We were seeing a demo for a fumehood. For those of you who aren't familiar with such a piece of research apparatus, they essentially allow you to conduct chemistry experiments that might emit noxious fumes. The fumehood is a box within which a high airflow is passed. This laminar air flow prevents any gas from escaping into the laboratory environment.

Of course, the cost of conditioning all the air that passes through a fumehood is quite high. The engineer from the fumehood manufacturer said that the annual cost of heating and cooling air for a fumehood was $5.50 per cubic foot per minute of flow rate. Our fumehoods cost approximately $10,000 and have a constant flow rate of 620 cfm. Hence the operating (energy) costs of the fumehood ($3,410/annum) exceed the capital investment in under three years.
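The payback arithmetic, for the record (using the figures quoted above):

```python
cost_per_cfm = 5.50     # $ per cfm of conditioned air, per year
flow_rate_cfm = 620     # constant flow rate of our fumehoods
capital_cost = 10_000   # approximate purchase price, $

annual_energy_cost = cost_per_cfm * flow_rate_cfm   # $3410 per year
payback_years = capital_cost / annual_energy_cost   # ≈ 2.9 years
```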

In order to improve the energy efficiency of the fumehood the company has developed a variable flow system that reduces the flow rate when all the windows between the fumehood and the lab are closed. This is further improved by adding an infrared motion sensor − similar to the type on automatic doors − that opens and closes the window (sash) when someone actually approaches the fumehood.

I just thought that this was an interesting anecdote of how industrial operations can improve their energy efficiency in response to high energy prices. Judiciously applied carbon taxes and efficiency subsidies could help further improve a huge number of industrial energy consumers. I know that my energy consumption at work (which would be categorized in the industrial sector) dwarfs my personal energy consumption.

12 June 2006

Economy of a Solar-Electric Power Plant

I co-posted this over at theWatt.com.

Portugal is breaking ground on the world's largest photovoltaic (solar cell) electricity power plant. The plant, situated in Serpa, will cost €58 million to construct. It will have a peak power rating of 11 megawatts spread out over 60 hectares.

Portugal gets quite a lot of sun − not as much as Murcia or Sicily, but still a lot. According to RETScreen's database Evora (the closest inland location to Serpa in the database) gets 2.82 MWh/m2 per annum with the use of a two-axis tracking system. I did a quick calculation with RETScreen and came up with a 25.8 % capacity factor. That corresponds to an annual power production of about 25,000 MWh. If amortized over 25 years, the facility will produce power at a rate of €0.093/kWh plus maintenance costs.
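The amortized cost works out as a simple sketch (no discounting and no O&M, as in the paragraph above):

```python
# Amortized cost of the Serpa PV plant, from the figures above.
PEAK_MW = 11.0
CAPACITY_FACTOR = 0.258    # from RETScreen, two-axis tracking near Evora
CAPEX_EUR = 58e6
LIFETIME_YEARS = 25

annual_mwh = PEAK_MW * 8760 * CAPACITY_FACTOR               # hours in a year
eur_per_kwh = CAPEX_EUR / (LIFETIME_YEARS * annual_mwh * 1000)

print(f"{annual_mwh:,.0f} MWh/year -> EUR {eur_per_kwh:.3f}/kWh")
```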

Paying 10 cents a kWh for a clean source of power seems like a good deal to me, and this is with global prices for photovoltaic modules drifting up to $5.50 / Wp.

11 June 2006

Thermal Storage, an Efficient Allocation of Resources?

Does the Lovins/passivehaus building construction theme render the concept of thermal storage systems for regulating the intermittency of renewable energy sources obsolete? In order to examine this question we would need to first compare the capital investment of each system. However, since neither is deployed in any quantity, this isn't really possible. The issue that can be analyzed then is the ancillary value of thermal storage to a renewable electricity grid versus the efficiency gains of the passivehaus concept.

Passive Home Concept

For those of you who have never come across the Amory Lovins schtick or the German Passivhaus building standards, it has been shown that residential or commercial structures can be built with practically no heating requirements. This is done through construction that is properly insulated and sealed with minimal air exchange and uses passive heating (or cooling, depending on climate) strategies. The passivehaus benefits from entropy in terms of home heating. Every electronic device is essentially a resistance heater in addition to its functional purpose, and every person an 80-100 W thermal source.

Scale of Thermal Storage

The most likely medium for thermal storage is water, due to its low cost, high heat capacity, and the fact that, as a liquid, it is easy to transfer heat to and from.

The specific heat capacity of water is 4.184 kJ/(kg·K). The heat of fusion − the energy required to change the phase of water from solid ice to liquid water − is 334 kJ/kg, or the equivalent of almost 80 K of sensible heat. Relative to 20 °C, ice therefore stores a greater amount of cooling power than boiling water stores heating power.

Consider data on space heating and air conditioning for the USA in 2001. I'll use the worst case: Northeast homes for heating and Southwest for cooling. I am going to ignore hot water heating even though it's significant because it's not relevant to the argument in the end. The average Northeast home uses 63 mmbtu/year for space heating. This works out to an average of about 0.18 GJ/day or 0.365 GJ/day during the peak heating season assuming some sinusoidal distribution. This is the equivalent of 87,000 kg K/day of water; if we store the water at 80 °C to heat the home at 20 °C then we need approximately 1.5 tonnes of water, or 1.5 cubic meters worth (nearly 400 gallons).
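The tank-sizing arithmetic above can be laid out step by step:

```python
# Sizing a hot-water thermal store for a Northeast home's peak winter day,
# using the figures from the paragraph above.
CP_WATER = 4.184e3         # J/(kg*K), specific heat of water
MMBTU_TO_J = 1.0551e9      # joules per mmbtu

annual_heat_j = 63 * MMBTU_TO_J          # 63 mmbtu/year of space heating
peak_day_j = 2 * annual_heat_j / 365     # sinusoidal load: peak ~2x average
kg_kelvin = peak_day_j / CP_WATER        # ~87,000 kg*K per day
tank_kg = kg_kelvin / (80 - 20)          # stored at 80 C, delivered at 20 C

print(f"{kg_kelvin:,.0f} kg*K/day -> {tank_kg / 1000:.1f} tonne tank")
```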

On the cooling front, the average Southwest annual electricity consumption for air conditioning is 4,000 kWh/year. If we use an average coefficient of performance of 3.0 then the actual cooling supplied is 0.12 GJ/day on average, or an estimated peak of 0.235 GJ/day. If, again, we assume the house is kept at 20 °C then about 2/3 of a tonne of ice is required. Consider that 1-2.5 tons (of ice) are common ratings for centralized air conditioning systems.
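The ice-storage sizing follows the same pattern, counting both the latent heat of melting and the sensible heat of warming the meltwater to room temperature:

```python
# Sizing ice storage for a Southwest home's peak summer day,
# from the figures in the paragraph above.
COP = 3.0                  # average coefficient of performance
KWH_TO_J = 3.6e6
H_FUSION = 334e3           # J/kg, ice -> water
CP_WATER = 4.184e3         # J/(kg*K)

cooling_j_year = 4000 * KWH_TO_J * COP     # electricity in * COP = heat removed
peak_day_j = 2 * cooling_j_year / 365      # ~0.235 GJ on the peak day
j_per_kg = H_FUSION + CP_WATER * (20 - 0)  # melt at 0 C, then warm to 20 C
ice_kg = peak_day_j / j_per_kg

print(f"peak day needs {ice_kg:.0f} kg of ice")
```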

Taken over North America (say 100 million homes), these seem like significant numbers. 44 mmbtu of natural gas at $10/mmbtu is $44 billion a year and the production of ~67 Megatonnes of CO2. 2,300 kWh of electricity for cooling is a total of $18.4 billion (at $0.08/kWh) per year and, assuming coal power (at 900 g/kWh), 207 Megatonnes of CO2.

The North American GDP is about $12 trillion per year, so residential heating and cooling alone constitute 0.5 % of that. Scale-wise, there is plenty of potential for passivehaus or thermal storage systems. But can they be friends?

Heat Pump Efficiency

The coefficient of performance (COP) is how much heat is moved for a given amount of work (electricity in this case). Air conditioners in the USA are rated by an Energy Efficiency Ratio (EER), which is the COP measured between 80 °F and 95 °F (about 300 K and 308 K) and expressed in Btu/h per watt, i.e. the COP multiplied by 3.412. There is also a Seasonal EER (SEER), which is a different (more relaxed) standard. The theoretical cooling COP for this temperature range is about 37, but in reality most systems are in the range of 3.5. The ultimate theoretical COP of a heat pump is given by:

COP(heating) = TH / (TH − TC) = TH / ΔT     COP(cooling) = TC / (TH − TC) = TC / ΔT

Herein lies a problem. As the temperature difference a heat pump has to cross increases, its efficiency decreases. Normally an air conditioner only has to work across 8 K or so. However, if we want to use it to make ice from a night-time temperature of 24 °C, then the ultimate efficiency of the system will only be about a third of normal. For a real-world system the drop in efficiency on a percentage basis will not be so precipitous, but it will still be disadvantaged trying to make ice.
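Comparing the two Carnot limits makes the penalty concrete:

```python
# Ideal (Carnot) cooling COP for the two cases discussed above.
def carnot_cooling_cop(t_cold_k: float, t_hot_k: float) -> float:
    """Ideal COP for pumping heat from t_cold_k up to t_hot_k."""
    return t_cold_k / (t_hot_k - t_cold_k)

normal = carnot_cooling_cop(300.0, 308.0)       # 80 F indoors, 95 F outdoors
ice_making = carnot_cooling_cop(273.0, 297.0)   # freezing water on a 24 C night

print(f"normal A/C limit: {normal:.1f}, ice-making limit: {ice_making:.1f}")
```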

Overall it's a tough sell for thermal storage as a means of handling renewables intermittency. As we've seen, thermal storage sets efficiency against grid regulation. Generally, when we have schemes with competing criteria they fail to be economically attractive. Witness my investigation into solar thermal cooling. In that case there was a competition between the efficiency of the solar thermal collector and the absorption chiller on the basis of temperature. Here we have competition on pure power. Yes, we can store off-peak power, but the effective round-trip efficiency is going to be unimpressive simply due to the drop-off in performance of the heat pump.

There is still the possibility to run numerous appliances on a deferrable basis. Heat, air-conditioning and refrigeration all only need to maintain a given (if narrow) temperature range so with good insulation they should be able to run on relatively short duty cycles. Other appliances, such as the dishwasher or combination washer/dryer can be scheduled.

Chicken or Egg?

One problem with promoting efficient homes is that houses last for such a long time. One often-heard meme in the peak oil world is that car fleets take too long to be replaced. Houses can be renovated; cars can't. Still, if you are one of those people who think suburbia is evil (as opposed to just soulless) then the choice of whether to concentrate on improving the efficiency of houses or cars presents quite a quandary. From my point of view, I'm more concerned overall with climate change and more localized pollution of the air and water. If people want to live in rows of identical pink stucco houses... enjoy.

04 June 2006

Soy Biodiesel Review

The topic of discussion is Sheehan, J., V. Camobreco et al. (1998). Life Cycle Inventory of Biodiesel and Petroleum Diesel for Use in an Urban Bus. Golden, National Renewable Energy Laboratory. It's a big, inclusive report (314 pages) on the energy balance of soy-derived biodiesel and fossil diesel.

Energy Return

Let's start on p. V of the executive summary:
Biodiesel yields 3.2 units of fuel product energy for every unit of fossil energy consumed in its life cycle. The production of B20 yields 0.98 units of fuel product energy for every unit of fossil energy consumed.
That first number, 3.2 units of fuel product energy for every unit of fossil energy. What does this mean? It's in a big bold block quote in the executive summary. It looks like the EROEI, right? Unfortunately it's not. On p. 207 we find that Fossil Energy Ratio = Fuel Energy/Fossil Energy Inputs. In other words, any power input that is not a fossil source is not accounted for. For the purposes of this study, this means the hydroelectric and nuclear power share. What we actually want to know is the total process energy required. The energy inputs for biodiesel are predominantly electricity (to run machinery) and low grade steam (50 - 70 °C), along with natural gas (to produce methanol), in addition to the standard farm inputs.

Fortunately, the results are not that badly off due to this factor. The study divides the production of biodiesel into five stages:
  1. Agriculture
  2. Transport from farm to processing plant
  3. Soybean crushing and oil separation operations
  4. Conversion of soy oil to methyl ester fuel.
  5. Transport and distribution of biodiesel to consumers.
The study also accounts for transport from the separation plant to the conversion plant but this energy input is left out of the end results. I agree with this because it is overly high due to the small number of biodiesel plants in the USA at the time of the study. For large scale biodiesel production the two operations would be naturally collocated.

I went through the study to attempt to figure out what the difference was between fossil and absolute energy inputs. Annoyingly, for Stage 4 (Conversion), the energy of the soy oil is incorporated as an input. What makes this frustrating is that the authors at no point in the study actually define what they consider the energy content of soy oil to be, which makes deconstructing this part of the study difficult.

Table 1: Energy Allocation to Biodiesel Production Stages (MJ/kg biodiesel)
  1. Agriculture − Table 62, p. 116
  2. Transport − Table 63, p. 118
  3. Crushing and oil separation − Table 83, p. 137
  4. Conversion − Table 105, p. 166
  5. Transport and distribution − Table 106, p. 169
  Higher Heating Value − Table 108, p. 173
  Lower Heating Value − Table 108, p. 173

As it happens, I get a better result (3.24 > 3.2), but that's because I employ the Higher Heating Value while the original authors' calculation uses the LHV. In any case, this is only a minor bone I have to pick with the study. The big problem is with what's called "Allocation of Lifecycle Flows." Anyone who has read into ethanol studies will know this as 'coproducts'.

Funny Coproduct Accounting

The first giant problem associated with 'coproducts' appears in the Separation (or crushing) stage. The oil content of soybeans is rather low − around 18.4 %. For this entire study, allocation of energy consumption between biodiesel and coproducts is done purely on a mass basis. Unfortunately, this leads to a silly assumption. Table 82 (p. 136) allocates 18 % of energy consumption for the Separation stage to soy oil and 82 % to meal. This in turn propagates back through the allocations for transport and agricultural energy consumption. Is this fair? Take a look at Table 64 (p. 121):

Table 2: Mass composition of soybeans (Table 64, p. 121)
  Oil: 18.4 %
  Dirt: 0.8 %
  Hulls: 7.4 %
  Moisture: 16.0 %
  Meal: 57.4 %

That's right boys and girls, the authors are allocating the same value to oil as dirt and water. Realistically if the mass of dirt, water, and hulls were discarded then the oil would have to assume 24.2 % of energy use for the first three stages. Furthermore, it probably makes more sense to compare the ratio between the wholesale price of soy oil versus soy meal to determine the proper value of the coproducts. Free hint: the oil is worth more per kilogram than the meal.

A similar allocation is made in the Conversion stage between methyl ester (biodiesel) and glycerin. For this stage, 82 % of the energy is allocated to biodiesel and 18 % to glycerin. Once again this allocation is propagated back through the previous steps. However, this calculation is actually unfavourable to the biodiesel. Separating the glycerin and excess methanol consumes approximately 65 % of the energy for the Conversion stage (Table 96, p. 159). The reason is distillation. As anyone who has looked at ethanol systems will know, distillation is a killer because it requires so much energy to vapourize water. Also, the NREL numbers come out quite high compared to some European plants also presented in the report.

From my point of view, I want to know if biodiesel is energy positive, regardless of coproducts. For soy, the answer appears to be no. Going back and removing the coproduct credits appears to give the following results:

Table 3: Energy Consumption for Biodiesel Production with Zero Coproduct Credits (MJ/kg biodiesel)

Before we all fall into a state of depression, it is fairly clear from the report that there is a lot of promise in reducing the energy inputs for the conversion stage. Methanol inputs constitute approximately half the energy inputs. I have previously hypothesized that anaerobic digestion of the meal could produce methane which in turn could be made into methanol in addition to providing heat energy. The NREL numbers require approximately three times as much energy as some quoted European operations (Table 98, p. 161).

There are potential improvements to be made to the efficiency of the Conversion stage as well. Research and development on catalysts offers the potential to reduce the reaction temperature. In particular I think a zeolite could be ideal for separating the glycerol from the methyl ester chains. Most of the energy (65 %) is used not for the actual conversion but for distilling out glycerin and excess methanol post-transesterification − normally an excess of methanol is added to carry the reaction through to completion. Reducing the amount of water and methanol used will directly reduce the distillation requirements.

Like ethanol, biodiesel would benefit significantly from combined heat and power generation. The temperature requirements for most of the processes are low enough (50 - 70 °C) that using solar thermal systems to augment heat production is feasible.

To a certain extent glycerin might be the biodiesel analogue to sulfur for petroleum oil. Sulfur is a chemical with its uses, but oil refining produces mountains of the stuff. Will glycerin be a product worth distilling in a biodiesel nation, or should it just go into the anaerobic digester to make more methane?

Soy versus Rapeseed (Canola)

Any way you cut it, soy is not an ideal crop for biofuel production. Soy does have one significant advantage in that it's a legume and hence fixes atmospheric nitrogen. As such, the energy requirement for fertilizer for soy is very low compared to everyone's favourite biomass villain, corn. However, the foremost quantity on my mind is the low oil content of soybeans. It's about 18 % (Table 64, p. 121) versus 40 % for rape and jatropha or 70 % for coconut. Rape appears to be the best temperate crop for biodiesel production. Its oil quality is as high as its oil content, and its moisture content is low.

The NREL study uses an average yield of 36 bushels/acre for soy which works out to 445 kg (oil)/hectare. (Here is a useful webpage for converting agricultural units from US Customary nonsense to more sensible metric units. Oh, and soybeans are 60 lbs./bushel, not 56 or 48 or 25 lbs./bushel but you all knew that, right? Next thing you know they'll be measuring the volume of biodiesel in barrels.) In comparison Canadian Canola yields about 640 kg (oil)/ha. The same source gives European Rapeseed a much higher yield of approximately 1280 kg (oil)/ha, largely due to the greater use of irrigation.
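The unit conversion behind that 445 kg (oil)/hectare figure is worth spelling out:

```python
# Converting the NREL soy yield from US customary to metric,
# using the figures in the paragraph above.
LB_PER_BUSHEL = 60.0       # soybeans (not 56, 48, or 25!)
KG_PER_LB = 0.4536
ACRES_PER_HA = 2.4711
OIL_FRACTION = 0.184       # Table 64, p. 121

yield_bu_acre = 36         # NREL average soy yield
beans_kg_ha = yield_bu_acre * LB_PER_BUSHEL * KG_PER_LB * ACRES_PER_HA
oil_kg_ha = beans_kg_ha * OIL_FRACTION

print(f"{beans_kg_ha:,.0f} kg beans/ha -> {oil_kg_ha:.0f} kg oil/ha")
```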

Aside from the oil content issue there are a number of other drawbacks for soy. For the most part, soy appears to take a great deal of work to get the oil separated from the meal. Soy has a high moisture content of 16.0 % water by mass (Table 64, p. 121) which necessitates drying.
In comparison, Canola is about half that if properly sun dried, and hence can be processed without drying. Soy also needs to be flaked into regular sized small pieces, which constitutes about a quarter of the electricity requirements for the Separation stage.