Olduvaiblog: Musings on the coming collapse


Fossil Fuel and High Energy Burn Uses | The Energy Collective

Posted March 11, 2014

Fossil fuels and energy density

There are few more remarkable machines than a Boeing 747. Four hundred people can be hurled halfway across the planet with levels of comfort, efficiency and reliability that would have been deemed miraculous by those living a few centuries ago. A vision of the incredible technical proficiency humanity has gained since the Industrial Revolution, the Boeing 747 is also a remarkably potent symbol of what we can achieve with fossil fuels, and what we currently cannot achieve with their low carbon alternatives.

The Boeing 747: a vision of high power density

The impossibility of solar powered aviation

Last year an adventuring Swiss team managed to fly across the United States in a solar powered plane. This feat, which took a leisurely two months, was described by some as a symbol of what can be achieved with solar energy, a rather curious inversion of reality. It is a symbol of exactly the opposite.

Could commercial flight ever be powered by solar panels? The answer is an obvious no, and any engineer who suggests it is yes is likely to find themselves unemployed. However, considering why the answer is no is illustrative of the multiple challenges faced by a transition to renewable sources of energy, which is principally a transition from high-density fuels to diffuse energy sources.

So, why can a Boeing 747 not be powered by solar panels?

I will now reach for the back of an envelope and compare the energy consumption of a Boeing 747 with what you could possibly get from a solar powered plane. This calculation will tell us all we need to know about solar powered flight.

Mid-flight a Boeing 747 uses around 4 litres of jet fuel per second. Therefore given the energy density of jet fuel, approximately 35 MJ/litre, a Boeing 747 consumes energy at a rate of around 140 MW (million watts).
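
That arithmetic is worth making explicit. Here it is as a minimal Python sketch, using only the figures quoted above:

```python
# Back-of-envelope: a 747's mid-flight rate of energy consumption.
fuel_burn_l_per_s = 4            # ~4 litres of jet fuel per second
energy_density_mj_per_l = 35     # ~35 MJ per litre of jet fuel

# 1 MJ/s = 1 MW, so the unit conversion is free.
power_mw = fuel_burn_l_per_s * energy_density_mj_per_l
print(f"747 power draw: ~{power_mw} MW")   # ~140 MW
```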

We can then convert this rate of energy consumption into power density, that is, the rate of energy consumption per square metre, typically measured in watts per square metre (W/m2). A Boeing 747 is roughly 70 by 65 metres. So the power density over this 70 by 65 metre rectangle is approximately 30,000 W/m2, and of course the power density over the actual surface area of the plane will be a few times higher, over 100,000 W/m2.

What can be delivered by solar energy? Solar panels essentially convert solar radiation into electricity, and average solar irradiance is no higher than 300 W/m2 on the planet. In the middle of the day this can be perhaps 4 or 5 times higher than the average. However solar panels are typically less than 20% efficient. So sticking solar panels on the roof of a Boeing 747 is unlikely to provide anything close to 1% of the flight’s energy consumption. Perhaps they can power the in-flight movie.
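
To make the comparison concrete, here is the same envelope in Python. The irradiance and efficiency figures are those quoted above; the usable panel area is my own illustrative assumption, since the text does not give one:

```python
# Power density of the 747 versus what rooftop panels could deliver.
power_w = 140e6                  # 140 MW, from the fuel-burn estimate
footprint_m2 = 70 * 65           # the plane's 70 m x 65 m bounding box
print(f"Demand density: ~{power_w / footprint_m2:,.0f} W/m2")  # ~30,800

midday_irradiance = 300 * 4      # W/m2: ~4x the average, in full midday sun
panel_efficiency = 0.20          # typical panels are below this
panel_area_m2 = 500              # ASSUMED usable wing/fuselage area

solar_w = midday_irradiance * panel_efficiency * panel_area_m2
print(f"Solar supply: ~{solar_w / 1e3:.0f} kW, "
      f"or {100 * solar_w / power_w:.2f}% of demand")  # well under 1%
```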

The power density of a Boeing 747 can further be compared with that of a wind farm.

140 MW. How big would a wind farm need to be to provide this much electricity on average? Probably bigger than Europe’s largest onshore wind farm.

Whitelee Wind Farm, outside Glasgow in Scotland, is a 140-turbine wind farm covering 55 square kilometres. It has a rated capacity of 322 MW and, given its average capacity factor of 23%, an average output of around 75 MW, barely half the rate of energy consumption of a Boeing 747. (Of course chemical and electrical energy are not strictly speaking completely comparable, but what I am trying to illustrate here is the order-of-magnitude differences in power density.)
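
The Whitelee figure follows directly from the rated capacity and the capacity factor, as a quick sketch shows:

```python
# Average output of Whitelee Wind Farm from the figures quoted above.
rated_capacity_mw = 322
capacity_factor = 0.23

average_output_mw = rated_capacity_mw * capacity_factor
print(f"Average output: ~{average_output_mw:.0f} MW")             # ~74 MW
print(f"747 demand vs Whitelee: {140 / average_output_mw:.1f}x")  # ~1.9x
```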

The obvious lesson here is that fossil fuels can deliver power densities orders of magnitude higher than wind or solar. And mobile sources of energy consumption such as Boeing 747s require power density at a level that is physically impossible from direct provision of wind or solar.

The limits of batteries

Perhaps we could store low carbon energy in batteries and use them to power planes. Here we move from the problem of low power density to the problem of low energy density. Despite one hundred years of technical progress, batteries still offer very poor energy density compared with fossil fuels.

Consider the lithium-ion batteries that power that excessively hyped luxury car, the Tesla Model S. They offer just over 130 Wh/kg according to Tesla. So in conventional scientific units they provide an energy density of below 0.5 MJ/kg. In contrast, jet fuel provides over 40 MJ/kg: a difference of roughly two orders of magnitude.

Again, reaching for the back of an envelope. A fully loaded Boeing 747 weighs around 400 tonnes at take off, with around 200 tonnes of fuel. The Tesla lithium-ion batteries that could store the same amount of energy would weigh as much as about fifty Boeing 747s.
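
The envelope again, in Python. I take ~43 MJ/kg for jet fuel (the text says “over 40”) and ~0.47 MJ/kg for the batteries (130 Wh/kg converted to SI units):

```python
# Mass of Tesla-style batteries storing a 747's full fuel load of energy.
fuel_mass_kg = 200_000           # ~200 tonnes of jet fuel at take-off
jet_fuel_mj_per_kg = 43          # assumed; the text says "over 40"
battery_mj_per_kg = 0.468        # 130 Wh/kg x 3600 J/Wh = 0.468 MJ/kg

energy_mj = fuel_mass_kg * jet_fuel_mj_per_kg
battery_mass_t = energy_mj / battery_mj_per_kg / 1000
print(f"Battery mass: ~{battery_mass_t:,.0f} tonnes")               # ~18,400
print(f"In loaded 747s (400 t each): ~{battery_mass_t / 400:.0f}")  # ~46
```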

Lithium-oxygen batteries perhaps could reach close to 4 MJ/kg, an order of magnitude lower than jet fuel, after a couple of decades of future technical progress, according to a recent report in Nature.

So, this is where we are with batteries: a couple of decades from now they might reach energy densities of only 10% of that provided by the best fossil fuels. Clearly a solar energy and battery powered world has its limits.

Aviation’s limited and unpromising low carbon options

Put simply, getting a Boeing 747 off the ground requires highly energy-dense fuels. This clearly cannot be done with direct provision of renewable electricity, or by storing it in batteries. Nuclear energy is capable of providing extremely high power density, but try powering a plane with a nuclear reactor (or, even more importantly, try getting a few hundred people to sit in a nuclear powered plane).

There appear to be only two half-plausible low carbon options. The first is the use of biofuels. The second is the use of low carbon electricity to generate synthetic hydrocarbon fuel, so-called renewable fuels. Neither of these options is particularly promising.

A growing consensus indicates that current biofuels offer little benefit either economically or environmentally. We have converted large amounts of cropland over to biofuel plantation, all so that we can burn a fuel that an increasing amount of scientific evidence indicates is not reducing carbon emissions. From an environmental and humanitarian perspective this has become indefensible.

Few people realise how dreadful the land use impacts of biofuels are. Consider this: 6% of Germany is used to produce liquid biofuels, yet they provide only around 1% of German energy consumption. Can you imagine a less efficient use of land? Next generation biofuels appear to offer more of the same. The fundamental problem of bio-energy’s low power density cannot be overcome anytime soon.

The only prospect for biofuel production that is actually low carbon and does not have a significant land use impact is to use synthetic biology and genetic engineering to radically alter plants so that they are far more photosynthetically efficient. However the results to date of the research by Craig Venter’s team suggest that this will be the work of a generation, and perhaps generations, of geneticists.

Renewable synthetic fuels are similarly many decades from being an economic reality, if they ever will be. In essence the idea is that you use renewable (or, if you prefer, nuclear) electricity to convert carbon dioxide into a hydrocarbon-based fuel, such as methane or methanol.

However, for this to be half-economical, there is no shortage of problems to be overcome. First we need to figure out a way to suck carbon dioxide out of the air on a billion-tonne scale. This is obviously not going to happen tomorrow. The cost of this renewable fuel is also guaranteed to be at least twice that of renewable electricity, because of the inefficiencies of the conversion process. In other words, you will pay for 1 kWh of renewable electricity and get less than 0.5 kWh of renewable fuel out the other end. These scale and cost barriers will be incredibly difficult to overcome, and will likely require either a drastic reduction in the cost of low carbon electricity, or an increase in the price of oil.
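
The cost logic is simple conversion arithmetic, sketched here under the round-trip efficiency the text implies:

```python
# Why conversion losses alone at least double the cost of renewable fuel.
electricity_cost_per_kwh = 1.0   # normalised cost of the input electricity
conversion_efficiency = 0.5      # <50% of input energy survives as fuel

fuel_cost_per_kwh = electricity_cost_per_kwh / conversion_efficiency
print(f"Fuel costs at least {fuel_cost_per_kwh:.0f}x the input electricity")
```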

Renewable fuels, then, don’t seem to be very promising as a replacement for jet fuel on a one or perhaps two generation timescale. This did not stop the German Environment Agency from recently putting forward a scenario in which Germany completely moves away from fossil fuels by 2050, one that depends heavily on renewable fuels. How heavily? Well, Germany would be sucking around 200 million tonnes of carbon dioxide out of the air by 2050 in this supposedly “technically achievable” future.

I realise this is all rather pessimistic, but things are what they are. So I will close with a prediction: aviation will still be powered by fossil fuels in the middle of the century. I put it forward in the hope that someone proves me wrong.

Authored by:

Robert Wilson

Robert Wilson is a PhD Student in Mathematical Ecology at the University of Strathclyde.

His secondary interests are in energy and sustainability, and he writes on these issues at The Energy Collective.
Email: robertwilson190@gmail.com

18% of global population lack access to electricity

2014 marks the start of the United Nations Decade of Sustainable Energy for All (SE4ALL), the international effort to bring modern and sustainable energy to everyone on the planet. IEA data collected over more than a decade have been vital to the push already, establishing the size of the problem and helping determine the resources necessary to allow every woman, man and child to benefit from the security and convenience that most already take for granted.


In the latest edition of its annual flagship publication, World Energy Outlook (WEO), the IEA provides the most recent estimate: nearly 1.3 billion people, or 18% of the world population, lacked access to electricity in 2011. While the number of those without electricity declined by 9 million from the previous year, the global population increased by about 76 million in 2011, according to the United Nations estimates, to top 7 billion.


And the modest decline in those lacking electricity obscured the fact that energy poverty either stagnated or worsened in some countries, particularly in sub-Saharan Africa, as population growth outpaced energy access efforts. More than 95% of people without access to electricity live in sub-Saharan Africa or developing Asia. Over two-thirds of the population in sub-Saharan Africa had no modern energy in 2011, and the number of people without electricity access there will soon overtake the total in developing Asia. Among the far more numerous people in developing Asia, 17% did not have access to electricity in 2011.


As a special focus within its World Energy Outlook 2014 series, the IEA is conducting its most comprehensive analytical study to date of the energy outlook for Africa. Among other topics, the report will examine which policies, investments and infrastructure are required to expand access to reliable and affordable electricity supply on the continent.


Modern energy: vital to many development goals


The new push by the United Nations for universal access is part of the growing recognition, highlighted in the WEO, that modern energy is crucial to achieving a range of social and economic goals relating to poverty, health, education, equality and environmental sustainability. About 80 developing countries have signed up to the SE4All initiative, including many with the largest populations of those lacking access to modern energy. In addition, IEA Executive Director Maria van der Hoeven is among the leaders who serve on the Advisory Board to the SE4All initiative.


The IEA joined with the World Bank to lead a project to create the Global Tracking Framework last year. That tool calculates the starting point to benchmark SE4All progress towards its 2030 objectives of achieving universal access to modern energy services while also doubling both the global rate of improvement in energy efficiency and the share of renewable energy in the global energy mix. The Global Tracking Framework published 2010 data for all of these objectives and has helped decision makers fully appreciate the scale of action that needs to be taken to meet the 2030 goals.


The latest WEO includes more recent and detailed data, highlighting areas of improvement. More people gained access to electricity in Bangladesh, Cameroon, Ghana, Indonesia, Mozambique, South Africa and Sri Lanka in 2011. India remained the country with the largest number of people without access to electricity, at 306 million, or a quarter of the population.


The previous edition of the WEO found that nearly USD 1 trillion in cumulative investment would bring universal access by 2030. That equates to USD 49 billion a year – or about five times what was being invested in 2009.
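
A quick sanity check on those figures, with the period to 2030 inferred from the publication dates (an assumption on my part):

```python
# USD 1 trillion cumulative to 2030 versus the quoted USD 49 bn/year.
total_investment_bn = 1000
years = 2030 - 2010          # roughly the period the WEO covers (assumed)

print(f"~USD {total_investment_bn / years:.0f} bn per year")  # ~USD 50 bn
print(f"Implied 2009 investment: ~USD {49 / 5:.0f} bn")       # "five times" lower
```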


Under its central projections, the WEO shows a decline of more than 20% in the number of people without access to electricity by 2030, but that would still leave 12% of the world population without modern energy. The projections see the total number of people without electricity in 2030 falling by nearly half in developing Asia, to 324 million. But it will rise by 8% in sub-Saharan Africa, to 645 million.


The best news is that the current trajectory is expected to result in universal access in China, Latin America and the Middle East by 2030. Brazil, with its successful “Luz para Todos” (Light for All) programme, expects full access within a few years. Besides continued economic growth and urbanisation, which are general trends that support efforts to improve electricity access in emerging countries, there are specific programmes like the Power Africa initiative, which channel financing and technical expertise to assist national electrification plans.


Clean cooking and heating facilities


Access to electricity is not the only focus of IEA analysis of energy poverty. The WEO tracks the number of people who do not have clean cooking facilities, a far larger share of the global population at 38%. These 2.6 billion people rely on traditional biomass, usually wood, and their ranks increased by 54 million in 2011, as population growth outstripped improvements in providing better equipment. A further 200 million to 300 million people rely on coal for household cooking and heating.  Recent studies find that the household pollution from use of solid fuels kills 3.5 million people each year, and 4 million when the pollution’s effect on outdoor air is considered.


The WEO central projections see less of an improvement by 2030 in both the number and share of people cooking and heating with traditional biomass compared with those connecting to modern energy. The number without clean cooking facilities will shrink by less than 120 million people, to 30% of the population. While nearly 200 million Chinese will stop using traditional biomass, almost the same number more will be using it in sub-Saharan Africa. (IEA)


New Study Shows Total North American Methane Leaks Far Worse than EPA Estimates | DeSmogBlog

Fri, 2014-02-14 12:40 | Sharon Kelly

Just how bad is natural gas for the climate?

A lot worse than previously thought, new research on methane leaks concludes.

Far more natural gas is leaking into the atmosphere nationwide than the Environmental Protection Agency currently estimates, researchers concluded after reviewing more than 200 different studies of natural gas leaks across North America.

The ground-breaking study, published today in the prestigious journal Science, reports that the Environmental Protection Agency has understated how much methane leaks into the atmosphere nationwide by between 25 and 75 percent — meaning that the fuel is far more dangerous for the climate than the Obama administration asserts.

The study, titled “Methane Leakage from North American Natural Gas Systems,” was conducted by a team of 16 researchers from institutions including Stanford University, the Massachusetts Institute of Technology and the Department of Energy’s National Renewable Energy Laboratory, and is making headlines because it finally and definitively shows that natural gas production and development can make natural gas worse than other fossil fuels for the climate.

The research, which was reported in The Washington Post, Bloomberg and The New York Times, was funded by a foundation created by the late George P. Mitchell, the wildcatter who first successfully drilled shale gas, so it would be hard to dismiss it as the work of environmentalists hell-bent on discrediting the oil and gas industry.

The debate over the natural gas industry’s climate change effects has raged for several years, ever since researchers from Cornell University stunned policy-makers and environmentalists by warning that if enough methane seeps out between the gas well and the burner, relying on natural gas could be even more dangerous for the climate than burning coal.

Natural gas is mostly comprised of methane, an extraordinarily powerful greenhouse gas, which traps heat 86 times more effectively than carbon dioxide during the two decades after it enters the atmosphere, according to the Intergovernmental Panel on Climate Change, so even small leaks can have major climate impacts.
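
The choice of time horizon matters enormously here, and a two-line calculation shows why. The 20-year factor of 86 is the one quoted above; the 100-year factor of ~34 is my assumption, taken from the same IPCC assessment, added for contrast:

```python
# CO2-equivalent of the same methane leak under two GWP horizons.
leak_tonnes_ch4 = 1000
gwp = {20: 86, 100: 34}   # 100-year value is an assumption (IPCC AR5)

for horizon, factor in gwp.items():
    print(f"{horizon}-year horizon: {leak_tonnes_ch4 * factor:,} t CO2e")
```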

The team of researchers echoed many of the findings of the Cornell researchers and described how the federal government’s official estimate proved far too low.

“Atmospheric tests covering the entire country indicate emissions around 50 percent more than EPA estimates,” said Adam Brandt, the lead author of the new report and an assistant professor of energy resources engineering at Stanford University. “And that’s a moderate estimate.”

The new paper drew some praise from Dr. Robert Howarth, one of the Cornell scientists.

“This study is one of many that confirms that EPA has been underestimating the extent of methane leakage from the natural gas industry, and substantially so,” Dr. Howarth wrote, adding that the estimates for methane leaks in his 2011 paper and the new report are “in excellent agreement.”

In November, research led by Harvard University found that the leaks from the natural gas industry have been especially underestimated. That study, published in the Proceedings of the National Academy of Sciences, reported that methane emissions from fossil fuel extraction and oil refineries in some regions are nearly five times higher than previous estimates, and was one of the 200 included in Thursday’s Science study.

EPA Estimates Far Off-Target

So how did the EPA miss the mark by such a high margin?

The EPA’s estimate depends in large part on calculations — take the amount of methane released by an average cow, and multiply it by the number of cattle nationwide. Make a similar guess for how much methane leaks from an average gas well. But this leaves out a broad variety of sources — leaking abandoned natural gas wells, broken valves and the like.
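
The structure of such a bottom-up inventory is easy to caricature in code. Every number below is a made-up placeholder; the point is only that any source missing from the table is silently missing from the total:

```python
# Caricature of a bottom-up methane inventory: emission factor x count.
# All factors and counts are invented placeholders, not EPA figures.
inventory = {
    # source: (tonnes CH4 per unit per year, number of units)
    "cattle": (0.1, 90e6),
    "active gas wells": (5.0, 500e3),
    # abandoned wells, broken valves, ...? Not in the table, not counted.
}

total_mt = sum(f * n for f, n in inventory.values()) / 1e6
print(f"Bottom-up estimate: {total_mt:.1f} Mt CH4/yr")
# Top-down atmospheric measurements see everything, including the
# omitted sources -- one reason the two methods can disagree by 25-75%.
```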

Their numbers never jibed with findings from the National Oceanic and Atmospheric Administration and the U.S. Department of Energy, which approached the problem by taking measurements of methane and other gas levels from research flights and the tops of telecommunications towers.

But while these types of measurements show how much methane is in the atmosphere, they don’t explain where that methane came from. So it was still difficult to figure out how much of that methane originated from the oil and gas industry.

At times, EPA researchers went to oil and gas drilling sites to take measurements. But they relied on drillers’ voluntary participation. For instance, one EPA study requested cooperation from 30 gas companies so they could measure emissions, but only six companies allowed the EPA on site.

“It’s impossible to take direct measurements of emissions from sources without site access,” said Garvin Heath, a senior scientist with the National Renewable Energy Laboratory and a co-author of the new analysis, in a press release. “Self-selection bias may be contributing to why inventories suggest emission levels that are systematically lower than what we sense in the atmosphere.” (DeSmog has previously reported on the problem of industry-selected well sites in similar research funded by the Environmental Defense Fund.)

Worse than Coal?

There was, however, one important point that the news coverage so far missed and that deserves attention — a crucial point that could undermine entirely the notion that natural gas can serve as a “bridge fuel” to help the nation transition away from other, dirtier fossil fuels.

In their press release, the team of researchers compared the climate effects of different fuels, like diesel and coal, against those of natural gas.

They found that powering trucks or buses with natural gas made things worse.

“Switching from diesel to natural gas, that’s not a good policy from a climate perspective,” explained the study’s lead author, Adam R. Brandt, an assistant professor in the Department of Energy Resources at Stanford, calling into question a policy backed by President Obama in his recent State of the Union address.

The researchers also described the effects of switching from coal to natural gas for electricity — concluding that coal is worse for the climate in some cases. “Even though the gas system is almost certainly leakier than previously thought, generating electricity by burning gas rather than coal still reduces the total greenhouse effect over 100 years, the new analysis shows,” the team wrote in a press release.

But they failed to address the climate impacts of natural gas over a shorter period — the decades when the effects of methane are at their most potent.

“What is strange about this paper is how they interpret methane emissions:  they only look at electricity, and they only consider the global warming potential of methane at the 100-year time frame,” said Dr. Howarth. Howarth’s 2011 Cornell study reviewed all uses of gas, noting that electricity is only roughly 30% of use in the US, and describing both a 20- and a 100-year time frame.

The choice of time-frame is vital because methane does not last as long in the atmosphere as carbon dioxide, so impact shifts over time. “The new Intergovernmental Panel on Climate Change (IPCC) report from last fall — their first update on the global situation since 2007 — clearly states that looking only at the 100 year time frame is arbitrary, and one should also consider shorter time frames, including a 10-year time frame,” Dr. Howarth pointed out.

Another paper, published in Science in 2012, explains why it’s so important to look at the shorter time frames.

Unless methane is controlled, the planet will warm by 1.5 to 2 degrees Celsius over the next 17 to 35 years, and that’s even if carbon dioxide emissions are controlled. That kind of a temperature rise could potentially shift the climate of our planet into runaway feedback of further global warming.

“[B]y only looking at the 100 year time frame and only looking at electricity production, this new paper is biasing the analysis of greenhouse gas emissions between natural gas and coal in favor of natural gas being low,” said Dr. Howarth, “and by a huge amount, three to four to perhaps five fold.”

Dr. Howarth’s colleague, Prof. Anthony Ingraffea, raised a similar complaint.

“Once again, there is a stubborn use of the 100-year impact of methane on global warming, a factor about 30 times that of CO2,” Dr. Ingraffea told Climate Central, adding that there is no scientific justification to use the 100-year time window.

“That is a policy decision, perhaps based on faulty understanding of the climate change situation in which we find ourselves, perhaps based on wishful thinking,” he said.

For its part, the oil and gas industry seems very aware of the policy implications of this major new research and is already pushing back against any increased oversight of its operations.

“Given that producers are voluntarily reducing methane emissions,” Carlton Carroll, a spokesman for the American Petroleum Institute, told The New York Times in an interview about the new study, “additional regulations are not necessary.”

How Much Energy are We Flushing Down the Drain? | Energy Economics Exchange

California is in the middle of a drought. In the Bay Area, that has meant day after day of glorious, uncharacteristically sunny winter weather. But, I am haunted by media images of dry creek beds and by my own mental images of driving by the Rim Fire near Yosemite last summer. Who knows what this summer will bring.

The drumbeat of media coverage on the drought had led me to think harder about the water-energy nexus. At a high level, that phrase encapsulates two profound facts: energy production is extremely water intensive and water provision is extremely energy intensive. (At this point, we can’t really say “water production,” but as we add more desalination capacity, production becomes more apt.)

I’ll focus on the second of those two facts, but this article on the water used for fracking relates to the first.

Providing Water to Homes, Businesses and Farms Requires A LOT of Energy

The energy intensity of water delivery hit home for me several years ago, when my husband, who works for an electricity generator, spent the day at a California Public Utilities Commission workshop on low-flow toilets. Why would an electric generator care about toilets?! It turns out that pumping, conveying, heating, and treating water are all highly energy intensive.

An Energy Efficient Toilet?

In fact, several years ago, the California Energy Commission calculated that 19 percent of the state’s electricity and nearly 30 percent of its natural gas consumption went to moving, heating and treating water.

I’ve delved into these calculations, and not all of the energy attributed to water is, in my view, actually driven by decisions that we would normally think of as water-usage choices. For instance, the calculations include things like heating water for sterilization in food processing. I can imagine a sterilization technique that didn’t use water but still used energy, and sterilization is ultimately driven by decisions about processed food consumption.

A recent paper from the University of Texas similarly calculates the share of U.S. energy related to water. The authors distinguish between “Direct Steam Uses,” which includes things like sterilization, and “Direct Water Services,” which are driven by what I think of as water-based decisions. The authors estimate that the two categories together account for 13 percent of the nation’s energy, with Direct Water Services alone accounting for 8.5 percent.

The energy cost of H2O also depends on where you live. Californians use more energy-intensive water because we use more groundwater and less surface water, and we move it over longer distances. My water provider, East Bay Municipal Utilities District, charges an “Elevation Surcharge,” which it describes as “based on the energy costs of pumping water to higher elevations.” For households in the hills above 600 feet, the surcharge adds more than $1 per hundred cubic feet to a base price of roughly $2.50 per hundred cubic feet. Not all utilities have this adder.
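
As a toy illustration of how that surcharge embeds an energy price in a water bill (the monthly usage here is my assumption; the rates are the approximate ones quoted above):

```python
# Hypothetical monthly EBMUD-style bill, with and without the surcharge.
usage_ccf = 10               # assumed monthly use, hundred cubic feet (CCF)
base_rate = 2.50             # ~$2.50 per CCF
elevation_surcharge = 1.00   # ~$1 extra per CCF above 600 feet

low_bill = usage_ccf * base_rate
high_bill = usage_ccf * (base_rate + elevation_surcharge)
print(f"Below 600 ft: ${low_bill:.2f}; above 600 ft: ${high_bill:.2f}")
# A ~40% premium, driven almost entirely by pumping energy.
```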

Solutions?

As an energy economist, I hear a lot about positive – in the sense of reinforcing – feedback loops that could result from climate change. Rising temperatures, for example, will require more electricity to power air conditioners, and, right now, electricity production is the country’s main source of greenhouse gas emissions. A drier California climate might be an example of a negative feedback: more droughts will force us to rationalize the ways we use water—and save energy, in the process.

But, how do we rationalize our water use? We should start by rationalizing water pricing. I know this might sound like the knee-jerk economist answer, but the water world has many examples that violate simple Econ-101 principles. In a nutshell, water is a scarce resource, and we treat it as though the basic input were free. In Los Angeles, for instance, the Department of Water and Power subsidizes houses on bigger lots by giving them more cheap water. Water usage in the agriculture sector, which accounts for 80% of California’s total water consumption, is a whole mess in and of itself, symbolized in my mind by the rice paddies in the Central Valley.

Rice Paddies in the Central Valley

The water economist David Zetland has made scarcity pricing for water his battle cry and has written a book on Living with Water Scarcity. As Timothy Egan of The New York Times has said, we cannot “out-engineer a fevered planet.” But we can move towards rational pricing policies that help us make better decisions about our planet’s scarce resources.

About Catherine Wolfram

Catherine Wolfram is the Cora Jane Flood Professor of Business Administration at the Haas School of Business, Co-Director of the Energy Institute at Haas, and a Faculty Director of The E2e Project. Her research analyzes the impact of environmental regulation on energy markets and the effects of electricity industry privatization and restructuring around the world. She is currently implementing several randomized control trials to evaluate energy efficiency programs.

The great Australian electricity rip off – Solar Business Services

20 Feb, 2014


Right, now I’m really, really annoyed.

Although I’ve spent more than two decades in the solar and energy field, in the last two years, as solar has grown and we have become an intrinsic and material part of Australia’s energy mix, I have come to realize something fundamental.

The Australian public is being duped and constantly lied to on a monumental scale when it comes to electricity.

Now I am a fundamentally trusting person; it’s the way I was brought up. I’m not a conspiracy theorist. I always give people, Governments and corporations the benefit of the doubt.

However, the more I read, research and understand about the way our electricity system operates, the more alarmed I become. I admit I am not an expert in the complex and ever-changing world of electricity regulation, but a lot of what is happening in the industry is not rocket science. Events of the last few weeks have simply brought it all home for me.

Let’s look at a few examples.

The RET

The facts on what the RET does and doesn’t cost are absolutely, 100% clear, ironically thanks to a Government body, the Australian Energy Market Commission. It’s the smallest component of electricity bills bar one, and it is already declining in proportional terms.

And yet, from the Prime Minister all the way down to the subtle messages passed on to their very close friends in the media who helped them gain power, time and time again the RET (and the Carbon Price) is made out to be the root of all evil.

This is despite the data, the facts and the truth from their own departments. I am boggled and stunned by the willingness of our leaders to tell blatantly astounding mistruths about this issue and to conveniently overlook the real source of price rises. Even Joe Hockey (who seems like a nice bloke) jumped on the bandwagon yesterday, suggesting that the RET had something to do with Alcoa’s decision to exit Australia, despite the fact that the company had received hundreds of millions of dollars in exemptions and grants. The only ones not blaming the RET and the Carbon Price were Alcoa.

The real source of price rises

When you look at the data, it shows some staggering facts about what is really going on. Take, for example, one of Australia’s largest network owners, the NSW Government-owned Ausgrid.

Ausgrid has the single largest share of customers in the entire National Electricity Market (around 18%) making them the canary in the coal mine. In their 2013 report, the Australian Energy Regulator had this to say: “There have been many large changes in the relative and overall magnitude of the charging parameters within the period. Of particular note is the 471.14 per cent increase in the fixed charge in 2012–13, 18 per cent decreases in energy charges in 2006–07 and over 200 per cent increases in energy charges in 2009–10.”

Did you get that? Ausgrid, a Government-owned network operator, increased fixed charges to business customers by 471.14%.

If you look at the period 2004 to 2013, the total increase is 1125%. Peak energy costs increased by 600%, shoulder by 649%, off-peak by 1111% and peak capacity by 869%.
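
Percentages that large are easier to grasp as multipliers, as a quick sketch shows:

```python
# The 2004-2013 Ausgrid increases quoted above, read as multipliers.
increases_pct = {
    "fixed charge": 1125,
    "peak energy": 600,
    "shoulder energy": 649,
    "off-peak energy": 1111,
    "peak capacity": 869,
}
for charge, pct in increases_pct.items():
    print(f"{charge}: {1 + pct / 100:.1f}x the 2004 level")
# The fixed charge, which consumers cannot respond to, grew the most.
```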

And yet, the RET is the problem apparently.

So despite all the bleating about wanting to reduce peak demand, they have in fact increased fixed charges, which consumers can have NO IMPACT on, no matter how hard they try. These “price signals” are counterintuitive to reducing peak demand and in fact utterly disempower consumers in a most profound way, a fact that was outlined in a 2013 report by the Centre for Policy Research. And they are completely Government sanctioned.

If that’s not enough, the same report actually shows that in 44 out of 46 cases across 8 network companies between 2005 and 2011, revenues (which are regulated) were ABOVE expectation. That means they made more profit and we all paid for it. And guess what: when you look at the AEMC’s data, here’s what it shows is going to happen as a proportion of the average national electricity bill between 2014 and 2016:

  • Distribution network charges will RISE by 8.2%
  • Generation costs will RISE by 5.7%
  • Retail Margins will RISE by 6.3%
  • Transmission costs will RISE by 6.7%
  • The RET (Small and Large scale) will REDUCE by 55.6%

Of course, these changes could be somewhat masked by State price-setting regimes and the assumed removal of the Carbon Price. How terribly, terribly convenient.

But of course, there are rewards for electricity consumers in some cases. Years ago, many tariff structures were revised so that there was an incentive to use less energy and to reward energy efficiency. But the AEMC document demonstrates the inexorable shift away from this and back to rewarding higher consumption. Use more and pay less. This works beautifully if your profit comes from meeting this demand or expanding your network to cope, but the impact on the rest of society is that prices rise to fund it all.

Highlighting the case, I spoke to an installer recently who was facing challenges because of this issue. He had stumbled across several large agricultural facilities that were obsessed with ensuring their demand was constantly high enough to get them to the next (lower) tariff rate. The solution? Install a 200kW water pump, suck water out of a dam and pump it back in again. Constantly. 24 hours a day.

Wonderfully efficient.

But let’s not forget the retailers, because after all “they just pass on the regulated network costs from the distributors” (like Ausgrid). Poor guys. They are scrambling to scrap the RET at a rabid pace, have erroneously called it middle-class welfare and are laying the blame for the country’s woes squarely at our mutual, solar-panel-installing feet. All the while they have Government-sanctioned approval to make proportionally MORE profit from you and me and every single Australian business owner (and Alcoa of course, had they stayed).

Meanwhile, the regulators and the Government just keep saying “Don’t worry, it’s OK, you can just switch providers and save a FORTUNE, because switching is really, really easy and the market is in a state of healthy market-based competition”. Bullshit.

Firstly, the vast majority of the Australian electricity industry is still Government owned. Not really renowned for innovation or creative market-based behaviour, the Government.

Secondly, consumers are lazy and switching is a pain in the backside. Most of us are too busy dealing with life to worry about trying to save a few percent here or there. Where’s the reward for loyalty gone in this world, for goodness sake? And you know what? Switching and “customer churn” are on the increase, and the poor utilities are facing increased costs because of it, which is exactly the reason they are allowed to charge us more. Because we are all switching. Because that’s how we’ll save money. But it puts costs up. So it will cost us money. But we should switch because we’ll save money.

You’re getting this, right?

But hey, if we swallow the assumption (and advice from Government) that we will save money by switching then that’s awesome. You’ll knock 10 or 15% off my bill? Yes? Awesome, because my last bill was a shocker. Terms and conditions? Yep, read all 279,621 tiny little words of your terms and conditions after following ten links on your website (lie). Didn’t understand a word of it (true). Yes, I’ll sign your contract because I’m Australian, you’re Australian and a deal is a deal. I’d spit and shake on it if you weren’t in Bangalore.

Now as it turns out, the totally awesome discount you just got is actually pretty “fluid”.  Turns out current laws allow the retailers to increase the price they charge you for electricity at any time during a contract.  But I hate switching, it’s a pain, so I’ll just lump it in 6 or 12 months when you hit me with a price rise caused by factors completely outside your control.

Wow, that wasn’t such a good deal after all.

The rules

Then there are the rules. My god, the rules. Simply trying to understand the rules and regulations that govern the industry, how they translate to your bill and what they can and can’t do is like trying to understand what your Optus phone is actually costing you. You have absolutely no hope.

Take business customers, for example. I recently analysed five business bills, which came from different locations around Australia but were all for similar costs and, by coincidence, all from the same retailer.

Firstly, there was a complete lack of consistency, which made understanding and comparing them virtually impossible: different terms for the same thing, slight changes in wording, some charges on energy, some on demand. In some cases customers paid for simply awesome things like "VIP Metering" and "Consumer advocacy". Unreal. If I was a business owner, I would be so impressed to know that my retailer is charging me to be an advocate. For me. And then charging me. Now that's service!

Then there is the complete and total transparency which allows me to compare commercial offerings. Yep, you can go to a website, look at every offer in the market, upload your consumption data and work out which offer is best. And it's easy (switching, remember?). Bullshit.

There is a chasm here greater than Western Australia's Super Pit.

Firstly, if you want to know your demand profile, they'll take weeks and probably charge you. For knowing. Your consumption.

Secondly, if you ask for an offer, they'll pretty quickly slot you into a demand "band". No one actually knows what these bands are or what they mean, and they vary by region, by offer, by your size and by the colour of your neighbour's hair (god help you if they're a blood-nut). It's like a mystery flight: just shut up, sit down and hang on. If you don't know your demand yet, don't worry, because they have a secret formula that lets them tell you how much it will cost and what your profile will look like. Without knowing anything about your demand. At all.

But hey, I’m probably being unfairly critical because its complicated; I couldn’t possibly hope to understand. Go right ahead.

Then of course, you might have a relationship going back many, many years with your retailer. You watch the news, you've seen the drought, you've listened to the issues about peak demand and the greatest moral issue of our time, and you decided: screw it. I'll stump up hundreds of thousands of dollars of my own money and whack some solar up.

Your retailer's reaction? Well, at least one I know of said "Awesome! We'll just renegotiate the contract you broke. Your energy rate will dramatically reduce from 25c/kWh to 5c. Your standing charges (don't worry about them) will increase from 25c a day to $2 a day." For those unfamiliar, that's called "the big switcheroo", formerly the domain of dudes in weird waistcoats with cups and balls, but now a wholly owned subsidiary of electricity retailers.
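The quoted rates make the switcheroo easy to check. A minimal sketch, using the 25c/kWh to 5c/kWh energy rates and 25c/day to $2/day standing charges from the story above (the daily consumption figures are assumed):

```python
# The big switcheroo in numbers, using the rates quoted above.
# Daily grid consumption figures are assumed for illustration.

old_energy, new_energy = 0.25, 0.05   # $/kWh before and after
old_fixed, new_fixed = 0.25, 2.00     # $/day standing charge

def daily_bill(kwh: float, energy_rate: float, fixed: float) -> float:
    return kwh * energy_rate + fixed

# Crossover point where the two tariffs cost the same:
# kwh * 0.25 + 0.25 == kwh * 0.05 + 2.00
crossover = (new_fixed - old_fixed) / (old_energy - new_energy)
print(f"Tariffs break even at {crossover:.2f} kWh/day of grid draw")

for kwh in (5, 20, 50):   # assumed daily grid consumption levels
    print(f"{kwh:>3} kWh/day: old ${daily_bill(kwh, old_energy, old_fixed):.2f}, "
          f"new ${daily_bill(kwh, new_energy, new_fixed):.2f}")
# Break-even is 8.75 kWh/day. A solar customer drawing only 5 kWh
# a day from the grid pays MORE under the 'renegotiated' contract
# ($2.25 vs $1.50), while a heavy user pays far less: the new
# structure claws back exactly what the solar system saves.
```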

Oh, and because the rules have changed to protect consumers (enter the National Energy Customer Framework), if you want solar, we will need to come and do a horrendously expensive study, because, well, the fact that you have been on our network for thirty years and we approved everything counts for nothing. Because we have to protect you. In one actual case, one of the reasons a network operator gave for delaying a solar installation was, and I quote: "The LV OH supply from the Council access track North of premises is quite sneaky visually and very hazardous to the unsuspecting."

Damn it, sneaky wires. That's a damn good reason to stop progress and infuriate a 30-year customer whose (sneaky) installation was approved by you. You're right. We are busted for excessive sneakiness.

I was also fascinated to see the variation in the loss factors that are applied to bills as a separate and definable item. They varied between 0.1% and a staggering 15.19%, and are applied as a multiplier to the energy you consume. So in one case, the business bill I looked at was 15.19% higher than the customer's actual consumption, because the network is so grossly inefficient at delivering energy to their premises. That's akin to a mechanic saying "Sorry mate, I spilled 15% of the oil when I was doing your service because my pipe has a leak, but the law says I can charge you for it".

Not only are they allowed to do this by law, but they will charge you a huge proportion of your bill for building, owning, operating and maintaining that same network, then charge you (again) if they happen to do a lousy job of it where you happen to have your business. Really.
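For anyone wondering what a loss factor actually does to a bill, a quick sketch. Only the 15.19% multiplier comes from the bills I saw; the metered consumption and energy rate are assumed for illustration.

```python
# What a loss factor does to a bill. The 15.19% multiplier is from
# the worst bill above; metered consumption and the energy rate
# are assumed for illustration.

metered_kwh = 10_000       # what the meter actually recorded (assumed)
loss_factor = 1.1519       # the 15.19% loss factor
energy_rate = 0.25         # $/kWh (assumed)

billed_kwh = metered_kwh * loss_factor
loss_cost = (billed_kwh - metered_kwh) * energy_rate

print(f"Metered {metered_kwh:,} kWh, billed as {billed_kwh:,.0f} kWh")
print(f"${loss_cost:,.2f} of the bill pays for energy the network lost")
# 10,000 kWh metered becomes 11,519 kWh billed: $379.75 of a
# $2,879.75 energy charge covers the spilled oil, so to speak.
```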

Then we can also consider the regulations around the pass-through of the costs of the RET. In NSW, for example, retailers were allowed (by the State regulator) to pass through the "full cost" of certificates at $40 and recover these costs from consumers and businesses. The catch is that the real price of certificates has moved between $16 and $36 over the last few years, and of course, if those same retailers create their own certificates (by selling you a solar system and capturing the STCs), they can get prices down even further. So we know, and it has been acknowledged by IPART, that the retailers stood to gain potentially substantial sums from this quirk.
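The potential margin is simple to lay out. A sketch using only the figures quoted above (the $40 pass-through against the $16 to $36 market range):

```python
# Certificate pass-through margin, using only the figures quoted
# above: consumers charged as if certificates cost $40, while the
# market price has ranged between $16 and $36.

pass_through = 40.0                 # $/certificate recovered from consumers

for market_price in (16.0, 36.0):   # the quoted market range
    margin = pass_through - market_price
    share = margin / pass_through
    print(f"Market at ${market_price:.0f}: retailer keeps ${margin:.0f} "
          f"per certificate ({share:.0%} of what the consumer paid)")
# At the bottom of the range the retailer keeps $24 of every $40
# (60%); even at the top it keeps $4 (10%). A retailer creating
# its own STCs through solar sales can do better still.
```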

So in reality the RET, and the SRES in particular, has contributed to the profits of the retailers.

I could go on with a myriad of other examples but I suspect you get my point.

The Government owns, regulates and controls the vast majority of the electricity industry in Australia, all the way back to the coal reserves in some cases. They make a phenomenal amount of money from it, as do the non-Government retailers, and they don't want it to change. The US-based Edison Electric Institute (an electricity industry think tank) summed up the substantial concerns of their industry about disruptive challenges in blunt terms in a document released last year, warning that the industry had to adapt or perish. Through their vast media connections they will say whatever is politically convenient, even if it is complete and utter rubbish, and we won't even get a return phone call from the same reporters.

It seems to me that they have all got themselves into a corner so dark, they just have to keep rolling out the same rubbish and hope no one notices.

Guess what? We noticed.

Report: Power Plant Attack: “Most Significant Incident of Domestic Terrorism Involving the Grid That Has Ever Occurred”

Report: Power Plant Attack: “Most Significant Incident of Domestic Terrorism Involving the Grid That Has Ever Occurred”.

Mac Slavo
February 6th, 2014
SHTFplan.com


Chances are you didn't hear about it when it happened, or about the investigation that followed. Last April, just outside of San Jose, California, the grid system came under direct attack.

Investigators have yet to identify any suspects, but the attack seems to have been well planned. First, someone accessed an underground vault housing fiber optic telephone cables and cut off communications to a large PG&E Substation.

Then, for 19 minutes, someone opened fire from long-range.

The sniper apparently utilized 7.62x39mm rounds, such as those used in an AK-47, to target the oil-filled cooling systems of 17 large transformers. The shell casings found at the scene had been wiped clean of fingerprints. According to Newsmax, none of the transformers exploded, but the damage was significant enough that PG&E had to reroute its electricity feeds through another station to prevent a widespread blackout.

As of yet, police and the Federal Bureau of Investigation have no leads. The evidence suggests any number of scenarios, the most likely being a coordinated attack involving a team. But because of its simplicity, it's possible the attack was orchestrated by a lone individual.

Whatever the case, the event prompted Jon Wellinghoff, then head of the Federal Energy Regulatory Commission, to call it "the most significant incident of domestic terrorism involving the grid that has ever occurred."

The Wall Street Journal reports:

The 64-year-old Nevadan, who was appointed to FERC in 2006 by President George W. Bush and stepped down in November, said he gave closed-door, high-level briefings to federal agencies, Congress and the White House last year. As months have passed without arrests, he said, he has grown increasingly concerned that an even larger attack could be in the works.

He said he was going public about the incident out of concern that national security is at risk and critical electric-grid sites aren’t adequately protected.

The Federal Bureau of Investigation doesn’t think a terrorist organization caused the Metcalf attack, said a spokesman for the FBI in San Francisco. Investigators are “continuing to sift through the evidence,” he said.

Some people in the utility industry share Mr. Wellinghoff’s concerns, including a former official at PG&E, Metcalf’s owner, who told an industry gathering in November he feared the incident could have been a dress rehearsal for a larger event.

This wasn’t an incident where Billy-Bob and Joe decided, after a few brewskis, to come in and shoot up a substation,” Mark Johnson, retired vice president of transmission for PG&E, told the utility security conference, according to a video of his presentation. “This was an event that was well thought out, well planned and they targeted certain components.”

(Image via The Wall Street Journal)

The most significant power grid attack in U.S. history went largely unreported by officials and the mainstream media, likely because they did not want to panic the populace.

Could this have been a test run for a larger-scale event? Certainly.

Since then, what steps have been taken to protect the grid from such attacks, or from other potential scenarios like electromagnetic pulse devices or solar flares that could wipe out the national power grid within seconds? None.

A single individual could have carried out such an attack. Cut the phone lines. Take aim. Open fire. It’s simple, really.

Now consider the potential damage if a rogue terrorist group or state-sponsored initiative launched a coordinated attack across 50 to 100 critical nodes all over the United States. Such an attack could bring the country to a complete standstill, leaving economic destruction and large-scale destabilization in its wake. A couple of days without power is manageable, but if the right equipment were targeted, repairs could take up to 18 months, because many transformer components are sourced from foreign nations and have long build times.

The telecommunications systems, power grid, water utilities, transportation systems, oil refineries and other critical industries across America are, as reported by U.S. Cyber Command, completely exposed to attack. It could come in the form of a cyber vulnerability, as we saw in Illinois when a utility station's water pump systems overheated due to a reported digital security breach, or when our drone fleet was hacked in the Middle East. Or it could be a physical attack like the one in California, with future incidents potentially involving larger transformers and explosives instead of AK-47s.

The possibilities exist. Our government knows this, as evidenced by the comments of outgoing DHS Secretary Janet Napolitano, who recently said that a crippling attack against U.S. infrastructure is inevitable.

The fact is that our infrastructure is outdated and exposed. It will not be repaired any time soon because the costs run into the hundreds of billions of dollars.

Thus, the only real option for Americans is to expect that such an event is coming, and to prepare for it.

Congressman Roscoe Bartlett, who has retired and now lives well outside of populated areas, says people should get out of major cities and have a retreat to avoid the fall-out from a grid collapse. His fears are substantiated by a recent report that claims 9 out of 10 Americans would die within a year of the electricity going out.

But whether you head out to the boonies or stay local, even the Federal Emergency Management Agency recommends having an emergency supply, because, as they've admitted, any response in a catastrophic scenario will be slow to come. This means that having a preparedness plan complete with evacuation strategies, food supplies, water and other considerations will be essential to survival.

The threat is real.


China reveals ‘ace’ against U.S. military

China reveals ‘ace’ against U.S. military.

WASHINGTON – Members of the Chinese military are looking to use an electromagnetic pulse as part of a “one-two punch” to knock out – literally within seconds – all defensive electronics not only on Taiwan but also on U.S. warships that could defend the island.

This revelation comes in an article by Lou Xiaoqing, who says the People's Liberation Army sees an EMP weapon as the primary means of incapacitating Taiwan and disabling American defenders nearby.

Given that such a strategy was made public in an article entitled “Electromagnetic pulse bombs are Chinese ace,” it is seen as reflecting the official Chinese government position.

Xiaoqing said that if the Chinese were to use a high-altitude nuclear device to create the destructive EMP impact on Taiwan's electronics, it would be exploded at an altitude of 18 miles (about 29 kilometers) to avoid damaging civilian and military equipment on the Chinese mainland, which might happen if the bomb exploded at a higher altitude.

"China is attracted to the fight against the U.S. military after the effective range, using them as a means of surprise attack or an intimidation factor," Xiaoqing said. "The United States will abandon the use of aircraft carrier battle groups to defend Taiwan."

Xiaoqing said that the Chinese military has calculated that the U.S. military is too fragmented and, coupled with the downturn in the economy, would be less likely to come to Taiwan's assistance, forcing Taiwan to defend itself.

Contrary to popular belief, the 1979 Taiwan Relations Act does not require the United States to intervene militarily if the Chinese mainland attacks Taiwan. Instead, the U.S. has adopted what is called a policy of "strategic ambiguity", in which it will neither confirm nor deny that it would intervene on Taiwan's behalf.

The legislation, however, does require the U.S. to “provide Taiwan with arms of a defensive character” and “to maintain the capacity of the United States to resist any resort to force or other forms of coercion that would jeopardize the security, or the social or economic system, of the people of Taiwan.”


As WND previously has reported, China is giving a priority to developing EMP weapons that could be used against U.S. aircraft carriers, which increasingly are arriving in the South and East China Seas as part of the new U.S. “pivot” policy toward Asia.

That policy is to challenge China's claims over all of the East and South China Seas, and the increasing assertiveness of Beijing, which is trying to gain exclusive control over vital minerals and energy in the region.

There already have been instances of military confrontations between China and neighbors such as Vietnam, the Philippines and Japan.

With a history of animosity, China and Japan now have conflicting claims of ownership over islands in the East China Sea.

China calls the islands Diaoyu while Japan refers to them as Senkaku. The Japanese have evidence of their claim – in having purchased them from private citizens years ago – and the U.S. supports Japan’s claim.

A 2005 U.S. National Ground Intelligence Center study that was classified secret but released two years ago said China’s development of high-powered microwave weapons is part of its “assassin’s mace” arsenal – weapons that allow a technologically inferior country such as China and even North Korea to defeat U.S. military forces.

Microwaves and the gamma rays from a nuclear blast are forms of electromagnetic energy. The bombs are designed to be exploded at a high altitude to knock out all unprotected electronics, including electrical grids, computers and automobiles over a wide geographical area.

Even the declassified NGIC report pointed out that the use of an EMP against Taiwan at an altitude of 30 to 40 kilometers would “confine the EMP effects to Taiwan and its immediate vicinity and minimize damage to electronics on the mainland.”
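Why burst altitude matters is mostly geometry: EMP effects reach only as far as the line of sight from the detonation point, so the horizon distance at the burst altitude bounds the footprint. A rough sketch using the standard horizon formula (generic geometry, not a figure from the NGIC report; the strongest effects cover a much smaller area than this bound):

```python
import math

# Rough upper bound on an EMP footprint: effects propagate only
# within line of sight of the burst, so the horizon distance at a
# given altitude bounds the affected radius. Generic geometry, not
# a figure from the NGIC report.

EARTH_RADIUS_KM = 6371.0

def horizon_km(altitude_km: float) -> float:
    """Ground distance to the horizon as seen from a given altitude."""
    return math.sqrt(2 * EARTH_RADIUS_KM * altitude_km + altitude_km ** 2)

# 18 miles is ~29 km; 40 km is the top of the NGIC band; 400 km is
# a high-altitude burst for contrast.
for alt in (29, 40, 400):
    print(f"Burst at {alt:>3} km: line-of-sight radius ~{horizon_km(alt):,.0f} km")
# ~609 km, ~715 km and ~2,293 km respectively: the lower the burst,
# the smaller the maximum footprint, which is the logic behind
# detonating low to limit effects beyond Taiwan and its vicinity.
```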

The report specifically identified China's DF-21 medium-range ballistic missile as a possible platform for launching an EMP attack on Taiwan.

In outlining China's one-two punch, Xiaoqing said that in the first punch the Chinese military would disable non-hardened electronics and command and control centers.

He said that an EMP would be especially attractive because it acts at the speed of light, works in any kind of weather, hits multiple targets over a wide area and minimizes damage in politically sensitive environments.

Given the relatively low altitude of 18 miles at which a Chinese EMP would be detonated over Taiwan, Xiaoqing said the second punch would come from the health effects of exposure to an EMP.

He said that, based on Chinese research in 2005 that assessed the effects of an EMP on heart cells, it would impair the function of people's hearts, with possible serious damage to the heart and, by extension, death to those exposed.

If the explosion occurred at a higher altitude, the effects of an EMP on people's health would be less damaging, he said.

While there wouldn’t be a 100 percent kill rate, Xaoqing said, he said it could lead to long term disability to those most susceptible to an EMP, such as the elderly, young and unborn.

Read more at http://www.wnd.com/2014/01/chinas-reveals-ace-against-u-s-military/#3v8gQZeDTmUzQlPi.99
