Olduvaiblog: Musings on the coming collapse

Tag Archives: science

New Study Shows Total North American Methane Leaks Far Worse than EPA Estimates | DeSmogBlog

New Study Shows Total North American Methane Leaks Far Worse than EPA Estimates | DeSmogBlog.

Fri, 2014-02-14 12:40 | Sharon Kelly

Just how bad is natural gas for the climate?

A lot worse than previously thought, new research on methane leaks concludes.

Far more natural gas is leaking into the atmosphere nationwide than the Environmental Protection Agency currently estimates, researchers concluded after reviewing more than 200 different studies of natural gas leaks across North America.

The ground-breaking study, published today in the prestigious journal Science, reports that the Environmental Protection Agency has understated how much methane leaks into the atmosphere nationwide by between 25 and 75 percent — meaning that the fuel is far more dangerous for the climate than the Obama administration asserts.

The study, titled “Methane Leakage from North American Natural Gas Systems,” was conducted by a team of 16 researchers from institutions including Stanford University, the Massachusetts Institute of Technology and the Department of Energy’s National Renewable Energy Laboratory. It is making headlines because it shows definitively that natural gas production and development can make natural gas worse for the climate than other fossil fuels.

The research, which was reported in The Washington Post, Bloomberg and The New York Times, was funded by a foundation created by the late George P. Mitchell, the wildcatter who first successfully drilled shale gas, so it would be hard to dismiss it as the work of environmentalists hell-bent on discrediting the oil and gas industry.

The debate over the natural gas industry’s climate change effects has raged for several years, ever since researchers from Cornell University stunned policy-makers and environmentalists by warning that if enough methane seeps out between the gas well and the burner, relying on natural gas could be even more dangerous for the climate than burning coal.

Natural gas is composed mostly of methane, an extraordinarily powerful greenhouse gas that traps heat 86 times more effectively than carbon dioxide during the two decades after it enters the atmosphere, according to the Intergovernmental Panel on Climate Change. Even small leaks can therefore have major climate impacts.

The team of researchers echoed many of the findings of the Cornell researchers and described how the federal government’s official estimate proved far too low.

“Atmospheric tests covering the entire country indicate emissions around 50 percent more than EPA estimates,” said Adam Brandt, the lead author of the new report and an assistant professor of energy resources engineering at Stanford University. “And that’s a moderate estimate.”

The new paper drew some praise from Dr. Robert Howarth, one of the Cornell scientists.

“This study is one of many that confirms that EPA has been underestimating the extent of methane leakage from the natural gas industry, and substantially so,” Dr. Howarth wrote, adding that the estimates for methane leaks in his 2011 paper and the new report are “in excellent agreement.”

In November, research led by Harvard University found that the leaks from the natural gas industry have been especially underestimated. That study, published in the Proceedings of the National Academy of Sciences, reported that methane emissions from fossil fuel extraction and oil refineries in some regions are nearly five times higher than previous estimates, and was one of the more than 200 studies included in Thursday’s Science study.

EPA Estimates Far Off-Target

So how did the EPA miss the mark by such a wide margin?

The EPA’s estimate relies in large part on bottom-up calculations: take the amount of methane released by an average cow and multiply it by the number of cattle nationwide, then make a similar estimate for how much methane leaks from an average gas well. But this approach leaves out a broad variety of sources — leaking abandoned natural gas wells, broken valves and the like.
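
To make that bottom-up approach concrete, here is a minimal sketch in Python. All the emission factors and source counts are hypothetical round numbers chosen purely for illustration, not the EPA's actual inventory categories or values; the point is simply that the total is a sum of (average emission factor) × (source count), so any category missing from the list never shows up in the total.

```python
# Minimal sketch of a bottom-up emissions inventory (all numbers hypothetical).
# Each listed source category contributes (average emission factor) x (count);
# any category that is not listed -- abandoned wells, broken valves and the
# like -- simply never appears in the total.

EMISSION_FACTORS_KG_CH4_PER_YEAR = {
    "cow": 100.0,                   # assumed average per animal, illustrative only
    "producing_gas_well": 5_000.0,  # assumed average per well, illustrative only
}

SOURCE_COUNTS = {
    "cow": 90_000_000,              # hypothetical national herd size
    "producing_gas_well": 500_000,  # hypothetical well count
}

def bottom_up_inventory(factors: dict, counts: dict) -> float:
    """Sum emission_factor * count over every listed source category."""
    return sum(factors[name] * counts[name] for name in factors)

total_kg = bottom_up_inventory(EMISSION_FACTORS_KG_CH4_PER_YEAR, SOURCE_COUNTS)
print(f"Inventory total: {total_kg / 1e9:.1f} million tonnes of CH4 per year")
```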

The EPA’s numbers never jibed with findings from the National Oceanic and Atmospheric Administration and the U.S. Department of Energy, which approached the problem by taking measurements of methane and other gas levels from research flights and the tops of telecommunications towers.

But while these types of measurements show how much methane is in the atmosphere, they don’t explain where that methane came from. So it was still difficult to figure out how much of that methane originated from the oil and gas industry.

At times, EPA researchers went to oil and gas drilling sites to take measurements. But they relied on drillers’ voluntary participation. For instance, one EPA study requested cooperation from 30 gas companies so it could measure emissions, but only six companies allowed the EPA on site.

“It’s impossible to take direct measurements of emissions from sources without site access,” said Garvin Heath, a senior scientist with the National Renewable Energy Laboratory and a co-author of the new analysis, in a press release. “Self-selection bias may be contributing to why inventories suggest emission levels that are systematically lower than what we sense in the atmosphere.” (DeSmog has previously reported on the problem of industry-selected well sites in similar research funded by the Environmental Defense Fund.)

Worse than Coal?

There was, however, one important point that the news coverage so far missed and that deserves attention — a crucial point that could undermine entirely the notion that natural gas can serve as a “bridge fuel” to help the nation transition away from other, dirtier fossil fuels.

In their press release, the team of researchers compared the climate effects of different fuels, like diesel and coal, against those of natural gas.

They found that powering trucks or buses with natural gas made things worse.

“Switching from diesel to natural gas, that’s not a good policy from a climate perspective,” explained the study’s lead author, Adam R. Brandt, an assistant professor in the Department of Energy Resources at Stanford, calling into question a policy backed by President Obama in his recent State of the Union address.

The researchers also described the effects of switching from coal to natural gas for electricity — concluding that coal is worse for the climate in some cases. “Even though the gas system is almost certainly leakier than previously thought, generating electricity by burning gas rather than coal still reduces the total greenhouse effect over 100 years, the new analysis shows,” the team wrote in a press release.

But they failed to address the climate impacts of natural gas over a shorter period — the decades when the effects of methane are at their most potent.

“What is strange about this paper is how they interpret methane emissions: they only look at electricity, and they only consider the global warming potential of methane at the 100-year time frame,” said Dr. Howarth. Howarth’s 2011 Cornell study reviewed all uses of gas, noting that electricity accounts for only roughly 30% of gas use in the US, and examined both a 20- and a 100-year time frame.

The choice of time frame is vital because methane does not last as long in the atmosphere as carbon dioxide, so its relative impact shifts over time. “The new Intergovernmental Panel on Climate Change (IPCC) report from last fall — their first update on the global situation since 2007 — clearly states that looking only at the 100 year time frame is arbitrary, and one should also consider shorter time frames, including a 10-year time frame,” Dr. Howarth pointed out.
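
To see why the horizon matters, here is a short worked sketch in Python using the two global warming potential values quoted in this article (roughly 86 times CO2 over 20 years, roughly 30 times CO2 over 100 years) and a hypothetical leak size. The same physical leak looks nearly three times larger when weighted on the 20-year frame, which is part of what Howarth and Ingraffea object to below.

```python
# Worked comparison: the same methane leak weighted at the two time horizons
# cited in this article (IPCC: ~86x CO2 over 20 years; ~30x over 100 years).
# The leak size is a hypothetical round number chosen purely for illustration.

GWP_20YR = 86    # methane vs CO2 over 20 years (IPCC figure quoted above)
GWP_100YR = 30   # methane vs CO2 over 100 years (figure cited by Ingraffea below)

leaked_ch4_tonnes = 1_000_000  # hypothetical: one million tonnes of leaked methane

co2e_20yr = leaked_ch4_tonnes * GWP_20YR    # CO2-equivalent, 20-year weighting
co2e_100yr = leaked_ch4_tonnes * GWP_100YR  # CO2-equivalent, 100-year weighting

print(f"20-year weighting:  {co2e_20yr / 1e6:.0f} million tonnes CO2e")
print(f"100-year weighting: {co2e_100yr / 1e6:.0f} million tonnes CO2e")
print(f"Same leak looks {co2e_20yr / co2e_100yr:.1f}x larger on the 20-year frame")
```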

Another paper, published in Science in 2012, explains why it’s so important to look at the shorter time frames.

Unless methane is controlled, the planet will warm by 1.5 to 2 degrees Celsius over the next 17 to 35 years, even if carbon dioxide emissions are controlled. A temperature rise of that size could potentially shift the planet’s climate into a runaway feedback of further global warming.

“[B]y only looking at the 100 year time frame and only looking at electricity production, this new paper is biasing the analysis of greenhouse gas emissions between natural gas and coal in favor of natural gas being low,” said Dr. Howarth, “and by a huge amount, three to four to perhaps five fold.”

Dr. Howarth’s colleague, Prof. Anthony Ingraffea, raised a similar complaint.

“Once again, there is a stubborn use of the 100-year impact of methane on global warming, a factor about 30 times that of CO2,” Dr. Ingraffea told Climate Central, adding that there is no scientific justification to use the 100-year time window.

“That is a policy decision, perhaps based on faulty understanding of the climate change situation in which we find ourselves, perhaps based on wishful thinking,” he said.

For its part, the oil and gas industry seems very aware of the policy implications of this major new research and is already pushing back against any increased oversight of its operations.

“Given that producers are voluntarily reducing methane emissions,” Carlton Carroll, a spokesman for the American Petroleum Institute, told The New York Times in an interview about the new study, “additional regulations are not necessary.”
Photo Credit: “White Smoke from Coal-Fired Power Plant,” via Shutterstock.

Global riot epidemic due to demise of cheap fossil fuels | Nafeez Ahmed | Environment | theguardian.com

Global riot epidemic due to demise of cheap fossil fuels | Nafeez Ahmed | Environment | theguardian.com.

From South America to South Asia, a new age of unrest is in full swing as industrial civilisation transitions to post-carbon reality

A protester in Ukraine swings a metal chain during clashes – a taste of things to come? Photograph: Gleb Garanich/Reuters

If anyone had hoped that the Arab Spring and Occupy protests a few years back were one-off episodes that would soon give way to more stability, they have another thing coming. The hope was that ongoing economic recovery would return to pre-crash levels of growth, alleviating the grievances fueling the fires of civil unrest, stoked by years of recession.

But this hasn’t happened. And it won’t.

Instead, the post-2008 crash era, including 2013 and early 2014, has seen a persistence and proliferation of civil unrest on a scale never before seen in human history. This month alone has seen riots kick off in Venezuela, Bosnia, Ukraine, Iceland, and Thailand.

This is not a coincidence. The riots are of course rooted in common, regressive economic forces playing out across every continent of the planet – but those forces themselves are symptomatic of a deeper, protracted process of global system failure as we transition from the old industrial era of dirty fossil fuels, towards something else.

Even before the Arab Spring erupted in Tunisia in December 2010, analysts at the New England Complex Systems Institute warned of the danger of civil unrest due to escalating food prices. If the Food and Agriculture Organisation (FAO) food price index rises above 210, they warned, it could trigger riots across large areas of the world.

Hunger games

The pattern is clear. Food price spikes in 2008 coincided with the eruption of social unrest in Tunisia, Egypt, Yemen, Somalia, Cameroon, Mozambique, Sudan, Haiti, and India, among others.

In 2011, the price spikes preceded social unrest across the Middle East and North Africa – Egypt, Syria, Iraq, Oman, Saudi Arabia, Bahrain, Libya, Uganda, Mauritania, Algeria, and so on.

Last year saw food prices reach their third highest year on record, corresponding to the latest outbreaks of street violence and protests in Argentina, Brazil, Bangladesh, China, Kyrgyzstan, Turkey and elsewhere.

In little more than a decade, the FAO food price index has more than doubled, from 91.1 in 2000 to an average of 209.8 in 2013. As Prof Yaneer Bar-Yam, founding president of the New England Complex Systems Institute, told Vice magazine last week:

“Our analysis says that 210 on the FAO index is the boiling point and we have been hovering there for the past 18 months… In some of the cases the link is more explicit, in others, given that we are at the boiling point, anything will trigger unrest.”

But Bar-Yam’s analysis of the causes of the global food crisis doesn’t go deep enough – he focuses on the impact of farmland being used for biofuels, and on excessive financial speculation on food commodities. These factors barely scratch the surface.

It’s a gas

The recent cases illustrate not just an explicit link between civil unrest and an increasingly volatile global food system, but also the root of this problem in the increasing unsustainability of our chronic civilisational addiction to fossil fuels.

In Ukraine, previous food price shocks have negatively affected the country’s grain exports, contributing to intensifying urban poverty in particular. Accelerating domestic inflation is underestimated in official statistics – Ukrainians spend more than half their incomes on necessities such as food and non-alcoholic drinks, and as much as 75% on household bills. Similarly, for most of last year Venezuela suffered from ongoing food shortages driven by policy mismanagement, along with 17-year record-high inflation due mostly to rising food prices.

While dependence on increasingly expensive food imports plays a role here, at the heart of both countries’ troubles is a deepening energy crisis. Ukraine is a net energy importer, having peaked in oil and gas production back in 1976. Despite excitement about domestic shale potential, Ukraine’s oil production has declined by over 60% over the last twenty years, driven by both geological challenges and a dearth of investment.

Currently, about 80% of Ukraine’s oil, and 80% of its gas, is imported from Russia. But over half of Ukraine’s energy consumption is sustained by gas. Russian natural gas prices have nearly quadrupled since 2004. The rocketing energy prices underpin the inflation that is driving excruciating poverty rates for average Ukrainians, exacerbating social, ethnic, political and class divisions.

The Ukrainian government’s recent decision to dramatically slash Russian gas imports will likely worsen this, as alternative cheaper energy sources are in short supply. Hopes that domestic energy sources might save the day are slim – apart from the fact that shale cannot solve the prospect of expensive liquid fuels, nuclear will not help either. A leaked European Bank for Reconstruction and Development (EBRD) report reveals that proposals to loan 300 million euros to renovate Ukraine’s ageing infrastructure of 15 state-owned nuclear reactors will gradually double already debilitating electricity prices by 2020.

“Socialism” or Soc-oil-ism?

In Venezuela, the story is familiar. Previously, the Oil and Gas Journal reported the country’s oil reserves were 99.4 billion barrels. As of 2011, this was revised upwards to a mammoth 211 billion barrels of proven oil reserves, and more recently by the US Geological Survey to a whopping 513 billion barrels. The massive boost came from the discovery of reserves of extra heavy oil in the Orinoco belt.

The huge associated costs of production and refining this heavy oil compared to cheaper conventional oil, however, mean the new finds have contributed little to Venezuela’s escalating energy and economic challenges. Venezuela’s oil production peaked around 1999, and has declined by a quarter since then. Its gas production peaked around 2001, and has declined by about a third.

Simultaneously, as domestic oil consumption has steadily increased – in fact almost doubling since 1990 – this has eaten further into declining production, resulting in net oil exports plummeting by nearly half since 1996. As oil represents 95% of export earnings and about half of budget revenues, this decline has massively reduced the scope to sustain government social programmes, including critical subsidies.

Looming pandemic?

These local conditions are being exacerbated by global structural realities. Record-high global food prices impinge on these local conditions and push them over the edge. But the food price hikes, in turn, are symptomatic of a range of overlapping problems. Global agriculture’s excessive dependence on fossil fuel inputs means food prices are invariably linked to oil price spikes. Biofuels and food commodity speculation push prices up even further – elite financiers alone benefit from this, while working people from the middle to lower classes bear the brunt.

Of course, the elephant in the room is climate change. According to Japanese media, a leaked draft of the UN Intergovernmental Panel on Climate Change’s (IPCC) second major report warned that while demand for food will rise by 14%, global crop production will drop by 2% per decade due to current levels of global warming, wreaking $1.45 trillion of economic damage by the end of the century. The scenario is based on a projected rise of 2.5 degrees Celsius.

This is likely to be a very conservative estimate. Considering that the current trajectory of industrial agriculture is already producing yield plateaus in major food-basket regions, the interaction of environmental, energy, and economic crises suggests that business-as-usual won’t work.

The epidemic of global riots is symptomatic of global system failure – a civilisational form that has outlasted its usefulness. We need a new paradigm.

Unfortunately, simply taking to the streets isn’t the answer. What is needed is a meaningful vision for civilisational transition – backed up with people power and ethical consistency.

It’s time that governments, corporations and the public alike woke up to the fact that we are fast entering a new post-carbon era, and that the quicker we adapt to it, the far better our chances of successfully redefining a new form of civilisation – a new form of prosperity – that is capable of living in harmony with the Earth system.

But if we continue to make like ostriches, we’ll only have ourselves to blame when the epidemic becomes a pandemic at our doorsteps.

Dr Nafeez Ahmed is executive director of the Institute for Policy Research & Development and author of A User’s Guide to the Crisis of Civilisation: And How to Save It among other books. Follow him on Twitter @nafeezahmed

How Climate Change Helped Decimate a 4,000 Year Old Megacity | Motherboard

How Climate Change Helped Decimate a 4,000 Year Old Megacity | Motherboard.

February 27, 2014 // 05:01 PM EST 

More than 4,000 years ago, three civilizations dominated South Asia and North Africa. Ancient Egypt and Mesopotamia are names you’ll surely recognize, but the lesser-known Indus Valley Civilization was actually the largest of the three. At its height, around 2600 BCE, the Indus spread across what is now India and Pakistan, and built large cities like Mohenjo-Daro, whose population is estimated to have been well into five figures.

Around 1800 BCE, the Indus civilization began to decline, and all but disappeared by 1300 BCE. The reason has been the source of controversy for decades, but new research adds evidence to the theory that climate change led to a sharp weakening of the key summer monsoon season, which left the Indus river valley drier and inhospitable.

Tracking weather patterns from millennia ago isn’t easy. The University of Cambridge research team first started by finding an ancient lake, called Kotla Dahar, that still existed in the Indus’ time. The dirt at the bottom of an ancient lake doesn’t offer many clues, but what it holds does: By identifying the species and chemical makeup of ancient snails buried in the former lake, the Cambridge team was able to calculate how much rainfall the region received thousands of years ago. The results are published in Geology.

They found that the paleolake in Haryana, India, was a deep body of water between 6,500 and 5,800 years ago, which corresponded with a time of heavy monsoon activity. But in snail shells dating to around 4,100 years ago—right before the time the Indus went into decline—the researchers found an increase in an oxygen isotope, which suggests the lake was drying up due to a weakening of the summer monsoon.

“We think that we now have a really strong indication that a major climate event occurred in the area where a large number of Indus settlements were situated,” study co-author Professor David Hodell said in a release. “Taken together with other evidence from Meghalaya in northeast India, Oman and the Arabian Sea, our results provide strong evidence for a widespread weakening of the Indian summer monsoon across large parts of India 4,100 years ago.”

At the time, drought was spreading throughout much of Asia. “The 4.2 ka aridification event is regarded as one of the most severe climatic changes in the Holocene, and affected several Early Bronze Age populations from the Aegean to the ancient Near East,” the authors write.

A map of the spread of the Indus Valley Civilization, including Mohenjo-Daro (5) and Harappa (4), another large city. Image: Wikipedia

Such drought would certainly have had a destabilizing effect. And even given some wiggle room within the dates—again, dating isotopes of snail shells in ancient lake beds is a tall task—the authors argue such monsoon weakening corresponds with known times for Indus decline. “The resultant age of drying at Kotla Dahar is consistent with the suggested archeological dates for the onset of Indus de-urbanization within dating uncertainties,” the authors write.

As you might expect, drought wreaks havoc on agriculture. Feeding a megacity, even an ancient one like Mohenjo-Daro, takes a strong farm sector, and without one, people will disappear. “Our paleoclimate record also provides indirect evidence for the suggestion that the ISM weakening at ca. 4.1 ka in northwestern India likely led to severe decline in summer overbank flooding that adversely affected monsoon-supported agriculture in this region,” the authors write.

The Indus civilization collapse has remained a mystery for at least a century of archeological investigation, but the climate angle has been batted around for nearly that long. As V.N. Misra notes in a deep look at the subject, British archeologists Sir Aurel Stein and Sir John Marshall both posited in 1931 that the Indus lived in a far wetter climate, which was held as fact until the 60s, when an American team poked holes in previous evidence.

Since then, the evidence has largely been on the side of drought coinciding with the Indus collapse, although there have also been arguments to the contrary. Isotopic studies have provided more conclusive evidence. A 2003 study in Geophysical Research Letters also found evidence of drought occurring around 4,200 years ago. Combined with the most recent study, it’s becoming more clear that while drought alone may not have caused the Indus collapse, it does appear to have helped push things along.

“We know that there was a clear shift away from large populations living in megacities,” co-author Dr. Cameron Petrie said. “But precisely what happened to the Indus civilization has remained a mystery. It is unlikely that there was a single cause, but a climate change event would have induced a whole host of knock-on effects.”

And guess what? Research in the last few years has shown that the current warming climate will likely lead to a decrease in India’s monsoon season. A 2012 paper in Environmental Research Letters put it rather simply: “Indian monsoon rainfall is vital for a large share of the world’s population,” the authors write in their abstract, before noting that “monsoon failure is possible but very rare under pre-industrial conditions, while under future warming it becomes much more frequent.”

Compounding the problem, Pakistani media reported last fall that researchers have modeled a decline in Himalayan glaciers, which means that rivers already feeling the effects of decreasing monsoon intensity could also have less snow melt to rely on. For the hundreds of millions of people in the region, the coming drought may feel a bit too reminiscent of the Indus’ collapse for comfort. But there is one major difference: This time, the climate change is man-made.

Dispute Over The Future Of Basic Research In Canada

Dispute Over The Future Of Basic Research In Canada.

By KAREN BIRCHARD and JENNIFER LEWINGTON | THE CHRONICLE OF HIGHER EDUCATION | FEB. 16, 2014

CHARLOTTETOWN, Prince Edward Island — Canada’s National Research Council is the country’s premier scientific institution, helping to produce such inventions as the pacemaker and the robotic arm used on the American space shuttle. But last year, its mission changed.

The Canadian government announced a transformation of the 98-year-old agency, formerly focused largely on basic research, into a one-stop “concierge service” to bolster technological innovation by industry — historically weak — and generate high-quality jobs.

This has set off a dispute over the future of Canada’s capacity to carry out fundamental research, with university scientists and academic organizations uncharacteristically vocal about the government’s blunt preference for commercially applicable science.

“We are not sure the government appreciates the role that basic research plays,” said Kenneth Ragan, a McGill University physicist and president of the Canadian Association of Physicists: “The real question is, How does it view not-directed, nonindustrial, curiosity-driven blue-sky research? I worry the view is that it is irrelevant at best and that in many cases they actually dislike it.”

The remodeling of the research council is one in a series of policy changes that have generated fierce pushback by Canadian academics in recent years. The Conservative government of Prime Minister Stephen Harper is also under fire for closing research libraries, shutting down research facilities and restricting when government scientists can speak publicly about their work. Last year the Canadian Association of University Teachers began a national campaign, “Get Science Right,” with town-hall meetings across the country to mobilize public opposition to the policies. Scientists have even taken to the streets of several Canadian cities in protest.

While the transformation of the National Research Council has been criticized, the government as well as some science-policy analysts say that better connecting businesses with research is an important step for Canada.

Having examined models in other countries, the National Research Council chose to streamline its operations to act as “the pivot between the two worlds” of industry and academics, with an eye toward new products and innovations, said Charles Drouin, a spokesman for the council. He said the agency had not moved away from support for fundamental research but wanted to focus such efforts better. “There is basic research, but it is directed, as opposed to undirected as you would find it in universities.”

Another battleground for the future of basic research has been the Natural Sciences and Engineering Research Council, a federal granting agency that serves as the first stop for financing fundamental research by Canadian scientists.

In 2011-12, the latest year for which data are available, the council’s “discovery” grants for fundamental research accounted for 38.4 percent of its budget, down from 50.1 percent in 2001-2. Its “innovation” grants, which encourage the transfer of university-developed technology to industry, rose to 31.4 percent in 2011-12, up from 25.3 percent a decade earlier. (The council also directs part of its roughly $1-billion budget to postdoctoral fellowships and other awards for young researchers.)

“The government has invested proportionately more on the innovation side, where it was seen that we had more challenges,” said Pierre J. Charest, vice president of research grants and scholarships at the government agency. He noted that the council was “on track” to double the number of scientists forming partnerships with industry.

Mr. Charest said criticism about a smaller percentage of funds for discovery grants missed a larger point — that the budget had grown over the past decade to almost $325 million in 2012-13. However, much of that increase comes from a special supplement for a select group of researchers to explore potentially transformative concepts.

One who has felt the pinch is Norman Hüner, an internationally recognized plant biochemist and physiologist at the University of Western Ontario, who holds a prestigious Canada Research Chair in environmental-stress biology. A longtime recipient of discovery grants, he and his research collaborators are exploring a potential breakthrough in the use of photosynthesis to trick plants into growing in suboptimal conditions — relevant research in Mr. Hüner’s view, given concerns about climate change.

But in 2012, after applying for a new grant to continue his research, the professor received $50,000 a year for five years — a sharp drop from the previous award of $132,000 a year over five years. “I was shocked, absolutely,” he recalled. “I am disillusioned beyond words.”

The cut has led to the departures of some senior scientists from his lab. And save for one new postdoctoral student with her own funds, Mr. Hüner is not replenishing his stable of young researchers. At 67, Mr. Hüner now plans to retire several years ahead of schedule.

Norman Hüner, an environmental scientist, has had his funding reduced by more than half. Dave Chidley

Even those involved in commercialization efforts question the Natural Sciences and Engineering Research Council’s new approach.

“If you have ideas that are going to lead to commercialization opportunities, you should absolutely get seed-stage funding,” said James E. Colliander, a mathematician at the University of Toronto. He acknowledged that funding for applied research was “crucially important” but said he was “not sure that the principal vehicle for funding basic research should be the path to get those dollars.” Mr. Colliander has received several major discovery grants and is also involved in an effort to bring to market a web application for large-scale academic exam assessment.

Beyond the changes in the two councils, some wonder if Canadian industry is prepared to step up its role in research innovation. In Canada’s largely foreign-owned industrial sector, research is often carried out at corporate headquarters abroad, while home-grown businesses lack the appetite or budget.

Some liken the federal strategy to pushing on a string.

The current policy appears to be trying to “push” technology from universities to industry, but what is needed to increase the level of innovation is for industry to get better at investing in new ideas and well-qualified researchers, said Arthur Carty, a former science adviser to the prime minister and a former head of the National Research Council. “Companies have to have innovation in their philosophical strategies, and they don’t have it,” said Mr. Carty, now executive director of the University of Waterloo’s Institute for Nanotechnology.

Uncertainty over the response of industry is a common refrain even among those who see merit in the federal strategy.

“Canada has had most of its eggs in the basic-research basket for quite a long time,” said Richard W. Hawkins, Canada Research Chair in science, technology and innovation policy at the University of Calgary. He has also spent years outside Canada as an adviser to governments and international agencies on innovation policy. “Governments want to invest in science and technology because they think it will lead to growth and innovation,” he said. “Governments all over the world have the same rationale.”

What’s missing in Canada, he said, is a deep understanding about how sectors of the economy could exploit knowledge to diversify and create new industries. “In Canada we know relatively less about our situation than most of our competitor countries,” he said.

But some senior scientists warn of risks to Canada’s higher-education system if pure, scholarly research is perceived as unimportant.

“One of the major contradictions of the Conservative government at the moment is that no one in Canada will question the need to have the best universities in the world,” said Daniel E. Guitton, a professor of neuroscience at McGill University. “Now how do you get them? You’re not going to get them by having people focus on an industry-related problem.”

Science policy analysts say it is too early to judge the impact of the government’s current strategy. But on one point, there is little debate. “To be honest, I’ve not seen this level of advocacy from the scientific community before,” said Paul Dufour, a fellow at the University of Ottawa’s Institute for Science, Society, and Policy. “That’s new in this country, and I think that’s a healthy thing.”

A version of this article appears in print on February 17, 2014, in The International New York Times.

Dispute Over The Future Of Basic Research In Canada

Dispute Over The Future Of Basic Research In Canada.

By KAREN BIRCHARD and JENNIFER LEWINGTON | THE CHRONICLE OF HIGHER EDUCATIONFEB. 16, 2014

CHARLOTTETOWN, Prince Edward Island — Canada’s National Research Council is the country’s premier scientific institution, helping to produce such inventions as the pacemaker and the robotic arm used on the American space shuttle. But last year, its mission changed.

The Canadian government announced a transformation of the 98-year-old agency, formerly focused largely on basic research, into a one-stop “concierge service” to bolster technological innovation by industry — historically weak — and generate high-quality jobs.

This has set off a dispute over the future of Canada’s capacity to carry out fundamental research, with university scientists and academic organizations uncharacteristically vocal about the government’s blunt preference for commercially applicable science.

“We are not sure the government appreciates the role that basic research plays,” said Kenneth Ragan, a McGill University physicist and president of the Canadian Association of Physicists: “The real question is, How does it view not-directed, nonindustrial, curiosity-driven blue-sky research? I worry the view is that it is irrelevant at best and that in many cases they actually dislike it.”

The remodeling of the research council is one in a series of policy changes that have generated fierce pushback by Canadian academics in recent years. The Conservative government of Prime Minister Stephen Harper is also under fire for closing research libraries, shutting down research facilities and restricting when government scientists can speak publicly about their work. Last year the Canadian Association of University Teachers began a national campaign, “Get Science Right,” with town-hall meetings across the country to mobilize public opposition to the policies. Scientists have even taken to the streets of several Canadian cities in protest.

While the transformation of the National Research Council has been criticized, the government as well as some science-policy analysts say that better connecting businesses with research is an important step for Canada.

Having examined models in other countries, the National Research Council chose to streamline its operations to act as “the pivot between the two worlds” of industry and academics, with an eye toward new products and innovations, said Charles Drouin, a spokesman for the council. He said the agency had not moved away from support for fundamental research but wanted to focus such efforts better. “There is basic research, but it is directed, as opposed to undirected as you would find it in universities.”

Another battleground for the future of basic research has been the Natural Sciences and Engineering Research Council, a federal granting agency that serves as the first stop for financing fundamental research by Canadian scientists.

In 2011-12, the latest year for which data are available, the council’s “discovery” grants for fundamental research accounted for 38.4 percent of its budget, down from 50.1 percent in 2001-2. Its “innovation” grants, which encourage the transfer of university-developed technology to industry, rose to 31.4 percent in 2011-12, up from 25.3 percent a decade earlier. (The council also directs part of its roughly $1-billion budget to postdoctoral fellowships and other awards for young researchers.)

“The government has invested proportionately more on the innovation side, where it was seen that we had more challenges,” said Pierre J. Charest, vice president of research grants and scholarships at the government agency. He noted that the council was “on track” to double the number of scientists forming partnerships with industry.

Mr. Charest said criticism about a smaller percentage of funds for discovery grants missed a larger point — that the budget had grown over the past decade to almost $325 million in 2012-13. However, much of that increase comes from a special supplement for a select group of researchers to explore potentially transformative concepts.

One who has felt the pinch is Norman Hüner, an internationally recognized plant biochemist and physiologist at the University of Western Ontario, who holds a prestigious Canada Research Chair in environmental-stress biology. A longtime recipient of discovery grants, he and his research collaborators are exploring a potential breakthrough in the use of photosynthesis to trick plants to grow in suboptimal conditions — relevant research in Mr. Hüner’s view, given concerns about climate change.

But in 2012, after applying for a new grant to continue his research, the professor received $50,000 a year for five years — a sharp drop from the previous award of $132,000 a year over five years. “I was shocked, absolutely,” he recalled. “I am disillusioned beyond words.”

The cut has led to the departures of some senior scientists from his lab. And save for one new postdoctoral student with her own funds, Mr. Hüner is not replenishing his stable of young researchers. At 67, Mr. Hüner now plans to retire several years ahead of schedule.

Norman Hüner, an environmental scientist, has had his funding reduced by more than half. Credit: Dave Chidley.

Even those involved in commercialization efforts question the Natural Sciences and Engineering Research Council’s new approach.

“If you have ideas that are going to lead to commercialization opportunities, you should absolutely get seed-stage funding,” said James E. Colliander, a mathematician at the University of Toronto. He acknowledged that funding for applied research was “crucially important” but said he was “not sure that the principal vehicle for funding basic research should be the path to get those dollars.” Mr. Colliander has received several major discovery grants and is also involved in an effort to bring to market a web application for large-scale academic exam assessment.

Beyond the changes in the two councils, some wonder if Canadian industry is prepared to step up its role in research innovation. In Canada’s largely foreign-owned industrial sector, research is often carried out at corporate headquarters abroad, while home-grown businesses lack the appetite or budget.

Some liken the federal strategy to pushing on a string.

The current policy appears to be trying to “push” technology from universities to industry, but what is needed to increase the level of innovation is for industry to get better at investing in new ideas and well-qualified researchers, said Arthur Carty, a former science adviser to the prime minister and a former head of the National Research Council. “Companies have to have innovation in their philosophical strategies, and they don’t have it,” said Mr. Carty, now executive director of the University of Waterloo’s Institute for Nanotechnology.

Uncertainty over the response of industry is a common refrain even among those who see merit in the federal strategy.

“Canada has had most of its eggs in the basic-research basket for quite a long time,” said Richard W. Hawkins, Canada research chair in science, technology and innovation policy at the University of Calgary. He has also spent years outside Canada as an adviser to governments and international agencies on innovation policy. “Governments want to invest in science and technology because they think it will lead to growth and innovation,” he said. “Governments all over the world have the same rationale.”

What’s missing in Canada, he said, is a deep understanding about how sectors of the economy could exploit knowledge to diversify and create new industries. “In Canada we know relatively less about our situation than most of our competitor countries,” he said.

But some senior scientists warn of risks to Canada’s higher-education system if pure, scholarly research is perceived as unimportant.

“One of the major contradictions of the Conservative government at the moment is that no one in Canada will question the need to have the best universities in the world,” said Daniel E. Guitton, a professor of neuroscience at McGill University. “Now how do you get them? You’re not going to get them by having people focus on an industry-related problem.”

Science policy analysts say it is too early to judge the impact of the government’s current strategy. But on one point, there is little debate. “To be honest, I’ve not seen this level of advocacy from the scientific community before,” said Paul Dufour, a fellow at the University of Ottawa’s Institute for Science, Society, and Policy. “That’s new in this country, and I think that’s a healthy thing.”

Fukushima’s future | openDemocracy

Fukushima’s future | openDemocracy.

BOB STILGER 3 February 2014
 

When communities are devastated by disasters like earthquakes and nuclear explosions, how can they recover? In Fukushima, Japan, transformation may be the only option.

Kesennuma, Japan, after “3/11.” Credit: Reuters/Kyodo. All rights reserved.

When the Great Northeast Earthquake struck Fukushima in Japan’s Tohoku region on March 11, 2011, it triggered a tsunami that sent a fifty-foot wave rushing inland at over fifty miles an hour.

In less than a day, nearly 18,000 people were dead or missing, and almost 300,000 were homeless. The old normal was gone. Today, communities in the region are struggling to reinvent their lives, but what will their future look like in a context that is permanently changed?

For many, this is not just a matter of regaining property or livelihoods; it’s a profoundly spiritual question that centers on the meaning of happiness and the quality of life.

The earthquake – widely known in Japan as “3/11” – toppled buildings across the region, but it was especially damaging in the coastal towns. In many places the ground literally fell away, dropping by three feet or more. Over the following days, hydrogen explosions damaged three of the six reactors at the Dai-Ichi nuclear power plant in Fukushima, while the containment structures for spent nuclear fuel rods were severely compromised at a fourth. Within a very short time, radiation forced another 60,000 people from their homes.

This reactor, Dai-Ichi Four, is now one of the most dangerous places on the planet.  Another large earthquake might lead to nuclear explosions that would force the evacuation of more than 20 million people.  Contaminated water from the reactors at Dai-Ichi is leaking into the Pacific Ocean at an alarming rate. Containing the radiation and making the area safe again is beyond the reach of current technologies.  But while people in California worry about nuclear waste reaching their shores five thousand miles away, those on the ground have more immediate concerns: they want to go home.

What will it take to see these communities reborn, to call them back to life? The local context has changed beyond recognition, with some level of continued contamination guaranteed, along with large-scale depopulation (especially of younger people), and the disappearance of livelihoods based around nuclear energy, tourism, agriculture and fishing. Technical problems abound, and Fukushima is a mess – misunderstood, the object of endless fear-mongering and distortions, and exceptionally complex. The Fairewinds Project provides one of the most accurate overviews of the nuclear situation.

But the underlying problems facing Fukushima are not simply technical, they are human: how to reinvent lives in a new and much more challenging reality. After a community meeting that I recently hosted at Renshoan in Fukushima, an older man approached me.

Somewhat hesitantly he said: “I never thought the future was something I needed to think about.  Tonight I have started, and I think, maybe, thinking about the future is the same as thinking about my happiness.” Trauma presents an opening for much deeper change. Indeed, in this case, transformation may be the only option.

More than a thousand days have passed since the disasters. I have spent half of this time working with communities throughout the Tohoku region to create safe places for people to share their grief, explore new possibilities, and form partnerships with each other to create a different future. This has involved processes such as the Future Center model. Originally developed in Europe, it has now been adapted to the local context, along with ‘dialog technologies’ from the Art of Hosting like the “World Café”, open space, storytelling and appreciative enquiry.

People in Fukushima live in one of three broad realities.  Some are still overwhelmed with despair, since everything they know and love has vanished.  Some would leave Fukushima in an instant if they had a way to relocate elsewhere.  And others have declared that “this is our home, so we will make a new life here together.”  They know that the past is gone and that an unknown future is waiting to be born.

Take Kamada-san, for example, a woman in her late 20s who founded a support organization for other young women called “Peach Heart.” “Some of my friends have left and they keep urging me to come join them,” she told me, “but I can’t go.  I won’t leave the other young women here who are unable to move.  My work is to support them as we talk through so many questions…. Will we be safe?  Will anyone want to marry us?  Can we have babies?  Will they be healthy?”

Kanno-san is another leader of the movement for return. I met him a few days before Christmas in 2013, when he was busy organizing a party in the temporary housing complex in Fukushima that is home to many of the 1,500 people who used to live close to the reactors in Katsurao. “We are forced to compete for increasingly-limited government funds with our neighboring towns,” he said, “even the contracts for reconstruction work are issued in Tokyo and on terms that make it impossible for local firms to do the work.  We have many questions, and must work together to find the way forward.”

How can a community be recreated under such conditions? Most of the people in the region who want to move back home are aged over 60.  According to a recent census in the village of Okuma, for example, almost 20 per cent of former residents want to return, and nearly all of them are elderly. Some children are also coming back, but at nowhere near the population densities of the past.  Their old school system cannot be re-created, and even if it could, it wouldn’t teach what students need to learn in order to live in and create a different Fukushima. Houses, shops and fields all lie empty. The economy has collapsed.  And the tsunami clean-up efforts that have taken place outside the radiation zone have left Fukushima almost untouched.

What’s more, not all parts of the region are open for resettlement.  Radiation levels vary almost street-by-street.  But if the Japanese Government decides that it is safe to move back, those affected will lose their “radiation compensation,” the funds that are paid by government to those who have been forced from their homes. This would leave people who are mostly farmers with pensions of just a few hundred dollars a month.  The stresses and strains of being without a permanent home for the last three years are exacerbated by tensions surrounding the varying levels of financial support that different people receive. In this situation it is even more difficult to develop the trust that’s required to rebuild community.

As people do move back, questions remain about what to do with the nuclear waste that accumulates during the decontamination process. Government contractors are currently removing the top six centimeters of soil, where the radioactive contamination is concentrated. But where will this waste be stored? And the process strips the fields of their fertile topsoil at the same time. What about “bioremediation,” which research suggests is a better alternative – leaving the soil in place and using mushrooms and other plants that ‘eat’ radiation for their lunch? Who makes these decisions?

Faced with these questions, the obvious answer, say some, is that everyone should be forced to move elsewhere for good. But where is it safe? Communities in Fukushima probably know more about radiation than most people on the planet, but still there is no certainty. Epidemiological research from Hiroshima and Nagasaki about the impact of low-dose radiation is inconclusive: we don’t know why some people get sick while others are unaffected.

Those who are returning have made their decision, yet they know that their lives will never be the same. While there are certainly complex technical issues associated with the nuclear reactors, the deeper questions they face are very human. What will this new society be like?  What does it mean to have a population that is overwhelmingly elderly?  What kind of economy will sustain the region now that previous livelihoods have disappeared?  What is possible now that was not possible before?

These questions are slowly becoming more visible as people like Kamada-san and Kanno-san step forward, holding their grief and their dreams and confronting an uncertain future together.  But this process will take decades.  I get annoyed these days when people use the image of a caterpillar and a butterfly as a metaphor for transformation.  There’s almost something glamorous about everything dissolving into ooze, only for beauty to reemerge.

The transformation of people’s lives in Fukushima is very different. This transformation is one of ordinary people who are raising their voices and using their hands, reaching out to each other, taking one step forward and then another, to build new lives in a place that they call home.

Loss of Librarians Devastating to Science and Knowledge in Canada | DeSmog Canada

Loss of Librarians Devastating to Science and Knowledge in Canada | DeSmog Canada.

Loss of librarians leaves researchers without vital resources

It has been a difficult few years for the curators of knowledge in Canada. While the scientific community is still reeling from the loss of seven of the Department of Fisheries and Oceans’ eleven libraries, news has broken that scientists with Health Canada were left scrambling for resources after the outsourcing and then closure of their main library.

In January, CBC News uncovered a report from a consultant hired by the federal government cataloguing mistakes in the government’s handling of the closure. “Staff requests have dropped 90 per cent over in-house service levels prior to the outsource. This statistic has been heralded as a cost savings by senior HC [Health Canada] management,” the report said.

“However, HC scientists have repeatedly said during the interview process that the decrease is because the information has become inaccessible — either it cannot arrive in due time, or it is unaffordable due to the fee structure in place.”

Government spokespeople dismissed the report, saying it was “returned to its author for corrections, which were never undertaken.”

The consultancy company fired back through a letter from its lawyer. “Representations that our client provided a factually inaccurate report and then neglected to respond to requests for changes are untrue,” it read.

However, Health Canada and the DFO are not the only government bodies to lose access to vital archival material in the past two years. Postmedia reports that more than a dozen departments have lost libraries as a result of the Harper government’s budget cuts, including the Canada Revenue Agency, Citizenship and Immigration, Employment and Social Development Canada, Environment Canada, Foreign Affairs and International Trade, Natural Resources Canada, Parks Canada, the Public Service Commission, Public Works and Government Services, and Transport Canada.

Many of these departments lost multiple libraries, with historical records and books disappearing from shelves, scattered across private collections or tossed in dumpsters. In 2013 even the country’s main home for historic documents, Library and Archives Canada, faced major cuts to service, including hours, interlibrary loans and staffing.

This unprecedented process has triggered concerns about the loss of physical documents and about imperfections in the digitization process. A recent report from the Canadian Library Association (CLA) expresses these fears in no uncertain terms.

“Currently in Canada the vast majority of research data is at risk of being lost because it is not being systematically managed and preserved. While certain disciplines and research projects have institutional, national, or international support for data management, this support is available for a minority of researchers only. A coordinated and national approach to managing research data in Canada is required in order to derive greater and longer term benefits, both socially and economically, from the extensive public investments that are made in research.”

But equally worrisome is the loss of the librarians themselves, some of whom have spent decades familiarizing themselves with the extremely specialized materials in their collections.

Anyone who has written an undergraduate research paper knows how maddening it can be to dig through online databases for a single piece of information. The same is true for high level researchers, according to Jeff Mason, past president of the Canadian Health Libraries Association (CHLA/ABSC).

Mason is a librarian at a hospital in Saskatchewan with firsthand experience of working with health professionals. “Much as you would think a doctor would be an expert at treatment and diagnoses, when it comes to information in the health field, librarians are key resources,” he told DeSmog Canada by phone a day after learning of the Health Canada main library’s closure.

“I was shocked to hear that the Health Canada library had been closed because we thought it was safe as an organization,” says Mason.

In a field as specialized as medical research, having a librarian who is familiar with the material is integral to success.

“Unless you really know what you’re doing and spend all day, every day, searching for information, these databases, or the internet, can be impossible,” Mason says. “Unless you spend all your time with your hands in it, you can’t really ever be sure that you’ve found everything that’s out there.”

A librarian’s relationship with a collection enables them to help researchers and physicians alike find the information they need quickly and efficiently. They can aid researchers in formulating questions and narrowing fields of inquiry, streamlining both digital and hard-copy searches. “We tell our clients in our hospital if they spent more than 10 minutes looking for something, then they should have come to us,” he says.

With budget cuts and library closures, collections are being shunted to academic libraries that are simply not capable of maintaining the level of service of the original institutions.

“They’re short-staffed and they don’t have enough funds to do what they’re supposed to do,” says Mason. “Now they’re being contacted by government researchers and not-for-profits that used to get their information through the government of Canada.”

Head of collections David Sharp and gift specialist Colin Harness from Carleton University have released a stunning graphic detailing their institution’s efforts to “rescue” collections.

Carleton University library rescue efforts

In 2012 and 2013 Carleton University engaged with 21 different government libraries. It was able to help fourteen of them, either by taking in their collections or by connecting them with resources, finding homes for 500 rare items from Fisheries and Oceans Canada alone. Eight of those collections were either dispersed elsewhere or have an unknown status. One collection, from Human Resources and Skills Development Canada, was declined because of “staff and space resource concerns.”

But even if the materials find a safe home either on a physical shelf or in a database, librarians, Mason believes, are still “integral to sound science and sound policy.”

Their loss is “really devastating to the state of science and knowledge in our country.”

The January report by the CLA corroborates Mason’s opinion. “Research libraries are essential institutions in developing and managing data repositories,” it reads. “Libraries and librarians have the expertise in resource description, storage, and access.”

Image Credit: Wikimedia
Image Credit: Colin Harness and David Sharp via dysartjones.com

The Ridiculousness of Economics? :: The Mises Economics Blog: The Circle Bastiat

The Ridiculousness of Economics? :: The Mises Economics Blog: The Circle Bastiat.

People have a strange habit of ridiculing economics for its assumptions and [benchmark] models of optimality. While modern mathematical economics (i.e., professional mathturbation) admittedly relies on sometimes outrageous assumptions that make most of the resulting predictions irrelevant, there is nothing ridiculous or unscientific about economic reasoning. In order to study the social world we need to consider and analyze what’s observed empirically from the point of view of the theory-derived counterfactual. Economic science necessarily begins with theory.

As Mises noted, in the social world there are no constant relations. Consequently, inductive number crunching based on data (that seemingly irrefutable phenomenon) cannot tell us much about the world. So we must rely on what we logically find to be necessarily true, and from it derive specific truths that help us understand observed phenomena in the real world. We thus create counterfactuals that help us assess and perceive what is actually going on, rather than blindly observe.

Interestingly, while economic reasoning is laughed at and ridiculed, people tend to place great faith in applied fields such as medicine, as though they were real sciences. So perhaps if economics were more like medicine, it would earn respect as a science (side-effects aside)?

To simplify somewhat: what is considered “normal” in medicine is a set of simple averages or mode values arrived at by inductive (though sometimes voluminous) data sifting. Recommendations are hence based on what is rather than what should be (should, by the way, is considered unscientific). Granted, present average values may eventually be balanced (perhaps even corrected) by what has been learned about the functions of specific organs and the body as a whole, and about the impact of disease, malfunctions, etc. Yet these pieces of knowledge are also ultimately arrived at inductively, which means medicine suffers from a fundamental inability to identify, for example, harmful imbalances throughout populations (such as those caused by long-standing suboptimal cultural or eating habits).
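To make that inductive procedure concrete, here is a minimal sketch in Python (not from the original post; the readings, variable names, and cutoffs are invented for illustration) of the kind of calculation described above: deriving a “normal” reference range from nothing but observed population data.

    import statistics

    # Hypothetical fasting blood-glucose readings (mg/dL) from a sample population.
    readings = [88, 92, 95, 90, 92, 87, 93, 110, 89, 94, 96, 91]

    mean = statistics.mean(readings)    # the average observed value
    mode = statistics.mode(readings)    # the most frequently observed value
    stdev = statistics.stdev(readings)  # spread around the average

    # "Normal" is defined purely descriptively: roughly mean +/- 2 standard deviations.
    low, high = mean - 2 * stdev, mean + 2 * stdev

    print(f"mean = {mean:.1f}, mode = {mode}, 'normal' range = {low:.1f} to {high:.1f}")

Nothing in this calculation says whether the observed values are themselves healthy; it only reports what is, which is precisely the limitation described above.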

The present revolution in how we view carbohydrates and fats is a case in point: medicine is of course able to measure the improved health values due to, for example, a “primal” diet, but it is utterly unable to envision this result, and even less to predict it, before the empirical observation has already been made. Instead, based on the “normal” (average/mode) values of the population, we have been advised to indulge in harmful sugars and grains and to stay away from healthy fats. This is the problem of relying on induction: while it might work well in the natural sciences, and is less reliable but likely more beneficial than not in applied natural science (such as medicine), it is impossible in the social sciences.

Imagine an economics relying on this type of approach. This field would have recognized poverty, starvation, and perhaps even slavery as the average state or mode of people in society, both at the inception of economic analysis and throughout history. We would then call this miserable state “equilibrium,” and base our explanations and policy recommendations on this empirically sound identification. Strange, uncommon, and “disequilibrating” phenomena such as prosperity, health, etc. would be statistical anomalies that could ultimately cause disruption of the established equilibrium; we might even choose to exclude them from our statistical analyses.

Economic models would show how societies successfully maximizing such misery (the mode, remember?) have little entrepreneurship, no property rights, and a despotic monarch (among other things). We would therefore conclude that a despot appears necessary to ensure the optimal state of misery, since the lack of a misery-enabling monarch would set radical processes of entrepreneurship, decentralization, and order in motion. These processes could undermine the state of misery and create pockets of prosperity, and perhaps – if no countermeasure is taken – overtake society and subject everyone to this disease.

Our policy recommendations would then be for a society to grant a single monarch absolute power, with the task and duty to stifle entrepreneurship and undermine property rights.

Had economics relied on methods similar to those employed in medicine, it would have been a worthless and dismal science indeed. Fortunately, economics is nothing of the kind. Instead, based on the undeniable truth that people want what they value and that getting more of it therefore makes them better off, we can construct theoretical counterfactuals to serve as “optimal” benchmarks when analyzing society. This is why economists can say that “yes, we are well off – but could be better off if…” This is also why economists can identify where and how suggested policies can or will go wrong. We can identify that waste, destruction, and suboptimalities will ensue, but not exactly when or exactly how much.

This is hardly ridiculous.
