Saturday, August 31, 2013

The wireless network with a mile-wide range that the “internet of things” could be built on - Quartz


“I think the internet of things is not going to start with products, but projects,” says Taylor. His goal is to use the current crowd-funding effort for Flutter to pay for the coding of the software protocol that will run Flutter, since the microchips it uses are already available from manufacturers. The resulting software will allow Flutter to create a “mesh network,” which would allow individual Flutter radios to re-transmit data from any other Flutter radio that’s in range, potentially giving hobbyists or startups the ability to cover whole cities with networks of Flutter radios and their attached sensors.
Taylor’s ultimate goal is to create a system that answers the fundamental needs of all objects in the internet of things, including good range, low power consumption, and just enough speed to get the job done—up to 600 kilobits a second, or about 1/20th the speed of a typical home Wi-Fi connection. One reason for that slow speed is that lower-bandwidth signals, transmitted in the 915 MHz range in which Flutter operates, travel further. These speeds are more than sufficient when the goal is transmitting sensor readings, which are typically very short strings of data.
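To make the mesh-networking idea above concrete, here is a minimal flood-relay sketch in Python. It is only an illustration under assumed names: "radio" is a hypothetical driver object with a send() method, and Flutter's actual protocol is not described in the article.

    # Minimal sketch of flood-style relaying, the behaviour a mesh network implies.
    # "radio" is a hypothetical driver, not Flutter's real API; shown for illustration.
    HOP_LIMIT = 8          # keep packets from circulating forever
    seen_ids = set()       # message IDs this node has already relayed

    def handle_packet(radio, packet):
        """Deliver a packet locally, then re-transmit it for nodes out of the sender's range."""
        if packet["id"] in seen_ids or packet["hops"] >= HOP_LIMIT:
            return                                   # duplicate or exhausted: drop it
        seen_ids.add(packet["id"])
        print("sensor reading:", packet["payload"])  # hand off to the application
        packet["hops"] += 1
        radio.send(packet)                           # rebroadcast so farther nodes hear it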

Friday, August 30, 2013

How the Power of Ocean Waves Could Yield Freshwater With Zero Carbon Emissions | ThinkProgress


But instead of relying on those electric pumps, Carnegie is using the latest iteration of its CETO technology — CETO 5 — to supply that pressure with wave energy instead. Underwater buoys eleven meters in diameter are installed offshore, and as ocean waves catch them, the movement supplies hydraulic power to pump seawater up underground pipes to shore. At that point, the water runs into the desalination plant, where it directly supplies the pressure for the reverse osmosis. Some of that hydraulic energy is also converted into electric power as needed.
[Image: CETO wave desalination system. Credit: Carnegie Wave Energy]
The resulting system not only cuts out all carbon dioxide emissions, it also greatly reduces the points where energy can be lost, making the process much more energy efficient and cost-effective.
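For a sense of scale, a rough calculation shows how much hydraulic work each cubic metre of feed water represents; every extra wave-to-electricity-to-pump conversion would shave losses off that figure, which is the efficiency argument for supplying the pressure directly. The 60 bar feed pressure below is an assumed typical value for seawater reverse osmosis, not a figure from the article.

    # Rough hydraulic-work estimate; the 60 bar feed pressure is an assumed
    # typical value for seawater reverse osmosis, not a figure from Carnegie.
    pressure_pa = 60e5          # ~60 bar expressed in pascals
    volume_m3 = 1.0             # one cubic metre of feed water
    energy_kwh = pressure_pa * volume_m3 / 3.6e6
    print(round(energy_kwh, 2), "kWh of hydraulic work per cubic metre")   # ~1.67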
The two-megawatt demonstration project will be situated on Garden Island, near the coastal city of Perth in Western Australia, and will ultimately supply roughly 55 billion litres of drinking water per year. A previous desalination plant set up by Water Corporation in Kwinana, south of Perth, already supplies 45 billion litres. The combined total of 100 billion litres a year is half the city’s drinking water needs.
Southwestern Australia has been especially hard hit by droughts, and it has not shared in the reprieve from the dry period that the rest of the continent has enjoyed. Climate change models project that the traditional freshwater supplies for Perth will dry up even further by 2030. Meanwhile, Australia as a whole has been suffering the ravages of climate change, with record-setting heat waves, floods, and other extreme weather. So anything that could provide the country with freshwater without adding any more to the globe’s carbon emissions is a welcome development.
HT: RenewEconomy

Thursday, August 29, 2013

Super-efficient Shockwave Engine would recharge electric vehicles | ARPA-E

Shockwave Engine | ARPA-E
Location: East Lansing, MI
ARPA-E Award: $3,040,631
Project Term: 01/14/2010 to 05/15/2013
Project Status: ACTIVE
Critical Need: 
Most vehicle engines today are only 33% efficient, so there is a critical need to improve their efficiency. Developing more efficient engines could increase fuel efficiency--saving drivers money at the gas pump. It could also help limit U.S. dependence on petroleum-based fuels that produce greenhouse gas emissions like carbon dioxide (CO2), which can contribute to global climate change.
Project Innovation + Advantages: 
MSU is developing a new engine for use in hybrid automobiles that could significantly reduce fuel waste and improve engine efficiency. In a traditional internal combustion engine, air and fuel are ignited, creating high-temperature and high-pressure gases that expand rapidly. This expansion of gases forces the engine's pistons to pump and powers the car. MSU's engine has no pistons. It uses the combustion of air and fuel to build up pressure within the engine, generating a shockwave that blasts hot gas exhaust into the blades of the engine's rotors causing them to turn, which generates electricity. MSU's redesigned engine would be the size of a cooking pot and contain fewer moving parts--reducing the weight of the engine by 30%. It would also enable a vehicle that could use 60% of its fuel for propulsion.
Impact Summary: 
If successful, MSU's redesigned engine would reduce the weight of vehicles by up to 20%, improve their fuel economy by up to 60%, reduce their total cost by up to 30%, and reduce their CO2 emissions by 90%.
Security: 
Increasing vehicle fuel efficiency by 10% could result in 300 million fewer barrels of oil being imported from foreign countries each year.
Economy: 
Reducing fuel waste results in cost savings for the average consumer, who spends nearly $4,000 per year on energy.
Environment: 
More efficient engines could result in the reduction of nearly 200 million metric tons of CO2 emissions in the U.S. each year from passenger vehicles.

Sunday, August 25, 2013

The Latest Clean Energy Cocktail: Bacteria And Fungus | ThinkProgress


By throwing together a common fungus and a common bacterium, researchers are producing isobutanol — a biofuel that gallon-for-gallon delivers 82 percent of gasoline’s heat energy. The more common ethanol, by contrast, only gets 67 percent of gasoline’s energy, and does more damage to pipelines and engines. And the University of Michigan research team did it using stalks and leaves from corn plants as the raw material.
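Those energy fractions translate directly into range. A quick, hypothetical comparison (the 30 mpg baseline is arbitrary and ignores differences in engine tuning):

    # Hypothetical fuel-economy comparison from the energy fractions quoted above.
    gasoline_mpg = 30.0                     # arbitrary baseline car
    isobutanol_mpg = gasoline_mpg * 0.82    # ~24.6 mpg, 82% of gasoline's energy
    ethanol_mpg = gasoline_mpg * 0.67       # ~20.1 mpg, 67% of gasoline's energy
    print(isobutanol_mpg, ethanol_mpg)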
The fungus in question was Trichoderma reesei, which breaks down the plant materials into sugars. The team used corn plant leftovers in this case, but many other forms of biomass like switchgrass or forestry waste could also serve. The bacterium was Escherichia coli — good old-fashioned E. coli — which then converted those sugars into isobutanol. Another team of researchers at the University of Wisconsin-Madison recently came up with a similar process by studying leaf cutter ants, but their work produced ethanol instead.
The University of Michigan team also got the fungi and bacteria to co-exist peacefully in the same culture and bioreactor. That means fewer cost barriers to commercializing the process: “The capital investment will be much lower, and also the operating cost will be much lower,” Xiaoxia “Nina” Lin, the team’s leader, explained. “So hopefully this will make the whole process much more likely to become economically viable.”
The big advantage of a cellulosic biofuel like this is twofold. One, because it can be produced from crops that don’t double as a food source, demand for it won’t drive up food prices or contribute to global food insecurity. Traditional corn-based ethanol obviously competes with one of the world’s most basic and widely-used foods, and American and European demand for it has contributed to spiraling food costs and crises in Guatemala and across the developing world. Studies looking into the 2008 food crisis determined that biofuel policies contributed to the problem, compounding the threat of global food insecurity, which in turn helps drive geopolitical upheaval and destabilization.
Two, by driving up demand for food crops, traditional biofuels encourage individuals and countries to clear ever more natural land for agriculture. Grasslands and natural forests store more carbon from the atmosphere than cropland, so the growth in biofuel production means less natural ecology to absorb carbon, leaving more greenhouse gas in the atmosphere. On top of that, agriculture involves its own carbon emissions from driving tractors and the like. Put it all together and traditional biofuel production is largely self-defeating in terms of the final amount of carbon dioxide left in the atmosphere.
But if a process like this one produces biofuel purely from waste materials — stuff left over from crops we would’ve grown regardless, on land we would’ve cleared regardless — those biofuels will deliver a much bigger net positive when it comes to fighting climate change.
“We’re really excited about this technology,” said Jeremy Minty, another member of the team. “The U.S. has the potential to sustainably produce 1 billion tons or more of biomass annually, enough to produce biofuels that could displace 30 percent or more of our current petroleum production.”
And it’s not just fossil fuels that could be replaced, either. Petrochemicals are also used in making a host of other products, especially plastics. The research team hopes their work could be adapted to replace the petrochemicals used in those processes as well.
HT: CleanTechnica

Friday, August 23, 2013

Why the U.S. Power Grid's Days Are Numbered - Businessweek


He’s not alone in his assessment, though. An unusually frank January report by the Edison Electric Institute (EEI), the utilities trade group, warned members that distributed generation and companion factors have essentially put them in the same position as airlines and the telecommunications industry in the late 1970s. “U.S. carriers that were in existence prior to deregulation in 1978 faced bankruptcy,” the report states. “The telecommunication businesses of 1978, meanwhile, are not recognizable today.” Crane prefers another analogy. Like the U.S. Postal Service, he says, “utilities will continue to serve the elderly or the less fortunate, but the rest of the population moves on.” And while his utility brethren may see the grid as “the one true monopoly, I’m working for the day the grid is diminished.”
Anthony Earley Jr., CEO of giant Pacific Gas & Electric, doesn’t share Crane’s timetable for the coming disruption—he thinks it’s further out—but he does agree about the seriousness of the threat. Solar users drain revenue while continuing to use utility transmission lines for backup or to sell their power back to the power company. How can power companies pay for necessary maintenance and upgrades of the grid if that free ride continues? “No less than the stability of the grid is at stake,” he says. So far regulators in Louisiana, Idaho, and California have rejected calls to impose fees or taxes on solar users.
Worldwide revenue from installation of solar power systems will climb to $112 billion a year in 2018, a rise of 44 percent, taking sales away from utilities, according to analysts at Navigant Research, which tracks worldwide clean-energy trends. “Certain regions in California, Arizona, and Hawaii are already feeling the pain,” says Karin Corfee, a managing director of Navigant’s energy practice. “We’ll see a different model emerge.”

Thursday, August 22, 2013

RDI’s Resilient Design Principles – Need Your Feedback | Resilient Design Institute


Resilience is the capacity to adapt to changing conditions and to maintain or regain functionality and vitality in the face of stress or disturbance.
The Resilient Design Principles
  1. Resilience transcends scales. Strategies to address resilience are relevant at scales of individual buildings, communities, and larger regional and ecosystem scales.
  2. Diverse systems are inherently more resilient. More diverse communities, ecosystems, economies, and social systems are better able to respond to interruptions or change, making them inherently more resilient.
  3. Redundancy enhances resilience. While sometimes in conflict with efficiency and green building priorities, redundant systems for such needs as electricity, water, and transportation improve resilience.
  4. Simple, elegant, passive systems are more resilient. Features like passive heating and cooling strategies for buildings and natural swales for stormwater management are more resilient than complex systems that can break down and require ongoing maintenance.
  5. Durability strengthens resilience. Features that increase durability, such as rainscreen details on buildings, windows designed to withstand hurricane winds, biological erosion-control measures that grow stronger over time, and beautiful buildings that will be maintained for generations, enhance resilience.
  6. Locally available, renewable resources are more resilient. Reliance on abundant local resources, such as solar energy and annually replenished groundwater, provides greater resilience than nonrenewable resources from far away.
  7. Resilience anticipates interruptions and a dynamic future. Adaptation to a changing climate with higher temperatures, more intense storms, flooding, drought, and wildfire is a growing necessity, while non-climate-related natural disasters, such as earthquakes and solar flares, and anthropogenic actions like terrorism and cyberterrorism, call for resilient design.
  8. Find resilience in nature. Natural systems have evolved to achieve resilience; we can enhance our resilience by relying on or applying lessons from nature.
  9. Resilience is not absolute. Recognize that incremental steps can be taken and that “total resilience” in the face of all situations is not possible. Implement what is feasible and work to achieve greater resilience in stages.
Along with founding the Resilient Design Institute in 2012, Alex is founder of BuildingGreen, Inc. and executive editor of Environmental Building News. To keep up with his latest articles and musings, you can sign up for his Twitter feed.


Good read on Arctic methane release - Google Groups

Scientists of different persuasions remain fundamentally divided over whether such a scenario is even plausible. Carolyn Rupple of the US Geological Survey (USGS) Gas Hydrates Project told NBC News the scenario is "nearly impossible." Ed Dlugokencky, a research scientist at the National Oceanic and Atmospheric Administration (NOAA), said there has been "no detectable change in Arctic methane emissions over the past two decades." NASA's Gavin Schmidt said that ice core records from previously warm Arctic periods show no indication of such a scenario having ever occurred. Methane hydrate expert Prof David Archer reiterated that "the mechanisms for release operate on time scales of centuries and longer." These arguments were finally distilled in a lengthy, seemingly compelling essay posted on Skeptical Science last Thursday, concluding with utter finality:
"There is no evidence that methane will run out of control and initiate any sudden, catastrophic effects."
But none of the scientists rejecting the plausibility of the scenario are experts in the Arctic, specifically the East Siberia Arctic Shelf (ESAS). In contrast, an emerging consensus among ESAS specialists based on continuing fieldwork is highlighting a real danger of unprecedented quantities of methane venting due to thawing permafrost.
So who's right? What are these Arctic specialists saying? Are their claims of a potentially catastrophic methane release plausible at all? I took a dive into the scientific literature to find out.
What I discovered was that Skeptical Science's unusually skewed analysis was extremely selective, focusing almost exclusively on the narrow arguments of scientists out of touch with cutting-edge developments in the Arctic. Here's what you need to know.

1. The 50 Gigatonne decadal methane pulse scenario was posited by four Arctic specialists, and is considered plausible by Met Office scientists

The authors of the controversial new Nature paper on costs of Arctic warming didn't just pull their decadal methane catastrophe scenario out of thin air. The scenario was first postulated in 2008 by Dr Natalie Shakhova of the University of Alaska Fairbanks, Dr Igor Semiletov from the Pacific Oceanological Institute at the Russian Academy of Sciences, and two other Russian experts.
Their paper noted that while seabed permafrost underlying most of the ESAS was previously believed to act as an "impermeable lid preventing methane escape," new data showing "extreme methane supersaturation of surface water, implying high sea-to-air fluxes" challenged this assumption. Data showed:
"Extremely high concentrations of methane (up to 8 ppm) in the atmospheric layer above the sea surface along with anomalously high concentrations of dissolved methane in the water column (up to 560 nM, or 12000% of super saturation)."
One source of these emissions "may be highly potential and extremely mobile shallow methane hydrates, whose stability zone is seabed permafrost-related and could be disturbed upon permafrost development, degradation, and thawing." Even if the methane hydrates are deep, fissures, taliks and other soft spots create heat pathways from the seabed, which warms quickly due to shallow depths. Various mechanisms for such processes have been elaborated in detail.
The paper then posits the plausibility of a 50 Gigatonne (Gt) methane release occurring abruptly "at any time." Noting that the total quantity of carbon in the ESAS is "not less than 1,400 Gt", the authors wrote:
"Since the area of geological disjunctives (fault zones, tectonically and seismically active areas) within the Siberian Arctic shelf composes not less than 1-2% of the total area and area of open taliks (area of melt through permafrost), acting as a pathway for methane escape within the Siberian Arctic shelf reaches up to 5-10% of the total area, we consider release of up to 50 Gt of predicted amount of hydrate storage as highly possible for abrupt release at any time. That may cause ∼12-times increase of modern atmospheric methane burden with consequent catastrophic greenhouse warming."
So the 50 Gt scenario used by the new Nature paper does not postulate the total release of the ESAS methane hydrate reservoir, but only a tiny fraction of it.
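A back-of-the-envelope check shows why 50 Gt is so alarming. Using the roughly 1.85 ppm Arctic concentration cited below and the standard conversion of about 2.78 teragrams of methane per part-per-billion (my figure, not the paper's), the whole atmosphere holds on the order of 5 Gt of methane today, so a 50 Gt pulse is roughly ten times the current burden, the same order as the paper's ~12-times estimate.

    # Sanity check of the "~12-times" claim; the 2.78 Tg-per-ppb conversion is a
    # standard approximation, not a number taken from the paper itself.
    ch4_ppb = 1850                 # ~1.85 ppm, the Arctic average quoted below
    tg_per_ppb = 2.78              # teragrams of CH4 per ppb of mixing ratio
    burden_gt = ch4_ppb * tg_per_ppb / 1000.0   # ~5 Gt of methane in the air today
    print(round(50 / burden_gt, 1), "times the current atmospheric burden")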
The scale of this scenario is roughly corroborated elsewhere. A 2010 scientific analysis led by the UK's Met Office in Reviews of Geophysics recognised the plausibility of catastrophic carbon releases from Arctic permafrost thawing of between 50-100 Gt this century, with a 40 Gt carbon release from the Siberian Yedoma region possible over four decades.
Shakhova and her team have developed these findings from data derived from over 20 field expeditions from 1999 to 2011. In 2010, Shakhova et al. published a paper in Science based on their annual research trips which highlighted that the ESAS was a key reservoir of methane "more than three times as large as the nearby Siberian wetland... considered the primary Northern Hemisphere source of atmospheric methane." Current average methane concentrations in the Arctic are:
"about 1.85 parts per million, the highest in 400,000 years" and "on par with previous estimates of methane venting from the entire World Ocean."
As the ESAS is shallow at only 50 metres, most of the methane being released is escaping into the atmosphere rather than being absorbed into water.
The existence of such shallow methane hydrates in permafrost - at depths as small as 20m - was confirmed by Shakhova in the Journal of Geophysical Research. There has been direct observation and sampling of these hydrates by Russian geologists over recent decades; this has also been confirmed by US government scientists.

2. Arctic methane hydrates are becoming increasingly unstable in the context of anthropogenic climate change and its impact on diminishing sea ice

The instability of Arctic methane hydrates in relation to sea ice retreat - not predicted by conventional models - has been increasingly recognised by experts. In 2007, a study in Eos, Transactions found that:
"Large volumes of methane in gas hydrate form can be stored within or below the subsea permafrost, and the stability of this gas hydrate zone is sustained by the existence of permafrost. Degradation of subsea permafrost and the consequent destabilization of gas hydrates could significantly if not dramatically increase the flux of methane, a potent greenhouse gas, to the atmosphere."
In 2009, a research team of 19 scientists wrote a paper in Geophysical Research Letters documenting how the past thirty years of a warming Arctic current due to contemporary climate change was triggering unprecedented emissions of methane from gas hydrate in submarine sediments beneath the seabed in the West Spitsbergen continental margin. Prior to the new warming, these methane hydrates had been stable at water depths as shallow as 360m. Over 250 plumes of methane gas bubbles were found rising from the seabed due to the 1°C temperature increase in the current:
"... causing the liberation of methane from decomposing hydrate... If this process becomes widespread along Arctic continental margins, tens of Teragrams of methane per year could be released into the ocean."
The Russian scientists investigating the ESAS also confirmed that the levels of methane release they discovered were new. As Steve Connor reported in the Independent, since 1994 Igor Semiletov:
"... has led about 10 expeditions in the Laptev Sea but during the 1990s he did not detect any elevated levels of methane. However, since 2003 he reported a rising number of methane 'hotspots', which have now been confirmed using more sensitive instruments."
In 2012, a Nature study mapping over 150,000 Arctic methane seeps concluded that:
"... in a warming climate, disintegration of permafrost, glaciers and parts of the polar ice sheets could facilitate the transient expulsion of 14C-depleted methane trapped by the cryosphere cap."

3. Multiple scientific reviews, including one by over 20 Arctic specialists, confirm decadal catastrophic Arctic methane release is plausible

A widely cited 2011 Nature review dismissed such a catastrophic scenario as implausible because methane "gas hydrates occur at low saturations and in sediments at such great depths below the seafloor or onshore permafrost that they will barely be affected by [contemporary levels of] warming over even [1,000] yr."
But this study and others like it completely ignore the new empirical evidence on permafrost-associated shallow water methane hydrates on the Arctic shelf. Scientific reviews that have accounted for the empirically-observed dynamics of permafrost-associated methane come to the opposite conclusion.

Tuesday, August 20, 2013

Separating Fact from Fiction In Accounts of Germany’s Renewables Revolution



Myth #2: Renewables undermine grid reliability

Another common misreportage theme is that renewables are degrading the reliability of Germany’s power supply, driving industry abroad. The president of Germany’s network agency has confirmed this is not true. Hearsay anecdotes alleging renewable-caused power glitches are often traceable to Der Spiegel, a frequent source of anti-renewable stories, but evaporate on scrutiny. Charles Mann in The Atlantic cites five references to bolster such claims, but his sources (cited in my response) don’t support his case. One, from a Koch-allied anti-renewable front group (whose political arm, the American Energy Alliance, lobbies for fossil fuels and against renewables), claims renewables are “causing havoc” in the German grid; the other four sources don’t; and none of the five offers any evidence this is happening, because it’s not—as I confirmed with German experts in May 2013, when I was co-keynoting the Chancellor’s electromobility conference in Berlin.
But Der Spiegel is not alone in such misreporting. Die Zeit and others have described local electricity problems caused by a failed coal plant and by restricted Russian gas deliveries as if they proved the unreliability of renewables, which had nothing to do with them. Focus likewise blamed a Munich power outage on renewables, then reported the actual, unrelated cause (a transformer blew up) without a correction.
To be sure, Germany’s grid, built for central stations, was scarcely expanded as renewable generation soared from 3 percent to 23 percent in 20 years. Grid modernization and debottlenecking are therefore needed and are vigorously underway—though the network agency recently slashed plans for new transmission corridors by nearly half because many projects proved unnecessary, and grid investments apparently needn’t rise. But fear of what might happen if those future grid improvements weren’t made doesn’t justify the lie that blackouts and brownouts are rife today.
In fact, German power, like 22-percent-solar-and-windpowered Spanish and 30-percent-windpowered Danish power (both for all of 2012), remains far more reliable than U.S. power and is getting even more reliable. Germany ranks #1 in European grid reliability, Denmark just behind, both about tenfold better than the U.S. Likewise, as Spain’s solar and windpower soared in the past few years, Spain’s reliability index rose too. Across Europe, renewable expansion correlates with more reliable power. Now a German company has even assembled a 570-MW virtual power plant of dispatchable renewables, available nationwide to firm (guarantee steady output from) the varying wind and solar output. Dispatching variable resources is more complex, but the grid’s skilled operators do it well.
The next round of misreporting will doubtless emerge from a new grid-operator survey showing that 7 percent of German firms surveyed (or 25 percent of those that could in principle move production abroad) say they are considering doing so “because of a (possible) worsening of grid reliability.” However, no trend can be inferred because this question was never previously asked; no comparison is possible because it wasn’t asked in other countries; and, of course, reduced reliability is a future hypothetical, not a present reality. The European standard metric, to be sure, doesn’t include outages shorter than three minutes, so vague claims that even briefer outages are rising in Germany can’t be tested from the data—only cast in doubt by the lack of specific anecdotal examples.

Myth #3: Renewables subsidies are cratering the German economy

Perhaps most confusing is Germany’s lively debate about the surcharge that utility customers pay to finance the feed-in tariff (“FIT”)—a fixed 20-year power purchase contract offered to anyone installing new renewable generators, whether solar, wind, biomass-fueled, or other kinds. (Since 2012, you can instead choose market-based payments, as half of renewable producers and four-fifths of windpower operators do, and since autumn 2012, new solar systems over 10 megawatts are no longer eligible for FITs). The FIT declines as renewables’ growth drives down their prices; rooftop solar’s FIT is falling 1.8 percent every month. But partly because prices are falling, solar sales are far outpacing forecasts, raising the surcharge. USA Today columnist Sumi Somaskanda recently wrote: “German consumers are waking up to the costs of going green: As of Jan. 1, they are paying 11 percent more for electricity than they did last year thanks to government plans to replace nuclear plants with wind and solar power that requires significant and constant public money to be made cost effective.” But as I wrote in April, the truth is quite different.
Germany’s renewables surcharge is artificially inflated by hefty and rising industry exemptions that place greater burdens on households—a policy now under legal investigation by the EU as a potential illegal subsidy—but it is not public money, is not a subsidy (Germany hasn’t subsidized photovoltaics since 2004), and is a minute drop in the bucket of German households’ energy costs. It works just like the way many American households pay prices set by state regulators for approved power plants, only it’s far more transparent—and in Germany you have the option of earning back your payments, and far more, by investing as little as $600 in renewable energy yourself. Citizens, cooperatives, and communities own more than half of German renewable capacity, vs. two percent in the U.S.
In 2013, the FIT surcharge raised households’ retail price of electricity 7 percent but renewables lowered big industries’ wholesale price 18 percent. As long-term contracts expire, the past few years’ sharply lower wholesale prices could finally reach retail customers and start sending households’ total electricity prices back down. The latest analysis suggests that this may even occur in 2014, sooner than expected.
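For scale, the 1.8 percent monthly reduction in the rooftop-solar FIT mentioned above compounds to roughly a fifth per year; a one-line check:

    # Compounding the 1.8%-per-month FIT cut into an annual decline
    # (simple arithmetic, not an official tariff schedule).
    monthly_cut = 0.018
    annual_decline = 1 - (1 - monthly_cut) ** 12
    print(round(annual_decline * 100, 1), "% per year")   # ~19.6%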

Monday, August 19, 2013

Tesla's Model S electric car nabs top US safety rating | The Verge


Tesla points out that even though the Model S is a sedan, it scored higher than every minivan and SUV that was tested as well. It also notes that unlike gas vehicles, there's no engine block in the front of the Model S (the electric motor is apparently only about a foot in diameter), giving it comparatively more space to crumple and slow down its occupants, exemplified in the video above. If you'd like to read more about how the Model S managed to score so well, check the source link below.

Sunday, August 11, 2013

Fukushima Commentary | Fukushima Accident | Fukushima Disaster

On July 23, Tepco revealed that contamination is leaching into their inner port (quay) at Fukushima Daiichi. Tepco and the Nuclear Regulatory Authority make it seem as if the contamination is going into the Pacific Ocean. There are many unanswered questions with the groundwater issue, but one thing seems certain…the material is not reaching the open sea, at least not yet. Tepco’s recent revelation validates the NRA conjecture of 10 days ago. Tepco bases its belief on the water level in the near-shore sampling wells fluctuating with the tide. However, the data Tepco has posted over the past four months raises a considerable number of questions.
First we might ask…what is the source of the contamination? Since the groundwater contains Cesium isotopes 134 and 137, it cannot be coming from any of the waste water storage tanks or underground reservoirs at F. Daiichi. This is because those waters have been effectively stripped of their Cesium content by the station’s “makeshift” filtration system. There are several possible sources. (1) The radioactivity may be coming from the basements of the four units holding 70,000 tons of water literally loaded with Cesium. (2) It could be, as Tepco has said for more than a month, residual isotopes already in the plant’s soil from a rather significant leak into a trench between the unit #1 and unit #2 reactor buildings in April, 2011. (3) Could it have something to do with another trench from unit #3? Tepco quietly posted a Press handout concerning the possibility of a unit #3 leak on July 11. (http://www.tepco.co.jp/en/nu/fukushima-np/handouts/2013/images/handouts_130711_04-e.pdf ) Or, could it be a combination of all three?
If we assume the contamination is coming from the basements, it poses a pair of overlapping questions. To begin, Tepco knows that 400 tons of groundwater is seeping into the basements every day. How’s the groundwater getting in there? Cracks in the concrete walls? Broken piping penetrations? The flowpath into the basements has not been stated. Whatever the path of seepage, groundwater is leaking into the basements and there’s no reason to think the contaminated waters are not leaking out via the same pathways. The Nuclear Regulatory Authority wants to freeze the ground surrounding the turbine buildings using an earth-freezing technology that does not yet exist. While the mere suggestion puts the technical competence of the NRA in question, if it works it will merely lower the in-flow of groundwater by 100 tons per day. Tepco already has what seems to be a better methodology to stanch the groundwater influx. They are drilling holes deep in the ground along the shoreline and inserting a chemical to harden the soil itself. (http://210.250.6.22/en/nu/fukushima-np/handouts/2013/images/handouts_130708_03-e.pdf ) Why not do the same thing around the basements of the turbine buildings, too? If it is good enough to keep contaminated groundwater from getting into the station’s near-shore quay, it will surely be better than the NRA’s pie-in-the-sky concoction to freeze the soil. Water-proofing the soil surrounding the basements, and around the suspect cable trench coming out of unit #2, should eliminate them as a source of possible leaks. Then there’s the unit #3 trench, but we’ll come back to it later.
Next, how bad is the groundwater contamination? Is it really “highly radioactive”? The highest groundwater Cesium reading to date is 11,000 Becquerels per liter inside one of the now-numerous sampling wells at F. Daiichi. Sounds like a lot, doesn’t it? Want to know what’s actually highly radioactive? The water in one of the trenches connected to the unit #2 turbine basement! The Press reports Tepco has found it to contain 2.35 billion Bq/liter of Cesium. That can be called “highly radioactive” by any standard. If 11,000 Bq/liter is “highly radioactive”, then what descriptive term should the Press use for 2.35 billion Bq/liter?
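Putting those two numbers side by side makes the point concrete:

    # Ratio of the trench water to the "highly radioactive" well sample.
    trench_bq_per_liter = 2.35e9
    well_bq_per_liter = 11000
    print(round(trench_bq_per_liter / well_bq_per_liter))   # ~214,000 times higher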
To continue, three of the groundwater sampling wells have elevated levels of Tritium (more on this later), but only one has shown increases in both Cesium isotopes over the past 2 weeks (see the Tepco handout, above, for well locations). Well no. 1-2 has readings of 11,000 Becquerel per liter for Cs-137 and 5,400 Bq/liter for Cs-134. (http://www.tepco.co.jp/en/nu/fukushima-np/f1/smp/2013/images/2tb-east_13072301-e.pdf ) These are the contamination levels that are always cited in the Press, both inside and outside of Japan, even though the Cesium in the rest of the wells is about 100 times lower. But here’s the important point…when the sample water from well #1-2 has the suspended solids filtered out, the cleansed water has readings of 50 Bq/liter of Cs-134 and 71 Bq/liter of Cs-137. (http://www.tepco.co.jp/en/nu/fukushima-np/f1/smp/2013/images/2tb-east_13072303-e.pdf ) These readings are higher than in the other four near-shore sampling wells, but more than 99% lower than the unfiltered readings. This demonstrates that the vast majority of the Cesium in the unfiltered sample is contained in the suspended sediment, probably stirred up by the fluctuating water level in the well. So, why doesn’t Tepco post the filtered sample data along with the unfiltered for well #1-2? It seems they posted the filtered results only once, on July 22nd. Further, has Tepco attempted to filter the sample waters taken from the other near-shore wells? If not, why not? This could be significant.
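The posted numbers bear out the "more than 99% lower" statement:

    # Fraction of well 1-2's Cesium that survives filtering of suspended solids,
    # using the figures quoted above.
    unfiltered_bq = 11000 + 5400       # Cs-137 + Cs-134, Bq/liter
    filtered_bq = 71 + 50              # Cs-137 + Cs-134 after filtering
    remaining = 100 * filtered_bq / unfiltered_bq
    print(round(remaining, 2), "% remains")   # ~0.74%, i.e. more than 99% removed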
Here’s why it is important. Since the filtering of suspended solids removes more than 99% of the radioactivity, the Cesium is clearly bonded with the soil. The only way the high levels of Cesium in the groundwater can get into the station’s quay would be if the soil itself is being spilled into the seawater. Is it? With the station’s quay effectively isolated from the outer port area, and the outer port surrounded by some massive break-walls, there is no shore erosion. There might be a tiny loss of Cesium-impregnated soil leaving the shore, but the vast majority is staying put. We can say this with confidence when we look at the Cesium level inside the essentially stagnant quay. We find that all sampling points have not demonstrably changed in Cs-134 and Cs-137 concentrations since early April. (http://www.tepco.co.jp/en/nu/fukushima-np/f1/smp/2013/images/intake_canal_130726-e.pdf ) The levels have fluctuated over the past four months, but that is to be expected with activity levels as low as these in full liter samples. The range of upper and lower fluctuation points has stayed quite constant for all 12 sampling points along the quay’s shoreline. If there is a “highly radioactive” leak coming out of unit #3, there does not seem to be an increased Cesium level to prove it. It should be noted that the Cesium levels inside the quay have not changed significantly since March, 2012, but the above link back to April 2013 should suffice for this commentary.
Next we have the detected Tritium (H3), which raises more questions. Well number 1-2 has an H3 level of 350,000 Bq/liter, well number 1-3 is at 270,000 Bq/liter, and well #1 has 420,000 Bq/liter. (Wells 1-2 and 1-3 are between units 2&3 reactor buildings, and well #1 is next to reactor building #1) The Cs-134 levels in both well #1 and well 1-3 are…undetectable! The Cs-137 in both is less than 1 Bq/liter. Why is the well with the highest level of Tritium not showing any Cesium? There ought to be a correlation between the H3 concentrations and the Cesium concentrations, but there isn’t. On a related note, why is there an elevated level of H3 (1,100 Bq/liter) at the unit #1 near-shore sampling point, but less than 400 Bq/liter everywhere else in the quay? If the leak to the quay is coming out of the unit #3 trench, why isn’t the quay water adjacent to unit #3 showing an increase over the levels detected in April?
Finally we get to the ultimate question. Is any of this contamination going out to sea? The inner quay is sealed off from the waters inside the heavy stone break-walls that surround the station. The break-wall has a single opening to the open sea. Seawater sampling outside the quay, but inside the break-wall, shows nothing. No detectable Tritium…no detectable Cesium. It appears the contamination in the quay is not getting into the outer port area. The silt dam that seals the entrance to the quay seems to be doing its job quite well. In addition, samples taken from the open sea surrounding F. Daiichi also show nothing. In other words, there seems to be no groundwater-borne contamination going into the Pacific Ocean from Fukushima Daiichi. So, why do the Nuclear Regulatory Authority and Tepco both make it sound like the Pacific Ocean is being “tainted”?
Many might question the veracity of the data posted by Tepco’s staff at F. Daiichi, given the general level of distrust relative to the company. But, there is no-one else’s data to analyze. Keep in mind that Tepco discovered the problem with groundwater contamination. No-one else did. They are the ones who have reported it to the world, albeit belatedly…and there-in lies the problem. The company’s level of transparency relative to public disclosure is not perfect, and some of their statements may be tainted with paranoiac twists, but their radiological data should not be distrusted. We have no other data to go on.
Questions…questions…questions…
July 20, 2013
Naoto Kan: Japan’s Pinocchio
This past Tuesday, Naoto Kan submitted a defamation suit against Prime Minister Shinzo Abe. It is very unusual for a former prime minister to sue an incumbent. The suit concerns an Email Abe posted on March 20, 2011, saying Kan fabricated his part in the infamous seawater cooling dispute during the Fukushima accident. Abe also said Kan’s attempt to stop Tepco from cooling with seawater was a case of severe mismanagement and that he should resign. Kan charges Abe with keeping “erroneous” information on his website and ignoring Kan’s repeated entreaties to remove the Email from archives. Kan also charges Abe with making a “false accusation” that defames the former PM. Since the Email has not been deleted, Kan has filed the suit, seeking $110,000 in damages. In response…well…there is no response from Mr. Abe. He refuses to comment.

Wednesday, August 7, 2013

Directory:Walipini Underground Greenhouses - PESWiki

A directory of resources pertaining to the Walipini underground greenhouse method developed by the Benson Institute in Provo, Utah.
One of the main principles involves embedding the greenhouse in the earth to take advantage of the earth's constant temperature and to store the solar energy collected during the day. Water barrels can also be used to store the thermal heat and carry it through the night or cloudy days (which are not as cold). Water is a much better thermal mass storage medium than soil.
The solar gain comes through a light-permeable material such as plastic sheeting, Visqueen, or polycarbonate. The angle of the glazing is designed to be 90 degrees to the Winter Solstice sun (Dec. 21 / June 21, depending on hemisphere). The upper portion of the walls is insulated down past the frost line.
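The 90-degree rule can be turned into a simple formula. At solar noon on the winter solstice the sun sits at roughly 90 degrees minus latitude minus 23.5 degrees above the horizon, so glazing tilted at latitude plus 23.5 degrees from horizontal meets it head-on. A small sketch (the 40-degree example latitude is arbitrary):

    # Glazing tilt that meets the winter-solstice noon sun at 90 degrees.
    # Uses the standard solar-noon approximation; the example latitude is arbitrary.
    def walipini_glazing_tilt(latitude_deg):
        winter_declination = 23.5                         # sun 23.5 deg below the equator
        noon_altitude = 90.0 - latitude_deg - winter_declination
        return 90.0 - noon_altitude                       # tilt from horizontal

    print(walipini_glazing_tilt(40.0), "degrees at 40 degrees latitude")   # ~63.5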
The word "Walipini" comes from the Aymara Indian language and means "place of warmth". They've been able to grow bananas at 14,000 feet elevation in the Andes.