Future Energy eNews        IntegrityResearchInstitute.org        Jan. 7,  2006

1) Unlimited Electric Power from a Tree - The copper-aluminum battery is easy to make
2) DNA Self-Assembly Used to Make Nanostructures - Epic proportion nanotech breakthrough
3) Why 'Integrity' was the Most Looked up Word in 2005 - CS Monitor article helps IRI mission
4) Punctuations of Future Energy - Arlington Institute view on global warming, peak oil, and energy
5) Can Seismic Signal the Next Big One? - New Scientist finds ULF waves precede quakes
6) Seismic Signal is Not New - Dr. Elizabeth Rauscher is world's expert in quake prediction
7) Earthquake Predictor - IEEE Spectrum article is the best - QuakeFinder is the company

1) Electric Power Source Breakthrough? Unlimited Energy from the Environment

Gordon Wadle, MagCap, December 20, 2005, http://www.magcap.com/ 

Energy Generated by Non-Animal Organism Multiplied Into Clean, Free Electric Current

CANTON, Mass., Dec. 20 -- An alternative electric power generating system that draws energy from a seemingly unlikely yet abundant, eminently renewable and virtually free power source has been submitted for patenting by MagCap Engineering, LLC, Canton, Mass., in collaboration with Gordon W. Wadle, an inventor from Thomson, Ill.

Wadle has invented a way to capture the energy generated by a living non-animal organism -- such as a tree. Chris Lagadinos, president of MagCap, developed circuitry that converts this natural energy source into useable DC power capable of sustaining a continuous current to charge and maintain a battery at full charge.

"As unbelievable as it sounds, we've been able to demonstrate the feasibility of generating electricity in this manner," said Wadle. "While the development is in its infancy, it has the potential to provide an unlimited supply of constant, clean energy without relying on fossil fuels, a power generating plant complex or an elaborate transmission network."

The developers now intend to establish a collaborative agreement with a company, academic institution or potential investors who can help finance the additional research and development necessary to take the invention to the next level -- a practical, commercially viable power generating system.

Wadle likened the invention to the discovery of electricity over 200 years ago when charged particles were harnessed to create an electric current. "Now we've learned that there is an immense, inexhaustible source of energy literally all around us that can be harnessed and converted into usable electric power," he said.

Ultimately, it should prove to be more practical than solar energy or wind power, and certainly more affordable than fuel cells, he added.

Wadle said he got the original idea of harnessing a tree for electrical energy from studying lightning, more than 50 percent of which originates from the ground. This prompted him to develop the theories resulting in a method to access this power source. Lagadinos then designed circuitry that filtered and amplified these energy emanations, creating a useable power source.

Basically, the existing system includes a metal rod embedded in the tree, a grounding rod driven into the ground, and the connecting circuitry, which filters and boosts the power output sufficient to charge a battery. In its current experimental configuration, the demonstration system produces 2.1 volts, enough to continuously maintain a full charge in a nickel-cadmium battery attached to an LED light.

"Think of the environment as a battery, in this case," said Lagadinos, "with the tree as the positive pole and the grounding rod as the negative."

Near term -- within the next six months or so -- and with additional research and development, Lagadinos said the system could be enhanced enough to generate 12 volts and one amp of power, "a desirable power level that could be used to power just about anything," he said.

It is enough power to charge batteries for any type of vehicle, including hybrids and electric cars, or to use with an AC converter to produce household power, he added. The LED industry is a prime example of a potential user of this power source.

Other applications would be to provide power for signs, security lights, street, park and hiking trail lights, surveillance or sensor equipment -- any application that heretofore couldn't be serviced because it lay beyond the hard-wired power grid.

Government agencies and the military could find the system especially useful because the power is basically free, unlimited and can be produced in remote locations.

MagCap is now seeking to establish a collaborative relationship with a third party, explained Lagadinos and Wadle. This is a step that could not be taken until proper patent protection was applied for.

A patent application for this pioneering invention was filed in December by the developers' patent counsel, Mintz, Levin, Cohn, Ferris, Glovsky and Popeo, P.C., Boston, Mass.

While the basic concept of this invention -- using a tree to generate electric power -- seems too incredible to be true, Lagadinos said it can be demonstrated quite simply. "Simply drive an aluminum roofing nail through the bark and into the wood of a tree -- any tree -- approximately one half inch; drive a copper water pipe six or seven inches into the ground, then get a standard off-the-shelf digital volt meter and attach one probe to the pipe, the other to the nail and you'll get a reading of anywhere from 0.8 to 1.2 volts of DC power," he said.

"You can't do anything with it in that form because it is 'dirty' -- i.e. highly unstable and too weak to power anything," he added. In order to properly harness this potential energy source, MagCap devised two test circuits: one with three capacitors that were connected in parallel by means of a switch and charged to 0.7 volts each. When fully charged they are switched to a series mode, multiplying the voltage to 2.1 volts and flashing an LED to show that sufficient power could be generated to produce a useable result.
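The parallel-charge, series-discharge scheme described above is easy to sketch in a few lines. The following is a toy illustration, not MagCap's actual circuitry; the only figure taken from the article is the 0.7 volts per capacitor:

```python
# Sketch of the parallel-charge / series-discharge scheme: capacitors are
# charged in parallel from the low-voltage tap, then switched into series
# so their voltages add.

def series_voltage(cap_voltages):
    """Capacitors switched from parallel charging into series discharge:
    the output voltage is the sum of the individual capacitor voltages."""
    return sum(cap_voltages)

# Three capacitors, each charged to 0.7 V from the tree/ground tap:
v_out = series_voltage([0.7, 0.7, 0.7])
print(round(v_out, 2))  # ~2.1 V, enough to flash a typical red LED
```

The trade-off, of course, is that the charge available at 2.1 V is only a third of what was stored, so this multiplies voltage, not energy.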

The second circuit included a filtering device to stabilize and "clean" the current so it could be used to charge and maintain a NiCad battery. The battery then could be connected to the LED to keep the LED lit continuously.

Wadle pointed out that there seems to be no limit to the amount of power that can be drawn from an individual tree, no matter how many "taps" are inserted -- each produces the same amount of energy, an average of 0.7 - 0.8 volts. Size of the tree also seems not to matter.

Interestingly, while conventional wisdom would seem to indicate that the tree draws much of its energy from photosynthesis via its leaves, the voltage output actually increases to 1.2-1.3 volts in the winter after the leaves have fallen.

Headquartered in Canton, Mass., MagCap Engineering, LLC is a leading custom designer and manufacturer of magnetics of all sizes for the broadcast, telecommunication, microwave, military, defense and energy industries. For more information, see http://www.magcap.com/.

NOTE: As with the potato battery that used to be on the market, the copper and aluminum will be consumed during the operation of this 'tree battery', but it can still serve as a good survivalist resource for low-voltage, low-current needs. Certainly every house could purchase a copper ground stake and a few aluminum roofing nails to have on hand in an emergency.   - Editor

2) DNA Self-Assembly Used to Mass-Produce Patterned Nanostructures

Thom LaBean & Christopher Dwyer, Duke University, posted on Eurekalert, Dec. 23, 2005  http://www.eurekalert.org/pub_releases/2005-12/du-dsu122005.php

Duke University scientists have used the self-assembling properties of DNA to mass-produce nanometer-scale structures in the shape of 4x4 grids, on which patterns of molecules can be specified. They said the achievement represents a step toward mass-producing electronic or optical circuits at a scale 10 times smaller than the smallest circuits now being manufactured.

Instead of using silicon as the platform for tiny circuits, as is done in the current manufacturing technique of photolithography, the Duke researchers used DNA strands to create grids less than one ten-millionth of a meter square. The smallest features on these square DNA lattices are approximately five to 10 billionths of a meter (nanometers), according to the scientists, compared with about 65 nanometers in silicon circuits created using photolithography.

To demonstrate their ability to mass-produce grids with infinitesimal patterns, the scientists created batches of trillions of separate grids with the letters "D," "N" and "A" written with a protein that can be seen through atomic force microscopy (AFM). An image of the grids with the letter patterns is available at http://www.dukenews.duke.edu/mmedia/hires/dna.jpg.

The scientists, members of the Triangle-area TROIKA collaboration to construct computing devices using DNA, were able to create the grids by using the binding properties of DNA to ensure that large numbers of DNA strands would assemble themselves in specified patterns.

The two corresponding authors on the paper were Thom LaBean, an associate research professor of computer science and an adjunct associate professor of the practice of chemistry at Duke, and Christopher Dwyer, an assistant professor of electrical and computer engineering and computer science. Their research is scheduled to be published in Volume 45 of the journal Angewandte Chemie and released early online Dec. 23, 2005. It was funded by the National Science Foundation.

"The process we've described creates lattices -- with patterns we specify -- at least tenfold smaller than the best lithography being used right now," LaBean said. "Plus, because we're using DNA building blocks that assemble themselves, we can simultaneously make trillions of copies of a desired structure."

To create the tiny DNA grids, LaBean, Dwyer and their colleagues began with tiny building blocks they called "tiles." Each tile was made up of strands of DNA bent like pipe cleaners into the shape of a cross. In the middle of each cross was a loop of DNA that can be attached to another molecule that can in turn bind to a protein molecule to give the tile a tag visible by AFM. Each arm of the cross, about 10 nanometers long, had a pair of "sticky ends" where the DNA strand was made up of unpaired bases that tend to bind with reciprocal bases. Tiles with complementary sticky ends link together when mixed.

The structure of the tiles created the molecular equivalent of puzzle pieces that would self-assemble only in a specific arrangement when mixed together, with the DNA loop loaded with a desired "cargo" molecule that would form the structure the researchers wished to create. In one experiment, the scientists specified 16 unique puzzle pieces that fit together as a 4x4 grid that formed a puzzle spelling the letter "D." Because each piece would only match up with its predetermined neighbors, the scientists could mix together a trillion of each type of tile in one batch to generate a trillion 4x4 grids.

Coming up with the specifications for each DNA strand in the tiles proved to be a complex mathematical problem. The challenge was to specify a sequence of bases for each pair of sticky ends at the end of each of the 16 tiles' four arms (a total of 128 sequences) that would bind a tile only with its intended neighbor and not with any other tiles or itself.

"It turns out there are a lot of combinations to consider," Dwyer said. "That meant running a lot of searches. We had to run three hundred computers for two weeks to get an answer."
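The design constraint Dwyer describes can be illustrated with a toy check: slide each sticky end's reverse complement along every other end and confirm that only the intended partner pairs along its full length. The 5-base sequences below are invented for illustration; the actual Duke designs involved 128 sequences found by the two-week computer search:

```python
# Toy check of sticky-end orthogonality: intended partners must pair fully,
# all other pairings must stay weak.  Sequences are made up, not the Duke set.

COMP = {"A": "T", "T": "A", "G": "C", "C": "G"}

def revcomp(seq):
    """Reverse complement of a DNA sequence."""
    return "".join(COMP[b] for b in reversed(seq))

def max_complementarity(a, b):
    """Longest run of Watson-Crick pairing found by sliding b's reverse
    complement along a -- a crude proxy for hybridization strength."""
    rc = revcomp(b)
    best = 0
    for shift in range(-len(rc) + 1, len(a)):
        run = 0
        for i, base in enumerate(rc):
            j = shift + i
            if 0 <= j < len(a) and a[j] == base:
                run += 1
                best = max(best, run)
            else:
                run = 0
    return best

# Hypothetical 5-base sticky ends (the real design problem had 128 sequences):
ends = ["ATCGT", "GGATC", "CTTAG"]

for s in ends:
    # each end binds its intended complement along its full length...
    assert max_complementarity(s, revcomp(s)) == len(s)
    for t in ends:
        if t != s:
            # ...but only weakly to any other end in the set
            assert max_complementarity(s, t) < len(s)
print("all sticky ends orthogonal")
```

Scaling this brute-force screen to 128 mutually orthogonal sequences is what made the search expensive enough to occupy three hundred computers for two weeks.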

The answer for the optimal base configurations for each DNA strand was disclosed to Duke's Office of Licensing and Ventures as a first step towards a potential patent.

The researchers have not yet produced a functional circuit on a grid. However, in future studies, they plan to generate grids larger than four tiles by four tiles and to populate the grids with molecules that can conduct electrons or light waves to form simple circuits. Based on the characteristics those circuits would have, Dwyer and colleagues have drawn up designs for computer chips.

The researchers also may explore how their method of assembling tiles can be used to create biological structures that could act as tiny sensors.


Co-authors of the research paper in Duke's Department of Computer Science were graduate student Constantin Pistol, Professor John Reif and Associate Professor Alvin Lebeck. Other authors were Sang Jung Ahn of the Korea Research Institute of Standards and Science and Sung Ha Park, formerly a graduate student at Duke and now a postdoctoral fellow at the California Institute of Technology.

Contact: James Todd
james.todd@duke.edu
Duke University http://www.duke.edu


3) Why 'Integrity' was Such a Sought-After Word This Year

It beat 'refugee' and 'contempt' as the most looked-up word of 2005, according to Merriam-Webster's online dictionary.

By Sara Miller Llana | Staff writer of The Christian Science Monitor
from the December 20, 2005 edition - http://www.csmonitor.com/2005/1220/p02s01-ussc.html 

Between the CIA leak investigation, scandals in Congress, and disgraced athletes, 2005 had more than its fair share of ethical disappointment.

The result? "Integrity" was the most looked-up word of 2005, according to Merriam-Webster's online dictionary.

That comes as no surprise to many. The reflex to type a word into www.m-w.com is often prompted by the desire to understand an event and its context. That is one reason "tsunami" and "filibuster" also made the top 10 list.

In a year in which it seemed in short supply, integrity - defined as firm adherence to a code; incorruptibility - was in high demand.

"So many people have challenged other people's integrity this year," says Richard Katula, who teaches political rhetoric at Northeastern University in Boston. "I don't remember a time since the Nixon impeachment hearings when political discourse was so coarsened and crude."

The word "refugee" made it to No. 2, after hurricane Katrina necessitated the evacuation of thousands of Gulf Coast residents. The ensuing debate, over whether "refugee" was the proper term for displaced residents or whether it was in fact pejorative, summoned thousands of Americans to their dictionaries to decide for themselves. The word received more queries in one month than most words in an entire year.

Less-weighty scenarios shaped this year's list, too. Fingers rushed to type in the word "insipid" after the adjective was uttered on "American Idol."

For those in the word business, the public's effort to understand the verbal signs of the times is promising.

"It shows that the English-speaking population is not the bunch of illiterate dolts that some critics like to portray," says John Morse, president of Merriam-Webster in Springfield, Mass. "Dictionarymakers always had a pretty good sense of what words are used most often ... but never really knew what words are looked up most often."

Along with ubiquitous, irony, and metaphor - words that sit at the sweet spot of complexity and curiosity - integrity has traditionally hovered near the top of Webster's hit parade.

But the noun moved to the front of the pack in recent years, reflecting, perhaps, its conspicuous absence from some committee rooms, boardrooms, locker rooms, and classrooms across America.

In Dr. Katula's class, the word integrity has become central to debate, especially with the overload of information today. "Students spend so much time on the Internet, they are constantly asking what information has integrity," he says.

For those in the integrity business, the news comes as both good and bad. Tim Dodd, the executive director of the Center for Academic Integrity at Duke University in Durham, N.C., says he wrote a colleague an e-mail when he found out that his area of expertise was the most popular word of 2005: "In all honesty, I'm not sure whether I'm heartened or depressed by this finding," he wrote.

It's heartening, he explains later, that people are curious enough about its significance to look it up. But it's sobering, he adds, that they need to look up integrity in the first place.

Top 10 most looked-up words of 2005

1. integrity n. firm adherence to a code, especially moral or artistic values; incorruptibility.

2. refugee n. one that flees; especially a person who flees to a foreign country or power to escape danger or persecution.

3. contempt n. willful disobedience to or open disrespect of a court, judge or legislative body.

4. filibuster n. the use of extreme dilatory tactics in an attempt to delay or prevent action, especially in a legislative assembly.

5. insipid adj. lacking in qualities that interest, stimulate or challenge; dull, flat.

6. tsunami n. a great sea wave produced especially by submarine earth movement or volcanic eruption.

7. pandemic adj. occurring over a wide geographic area and affecting an exceptionally high proportion of the population.

8. conclave n. a private meeting or secret assembly, especially a meeting of Roman Catholic cardinals secluded continuously while choosing a pope.

9. levee n. an embankment for preventing flooding; a continuous dike or ridge (as of earth) for confining the irrigation areas of land to be flooded.

10. inept adj. generally incompetent; bungling.


4) Punctuations of Future Energy

John L. Petersen, Big Issues for 2006,  http://www.arlingtoninstitute.org/

I’d like to send along to you my best wishes for 2006.  It really is a cliché, but it is kind of amazing how time flies.  It doesn’t seem very long ago that we were working very hard getting ready for the turnover of the century, wondering what might happen if there was large-scale computer failure.  Now we have another set of equally important issues that are moving off the horizon into our near-term field of concern. 

For a number of years now here at The Arlington Institute we have been talking about the increasing rate of change and the growing significance and implications of the big issues on our global horizon.  From our point of view, we’re now watching it all happen – big, accelerating change with more potential wild card surprises. 

The best books that I’ve read that discuss the technological drivers of the change are Ray Kurzweil’s The Singularity Is Near: When Humans Transcend Biology and Joel Garreau’s Radical Evolution: The Promise and Peril of Enhancing Our Minds, Our Bodies -- and What It Means to Be Human. Both are very important overviews of where we’re going in terms of modifying ourselves.  The first two chapters of Kurzweil alone are worth the cost of the book.  I highly recommend them both. 

Although technology is clearly one of the major change drivers that we are living with, it is not the only one – by any means.  The changing climate has extraordinary near-term potential implications for all of us who live on this planet.  Last year Walter Cronkite wrote in the Philadelphia Inquirer that “Global warming is at least as important as gay marriage or the cost of Social Security. And if it is not seriously debated in the general election, it will measure the irresponsibility of the entire political class. This is an issue that cannot, and must not, be ignored any longer.”  If you do not remember the debate on global warming, you are forgiven, because it has never taken place.

The lack of US governmental interest in effectively confronting this issue is convincing growing numbers of people that they cannot presume the government will fulfill its obligation to provide for the national security of the country in this instance – that is, to initiate policies that offset the clear trend and to develop contingencies for a possible climate shift.  Let me recommend two recent sources in this regard.

The New York Times had a very good December book review by Bill McKibben called The Coming Meltdown (http://www.nybooks.com/articles/18616?email) that serves up the magnitude of the issue and how climate change (particularly rapid climate change) is so huge in its implications that it is hard to effectively comprehend the potential scope of the problem.  If you think Katrina was bad, wait until the climate rapidly changes and among other things, food doesn’t grow where and how it used to. 

This is not a farfetched idea.  Whitley Strieber has written about a scenario where the rapidly warming arctic surface air (the subject of McKibben’s review) that has been held down by the denser cold arctic upper air masses suddenly rises – as warm air does – and a huge amount of frigid air displaces it at the surface and sweeps down from the pole, initiating a mini ice age . . . in a matter of months.*     

It seems to me that another issue that has the same architecture – common structure – as climate change is peak oil.  The notion that we are rapidly approaching the point in time where we will begin, for the first time, to extract less and less of the energy source that has fundamentally fueled the industrial revolution (i.e. that life that most people reading this enjoy) is profound in its implications. 

The problem is that the demand, driven by population growth and economic development of countries like the U.S. and China, continues to expand rapidly even though the supply suddenly starts to decrease.  Like climate change this is a very fundamental factor (energy) that defines our options for living on this planet.  The notion that the most important energy source in the world for which there is no clear alternative would rapidly start to go away, is enough to keep thoughtful people up late at night writing scenarios of doom. 

They’re doing that, of course. If you’re not familiar with the peak oil issue, you should be. Here’s one of quite a few sources  (http://www.lifeaftertheoilcrash.net/Preparations.html) that are painting the pictures that will remind you of Y2K scenarios.

There are those, of course, who say we will keep finding cheap oil and that the peak is decades away. But if they’re wrong (and so far, the trends appear to be against them) then the whole world could be on the verge of a major shift that, absent the rapid integration of a new global energy source, could be quite painful. 

Rapid climate change and peak oil are so big that they have the fundamental requirement that you need to be working on an alternative long before the actual event takes place or things come unraveled rather fast.  To give you a sense of that, check out this study  (http://www.energybulletin.net/4789.html) by SAIC for “a government agency” outlining three different scenarios based upon actively responding to peak oil twenty years before the peak, ten years before, and at the time of identifying the peak.  The bottom line is that if you don’t actively start to put in place alternatives two decades before the peak, the underlying infrastructure and economies are very badly damaged.  It is catastrophic if you wait until the peak is obvious. It’s the same for rapid climate change.

There are big, deep forces at play here, committed to changing the way humans live on this earth.  Lindsey Grant, former deputy assistant secretary for environment and population affairs at the State Department and National Security Council staffer, has written a very persuasive little tome called The Collapsing Bubble: Growth and Fossil Energy.  Grant writes things like, “World population quadrupled in one century, a change so astonishing that it has altered – or should have altered – our assumptions as to the human connection to the rest of the planet.”  He put together his own projection on how energy and population might interact in the U.S. in the coming half century... (omitted for brevity - Ed. note) ... he presumes that oil peaks about now and that coal takes over as the major fuel supporting our economy until coal peaks about 2075 and then everything comes apart.  I think it is interesting to think about how the use of coal might (or most likely might not) expand quickly enough to take up the slack from the petroleum peak. 

For more information, contact John L. Petersen at johnp@arlingtoninstitute.org

* Strieber and Bell, "The Coming Global Superstorm" was the book basis for the movie, Day After Tomorrow       - Ed. note

5) Can Earth's Seismic Radio Help Predict Quakes?
Padma Tata,  NewScientist news service, 18 November 2005
WITH refugees still huddling in tents across Kashmir after tens of thousands died in October's earthquake there, the need for earthquake prediction systems is once again thrown into stark relief. Knowing that the geologically restless Himalayas will produce more, stronger quakes is no use: what people need to know is when and where a quake will strike next.

So far, however, earthquake prediction has proved an elusive art: no one has worked out how to read Earth's vital signs to provide accurate warnings. But there is hope. Among the welter of dead ends - from monitoring animal behaviour to measuring radioactive gas emissions or the flow of groundwater - a new bellwether is coming to the fore: electromagnetic radiation.

Prior to some recent quakes, scientists have detected electromagnetic pulses emanating from the ground and electromagnetic disturbances in the ionosphere, the planet's tenuous envelope of charged particles extending from about 80 to 1000 kilometres up. "There are definitely hints of something [electromagnetic] happening in the region of earthquakes before the earth moves," says Colin Price, a geophysicist at Tel Aviv University in Israel.

Price and others have been working in quake-prone regions in California, Japan and Russia. At a meeting of the International Union of Radio Science (URSI) in New Delhi, India, in October, he and his colleagues speculated that as underground stresses build up, rocks containing magnetic particles begin to fracture, generating ultra-low-frequency (ULF) radio waves - below 1 hertz - as they are torn apart. Detect these radio waves, suggest the researchers, and you might have the makings of a prediction system. (Scientists are now also studying how animals can hear ULF and react to it quickly, since no animals were killed in the Indonesian tsunami a year ago. Order the DVD from www.pbs.org 800-336-1917 "Can Animals Predict Disaster?" - Ed. note)

Some research groups are already tunnelling underground to pick up radio pulses in the ULF range, while others are using sensor-stuffed satellites to measure radio disturbances in the ionosphere above quake-prone regions. Because there have been many false dawns in earthquake prediction, Price is cautious. "But if the chances are one in a hundred that we succeed, the huge benefits of success make this research worth continuing," he says.

Quakes' telltale radio signals were first rumbled almost 20 years ago, following an accidental discovery by Anthony Fraser-Smith of the Space, Telecommunications and Radioscience (STAR) Laboratory at Stanford University in California. During the Loma Prieta earthquake that hit the San Francisco area in October 1989, Fraser-Smith was monitoring electromagnetic noise at frequencies up to 10 hertz. He noticed the electromagnetic noise increased almost 20-fold for two weeks before the earthquake - and continued at that level about a month afterwards. It peaked 3 hours before the quake, between 0.01 and 0.5 hertz.

Since then, others have tried to make similar measurements in seismically active regions. Groups in Japan and Russia have observed similar signals to Fraser-Smith's, but for up to one or two months before a quake. Could this be the long-sought early warning of seismic catastrophe?
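The kind of precursor screening these groups perform can be caricatured in a few lines: track a daily ULF band-amplitude series and flag readings that jump well above the recent quiet-time baseline. The data, window length and threshold below are invented; only the rough 20-fold, two-week jump is borrowed from the Loma Prieta observation:

```python
# Toy anomaly monitor in the spirit of the Fraser-Smith observation:
# flag days whose ULF band amplitude exceeds a multiple of the quiet baseline.

def flag_anomalies(daily_amplitude, baseline_days=30, factor=10.0):
    """Return indices of days whose amplitude exceeds `factor` times the
    median of the preceding `baseline_days` readings."""
    flags = []
    for i in range(baseline_days, len(daily_amplitude)):
        window = sorted(daily_amplitude[i - baseline_days:i])
        median = window[len(window) // 2]
        if daily_amplitude[i] > factor * median:
            flags.append(i)
    return flags

# 40 quiet days near 1.0 (arbitrary units), then a 20-fold jump like the
# two-week precursor reported before Loma Prieta:
series = [1.0] * 40 + [20.0] * 5
print(flag_anomalies(series))  # [40, 41, 42, 43, 44]
```

Using the median (rather than the mean) of the baseline window keeps a single earlier spike from silently raising the threshold; the real difficulty, as the researchers note below, is separating such jumps from thunderstorms, solar activity and other ULF sources.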

Minoru Tsutsui at Kyoto Sangyo University in Japan is trying to find out. His team has bored a hole 100 metres deep and 10 centimetres wide in a back lot on the campus. They placed one directional ULF antenna at the bottom of the hole and another above the ground. The relative strengths of any ULF signals detected at the two antennas allow the team to work out which direction the pulses come from.

On 4 January 2004 the system began detecting ULF radio pulses coming from the south-east. Two days later, a magnitude-5.5 quake struck the area, with an epicentre 130 kilometres away - to the south-east. Six hours after the quake, the ULF signals spread out, arriving from both the south-east and south-west, and died off the next day.

Since then, the Kyoto team has discovered that this effect is only detectable above a certain threshold quake strength (Geophysical Research Letters, DOI: 10.1029/2005GL023691). Tsutsui now wants to investigate the mechanism that produces these ULF radio pulses. Until we know this, he says, "we cannot easily predict where the epicentre will be".

Meanwhile, Price's team is collecting data from inside a 175-metre-deep, 3-metre-wide tunnel dug into the rift valley between the Dead Sea and Eilat in Israel, an area that frequently experiences tremors up to magnitude 6. But so far, they have had little luck. "Since 2003, we have not had any large events within 100 kilometres of our station," says Price. None of the earthquakes has been stronger than magnitude 4.4, which doesn't appear to be intense enough to generate the telltale ULF pulses.

They may not be measurable in any case, says Masashi Hayakawa, an electronics engineer at the University of Electro-Communications in Tokyo, who has strong reservations about Tsutsui's conclusions. He points out that all kinds of natural phenomena produce ULF signals - thunderstorms, solar activity and meteors among them - and thinks Tsutsui won't be able to pick out ULF signals caused by imminent earthquakes from the noise.

Hayakawa thinks the atmosphere holds the answers: he measured ionospheric disturbances between 3 and 30 kilohertz a few days before Japan's 1995 Kobe quake. And Michel Parrot of France's National Centre for Scientific Research (CNRS) in Orleans agrees. He points to preliminary data from a satellite called the Detector for Electromagnetic Emissions Transmitted from Earthquake Regions (DEMETER). Using a battery of sensors that measure the temperature, density and composition of the ionosphere, DEMETER measured an increase in ion density and temperature of the ionosphere seven days before a quake of magnitude 7 hit Japan's Kii peninsula on 5 September 2004.

This year it observed similar disturbances two days before the 23 January quake in Indonesia and five days before a quake on 30 August near Japan, and last November two days before a quake close to New Zealand.

As luck would have it, DEMETER was turned off during the 26 December 2004 Asian tsunami quake off Sumatra and the quake in Kashmir on 8 October this year, so it captured no data on these two events, Parrot says.

Ian Main, a seismologist at the University of Edinburgh, UK, believes the ULF and ionospheric findings are intriguing, but not yet convincing enough to establish a link to earthquakes. To do that, a far larger number of quakes must be examined, he says.

How to recognise the Big One, the moment it begins

In places like Japan and California, where tiny tremors occur every day, it is not enough to know when and where an earthquake will hit - you also need to know just how serious it will be. Now seismologists think they can predict how bad an earthquake will be within seconds of an underground rupture beginning, perhaps giving vital seconds' warning of the main seismic waves. Their findings could provide an early-warning system to help limit earthquake damage.

Until now, seismologists believed you could only assess an earthquake's full force once the quake was over. Earthquakes were thought to progress unpredictably, with one rupture triggering the next as weak masses of rock give way along a fault line. But when Richard Allen at the University of California, Berkeley, began looking at earthquake records he realised that this was wrong. "An earthquake essentially knows how big it will be within the first few seconds of rupture," he says.

When an earthquake strikes, seismic waves radiate from the focus at about 4 to 6 kilometres per second. These fastest-travelling primary waves, which are weak and do little damage, can arrive more than 20 seconds before the slower, more destructive waves hit. Allen and his colleague Erik Olsen, at the University of Wisconsin in Madison, studied 71 earthquakes in Japan, Taiwan, California and Alaska and found that they could use the frequency of the waves arriving at sensors within the first 4 seconds to predict the magnitude of the entire quake (Nature, vol 438, p 212).
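As a rough illustration of the warning window, the gap between primary-wave and secondary-wave arrival can be computed directly from the two speeds. The values below (6 km/s for P waves, 3.5 km/s for S waves) are typical textbook figures, not taken from the study:

```python
# Illustrative warning-time estimate: the gap between P-wave and
# S-wave arrival grows with distance from the epicenter.
# Assumed typical speeds: P ~6 km/s, S ~3.5 km/s.

def warning_time(distance_km, vp=6.0, vs=3.5):
    """Seconds between P-wave and S-wave arrival at a given distance."""
    return distance_km / vs - distance_km / vp

for d in (50, 100, 150, 200):
    print(f"{d:4d} km: {warning_time(d):5.1f} s of warning")
```

At 200 kilometres from the epicentre this simple estimate gives a gap of roughly 24 seconds, consistent with the article's "more than 20 seconds."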

While 20 seconds might not sound long, it might be enough time for automated emergency systems to stop trains, move machinery into safe modes, shut down gas lines and sound alarms to warn people to shelter under desks or in doorways. And the emergency systems would only activate if a major quake is on its way - avoiding unnecessary panic.

Accurate magnitude predictions could lead to smart buildings that adjust their stiffness to cope with incoming shock waves.

Related Articles

First, there is a rich history of RF (J. Geophys. Res. 87, B4, 2851-2859, 10 April 1982), LF and ELF (J. Geomag. Geoelectr. 38, 1031-1040, 1986), as well as ULF magnetic signals (Tectonophysics, 6 (1), 1968, 59-68 and Geophysical Research Letters, 17, 9, August 1990) preceding the onset of earthquakes. Secondly, Professor Emeritus Elizabeth A. Rauscher (U. of Nevada) has worked out how to read the Earth's vital signs to provide accurate warnings of impending earthquakes and volcanic eruptions, and she patented the triangulating magnetic-coil instrumentation (US #4,724,390) back in 1988. Her invention is based on changes in the appearance and amplitude of specific ELF frequencies characteristic of an area, produced by piezoelectric stresses in the local tectonic plate prior to earthquake or volcanic events. Once these are identified, Dr. Rauscher points out in her new book, a trained operator can pinpoint the epicenter (by triangulating the signals from three widely spaced coils) and give significant advance warning of the seismic event.

The most dramatic prediction, for which she is most famous, is documented as follows: “During the University of California, Irvine International Workshop on Low Frequency Electrical Precursors, Lake Arrowhead, CA, sponsored by the USGS and NSF on June 14-17, 1992, one of us (E.A.R.) announced our prediction of a large, greater than 7 event in the region of the conference.* On June 27, 1992, the Landers 7.5 event east of Los Angeles occurred.” This two-week advance notice of an impending earthquake is published in Electromagnetic Phenomena Related to Earthquake Prediction by Hayakawa et al. (Bise & Rauscher, “Ambient Electromagnetic Fields as Possible Seismic and Volcanic Precursors,” Terra Sci. Pub. Co., Tokyo, 1994, p. 221-242). The irony and synchronicity of a predicted earthquake striking in the midst of the world's open-minded but unconvinced geologists cannot be overemphasized. Yet another decade had to pass before earthquake prediction again became a leading news item in scientific journals. Meanwhile, trial and error remains the best available technique for those unfamiliar with the Rauscher predictive protocol.

Dr. Rauscher’s book on her well-documented, scientific technique for earthquake prediction is scheduled for release in Spring 2006. However, as the outspoken Haroun Tazieff, former Director of Research at the French National Scientific Research Center, states in his 20-year-old McGraw-Hill book, Earthquake Prediction (ISBN 0-07-62992-7), there is an unexplained psychological inertia and politically motivated delay in using the prediction technology that he and others have pioneered over the years.

Integrity Research Institute publishes a comprehensive report which includes a complete reprint of Dr. Rauscher's pioneering article mentioned above. The title of the report is "Earthquake Predictions with ELF Magnetometer" - report # 201. Order online: http://users.erols.com/iri/orderAll.html .


For more information, contact Dr. Elizabeth Rauscher, 3500 S. Tomahawk, Bldg. 188, Apache Junction, AZ 85219, phone: 480-982-2285.


* NOTE: "USGS" is the US Geological Survey (www.usgs.gov); "NSF" is the National Science Foundation (www.nsf.gov). - Ed.


7) Earthquake Alarm - IEEE Spectrum
Tom Bleier and Friedemann Freund, IEEE Spectrum, December, 2005  http://spectrum.ieee.org/dec05/2367/
Impending earthquakes have been sending us warning signals—and people are starting to listen

Deep under Pakistan-administered Kashmir, rocks broke, faults slipped, and the earth shook with such violence on 8 October that more than 70,000 people died and more than 3 million were left homeless [see photo, "Devastated"]. But what happened in the weeks and days and hours leading up to that horrible event? Were there any signs that such devastation was coming? We think there were, but owing to a satellite malfunction we can't say for sure.

How many lives could have been saved in that one event alone if we'd known of the earthquake 10 minutes in advance? An hour? A day?

Currently, predictions are vague at best. By studying historical earthquake records, monitoring the motion of the earth's crust by satellite, and measuring with strain monitors below the earth's surface, researchers can project a high probability of an earthquake in a certain area within about 30 years. But short-term earthquake forecasting just hasn't worked.

Accurate short-term forecasts would save lives and enable businesses to recover sooner. With just a 10-minute warning, trains could move out of tunnels, and people could move to safer parts of buildings or flee unsafe buildings. With an hour's warning, people could shut off the water and gas lines coming into their homes and move to safety. In industry, workers could shut down dangerous processes and back up critical data; those in potentially dangerous positions, such as refinery employees and high-rise construction workers, could evacuate. Local government officials could alert emergency-response personnel and move critical equipment and vehicles outdoors. With a day's warning, people could collect their families and congregate in a safe location, bringing food, water, and fuel with them. Local and state governments could place emergency teams and equipment strategically and evacuate bridges and tunnels.

It seems that earthquakes should be predictable. After all, we can predict hurricanes and floods using detailed satellite imagery and sophisticated computer models. Using advanced Doppler radar, we can even tell minutes ahead of time that a tornado will form.

Accurate earthquake warnings are, at last, within reach. They will come not from the mechanical phenomena—measurements of the movement of the earth's crust—that have been the focus of decades of study, but, rather, from electromagnetic phenomena. And, remarkably, these predictions will come from signals gathered not only at the earth's surface but also far above it, in the ionosphere.

For decades, researchers have detected strange phenomena in the form of odd radio noise and eerie lights in the sky in the weeks, hours, and days preceding earthquakes. But only recently have experts started systematically monitoring those phenomena and correlating them to earthquakes.

A light or glow in the sky sometimes heralds a big earthquake. On 17 January 1995, for example, there were 23 reported sightings in Kobe, Japan, of a white, blue, or orange light extending some 200 meters in the air and spreading 1 to 8 kilometers across the ground. Hours later a 6.9-magnitude earthquake killed more than 5500 people. Sky watchers and geologists have documented similar lights before earthquakes elsewhere in Japan since the 1960s and in Canada in 1988.

Another sign of an impending quake is a disturbance in the ultralow frequency (ULF) radio band—1 hertz and below—noticed in the weeks and more dramatically in the hours before an earthquake. Researchers at Stanford University, in California, documented such signals before the 1989 Loma Prieta quake, which devastated the San Francisco Bay Area, demolishing houses, fracturing freeways, and killing 63 people.

Both the lights and the radio waves appear to be electromagnetic disturbances that happen when crystalline rocks are deformed—or even broken—by the slow grinding of the earth that occurs just before the dramatic slip that is an earthquake. Although a rock in its normal state is, of course, an insulator, this cracking creates tremendous electric currents in the ground, which travel to the surface and into the air.

The details of how the current is generated remain something of a mystery. One theory is that the deformation of the rock destabilizes its atoms, freeing a flood of electrons from their atomic bonds, and creating positively charged electron deficiencies, or holes.

One of us, Freund, working at NASA Ames Research Center in Mountain View, Calif., demonstrated through laboratory rock-crushing experiments that the sundering of oxygen-to-oxygen bonds in the minerals of a fracturing rock could produce holes. These holes manage to propagate through rock up toward the surface, while the electrons flow down into Earth's hot mantle. The movement of these charges, measured at 300 meters per second in the lab, causes changes in the rock's magnetic field that propagate to the surface.

Another theory is that the fracture of rock allows ionized groundwater thousands of meters below the surface to move into the cracks. The flow of this ionized water lowers the resistance of the rock, creating an efficient pathway for an electric current. However, some researchers doubt that water can migrate quickly enough into the rock to create large enough currents; for this theory to be correct, the water would have to move hundreds of meters per second.

Whatever the cause, the currents generated alter the magnetic field surrounding the earthquake zone. Because the frequencies of these magnetic field changes are so low—with wavelengths of about 30,000 kilometers—they can easily penetrate kilometers of solid rock and be detected at the surface. Signals at frequencies above a few hertz, by contrast, would rapidly be attenuated by the ground and lost.
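The frequency dependence of this penetration can be illustrated with the standard electromagnetic skin-depth formula, the depth at which a signal's amplitude falls to 1/e in a conductive medium. The crustal conductivity used below is an assumed, illustrative value, not a figure from the article:

```python
import math

MU0 = 4 * math.pi * 1e-7  # vacuum permeability, H/m

def skin_depth_m(freq_hz, conductivity_s_per_m):
    """Skin depth: depth at which the field amplitude falls to 1/e."""
    omega = 2 * math.pi * freq_hz
    return math.sqrt(2 / (MU0 * conductivity_s_per_m * omega))

sigma = 1e-3  # assumed crustal-rock conductivity, S/m (illustrative)
for f in (0.01, 1.0, 10.0):
    print(f"{f:5.2f} Hz -> skin depth ~{skin_depth_m(f, sigma) / 1000:.0f} km")
```

With that assumed conductivity, a 0.01-Hz signal penetrates on the order of 100 kilometres of rock, while a 10-Hz signal is limited to a few kilometres, which is why the ULF band is the one that reaches surface sensors.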

We can detect such electromagnetic effects in a number of ways [see illustration, "Signs of Quakes to Come"]. Earthquake forecasters can use ground-based sensors to monitor changes in the low-frequency magnetic field. They can also use these instruments to measure changes in the conductivity of air at the earth's surface as charge congregates on rock outcroppings and ionizes the air.

Using satellites, forecasters can monitor noise levels at extremely low frequency (ELF)—below 300 Hz. They can also observe the infrared light that some researchers suspect is emitted when the positive holes migrate to the surface and then recombine with electrons.

Scientists around the world are looking at all of these phenomena and their potential to predict earthquakes accurately and reliably. One group is at QuakeFinder, a Palo Alto, Calif.-based company cofounded by one of us, Bleier, in 2000. QuakeFinder researchers have begun directly monitoring magnetic field changes through a network of ground-based stations, 60 so far, in California [see photo, "Earthquake Investigator"]. In 2003, the company joined forces with Stanford and Lockheed Martin Corp.'s Sunnyvale, Calif., center to launch an experimental satellite designed to remotely monitor magnetic changes. A larger, more sensitive satellite is in the design stages. QuakeFinder hopes to develop an operational earthquake warning system within the next decade.

The 1989 Loma Prieta earthquake near San Francisco sent out strong signals of magnetic disturbances fully two weeks before the 7.1-magnitude quake occurred. The idea that such signals existed was still a new one then, certainly not well enough accepted to justify a decision to issue a public warning.

We happen to have excellent data from that quake. Stanford professor Anthony C. Fraser-Smith had buried a device called a single-axis search-coil magnetometer to monitor the natural background ULF magnetic-field strength at about 7 km from what turned out to be the center of that quake. He selected this spot simply because it was in a quiet area, away from the rumblings of the Bay Area Rapid Transit trains and other man-made ULF noise. He monitored a range of frequencies from 0.01 to 10 Hz, essentially, the ULF band and the lower part of the ELF band.

On 3 October, two weeks before the quake, Fraser-Smith's sensors registered a huge jump in the ULF magnetic field at the 0.01-Hz frequency—about 20 times that of normal background noise at that frequency. Three hours before the quake, the 0.01-Hz signal jumped to 60 times normal. Elevated ULF signals continued for several months after the quake, a period rife with aftershocks, and then they disappeared.
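A crude way to express this kind of threshold detection in code (a sketch, not Fraser-Smith's actual analysis) is to flag any sample that exceeds the quiet-time baseline by a chosen factor:

```python
# Minimal anomaly-flagging sketch: compare each ULF amplitude sample
# to a quiet-time baseline and flag readings that exceed it by a
# chosen factor. Values below are made up for illustration.

def flag_anomalies(samples, baseline, factor=10.0):
    """Return indices of samples exceeding factor * baseline."""
    return [i for i, s in enumerate(samples) if s > factor * baseline]

baseline = 1.0                             # normalized quiet-time level
readings = [1.1, 0.9, 20.0, 1.2, 60.0]     # e.g. the 20x and 60x jumps
print(flag_anomalies(readings, baseline))  # -> [2, 4]
```

A real system would of course use a rolling baseline and account for diurnal and solar variation, but the core logic is this simple comparison.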

The Loma Prieta quake was a stunning confirmation of the value of ULF signals in predicting earthquakes. This validation of the theory prompted Bleier to establish a network of earthquake sensors in the Bay Area, an effort that grew into QuakeFinder.

Other researchers around the world who monitored changes in the magnetic field at ULF frequencies had noticed similar, but not as extreme, changes prior to other events. These observations occurred shortly before a 6.9-magnitude quake in Spitak, Armenia, in December 1988 and before a devastating 8.0-magnitude earthquake in Guam in August 1993.

Author Bleier recorded spikes of activity, four to five times normal size, in the 0.2- to 0.9-Hz range for 9 hours before a 6.0-magnitude earthquake in Parkfield, Calif., on 28 September 2003. Solar storms sometimes cause ripples in the magnetic field at those frequencies, but there had been no appreciable solar activity for six days prior to the quake.

In Taiwan, sensors that continuously monitor Earth's normal magnetic field registered unusually large disturbances in a normally quiet signal pattern shortly before the 21 September 1999 Chi-Chi, Taiwan, earthquake, which measured 7.7. Using data from two sensors, one close to the epicenter, and one many kilometers away, researchers were able to screen out the background noise by subtracting one signal from the other, leaving only the magnetic field noise created by the imminent earthquake. Two teams, one in Taiwan and one in the United States, calculated that the currents required to generate those magnetic-field disturbances were between 1 million and 100 million amperes.
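The two-station differencing described above can be sketched in a few lines. Background field changes (solar activity and the like) appear at both stations, while a local precursor appears mainly at the near station, so subtracting the remote record removes the common-mode part. The signal values here are made up for illustration:

```python
def difference(near, remote):
    """Element-wise difference of two equally sampled records."""
    return [n - r for n, r in zip(near, remote)]

common = [1.0, 2.0, 1.5, 0.5]       # background seen by both stations
local  = [0.0, 0.0, 4.0, 0.0]       # hypothetical local precursor
near   = [c + l for c, l in zip(common, local)]
print(difference(near, common))     # -> [0.0, 0.0, 4.0, 0.0]
```

The subtraction recovers only the locally generated anomaly, which is what the Taiwan and U.S. teams then used to estimate the underlying currents.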

Besides detecting magnetic-field disturbances, ground-based sensors can record changes in the conductivity of the air over the quake zone caused by current welling up from the ground. These sensors can vary in form, but those we use are made from two 15-centimeter-by-15-centimeter steel plates locked into position about 1 cm apart. A 50-volt dc battery charges one plate; the other is grounded. A resistor and voltmeter between the battery and the first plate sense any flow of current.

Normally, the air gap between the plates acts as an insulator, and no current flows. If, however, there are charged particles in the air, a current begins to flow, creating a voltage drop across the resistor that registers with the voltmeter. The resulting signals are small, with voltage drops on the order of millivolts, but they are detectable.
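The readout is a straightforward Ohm's-law measurement: the ion current through the gap produces a voltage drop V = I x R across the sense resistor. The component and current values below are assumptions chosen only to show the scale:

```python
# Back-of-envelope reading for the plate sensor. Assumed values:
# a 1-megaohm sense resistor and a 1-nanoamp ion current.

def sense_voltage_mv(current_a, resistance_ohm):
    """Voltage drop across the sense resistor, in millivolts."""
    return current_a * resistance_ohm * 1e3  # volts -> millivolts

R = 1e6    # assumed sense resistance, ohms
I = 1e-9   # assumed ion current across the gap, amperes
print(f"{sense_voltage_mv(I, R):.1f} mV")   # prints 1.0 mV
```

So even a nanoamp-scale ion current yields a millivolt-scale reading across a large sense resistor, which matches the magnitudes described above.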

Last year QuakeFinder installed 25 ELF detectors with such air-conductivity sensors in California's Mojave Desert to determine whether increased air conductivity actually precedes earthquakes and contributes to the formation of the so-called earthquake lights [see photo "Mysterious Lights"]. But to date, no large earthquakes have struck near these sensors, so no data are available yet.

Ground-based sensors are not the only mechanisms for monitoring the signals given off by impending earthquakes. Above the ground, satellite-based instruments are picking up interesting patterns in low-frequency signals and detecting other oddities.

In 1989, after the devastating earthquake in Armenia, a Soviet Cosmos satellite observed ELF-frequency disturbances whenever it passed over a region slightly south of the epicenter. The activity persisted up to a month after the quake. Unfortunately, no data were gathered just prior to the initial quake. In 2003, the U.S. satellite QuakeSat detected a series of ELF bursts two months before and several weeks after a 22 December, 6.5-magnitude earthquake in San Simeon, Calif.

In June 2004, a multinational consortium led by the French government launched a new earthquake-detection satellite called DEMETER (for Detection of Electro-Magnetic Emissions Transmitted from Earthquake Regions). DEMETER, much more sensitive than earlier satellites, has already detected some unusual increases in ion density and ELF disturbances above large quakes around the world. Unfortunately, the satellite was malfunctioning in the days before October's temblor in Kashmir. Because the project is so new, researchers are still working on the tools for processing DEMETER's data. Its backers are expecting more detailed analyses to be available this month.

Infrared radiation detected by satellites may also prove to be a warning sign of earthquakes to come. Researchers in China reported several instances during the past two decades of satellite-based instruments registering an infrared signature consistent with a jump of 4 to 5 °C before some earthquakes. Sensors in NASA's Terra Earth Observing System satellite registered what NASA called a "thermal anomaly" on 21 January 2001 in Gujarat, India, just five days before a 7.7-magnitude quake there; the anomaly was gone a few days after the quake [see satellite images, "Warm Before the Storm"]. In both cases, researchers believe, these sensors may have detected an infrared luminescence generated by the recombination of electrons and holes, not a real temperature increase.

Even the existing Global Positioning System may serve as part of an earthquake warning system. Sometimes the charged particles generated under the ground in the days and weeks before an earthquake change the total electron content of the ionosphere—a region of the atmosphere above about 70 km, containing charged particles. If the ground is full of positively charged holes, it would attract electrons from the ionosphere, decreasing the airborne electron concentration over an area as much as 100 km in diameter and pulling the ionosphere closer to Earth. This change in electron content can be detected by alterations in the behavior of GPS navigation and other radio signals. Each GPS satellite transmits two signals. The relative phase difference between the two signals when they reach a receiver changes, depending on the electron content of the ionosphere, so tracking these phase changes at a stationary receiver allows researchers to monitor changes in the ionosphere.
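The mapping from differential delay to total electron content (TEC) can be sketched with the standard dual-frequency formula. The L1/L2 carrier frequencies below are the real GPS values, but the delay figure is a made-up example:

```python
# Sketch of the dual-frequency TEC estimate: the ionosphere delays
# the two GPS carriers by different amounts, and the differential
# group delay maps directly to total electron content.

F1 = 1575.42e6  # GPS L1 carrier, Hz
F2 = 1227.60e6  # GPS L2 carrier, Hz

def tec_from_delay(delta_range_m):
    """Slant TEC (electrons/m^2) from the L1/L2 differential delay."""
    return (F1**2 * F2**2) / (40.3 * (F1**2 - F2**2)) * delta_range_m

tecu = tec_from_delay(0.5) / 1e16   # 1 TECU = 1e16 electrons/m^2
print(f"0.5 m differential delay ~ {tecu:.1f} TECU")
```

Tracking how this quantity drifts at a stationary receiver is, in essence, how researchers monitor the ionospheric changes described above.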

Researchers in Taiwan monitored 144 earthquakes between 1997 and 1999, and they found that for those registering 6.0 and higher the electron content of the ionosphere changed significantly one to six days before the earthquakes.

Earthquake forecasters can also watch for changes in the ionosphere by monitoring very-low-frequency (3- to 30-kilohertz) and high-frequency (3- to 30-megahertz) radio transmissions. The strength of a radio signal at a receiver station changes with the diurnal cycle: it is greater at night than in daylight, as anyone who listens to late-night radio from far-off stations knows. The altitude of the ionosphere, which moves lower as the positive holes migrate to the surface, also has an effect on radio signals; the lower the ionosphere, the stronger the signals. So at dawn on an earthquake day, a curve drawn to represent the drop-off in radio signal strength will appear markedly different from the normal curve for that signal at that location.

The connection between large earthquakes and electromagnetic phenomena in the ground and in the ionosphere is becoming increasingly solid. Researchers in many countries, including China, France, Greece, Italy, Japan, Taiwan, and the United States, are now contributing to the data by monitoring known earthquake zones.

Using these phenomena for earthquake prediction will take a combination of satellite and ground-based sensors. Satellites can cover most of the planet, but at ELF frequencies signal sources are hard to pinpoint. Ground-based monitors have smaller detection ranges, up to 50 km, depending on the sensitivity of the magnetometer and the size of the quake, but are far more precise. With a network of such sensors, forecasters looking at the amplitude of signals received at each sensor might be able to locate a quake within 10 to 20 km. This means that, for an area as large as California, accurate earthquake detection might require that forecasters distribute 200 to 300 magnetic-field and air-conductivity sensors on the ground.
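One way such amplitude-based localization might work, as a purely hypothetical sketch: assume the precursor amplitude falls off with distance (1/r^2 here, an assumption, not a claim from the article) and grid-search for the source position that best explains the readings at each station:

```python
# Hypothetical amplitude-based localization for a sensor network:
# grid-search for the source that best matches observed amplitudes,
# assuming a 1/r^2 falloff. Layout and source are made up.
import itertools

sensors = [(0.0, 0.0), (100.0, 0.0), (0.0, 100.0)]  # station positions, km
true_src, strength = (40.0, 60.0), 1.0e4            # hidden source

def amp(src, sensor, s=strength):
    """Modeled amplitude at a sensor for a source at src."""
    r2 = (src[0] - sensor[0]) ** 2 + (src[1] - sensor[1]) ** 2
    return s / max(r2, 1e-6)  # guard against zero distance

observed = [amp(true_src, p) for p in sensors]

# Pick the 5-km grid point minimizing squared misfit to the readings.
best = min(
    itertools.product(range(0, 101, 5), repeat=2),
    key=lambda g: sum((amp(g, p) - o) ** 2 for p, o in zip(sensors, observed)),
)
print(best)   # -> (40, 60)
```

A real network would need to contend with noise, anisotropic propagation, and an unknown source strength, but the sketch shows why more stations and denser spacing translate into tighter location estimates.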

QuakeFinder and other groups are trying to get funding to integrate space- and ground-based sensors to detect all these precursor signals—electronically detected ELF and ULF magnetic-field changes, ionospheric changes, infrared luminescence, and air-conductivity changes—along with traditional mechanical and GPS monitoring of movements of the earth's crust. With such a broad range of phenomena being monitored, spikes registered by different monitors detecting different types of signals would make forecasts more reliable. Forecasters may then be able to issue graduated warnings within weeks, days, and hours, declaring increasing threat levels as the evidence from different sensors begins pointing in the same direction.

Useful as such an earthquake warning system would be, we're not ready to deploy one yet. For one thing, the scientific underpinnings of the phenomena need to be better understood before public officials and others have confidence in the data. On this front, author Freund has been investigating the theory that currents are generated by breaking oxygen-to-oxygen bonds in rocks under stress. He has experimented with various rock samples, demonstrating at the laboratory scale that cracking rock can produce positive charges, which, on a geophysical scale, could form significant ground currents and infrared emissions. Other rock-crushing experiments are under way in Japan and Russia. In Mexico, meanwhile, researchers are focusing on understanding the related changes in the ionosphere.

A working prediction system won't come cheaply, but it's nothing compared with the loss of life and the billions of dollars in damage that earthquakes can cause. The 200 to 300 ground-based sensors necessary to blanket California alone will cost $5 million to $10 million. A dedicated satellite with magnetic, infrared, and other sensors would cost $10 million to $15 million to build and launch.

Meanwhile, a few technical challenges remain to be solved. At satellite altitudes, space itself is full of noise, compromising the data gathered. The data must be digitally processed with filters and pattern-matching software, still being refined. And down on the ground, man-made noise fills the electromagnetic spectrum. Researchers are attempting to use differential processing of two distant sensors to reduce or eliminate such interference.

We expect these problems, both technical and financial, to be worked out within the next 10 years. Then governments in active earthquake areas such as California, China, Japan, Russia, and Taiwan could install warning systems as early as 2015, saving lives and minimizing the chaos of earthquakes.

About the Authors

Tom Bleier is CEO of QuakeFinder (www.quakefinder.com), in Palo Alto, Calif. He previously spent 37 years developing, building, and testing defense and commercial satellites and ground-control systems, most recently for Stellar Solutions Inc., a satellite-systems engineering company, also in Palo Alto.

Friedemann Freund is a senior researcher at NASA Ames Research Center, in Mountain View, Calif., and is also a professor in the physics department of San Jose State University, in California. His research focuses on how stress can cause electric current in rocks.

To Probe Further

QuakeFinder's earthquake forecasting research, using ground-based and satellite-based electromagnetic monitoring techniques, is described at http://www.quakefinder.com.

Electromagnetic signals created by the fracturing of rocks before earthquakes are analyzed at http://science.nasa.gov/headlines/y2003/11aug_earthquakes.htm.

France's DEMETER satellite's monitoring of earthquake signals is discussed at http://smsc.cnes.fr/DEMETER/GP_actualite.htm.

>>>Provided as a public service from www.IntegrityResearchInstitute.org which depends upon support from people like you. IRI accepts tax-deductible contributions year-long.<<<