Monday, February 27, 2012

Super Bugs From Space Offer New Source Of Power

Bacteria normally found 30 km above the Earth have been identified as highly efficient generators of electricity.

Bacillus stratosphericus - a microbe commonly found in high concentrations in the stratosphere - is a key component of a new 'super' biofilm that has been engineered by a team of scientists from Newcastle University.

Isolating 75 different species of bacteria from the Wear Estuary, County Durham, UK, the team tested the power generation of each one using a Microbial Fuel Cell (MFC).

By selecting the best species of bacteria, a kind of microbial "pick and mix", they were able to create an artificial biofilm, doubling the electrical output of the MFC from 105 watts per cubic metre to 200 watts per cubic metre.

While still relatively low, this would be enough power to run an electric light and could provide a much needed power source in parts of the world without electricity.
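For a rough sense of scale (a back-of-envelope sketch using the power densities quoted above; the cell volumes are illustrative assumptions, not figures from the study), volumetric power density simply multiplies by cell volume:

    # Back-of-envelope check of the reported MFC power densities.
    # 105 and 200 W/m^3 are the figures quoted above; the cell volumes
    # below are hypothetical, chosen only to illustrate the scale.

    def mfc_power_watts(power_density_w_per_m3, volume_litres):
        # Power output of a microbial fuel cell of the given volume.
        volume_m3 = volume_litres / 1000.0   # 1 m^3 = 1000 L
        return power_density_w_per_m3 * volume_m3

    for density in (105.0, 200.0):           # W/m^3, before and after the engineered biofilm
        for volume in (1.0, 1000.0):         # litres (hypothetical cell sizes)
            watts = mfc_power_watts(density, volume)
            print(f"{density:5.0f} W/m^3, {volume:6.0f} L -> {watts:7.2f} W")

At the improved 200 watts per cubic metre, a cubic-metre-scale cell would supply a couple of hundred watts, consistent with the claim that the output could run an electric light.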

Among the 'super' bugs was B. stratosphericus, a microbe normally found in the atmosphere but brought down to Earth as a result of atmospheric cycling processes and isolated by the team from the bed of the River Wear.

Publishing their findings today in the American Chemical Society's Journal of Environmental Science and Technology, Grant Burgess, Professor of Marine Biotechnology at Newcastle University, said the research demonstrated the "potential power of the technique."

"What we have done is deliberately manipulate the microbial mix to engineer a biofilm that is more efficient at generating electricity," he explains.

"This is the first time individual microbes have been studied and selected in this way. Finding B.Stratosphericus was quite a surprise but what it demonstrates is the potential of this technique for the future - there are billions of microbes out there with the potential to generate power."

The use of microbes to generate electricity is not a new concept; it has been used in the treatment of wastewater at sewage plants.

Microbial Fuel Cells, which work in a similar way to a battery, use bacteria to convert organic compounds directly into electricity by a process known as bio-catalytic oxidation.

A biofilm - or 'slime' - coats the carbon electrodes of the MFC and as the bacteria feed, they produce electrons which pass into the electrodes and generate electricity.

Until now, the biofilm has been allowed to grow unchecked, but this new study shows for the first time that by manipulating the biofilm you can significantly increase the electrical output of the fuel cell.

Funded by the Engineering and Physical Sciences Research Council (EPSRC), the Biotechnology and Biological Sciences Research Council (BBSRC) and the Natural Environment Research Council (NERC), the study identified a number of electricity-generating bacteria.

As well as B. stratosphericus, other electricity-generating bugs in the mix were Bacillus altitudinis - another bug from the upper atmosphere - and a new member of the phylum Bacteroidetes.

Newcastle University is recognised as a world-leader in fuel cell technology. Led by Professor Keith Scott, in the University's School of Chemical Engineering and Advanced Materials, the team played a key role in the development of a new lithium/air powered battery two years ago. Professor Scott says this latest fuel cell research can take the development of MFCs to a new level.


Citation:
Enhanced electricity production by use of reconstituted artificial consortia of estuarine bacteria grown as biofilms. Jinwei Zhang, Enren Zhang, Keith Scott and Grant Burgess. ACS journal Environmental Science & Technology, 2012. DOI: 10.1021/es2020007

Contacts and sources:
Newcastle University

Immortal Worms Defy Aging

Researchers from The University of Nottingham have demonstrated how a species of flatworm overcomes the ageing process to be potentially immortal.

The discovery, published in the Proceedings of the National Academy of Sciences, is part of a project funded by the Biotechnology and Biological Sciences Research Council (BBSRC) and Medical Research Council (MRC) and may shed light on the possibilities of alleviating ageing and age-related characteristics in human cells.

Planarian worms have amazed scientists with their apparently limitless ability to regenerate. Researchers have been studying their ability to replace aged or damaged tissues and cells in a bid to understand the mechanisms underlying their longevity.

Planarian Worm
 Credit: Wikipedia

Dr Aziz Aboobaker from the University's School of Biology, said: "We've been studying two types of planarian worms; those that reproduce sexually, like us, and those that reproduce asexually, simply dividing in two. Both appear to regenerate indefinitely by growing new muscles, skin, guts and even entire brains over and over again.

"Usually when stem cells divide — to heal wounds, or during reproduction or for growth — they start to show signs of ageing. This means that the stem cells are no longer able to divide and so become less able to replace exhausted specialised cells in the tissues of our bodies. Our ageing skin is perhaps the most visible example of this effect. Planarian worms and their stem cells are somehow able to avoid the ageing process and to keep their cells dividing."

One of the events associated with ageing cells is related to telomere length. In order to grow and function normally, cells in our bodies must keep dividing to replace cells that are worn out or damaged. During this division process, copies of the genetic material must pass on to the next generation of cells. The genetic information inside cells is arranged in twisted strands of DNA called chromosomes. At the end of these strands is a protective 'cap' called a telomere. Telomeres have been likened to the protective end of a shoelace which stops strands from fraying or sticking to other strands.

Each time a cell divides the protective telomere 'cap' gets shorter. When they get too short, the cell loses its ability to renew and divide. In an immortal animal we would therefore expect cells to be able to maintain telomere length indefinitely so that they can continue to replicate. Dr Aboobaker predicted that planarian worms actively maintain the ends of their chromosomes in adult stem cells, leading to theoretical immortality.

Dr Thomas Tan made some exciting discoveries for this paper as part of his PhD. He performed a series of challenging experiments to explain the worm's immortality. In collaboration with the rest of the team, he also went some way to understanding the clever molecular trick that enabled cells to go on dividing indefinitely without suffering from shortened chromosome ends.

Previous work, leading to the award of the 2009 Nobel Prize for Physiology or Medicine, had shown that telomeres could be maintained by the activity of an enzyme called telomerase. In most sexually reproducing organisms the enzyme is most active only during early development. So as we age, telomeres start to reduce in length.

This project identified a possible planarian version of the gene coding for this enzyme and turned down its activity. This resulted in reduced telomere length and proved it was the right gene. They were then able to confidently measure its activity and resulting telomere length and found that asexual worms dramatically increase the activity of this gene when they regenerate, allowing stem cells to maintain their telomeres as they divide to replace missing tissues.
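The logic described above can be captured in a toy calculation (illustrative only: the lengths, loss per division and critical threshold below are invented numbers, not measurements from the study): without telomerase the telomere erodes with every division until the cell can no longer divide, while with telomerase active the ends are restored and division continues indefinitely.

    # Toy model of telomere erosion during cell division (invented numbers).
    def divisions_until_senescence(start_length=10_000, loss_per_division=100,
                                   critical_length=4_000, telomerase_active=False,
                                   max_divisions=1_000):
        # Count divisions before the telomere 'cap' becomes critically short.
        length = start_length
        for division in range(1, max_divisions + 1):
            length -= loss_per_division      # each division shortens the telomere
            if telomerase_active:
                length = start_length        # telomerase restores the chromosome ends
            if length <= critical_length:
                return division              # the cell loses its ability to divide
        return None                          # still dividing after max_divisions

    print(divisions_until_senescence(telomerase_active=False))  # a finite number of divisions
    print(divisions_until_senescence(telomerase_active=True))   # None: no limit reached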

Dr Tan pointed out the importance of the interdisciplinary expertise: "It was serendipitous to be sandwiched between Professor Edward Louis's yeast genetics lab and the Children's Brain Tumour Research Centre, both University of Nottingham research centres with expertise in telomere biology. Aziz and Ed kept demanding clearer proof and I feel we have been able to give a very satisfying answer."

However, what puzzled the team is that sexually reproducing planarian worms do not appear to maintain telomere length in the same way. The difference they observed between asexual and sexual animals was surprising, given that they both appear to have an indefinite regenerative capacity. The team believe that sexually reproductive worms will eventually show effects of telomere shortening, or that they are able to use another mechanism to maintain telomeres that would not involve the telomerase enzyme.

Dr Aboobaker concluded: "Asexual planarian worms demonstrate the potential to maintain telomere length during regeneration. Our data satisfy one of the predictions about what it would take for an animal to be potentially immortal and that it is possible for this scenario to evolve. The next goals for us are to understand the mechanisms in more detail and to understand more about how you evolve an immortal animal."

Professor Douglas Kell, BBSRC Chief Executive, said: "This exciting research contributes significantly to our fundamental understanding of some of the processes involved in ageing, and builds strong foundations for improving health and potentially longevity in other organisms, including humans."

Contacts and sources:
Emma Thorne
University of Nottingham 

Ice Age Coyotes Were Supersized Compared To Coyotes Today, Fossil Study Reveals

Coyotes probably shrank due to dwindling food supply, rather than warming climate, researchers say


This skeleton is a composite from the University of California Museum of Paleontology.
 
Credit: Photo by F. Robin O'Keefe.

Coyotes today are pint-sized compared to their Ice Age counterparts, finds a new fossil study. Between 11,500 and 10,000 years ago — a mere blink of an eye in geologic terms — coyotes shrank to their present size. The sudden shrinkage was most likely a response to dwindling food supply and changing interactions with competitors, rather than warming climate, researchers say.

In a paper appearing this week in Proceedings of the National Academy of Sciences, researchers studied museum collections of coyote skeletons dating from 38,000 years ago to the present day. It turns out that between 11,500 and 10,000 years ago, at the end of a period called the Pleistocene, coyotes in North America suddenly got smaller.

"Pleistocene coyotes probably weighed between 15-25 kilograms, and overlapped in size with wolves. But today the upper limit of a coyote is only around 10-18 kilograms," said co-author Julie Meachen of the National Evolutionary Synthesis Center in Durham, North Carolina.

"Within just over a thousand years, they evolved into the smaller coyotes that we have today," she added.

What caused coyotes to shrink? Several factors could explain the shift. One possibility is warming climate, the researchers say. Between 15,000 and 10,000 years ago, global average annual temperatures quickly rose by an average of six degrees. "Things got a lot warmer, real fast," Meachen said.

Large animals are predicted to fare worse than small animals when temperatures warm up. To find out if climate played a role in coyotes' sudden shrinkage, Meachen and co-author Joshua Samuels of John Day Fossil Beds National Monument in Oregon measured the relationship between body size and temperature for dozens of Ice Age coyotes, and for coyotes living today, using thigh bone circumference to estimate body size for each individual.

But when they plotted body size against coldest average annual temperature for each animal's location, they found no relationship, suggesting that climate change was unlikely to be the main factor.
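The test described above is essentially a regression of estimated body mass on local temperature, asking whether the slope differs from zero. A minimal sketch of that kind of analysis (the data points below are invented for illustration; the study estimated mass from femur circumference of museum specimens):

    # Sketch of a body-size-versus-temperature regression (invented data).
    import numpy as np
    from scipy import stats

    temperature_c = np.array([-15, -10, -5, 0, 5, 10, 15, 20])                  # coldest average temps, deg C
    body_mass_kg  = np.array([18.2, 19.5, 17.8, 18.9, 19.1, 18.4, 19.0, 18.6])  # estimated body masses

    fit = stats.linregress(temperature_c, body_mass_kg)
    print(f"slope = {fit.slope:.3f} kg per deg C, p = {fit.pvalue:.2f}, r^2 = {fit.rvalue**2:.2f}")
    # A slope statistically indistinguishable from zero (large p-value) is the
    # "no relationship" result that argued against climate as the main driver.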

If the climate hypothesis is true, then we should see similar changes in other Ice Age carnivores too, Meachen added. The researchers also studied body size over time in the coyote's larger relative, the wolf, but they found that wolf body sizes didn't budge. "We're skeptical that climate change at the end of the Pleistocene was the direct cause of the size shift in coyotes," Meachen said.

A modern coyote skull compared with a Pleistocene coyote skull.
Credit: Original artwork by Doyle V. Trankina

Another possibility is that humans played a role. In this view, coyotes may have shrunk over time because early human hunters — believed to have arrived in North America around 13,000 years ago — selectively wiped out the bigger coyotes, or the animals coyotes depended on for food, leaving only the smaller ones to survive. Stone tool butchery marks on Ice Age animal bones would provide a clue that human hunters had something to do with it, but the fossil record has turned up too few examples to test the idea. "Human hunting as the culprit is really hard to dispute or confirm because there's so little data," Meachen said.

A third, far more likely explanation, is dwindling food supply and changing interactions with competitors, the researchers say. Just 1000 years before the sudden shrinkage in coyotes, dozens of other species were wiped out in a wave of extinctions that killed off many large mammals in North America. Until then, coyotes lived alongside a great diversity of large prey, including horses, sloths, camels, llamas and bison. "There were not only a greater diversity of prey species, but the species were also more abundant. It was a great food source," Meachen said.

While coyotes survived the extinctions, there were fewer large prey left for them to eat. Smaller individuals that required less food to survive, or could switch to smaller prey, would have had an advantage.

Before the die-off, coyotes also faced stiff competition for food from other large carnivores, including the dire wolf, a bigger relative of the wolves living today. After bigger carnivores such as dire wolves went extinct, coyotes would no longer have needed their large size to compete with these animals for food.

The findings are important because they show that extinction doesn't just affect the animals that disappear, the researchers say — it has long-term effects on the species that remain as well.

"In a time of increasing loss of biodiversity, understanding the degree to which species interactions drive evolutionary change is important," says Saran Twombly, program director in the National Science Foundation (NSF)'s Division of Environmental Biology, which supported the research.

"Species interactions are delicate balancing acts. When species go extinct, we see the signature of the effects on the species that remain," Meachen said.





CITATION: Meachen, J. and J. Samuels (2012). "Evolution in coyotes (Canis latrans) in response to the megafaunal extinctions." Proceedings of the National Academy of Sciences.

The National Evolutionary Synthesis Center (NESCent) is a nonprofit science center dedicated to cross-disciplinary research in evolution. Funded by the National Science Foundation, NESCent is jointly operated by Duke University, The University of North Carolina at Chapel Hill, and North Carolina State University. For more information about research and training opportunities at NESCent, visit www.nescent.org.







Contacts and sources:
Robin Ann Smith 
National Evolutionary Synthesis Center (NESCent)

Ultra-Fast Outflows Help Monster Black Holes Shape Their Galaxies

A curious correlation between the mass of a galaxy's central black hole and the velocity of stars in a vast, roughly spherical structure known as its bulge has puzzled astronomers for years. An international team led by Francesco Tombesi at NASA's Goddard Space Flight Center in Greenbelt, Md., now has identified a new type of black-hole-driven outflow that appears to be both powerful enough and common enough to explain this link.

Most big galaxies contain a central black hole weighing millions of times the sun's mass, but galaxies hosting more massive black holes also possess bulges that contain, on average, faster-moving stars. This link suggested some sort of feedback mechanism between a galaxy's black hole and its star-formation processes. Yet there was no adequate explanation for how a monster black hole's activity, which strongly affects a region several times larger than our solar system, could influence a galaxy's bulge, which encompasses regions roughly a million times larger.

"This was a real conundrum. Everything was pointing to supermassive black holes as somehow driving this connection, but only now are we beginning to understand how they do it," Tombesi said.

The supermassive black holes in active galaxies can produce narrow particle jets (orange) and wider streams of gas (blue-gray) known as ultra-fast outflows, which are powerful enough to regulate both star formation in the wider galaxy and the growth of the black hole. Inset: A close-up of the black hole and its accretion disk.

 (Artist concept credit: ESA/AOES Medialab)

Active black holes acquire their power by gradually accreting -- or "feeding" on -- million-degree gas stored in a vast surrounding disk. This hot disk lies within a corona of energetic particles, and while both are strong X-ray sources, this emission cannot account for galaxy-wide properties. Near the inner edge of the disk, a fraction of the matter orbiting a black hole often is redirected into an outward particle jet. Although these jets can hurl matter at half the speed of light, computer simulations show that they remain narrow and deposit most of their energy far beyond the galaxy's star-forming regions.

Astronomers suspected they were missing something. Over the last decade, evidence for a new type of black-hole-driven outflow has emerged. At the centers of some active galaxies, X-ray observations at wavelengths corresponding to those of fluorescent iron show that this radiation is being absorbed. This means that clouds of cooler gas must lie in front of the X-ray source. What's more, these absorbed spectral lines are displaced from their normal positions to shorter wavelengths -- that is, blueshifted, which indicates that the clouds are moving toward us.

In two previously published studies, Tombesi and his colleagues showed that these clouds represented a distinct type of outflow. In the latest study, which appears in the Feb. 27 issue of Monthly Notices of the Royal Astronomical Society, the researchers targeted 42 nearby active galaxies using the European Space Agency's XMM-Newton satellite to home in on the location and properties of these so-called "ultra-fast outflows" -- or UFOs, for short. The galaxies, which were selected from the All-Sky Slew Survey Catalog produced by NASA's Rossi X-ray Timing Explorer satellite, were all located less than 1.3 billion light-years away.

The outflows turned up in 40 percent of the sample, which suggests that they're common features of black-hole-powered galaxies. On average, the distance between the clouds and the central black hole is less than one-tenth of a light-year. Their average velocity is about 14 percent of the speed of light, or about 94 million mph, and the team estimates that the amount of matter required to sustain the outflow is close to one solar mass per year -- comparable to the accretion rate of these black holes.
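A quick unit check of the quoted speed (simple arithmetic, not additional data from the paper):

    # Convert 14 percent of the speed of light to miles per hour.
    SPEED_OF_LIGHT_KM_S = 299_792.458        # km/s
    KM_PER_MILE = 1.609344

    v_km_s = 0.14 * SPEED_OF_LIGHT_KM_S      # about 42,000 km/s
    v_mph = v_km_s * 3600 / KM_PER_MILE      # km/s -> km/h -> mph
    print(f"{v_mph / 1e6:.0f} million mph")  # about 94 million mph, matching the figure above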

"Although slower than particle jets, UFOs possess much faster speeds than other types of galactic outflows, which makes them much more powerful," Tombesi explained.

"They have the potential to play a major role in transmitting feedback effects from a black hole into the galaxy at large."

By removing mass that would otherwise fall into a supermassive black hole, ultra-fast outflows may put the brakes on its growth. At the same time, UFOs may strip gas from star-forming regions in the galaxy's bulge, slowing or even shutting down star formation there by sweeping away the gas clouds that represent the raw material for new stars. Such a scenario would naturally explain the observed connection between an active galaxy's black hole and its bulge stars.

Tombesi and his team anticipate significant improvement in understanding the role of ultra-fast outflows with the launch of the Japan-led Astro-H X-ray telescope, currently scheduled for 2014. In the meantime, he intends to focus on determining the detailed physical mechanisms that give rise to UFOs, an important element in understanding the bigger picture of how active galaxies form, develop and grow.


Contacts and sources:
Francis Reddy
NASA/Goddard Space Flight Center

Sunday, February 26, 2012

Squeezing What Hasn’t Been Squeezed Before: Scientists Score Another Victory Over Uncertainty In Quantum Physics Measurements



Most people attempt to reduce the little uncertainties of life by carrying umbrellas on cloudy days, purchasing automobile insurance or hiring inspectors to evaluate homes they might consider purchasing. For scientists, reducing uncertainty is a no less important goal, though in the weird realm of quantum physics, the term has a more specific meaning.

For scientists working in quantum physics, the Heisenberg Uncertainty Principle says that measurements of properties such as the momentum of an object and its exact position cannot be simultaneously specified with arbitrary accuracy. As a result, there must be some uncertainty in either the exact position of the object, or its exact momentum. The amount of uncertainty can be determined, and is often represented graphically by a circle showing the area within which the measurement actually lies.

Michael Chapman, a professor in the School of Physics at Georgia Tech, poses with optical equipment in his laboratory. Chapman’s research team is exploring squeezed states using atoms of Bose-Einstein condensates.  
Credit: Gary Meek

Over the past few decades, scientists have learned to cheat a bit on the Uncertainty Principle through a process called "squeezing," which has the effect of changing how the uncertainty is shown graphically. Changing the circle to an ellipse and ultimately to almost a line allows one component of the complementary measurements – the momentum or the position, in the case of an object – to be specified more precisely than would otherwise be possible. The actual area of uncertainty remains unchanged, but is represented by a different shape that serves to improve accuracy in measuring one property.
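In symbols (the standard textbook statement, not anything specific to this experiment), the position-momentum uncertainty relation and the effect of squeezing can be written as

    \Delta x \,\Delta p \;\ge\; \frac{\hbar}{2}, \qquad
    \Delta x \rightarrow \Delta x\, e^{-r}, \quad \Delta p \rightarrow \Delta p\, e^{+r},

where r is the squeezing parameter: one uncertainty shrinks while its partner grows, and the product (the 'area' of the uncertainty region) stays the same.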

This squeezing has been done in measuring properties of photons and atoms, and can be important to certain high-precision measurements needed by atomic clocks and the magnetometers used to create magnetic resonance imaging views of structures deep inside the body. For the military, squeezing more accuracy could improve the detection of enemy submarines attempting to hide underwater or improve the accuracy of atom-based inertial guidance instruments.

Now physicists at the Georgia Institute of Technology have added another measurement to the list of those that can be squeezed. In a paper appearing online February 26 in the journal Nature Physics, they report squeezing a property called the nematic tensor, which is used to describe the rubidium atoms in Bose-Einstein condensates, a unique form of matter in which all atoms have the same quantum state. The research was sponsored by the National Science Foundation (NSF).

"What is new about our work is that we have probably achieved the highest level of atom squeezing reported so far, and the more squeezing you get, the better," said Michael Chapman, a professor in Georgia Tech's School of Physics. "We are also squeezing something other than what people have squeezed before."

Scientists have been squeezing the spin states of atoms for 15 years, but only for atoms that have just two relevant quantum states – known as spin ½ systems. In collections of those atoms, the spin states of the individual atoms can be added together to get a collective angular momentum that describes the entire system of atoms.

In the Bose-Einstein condensate atoms being studied by Chapman's group, the atoms have three quantum states, and their collective spin totals zero – not very helpful for describing the system. So Chapman and graduate students Chris Hamley, Corey Gerving, Thai Hoang and Eva Bookjans learned to squeeze a more complex measure that describes their system of spin-1 atoms: the nematic tensor, also known as the quadrupole.

Nematicity is a measure of alignment that is important in describing liquid crystals, exotic magnetic materials and some high temperature superconductors.

"We don't have a spin vector pointing in a particular direction, but there is still some residual information in where this collection of atoms is pointing," Chapman explained. "That next higher-order description is the quadrupole, or nematic tensor. Squeezing this actually works quite well, and we get a large degree of improvement, so we think it is relatively promising."

Experimentally, the squeezing is created by entangling some of the atoms, which takes away their independence. Chapman's group accomplishes this by colliding atoms in their ensemble of some 40,000 rubidium atoms.

"After they collide, the state of one atom is connected to that of the other atom, so they have been entangled in that way," he said. "This entanglement creates the squeezing."

Reducing uncertainty in measuring atoms could have important implications for precise magnetic measurements. The next step will be to determine experimentally if the technique can improve the measurement of magnetic field, which could have important applications.

"In principle, this should be a straightforward experiment, but it turns out that the biggest challenge is that magnetic fields in the laboratory fluctuate due to environmental factors such as the effects of devices such as computer monitors," Chapman said. "If we had a noiseless laboratory, we could measure the magnetic field both with and without squeezed states to demonstrate the enhanced precision. But in our current lab environment, our measurements would be affected by outside noise, not the limitations of the atomic sensors we are using."

The new squeezed property could also have application to quantum information systems, which can store information in the spin of atoms and their nematic tensor.

"There are a lot of things you can do with quantum entanglement, and improving the accuracy of measurements is one of them," Chapman added. "We still have to obey Heisenberg's Uncertainty Principle, but we do have the ability to manipulate it."

Contacts and sources:
John Toon
Georgia Institute of Technology Research News

How To Rescue The Immune System

In a study published in Nature Medicine, Loyola researchers report on a promising new technique that potentially could turn immune system killer T cells into more effective weapons against infections and possibly cancer.

The technique involves delivering DNA into the immune system's instructor cells. The DNA directs these cells to overproduce a specific protein that jumpstarts important killer T cells. These killer cells are typically repressed in patients who have HIV or cancer, said José A. Guevara-Patino, MD, PhD, senior author of the study. Guevara is an Associate Professor in the Oncology Institute of Loyola University Chicago Stritch School of Medicine.

Guevara and colleagues reported their technique proved effective in jumpstarting defective immune systems in immuno-compromised mice and in human killer T cells taken from people with HIV.

Guevara said a clinical trial in cancer patients could begin in about three years.

The study involved killer cells, known as CD8 T cells, and their instructor cells, known as antigen-presenting cells. The instructor cells instruct CD8 T cells to become killer T cells to kill infected cells or cancer cells -- and to remain vigilant if they reencounter pathogens or if the cancer comes back.

In addition to getting instructions from the antigen-presenting cells, CD8 T cells need assistance from helper T cells to become effective killers. Without this assistance, the killer T cells can't do their job.

In patients who have HIV, the virus destroys helper T cells. In cancer patients, helper T cells also are affected. Among a tumor's insidious properties is its ability to prevent killer T cells from attacking tumors. It does this by putting helper T cells into a suppressed state, limiting their ability to assist CD8 T cells, said Andrew Zloza, MD, PhD, one of the lead authors of the study.

In the study, snippets of DNA were delivered into skin instructor cells by a device known as a gene gun. The DNA directed the instructor cells to produce specific proteins, which act like molecular keys. When CD8 T cells interact with the instructor cells, the keys unlock the CD8 T cells' killer properties -- jumpstarting them to go out and kill pathogens and cancer cells.

With the use of this technique, the killer T cells would not need the assistance of helper T cells. So even if a tumor were to put the helper T cells in a suppressive cage, the killer T cells would still be able to go out and kill cancer cells. Researchers expect that future studies using the technique will make it applicable to many diseases, including cancer. 

The study received major funding from the national office of the American Cancer Society, the Illinois chapter of the American Cancer Society and the National Institutes of Health.

Other authors are Frederick Kohlhapp (co-first author), Gretchen Lyons (co-first author), Jason Schenkel, Tamson Moore, Andrew Lacek, Jeremy O'Sullivan, Vineeth Varanasi, Jesse Williams, Michael Jagoda, Emily Bellavance, Amanda Marzo, Paul Thomas, Biljana Zafirova, Bojan Polic, Lena Al-Harthi and Anne Sperling.

Contacts and sources:
Jim Ritter 
Loyola University Health System

Volcanoes Deliver 2 Flavors Of Water

Seawater circulation pumps hydrogen and boron into the oceanic plates that make up the seafloor, and some of this seawater remains trapped as the plates descend into the mantle at areas called subduction zones. By analyzing samples of submarine volcanic glass near one of these areas, scientists found unexpected changes in isotopes of hydrogen and boron from the deep mantle. They expected to see the isotope "fingerprint" of seawater. But in volcanoes from the Manus Basin they also discovered evidence of seawater distilled long ago from a more ancient plate descent event, preserved for as long as 1 billion years. The data indicate that these ancient oceanic "slabs" can return to the upper mantle in some areas, and that rates of hydrogen exchange in the deep Earth may not conform to experiments. The research is published in the February 26, 2012, advance online publication of Nature Geoscience.

Undersea Volcanoes
Credit: Wikipedia

As Carnegie coauthor Erik Hauri explained, "Hydrogen and boron have both light and heavy isotopes. Isotopes are atoms of the same element with different numbers of neutrons. The volcanoes in the Manus Basin are delivering a mixture of heavy and light isotopes that have been observed nowhere else. The mantle under the Manus Basin appears to contain a highly distilled ancient water that is mixing with modern seawater."

When seawater-soaked oceanic plates descend into the mantle, heavy isotopes of hydrogen and boron are preferentially distilled away from the slab, leaving behind the light isotopes, but also leaving it dry and depleted of these elements, making the "isotope fingerprint" of the distillation process difficult to identify. But this process appears to have been preserved in at least one area: submarine volcanoes in the Manus Basin of Papua New Guinea, which erupted under more than a mile of seawater (2,000 meters). Those pressures trap water from the deep mantle within the volcanic glass.

Lead author Alison Shaw and coauthor Mark Behn, both former Carnegie postdoctoral researchers, recognized another unique feature of the data. Lab experiments have shown very high diffusion rates for hydrogen isotopes, which move through the mantle as tiny protons. This diffusion should have long-ago erased the hydrogen isotope differences observed in the Manus Basin volcanoes.

"That is what we typically see at mid-ocean ridges," remarked Hauri. "But that is not what we found at Manus Basin. Instead we found a huge range in isotope abundances that indicates hydrogen diffusion in the deep Earth may not be analogous to what is observed in the lab."

The team's* findings mean that surface water can be carried into the deep Earth by oceanic plates and preserved for as long as 1 billion years. They also indicate that hydrogen diffusion rates in the deep Earth appear to be much slower than experiments show. The findings further suggest that these ancient slabs may not only return to the upper mantle in areas like the Manus Basin, but may also come back up in hotspot volcanoes like Hawaii that are produced by mantle plumes.

The results are important to understanding how water is transferred and preserved in the mantle and how it and other chemicals are recycled to the surface.





*Other researchers on the team include lead author A.M. Shaw and M.D. Behn of Woods Hole Oceanographic Institution, D.R. Hilton of Scripps Institution of Oceanography and UC San Diego, C.G. Macpherson of Durham University, and J.M. Sinton of the University of Hawaii.

The Carnegie Institution for Science (carnegieScience.edu) has been a pioneering force in basic scientific research since 1902. It is a private, nonprofit organization with six research departments throughout the U.S. Carnegie scientists are leaders in plant biology, developmental biology, astronomy, materials science, global ecology, and Earth and planetary science.

Contacts and sources:
Erik Hauri
Carnegie Institution

Saturday, February 25, 2012

European Neanderthals Were On The Verge Of Extinction Even Before The Arrival Of Modern Humans

New findings from an international team of researchers show that most neanderthals in Europe died off around 50,000 years ago.

The previously held view of a Europe populated by a stable neanderthal population for hundreds of thousands of years up until modern humans arrived must therefore be revised. This new perspective on the neanderthals comes from a study of ancient DNA published today in Molecular Biology and Evolution. The results indicate that most neanderthals in Europe died off as early as 50,000 years ago. After that, a small group of neanderthals recolonised central and western Europe, where they survived for another 10,000 years before modern humans entered the picture.

Mounted Neanderthal Skeleton
Credit: Wikipedia/American Museum of Natural History


The study is the result of an international project led by Swedish and Spanish researchers in Uppsala, Stockholm and Madrid. “The fact that neanderthals in Europe were nearly extinct, but then recovered, and that all this took place long before they came into contact with modern humans came as a complete surprise to us.

“This indicates that the neanderthals may have been more sensitive to the dramatic climate changes that took place in the last Ice Age than was previously thought”, says Love Dalén, associate professor at the Swedish Museum of Natural History in Stockholm. In connection with work on DNA from neanderthal fossils in northern Spain, the researchers noted that the genetic variation among European neanderthals was extremely limited during the last ten thousand years before the neanderthals disappeared. Older European neanderthal fossils, as well as fossils from Asia, had much greater genetic variation, on par with the amount of variation that might be expected from a species that had been abundant in an area for a long period of time.

“The amount of genetic variation in geologically older neanderthals as well as in Asian neanderthals was just as great as in modern humans as a species, whereas the variation among later European neanderthals was not even as high as that of modern humans in Iceland”, says Anders Götherström, associate professor at Uppsala University. The results presented in the study are based entirely on severely degraded DNA, and the analyses have therefore required both advanced laboratory and computational methods.

The research team has involved experts from a number of countries, including statisticians, experts on modern DNA sequencing and paleoanthropologists from Denmark, Spain and the US. Only when all members of the international research team had reviewed the findings could they feel certain that the available genetic data actually reveals an important and previously unknown part of neanderthal history.

“This type of interdisciplinary study is extremely valuable in advancing research about our evolutionary history. DNA from prehistoric people has led to a number of unexpected findings in recent years, and it will be really exciting to see what further discoveries are made in the coming years”, says Juan Luis Arsuaga, professor of human paleontology at the Universidad Complutense of Madrid.
http://mbe.oxfordjournals.org/content/early/2012/02/23/molbev.mss074.short?rss=1


Contacts and sources:  Uppsala Universitet

Citation: Partial genetic turnover in neandertals: continuity in the east and population replacement in the west
Mol Biol Evol (2012) doi: 10.1093/molbev/mss074
First published online: February 23, 2012


Friday, February 24, 2012

How Heavy And Light Isotopes Separate In Magma

Mass wins the race toward cool -- and leaves a clue to igneous rock formation

In the crash-car derby between heavy and light isotopes vying for the coolest spots as magma turns to solid rock, weightier isotopes have an edge, research led by Case Western Reserve University shows.

This tiny detail may offer clues to how igneous rocks form.

As molten rock cools along a gradient, atoms want to move towards the cool end. This happens because hotter atoms move faster than cooler atoms and, therefore, hotter atoms move to the cool region faster than the cooler atoms move to the hot region.

Although all isotopes of the same element want to move towards the cool end, the big boys have more mass and, therefore, momentum, enabling them to keep moving on when they collide along the way.

"It's as if you have a crowded, sealed room of sumo wrestlers and geologists and a fire breaks out at one side of the room," said Daniel Lacks, chemical engineering professor and lead author of the paper. "All will try to move to the cooler side of the room, but the sumo wrestlers are able to push their way through and take up space on the cool side, leaving the geologists on the hot side of the room."

Lacks worked with former postdoctoral researcher Gaurav Goel and geology professor James A. Van Orman at Case Western Reserve; Charles J. Bopp IV and Craig C. Lundstrum of the University of Illinois, Urbana; and Charles E. Lesher of the University of California at Davis. They described their theory and confirming mathematics, computer modeling, and experiments in the current issue of Physical Review Letters.

Lacks, Van Orman and Lesher also published a short piece in the current issue of Nature, showing how their findings overturn an explanation based on quantum mechanics, published in that journal last year.

"The theoretical understanding of thermal isotope separation in gases was developed almost exactly 100 years ago by David Enskog, but there is as yet not a similar full understanding of this process in liquids," said Frank Richter, who is the Sewell Avery Distinguished Professor at the University of Chicago and a member of the National Academy of Sciences. He was not involved in the research. "This work by Lacks et al. is an important step towards remedying this situation."

This separation among isotopes of the same element is called fractionation.

Scientists have been able to see fractionation of heavy elements in igneous rocks only since the 1990s, Van Orman said. More sensitive mass spectrometers showed that instead of a homogeneous distribution, the concentration ratio of heavy isotopes to light isotopes in some igneous rocks was up to 0.1 percent higher than in other rocks.
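Geochemists normally report shifts of this size in "delta" notation (a standard convention, not something introduced by this study); a 0.1 percent excess of the heavy isotope corresponds to about 1 per mil:

    \delta \;=\; \left( \frac{R_{\mathrm{sample}}}{R_{\mathrm{standard}}} - 1 \right) \times 1000 \ \text{(per mil)}, \qquad
    R = \frac{\text{heavy isotope}}{\text{light isotope}}.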

One way of producing this fractionation is by temperature.

To understand how this happens, the team of researchers created a series of samples made of molten magnesium silicate infused with elements of different mass, from oxygen on up to heavy uranium.

The samples, called silicate melts, were heated at one end in a standard lab furnace, creating temperature gradients in each. The melts were then allowed to cool and solidify.

The scientists then sliced the samples along gradient lines and dissolved the slices in acid. Analysis showed that no matter the element, the heavier isotopes slightly outnumbered the lighter at the cool end of the gradient.

Computer simulations of the atoms, using classical mechanics, agreed with the experimental results.

"The process depends on temperature differences and can be seen whether the temperature change across the sample is rapid or gradual," Lacks said.

Thermal diffusion through gases was one of the first methods used to separate isotopes, during the Manhattan Project. It turns out that isotope fractionation through silicate liquids is even more efficient than through gases.

"Fractionation can occur inside the Earth wherever a sustained temperature gradient exists," Van Orman said. "One place this might happen is at the margin of a magma chamber, where hot magma rests against cold rock. Another is nearly 1,800 miles inside the Earth, at the boundary of the liquid core and the silicate mantle."

The researchers are now adding pressure to the variables as they investigate further. This work was done at atmospheric pressure but where the Earth's core and mantle meet, the pressure is nearly 1.4 million atmospheres.

Lacks and Van Orman are unsure whether high pressure will result in greater or lesser fractionation. They can see arguments in favor of either.


Contacts and sources: 

Light-Emitting Nanocrystal Diodes Go Ultraviolet

A multinational team of scientists has developed a process for creating glass-based, inorganic light-emitting diodes (LEDs) that produce light in the ultraviolet range. The work, reported this week in the online Nature Communications, is a step toward biomedical devices with active components made from nanostructured systems.

LEDs based on solution-processed inorganic nanocrystals have promise for use in environmental and biomedical diagnostics, because they are cheap to produce, robust, and chemically stable. But development has been hampered by the difficulty of achieving ultraviolet emission. In their paper, Los Alamos National Laboratory's Sergio Brovelli, in collaboration with the research team led by Alberto Paleari at the University of Milano-Bicocca in Italy, describes a fabrication process that overcomes this problem and opens the way for integration in a variety of applications.

Embedding nanocrystals in glass provides a way to create UV-producing LEDs for biomedical applications.
 
Credit: Los Alamos National Laboratory

The world needs light-emitting devices that can be applied in biomedical diagnostics and medicine, Brovelli said, either as active lab-on-chip diagnostic platforms or as light sources that can be implanted into the body to trigger some photochemical reactions. Such devices could, for example, selectively activate light-sensitive drugs for better medical treatment or probe for the presence of fluorescent markers in medical diagnostics. These materials would need to be fabricated cheaply, on a large scale, and integrated into existing technology.

The paper describes a new glass-based material, able to emit light in the ultraviolet spectrum, and be integrated onto silicon chips that are the principal components of current electronic technologies.

The new devices are inorganic and combine the chemical inertness and mechanical stability of glass with the property of electric conductivity and electroluminescence (i.e. the ability of a material to emit light in response to the passage of an electric current).

As a result, they can be used in harsh environments, such as for immersion into physiologic solutions, or by implantation directly into the body. This was made possible by designing a new synthesis strategy that allows fabrication of all inorganic LEDs via a wet-chemistry approach, i.e. a series of simple chemical reactions in a beaker. Importantly, this approach is scalable to industrial quantities with a very low start-up cost. Finally, they emit in the ultraviolet region thanks to careful design of the nanocrystals embedded in the glass.

In traditional light-emitting diodes, light emission occurs at the sharp interface between two semiconductors. The oxide-in-oxide design used here is different, as it allows production of a material that behaves as an ensemble of semiconductor junctions distributed in the glass.

This new concept is based on a collection of the most advanced strategies in nanocrystal science, combining the advantages of nanometric materials consisting of more than one component. In this case the active part of the device consists of tin dioxide nanocrystals covered with a shell of tin monoxide embedded in standard glass: by tuning the shell thickness it is possible to control the electrical response of the whole material.


Contacts and sources:
The paper was produced with the financial support of Cariplo Foundation, Italy, under Project 20060656, the Russian Federation under grant 11.G34.31.0027, the Silvio Tronchetti Provera Foundation, and Los Alamos National Laboratory's Directed Research and Development Program.

The paper is titled, "Fully inorganic oxide-in-oxide ultraviolet nanocrystal light emitting devices," and can be downloaded from the following online Nature Communications link: http://dx.doi.org/10.1038/ncomms1683

Its authors are Sergio Brovelli (1, 2), Norberto Chiodini (1), Roberto Lorenzi (1), Alessandro Lauria (1), Marco Romagnoli (3, 4) and Alberto Paleari (1).
1. Department of Materials Science, University of Milano-Bicocca, Italy.
2. Chemistry Division, Los Alamos National Laboratory, Los Alamos, New Mexico.
3. Material Processing Center, Massachusetts Institute of Technology, Cambridge, Massachusetts.
4. On leave from Photonic Corp, Culver City, California.

Rethinking The Social Structure Of Ancient Eurasian Nomads: Current Anthropology Research

Prehistoric Eurasian nomads are commonly perceived as horse riding bandits who utilized their mobility and military skill to antagonize ancient civilizations such as the Chinese, Persians, and Greeks. Although some historical accounts may support this view, a new article by Dr. Michael Frachetti (Washington University, St. Louis) illustrates a considerably different image of prehistoric pastoralist societies and their impact on world civilizations more than 5000 years ago.

In the article, recently published in the February issue of Current Anthropology, Frachetti argues that early pastoral nomads grew distinct economies across the steppes and mountains of Eurasia and triggered the formation of some of the earliest and most extensive networks of interaction in prehistory. The model for this unique form of interaction, which Frachetti calls "nonuniform" institutional complexity, describes how discrete institutions among small-scale societies significantly impact the evolution of wider-scale political economies and shape the growth of great empires or states.

Around 3500 BC, regionally distinct herding economies were found across the Eurasian steppes. In some regions, these societies were the first to domesticate and ride horses. Over the next 2000 years, key innovations introduced by steppe nomads such as chariots, domesticated horses, and advanced bronze metallurgy spread across the mountains and deserts of Inner Asia and influenced the political and economic character of ancient civilizations from China to Mesopotamia, Iran, and the Indus Valley.

Although the mobile societies that fueled these networks came to share certain ideological and economic institutions, in many cases their political organization remained autonomous and idiosyncratic. Still, these regional economic and social ties forged between neighboring mobile communities helped new ideologies and institutions propagate over vast territories, millennia before the fabled "Silk Road."


Contacts and sources:
Kevin Stacey
University of Chicago Press Journals
 
Michael D. Frachetti, "Multiregional Emergence of Mobile Pastoralism and Nonuniform Institutional Complexity across Eurasia." Current Anthropology 53:1 (February 2012).

Current Anthropology is a transnational journal devoted to research on humankind, encompassing the full range of anthropological scholarship on human cultures and on the human and other primate species. Communicating across the subfields, the journal features papers in a wide variety of areas, including social, cultural, and physical anthropology as well as ethnology and ethnohistory, archaeology and prehistory, folklore, and linguistics. It is published through a partnership between the Wenner-Gren Foundation for Anthropological Research and the University of Chicago Press.

Study Shows Significant State-By-State Differences In Black, White Life Expectancy

A UCLA-led group of researchers tracing disparities in life expectancy between blacks and whites in the U.S. has found that white males live about seven years longer on average than African American men and that white women live more than five years longer than their black counterparts.

But when comparing life expectancy on a state-by-state basis, the researchers made a surprising discovery: In those states in which the disparities were smallest, the differences often were not the result of African Americans living longer but of whites dying younger than the national average. And, interestingly, the area with the largest disparities wasn't a state at all but the nation's capital, Washington, D.C.

The findings are published in the February issue of the peer-reviewed journal Health Services Research.

"In health-disparities research, there is an assumption that large disparities are bad because vulnerable populations are not doing as well as they should, while areas with small disparities are doing a better job at health equity," said Dr. Nazleen Bharmal, the study's lead researcher and a clinical instructor in the division of general internal medicine and health services research at the David Geffen School of Medicine at UCLA. "In our study, we show that the reason there are small disparities in life expectancy is because white populations are doing as poorly as black populations, and the goal in these states should be to raise health equity for all groups."

The data on which the researchers relied included both health-related and non–health-related deaths, such as murder and accidents. The findings, however, still highlight the need to improve the health of the nation's African Americans, the researchers said.

The research team studied death-certificate data from the U.S. Multiple Cause of Death for the years 1997–2004. The data covered 17,834,236 individuals in all 50 states and the District of Columbia. The researchers noted race/ethnicity, sex, the age at death and the state where each subject was born, lived and died.

Overall, the national life expectancy was 74.79 years for white men and 67.66 years for black men. Among women, the average life span was 79.84 years for whites and 74.64 for blacks. In every state, gaps were narrower between women than men.

New Mexico had the smallest disparities between blacks and whites (3.76 years for men and 2.45 years for women), while the District of Columbia had the largest (13.77 years for men and 8.55 years for women).

States with the largest disparities

In addition to Washington, D.C., the states with the largest disparities between white and black men were New Jersey, Nebraska, Wisconsin, Michigan, Pennsylvania and Illinois; in these states, the gap was greater than eight years because African American men's lives were shorter than the national average for black men and white men's life spans were equal to or greater than the national average.

For women, the states with the largest disparities in longevity were Illinois, Rhode Island, Kansas, Michigan, New Jersey, Wisconsin, Minnesota, Iowa, Florida and Nebraska, where the difference between black and white women was more than six years. White women in these states lived longer than average, while black women had average or lower life expectancy.

States with the smallest disparities

In addition to New Mexico, which had the smallest disparities, eight other states had black–white disparities of less than six years among men: Kentucky, West Virginia, Nevada, Oklahoma, Washington, Colorado, New York and Arizona.

In four of these states — Kentucky, West Virginia, Nevada and Oklahoma — the smaller disparities were due to a combination of African American men living longer than the national average and whites having shorter lives. But in New Mexico, Washington, Colorado, New York and Arizona, both black and white men lived longer than average, with black men having life spans that were particularly longer than the national average.

Among women, the states with the smallest differences were New Mexico, New York, West Virginia, Kentucky and Alabama — each with disparities of less than four years. These smaller disparities were the result of black women being longer-lived than average and whites being shorter-lived.

Fifty-eight percent of African Americans live in 10 states: New York, California, Texas, Florida, Georgia, Illinois, North Carolina, Maryland, Missouri and Louisiana. Eliminating the disparities in just these states, the researchers said, would bring the national disparity down substantially. For instance, eliminating the disparity in Florida alone would reduce the national disparity from 7.13 years to 6.63 years for men and from 5.20 years to 4.74 years for women.
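As a rough illustration of the kind of counterfactual behind that statement (a deliberate simplification that treats the national gap as a population-weighted average of state gaps, with invented weights and gap values; the study's actual life-table calculation is more involved):

    # Illustrative only: effect on a population-weighted national disparity of
    # closing the gap in a single state. The share and gap values are invented.
    black_pop_share = {"Florida": 0.07, "rest of US": 0.93}
    gap_years       = {"Florida": 7.1,  "rest of US": 7.1}

    def national_gap(gaps, shares):
        return sum(gaps[state] * shares[state] for state in gaps)

    before = national_gap(gap_years, black_pop_share)
    gap_years["Florida"] = 0.0               # counterfactual: disparity eliminated in Florida
    after = national_gap(gap_years, black_pop_share)
    print(f"{before:.2f} years -> {after:.2f} years")

Closing the gap in one populous state removes that state's weighted contribution, which is why a single state such as Florida can move the national figure by roughly half a year.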

Because disease prevention and health promotion efforts identify and monitor magnitudes in disparities, these findings could point to new ways that government agencies can track and measure differences in health outcomes, the authors write. Also, the researchers feel that these differences in life expectancy should be considered when funding health programs at local and national levels. Finally, they write, state governments should consider these differences in black–white longevity in formulating health policy, given that coverage through health programs such as Medicaid varies widely among states.

There are some limitations to the study. Among them, the researchers did not account for population changes during the years covered in their analysis, though, they said, it is doubtful such changes would alter the overall findings. Also, they did not consider the tendency of people to move from place to place, which could influence health. They also could not use data from 11 states, but those areas had such small numbers of African Americans that the estimates would not have been reliable.

Also, the study covered all causes of mortality, including murder and accidental deaths. Going forward, the researchers plan to investigate disparities in life expectancy by cause of death.

In addition to Dr. Nazleen Bharmal, researchers included Chi-Hong Tseng and Mitchell Wong of UCLA and Robert Kaplan of the National Institutes of Health.

Bharmal was funded by the Robert Wood Johnson Clinical Scholars Program and a National Research Service Award Fellowship at UCLA.

Contacts and sources: 
Enrique Rivero
University of California - Los Angeles Health Sciences

General Internal Medicine and Health Services Research is a division within the department of medicine at the David Geffen School of Medicine at UCLA. It provides a unique interactive environment for collaborative efforts between health services researchers and clinical experts with experience in evidence-based work. The division's 100-plus clinicians and researchers are engaged in a wide variety of projects that examine issues related to access to care, quality of care, health measurement, physician education, clinical ethics and doctor–patient communication. The division's researchers have close working relationships with economists, statisticians, social scientists and other specialists throughout UCLA and frequently collaborate with their counterparts at the RAND Corp. and Charles Drew University.

Mysterious Cycle Of Booms And Busts In Marine Biodiversity

A mysterious cycle of booms and busts in marine biodiversity over the past 500 million years could be tied to a periodic uplifting of the world’s continents, scientists report in the latest issue of The Journal of Geology.

The researchers discovered periodic increases in the amount of the isotope strontium-87 found in marine fossils. The timing of these increases corresponds to previously discovered low points in marine biodiversity that occur in the fossil record roughly every 60 million years. Adrian Melott, professor of physics and astronomy at the University of Kansas and lead author, thinks these periodic extinctions and the increased amounts of strontium-87 are linked.

Adrian Melott 
Credit: University of Kansas

“Strontium-87 is produced by radioactive decay of another element, rubidium, which is common in igneous rocks in continental crust,” Melott said. “So, when a lot of this type of rock erodes, a lot more Sr-87 is dumped into the ocean, and its fraction rises compared with another strontium isotope, Sr-86.”

An uplifting of the continents, Melott explains, is the most likely explanation for this type of massive erosion event.

“Continental uplift increases erosion in several ways,” he said. “First, it pushes the continental basement rocks containing rubidium up to where they are exposed to erosive forces. Uplift also creates highlands and mountains where glaciers and freeze-thaw cycles erode rock. The steep slopes cause faster water flow in streams and sheet-wash from rains, which strips off the soil and exposes bedrock. Uplift also elevates the deeper-seated igneous rocks where the Sr-87 is sequestered, permitting it to be exposed, eroded and put into the ocean.”

The massive continental uplift suggested by the strontium data would also reduce sea depth along the continental shelf where most sea animals live. That loss of habitat due to shallow water, Melott and collaborators say, could be the reason for the periodic mass extinctions and periodic decline in diversity found in the marine fossil record.

“What we’re seeing could be evidence of a ‘pulse of the earth’ phenomenon,” Melott said. “There are some theoretical works which suggest that convection of mantle plumes, rather like a lava lamp, should be coordinated in periodic waves.” The result of this convection deep inside the earth could be a rhythmic throbbing—almost like a cartoon thumb smacked with a hammer—that pushes the continents up and down.

Melott’s data suggest that such pulses likely affected the North American continent. The same phenomenon may have affected other continents as well, but more research would be needed to show that, he says.

The co-authors on the study were Richard Bambach of the National Museum of Natural History, Kenni Petersen of Aarhus University, Denmark, and John McArthur of University College London.


Contacts and sources:

Heat Shrinks Horses: Evolution Of Earliest Horses Driven By Climate Change

The hotter it gets, the smaller the animal?

An artist's reconstruction of Sifrhippus compared with a modern horse. 
Credit: Danielle Byerley, UFL

When Sifrhippus sandae, the earliest known horse, first appeared in the forests of North America more than 50 million years ago, it would not have been mistaken for a Clydesdale.

It weighed in at around 12 pounds--and it was destined to get much smaller over the ensuing millennia.

Sifrhippus lived during the Paleocene-Eocene Thermal Maximum (PETM), a 175,000-year interval of time some 56 million years ago in which average global temperatures rose by about 10 degrees Fahrenheit.

Earliest horses show that past global warming affected the body size of mammals.

Credit: Jonathan Bloch and Stephen Chester, University of Florida

The change was caused by the release of vast amounts of carbon into the atmosphere and oceans.

About a third of mammal species responded with a significant reduction in size during the PETM, some by as much as one-half.

Teeth of Sifrhippus at its larger size with teeth from the same species after its size shrank. 
Credit: Kristen Grace, UFL

Sifrhippus shrank by about 30 percent, to the size of a small house cat--about 8.5 pounds--in the PETM's first 130,000 years, then rebounded to about 15 pounds in the final 45,000 years of the PETM.

Scientists have assumed that rising temperatures or high concentrations of carbon dioxide primarily caused the "dwarfing" phenomenon in mammals during this period.

New research led by Ross Secord of the University of Nebraska-Lincoln and Jonathan Bloch of the Florida Museum of Natural History at the University of Florida offers evidence of the cause-and-effect relationship between temperature and body size.

Their findings also provide clues to what might happen to animals in the near future from global warming.

From fossils, researchers determined oxygen levels on Earth some 56 million years ago.
Photo showing teeth and jawbone of Sifrhippus. 
Credit: Kristen Grace, UFL

In a paper published in this week's issue of the journal Science, Secord, Bloch and colleagues used measurements and geochemical composition of fossil mammal teeth to document a progressive decrease in Sifrhippus' body size that correlates very closely to temperature change over a 130,000-year span.

"The reduction in available oxygen some 50 million years ago led to a reduction in the body size of animal life," says H. Richard Lane, program director in the National Science Foundation's (NSF) Division of Earth Sciences, which funded the research. "What does that say about the future for Earth's animals?"

Bloch said that multiple trails led to the discovery.

One was the fossils themselves, recovered from the Cabin Fork area of the southern Bighorn Basin near Worland, Wyo.

Stephen Chester at Yale, a paper co-author, had the task of measuring the horses' teeth.

What he found when he plotted them through time caught Bloch and Secord by surprise.

"He pointed out that the first horses in the section were much larger than those later on," Bloch says. "I thought something had to be wrong, but he was right and the pattern became more robust as we collected more fossils."

Secord performed the geochemical analysis of the teeth. What he found was an even bigger surprise.

"It was absolutely startling when Ross pulled up the data," Bloch said. "We realized that it was exactly the same pattern that we were seeing with the horse body.

"For the first time, going back into deep time--tens of millions of years--we were able to show that indeed temperature was causing essentially a one-to-one shift in body size in this lineage of horse.

"Because it's over a long enough time, you can argue very strongly that what you're looking at is natural selection and evolution that it's actually corresponding to the shift in temperature and driving the evolution of these horses."

Secord says that the finding raises important questions about how plants and animals will respond to rapid change in the not-too-distant future.

"This has implications for what we might expect to see over the next century or two with climate models that are predicting warming of as much as 4 degrees Centigrade over the next 100 years," he says, which is 7 degrees Fahrenheit.

Those predictions are based largely on the 40 percent increase of atmospheric carbon dioxide levels, from 280 to 392 parts per million, since the start of the Industrial Revolution in the mid-19th century.
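Both figures in this passage are easy to check with a little arithmetic; the sketch below confirms that 280 to 392 parts per million is a 40 percent rise, and that a 4-degree-Celsius warming corresponds to roughly 7 degrees Fahrenheit (a temperature change scales by 9/5, with no offset).

```python
# Quick check of the figures quoted above: the CO2 rise since pre-industrial
# times, and the Celsius-to-Fahrenheit conversion for a *change* in
# temperature (a delta scales by 9/5; the +32 offset does not apply).

co2_preindustrial_ppm = 280
co2_now_ppm = 392
pct_increase = (co2_now_ppm - co2_preindustrial_ppm) / co2_preindustrial_ppm * 100
print(f"CO2 increase: {pct_increase:.0f}%")            # 40%

warming_c = 4.0
warming_f = warming_c * 9.0 / 5.0
print(f"{warming_c:.0f} C of warming = {warming_f:.1f} F")   # 7.2 F
```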

Ornithologists, Secord says, have already started to notice that there may be a decrease in body size among birds.

"One of the issues is that warming during the PETM happened much more slowly, over 10,000 to 20,000 years to increase by 10 degrees, whereas now we're expecting it to happen over a century or two."

"So there's a big difference in scale. One of the questions is, 'Are we going to see the same kind of response?' Are animals going to be able to keep up and readjust their body sizes over the next couple of centuries?"

Increased temperatures are not the only change to which animals may have to adapt.

Experiments show that increased atmospheric carbon dioxide lowers the nutritional content of plants, which could have been a secondary driver of dwarfism during the PETM.

Other co-authors of the paper are Doug Boyer of Brooklyn College, Aaron Wood of the Florida Museum of Natural History, Scott Wing of the Smithsonian National Museum of Natural History, Mary Kraus of the University of Colorado-Boulder, Francesca McInerny of Northwestern University and John Krigbaum of the University of Florida.

The research was also funded by the University of Nebraska-Lincoln.

Contacts and sources:
Cheryl Dybas
National Science Foundation

Black Hole Unmasked

A study of X-rays emitted a long time ago in a galaxy far, far away has unmasked a stellar mass black hole in Andromeda, a spiral galaxy about 2.6 million light-years from Earth.

Two Clemson University researchers joined an international team of astronomers, including scientists at Germany's Max Planck Institute for Extraterrestrial Physics, in publishing their findings in a pair of scientific journals this week.

Scientists had suspected a black hole since late 2009, when an X-ray satellite observatory operated by the Max Planck Institute detected an unusual transient X-ray source in Andromeda.

The ultraluminous X-ray source studied by Clemson astrophysicists lies deep in the heart of the Andromeda galaxy, about 2.6 million light-years from Earth.
Credit: XMM-Newton observatory 

"The brightness suggested that these X-rays belonged to the class of ultraluminous X-ray sources, or ULXs," said Amanpreet Kaur, a Clemson graduate student in physics and lead author of the paper published in the Astronomy & Astrophysics Journal. "But ULXs are rare. There are none at all in the Milky Way where Earth is located, and this is the first to be confirmed in Andromeda. Proving it required detailed observations."

Because ULX sources are rare — usually with just one or two in a galaxy, if they are present at all — there was very little data with which astronomers could make conjectures.

"There were two competing explanations for their high luminosities," said Clemson physics professor Dieter Hartmann, Kaur's mentor and a co-author of the paper. "Either a stellar mass black hole was accreting at extreme rates or there was a new subspecies of intermediate mass black holes accreting at lower rates. One of the greatest difficulties in attempting to find the right answer is the large distance to these objects, which makes detailed observations difficult or even impossible."

Working with scientists in Germany and Spain, the Clemson researchers studied data from the Chandra observatory and proved that the X-ray source was a stellar mass black hole that is swallowing material at very high rates.

Follow-up observations with the Swift and HST satellites yielded important complementary data, proving that it is not only the first ULX in Andromeda but also the closest ULX ever observed. Despite its great distance, Andromeda is the nearest major galactic neighbor to our own Milky Way.

"We were very lucky that we caught the ULX early enough to see most of its light curve, which showed a very similar behavior to other X-ray sources from our own galaxy,” said Wolfgang Pietsch of the Max Planck Institute. The emission decayed exponentially with a characteristic timescale of about one month, which is a common property of stellar mass X-ray binaries. "This means that the ULX in Andromeda likely contains a normal, stellar black hole swallowing material at very high rates."

The emission of the ULX source, the scientists said, probably originates from a system similar to X-ray binaries in our own galaxy, but with matter accreting onto a black hole that is at least 13 times more massive than our Sun.

Unlike X-ray binaries in our own Milky Way, this source is much less obscured by interstellar gas and dust, allowing detailed investigations also at low X-ray energies.

Ideally, the astronomers would like to replicate their findings by re-observing the source in another outburst. However, if it is indeed similar to the X-ray binaries in our own Milky Way, they may be in for a long wait: Such outbursts can occur decades apart.

"On the other hand, as there are so many X-ray binaries in the Andromeda galaxy, another similar outbursting source could be captured any time by the ongoing monitoring campaign," Hartmann said. "While 'monitoring' may not sound exciting, the current results show that these programs are often blessed with discovery and lead to breakthroughs; in particular, if they are augmented with deep and sustained follow-up."

Contacts and sources:

Thursday, February 23, 2012

Shapeshifting Earth: NASA Pinning Down 'Here' Better Than Ever

Before our Global Positioning System (GPS) navigation devices can tell us where we are, the satellites that make up the GPS need to know exactly where they are. For that, they rely on a network of sites that serve as "you are here" signs planted throughout the world. The catch is, the sites don't sit still because they're on a planet that isn't at rest, yet modern measurements require more and more accuracy in pinpointing where "here" is.

To meet this need, NASA is helping to lead an international effort to upgrade the four systems that supply this crucial location information. NASA's initiative is run by Goddard Space Flight Center in Greenbelt, Md., where the next generation of two of these systems is being developed and built. And Goddard, in partnership with NASA's Jet Propulsion Laboratory in Pasadena, Calif., is bringing all four systems together in a state-of-the-art ground station.

The telescope on NASA Goddard's Next Generation Satellite Laser Ranging (NGSLR) system peeks through the open dome. The NGSLR laser ranges to Earth-orbiting satellites and to NASA's Lunar Reconnaissance Orbiter.
 
Credit: NASA/GSFC/Elizabeth Zubrits

"NASA and its sister agencies around the world are making major investments in new stations or upgrading existing stations to provide a network that will benefit the global community for years to come," says John LaBrecque, Earth Surface and Interior Program Officer at NASA Headquarters.

GPS won't be the only beneficiary of the improvements. All observations of Earth from space—whether it's to measure how far earthquakes shift the land, map the world's ice sheets, watch the global mean sea level creep up or monitor the devastating reach of droughts and floods—depend on the International Terrestrial Reference Frame, which is determined by data from this network of designated sites.

Shapeshifting

Earth is a shapeshifter. Land rises and sinks. The continents move. The balance of the atmosphere shifts over time, and so does the balance of the oceans. All of this tweaks Earth's shape, orientation in space and center of mass, the point deep inside the planet that everything rotates around. The changes show up in Earth's gravity field and literally slow down or speed up the planet's rotation.

"In practical terms, we can't determine a location today and expect it to be good enough tomorrow—and especially not next year," says Herbert Frey, the head of the Planetary Geodynamics Laboratory at Goddard and a member of the Space Geodesy Project team.

Measuring such properties of Earth is the realm of geodesy, a time-honored science that dates back to the Greek scholar Eratosthenes, who achieved a surprisingly accurate estimate of the distance around the Earth by using basic geometry.

Around 240 BC, Eratosthenes found that when the sun sat directly above the Nile River town of Syene, its rays struck the northern city of Alexandria at an angle of 7.2 degrees (1/50 of a circle). Reasoning that the distance from Alexandria to Syene was therefore 1/50 of the way around the globe, he came up with a circumference of roughly 25,000 miles for Earth, quite close to the modern measurement of 24,902 miles.
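Eratosthenes' geometry is simple enough to reproduce directly. In the sketch below, the roughly 500-mile Alexandria-to-Syene distance is inferred from the article's own figures (25,000 miles divided by 50) rather than stated in the text.

```python
# A quick check of Eratosthenes' geometry using the article's own numbers.
# The ~500-mile Alexandria-Syene distance is inferred from the quoted result
# (25,000 miles / 50); the article itself gives only the final figure.

shadow_angle_deg = 7.2                          # angle of the sun's rays at Alexandria
fraction_of_circle = shadow_angle_deg / 360.0   # = 1/50 of a full circle

alexandria_to_syene_miles = 500                 # assumed north-south distance between the cities

circumference = alexandria_to_syene_miles / fraction_of_circle
print(f"Estimated circumference: {circumference:,.0f} miles")   # ~25,000 miles
print("Modern value:            24,902 miles")
```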

"Even with the sophisticated tools we have now, geodesy is still all about geometry," says Frank Lemoine, a Goddard geophysicist on the project.

Team geodesy

As in ancient Greece, geodesy today is a team sport, relying on observations conducted in multiple places. Over the years, four types of space geodesy measurements, carried out by a squad of ground stations and satellites, developed independently. Together, they tell the story of what's happening on Earth and keep track of the Terrestrial Reference Frame.

"While there is some overlap in what can be gleaned from each geodetic technique, they also provide different forms of information based on how they operate," says Jet Propulsion Laboratory's David Stowers, who manages the hardware and data flow for the GPS portion of the space geodesy initiative. "GPS was designed specifically as a positioning system and is fairly ubiquitous, thus providing data strength in numbers. It is unique in providing a physical point of reference for the Terrestrial Reference Frame, with the position of the GPS antenna as the primary type of data."

Another technique, Very Long Baseline Interferometry (VLBI), acts as a kind of GPS for Earth. To deduce Earth's orientation in space, and the small variations in the Earth's rate of rotation, ground stations spread across the globe observe dozens of quasars, which are distant enough to be stable reference points.

"VLBI is the one technique that connects measurements made on Earth to the celestial reference frame—that is, the rest of the universe," says Stephen Merkowitz, who is the project manager for NASA's space geodesy initiative.

The key is the painstakingly accurate timing of when the quasar signals arrive. "With this information, we can determine the geometry of the stations that made the observations," says Chopo Ma, head of the VLBI program at Goddard.

By knowing the geometry, researchers aim to measure the distances between the ground stations down to the millimeter, or about the thickness of a penny.
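The underlying observable is the textbook geometric delay, tau = (B · s) / c, where B is the baseline between two stations and s is the unit vector toward the quasar. The sketch below uses invented numbers and is not the project's correlation software; it simply shows how timing maps onto geometry, and why millimeter positions demand picosecond timing.

```python
# Minimal sketch of the textbook VLBI geometric delay, tau = (B . s_hat) / c.
# This is not the actual correlator software used by the project; the
# baseline and source direction below are invented example values.
import numpy as np

C = 299_792_458.0  # speed of light, m/s

def geometric_delay(baseline_m, source_unit_vec):
    """Arrival-time difference of a plane wavefront at two stations (seconds)."""
    return np.dot(baseline_m, source_unit_vec) / C

# Assumed 6,000 km east-west baseline and a quasar 30 degrees off the baseline axis.
baseline = np.array([6.0e6, 0.0, 0.0])
theta = np.radians(30.0)
s_hat = np.array([np.cos(theta), np.sin(theta), 0.0])

tau = geometric_delay(baseline, s_hat)
print(f"delay: {tau * 1e3:.3f} ms")

# Millimeter-level station positions require timing good to a few picoseconds,
# since light travels 1 mm in about 3.3e-12 seconds.
print(f"time for light to cross 1 mm: {1e-3 / C:.2e} s")
```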

Keeping tabs on Earth's center of mass is the job of satellite laser ranging (SLR). It measures the distances to orbiting satellites by shooting short pulses of laser light at satellites and measuring the time it takes for the light to complete the round trip back to the ground station.

"SLR tells us where the center of mass is, because satellites always orbit around the planet's center of mass," says Lemoine.

Another way to measure distances to satellites is with DORIS (Doppler Orbitography and Radiopositioning Integrated by Satellite), which was built and is operated by the French space agency, known as CNES.

DORIS takes advantage of the Doppler effect, which is at work when an ambulance's siren changes pitch as it's driving toward or away from you. The same effect retunes the frequency of a radio signal emitted by a DORIS beacon as the signal travels from the ground into space and is received by a satellite orbiting the Earth. By measuring the frequency change, researchers can work backward to figure out the distance from the beacon to the satellite that picked it up.
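A minimal sketch of the classical Doppler relation DORIS exploits follows; the beacon frequency and frequency shift used are assumed example values, and this is not the CNES processing chain, just the back-of-the-envelope version of the idea.

```python
# Minimal sketch of the Doppler relation DORIS exploits: the received
# frequency is shifted by roughly delta_f / f = -v_r / c, where v_r is the
# rate of change of the beacon-to-satellite distance. The frequency and
# shift are assumed example values; this is not the CNES processing chain.

C = 299_792_458.0          # speed of light, m/s
F_BEACON = 2.0e9           # assumed ~2 GHz beacon frequency (DORIS transmits in this band)

def radial_velocity(observed_shift_hz):
    """Range rate implied by a measured frequency shift (m/s, positive = receding)."""
    return -observed_shift_hz * C / F_BEACON

# An assumed shift of -40 kHz corresponds to the range growing at about 6 km/s.
print(f"range rate: {radial_velocity(-40_000):.0f} m/s")

# Integrating the range rate over a satellite pass recovers how the distance
# changed, which is what lets researchers pin down the beacon's position.
```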

Like GPS, DORIS requires little hardware on the ground, so its beacons are spread all over the globe, even in areas as remote as the Mount Everest base camp.

Coming together

Not all ground stations were created equal. Some sites are home to one technique, others to two or three, and the sophistication of the techniques can vary from station to station. Right now, only Goddard and the station in Johannesburg, South Africa, are providing results from all four. NASA wants to change that.

"The plan for the upgraded system is to have at least three, and preferably all four, techniques at every station," says LaBrecque. "This is one of the keys to achieving the goals of a millimeter of accuracy and a tenth of a millimeter of stability for future measurements."

At the Goddard Geophysical and Astronomical Observatory (GGAO) in Greenbelt, Md., where the state-of-the-art prototype station is being developed, a new VLBI antenna was just installed. Capable of moving faster than its predecessors, the antenna will complete more observations during a run. It's the first piece of a completely revamped VLBI system that will be more sensitive yet less prone to interference from things like cell phones.

The Next Generation SLR is also being developed at GGAO, under the direction of Jan McGarry, with the goals of more automated operation and the ability to target satellites in higher orbits. Already operational for selected tasks, the system has been ranging to NASA's Lunar Reconnaissance Orbiter since June 2009.

Another key innovation at Goddard's new station is the "vector tie" system that will link together all four measurement techniques. "Right now, we have these four independent techniques, and they're just that: independent," Lemoine says. "Presently, at a particular ground station, the techniques are only tied together by expensive and infrequently performed ground surveys."

But with the vector tie system, which will use a laser to continuously monitor the reference points of each technique, researchers will know exactly where a station's GPS, VLBI, SLR and DORIS sit relative to each other at all times, allowing them to better correct one of the last sources of error in the terrestrial reference frame.

Between 25 and 40 upgraded stations would need to be deployed worldwide to complete the new network. The importance of this investment is detailed in the report "Precise Geodetic Infrastructure: National Requirements for a Shared Resource" by the National Research Council of the National Academies in Washington, D.C.

Agencies around the world, including Germany's Bundesamt für Kartographie und Geodäsie, France's Institut Géographique National, the Geographical Survey Institute of Japan, and Geoscience Australia, would build the stations. Together, these groups would choose the best locations, and the work would be done in cooperation with the Global Geodetic Observing System, a scientific organization that helps maintain the terrestrial reference frame.

"By bringing at least three of the four techniques together in each station, we will get a stronger system overall," says Frey. "NASA is leading the way in this, building a prototype station that will go beyond our current scientific requirements and serve the satellites of the future."

Contacts and sources: 

Making Droplets Drop Faster: New Nanopatterned Surfaces Could Improve The Efficiency Of Powerplants And Desalination Systems

The condensation of water is crucial to the operation of most of the powerplants that provide our electricity — whether they are fueled by coal, natural gas or nuclear fuel. It is also the key to producing potable water from salty or brackish water. But there are still large gaps in the scientific understanding of exactly how water condenses on the surfaces used to turn steam back into water in a powerplant, or to condense water in an evaporation-based desalination plant.

New research by a team at MIT offers important new insights into how these droplets form, and ways to pattern the collecting surfaces at the nanoscale to encourage droplets to form more rapidly. These insights could enable a new generation of significantly more efficient powerplants and desalination plants, the researchers say.


The new results were published online this month in the journal ACS Nano, a publication of the American Chemical Society, in a paper by MIT mechanical engineering graduate student Nenad Miljkovic, postdoc Ryan Enright and associate professor Evelyn Wang.

Although analysis of condensation mechanisms is an old field, Miljkovic says, it has re-emerged in recent years with the rise of micro- and nanopatterning technologies that shape condensing surfaces to an unprecedented degree. The key property of surfaces that influences droplet-forming behavior is known as “wettability,” which determines whether droplets stand high on a surface like water drops on a hot griddle, or spread out quickly to form a thin film.

It’s a question that’s key to the operation of powerplants, where water is boiled using fossil fuel or the heat of nuclear fission; the resulting steam drives a turbine attached to a dynamo, producing electricity. After exiting the turbine, the steam needs to cool and condense back into liquid water, so it can return to the boiler and begin the process again. (That’s what goes on inside the giant cooling towers seen at powerplants.)

Typically, on a condensing surface, droplets gradually grow larger while adhering to the material through surface tension. Once they get so big that gravity overcomes the surface tension holding them in place, they rain down into a container below. But it turns out there are ways to get them to fall from the surface — and even to “jump” from the surface — at much smaller sizes, long before gravity takes over. That reduces the size of the removed droplets and makes the resulting transfer of heat much more efficient, Miljkovic says.
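The crossover described here, where gravity finally beats surface tension, is set by the capillary length of the liquid, the square root of surface tension divided by density times gravitational acceleration. The estimate below is standard fluid physics rather than a calculation from the ACS Nano paper; it shows why passively shed drops are millimeter-scale and why making smaller droplets jump off pays dividends.

```python
# Standard capillary-length estimate (not a calculation from the ACS Nano
# paper): gravity starts to win over surface tension once a drop's radius
# approaches l_c = sqrt(gamma / (rho * g)). For water this is a few
# millimeters, so gravity-shed drops are comparatively large; surfaces that
# make smaller droplets jump off remove them much earlier.
import math

GAMMA = 0.072      # surface tension of water near room temperature, N/m
RHO = 997.0        # density of water, kg/m^3
G = 9.81           # gravitational acceleration, m/s^2

capillary_length_m = math.sqrt(GAMMA / (RHO * G))
print(f"capillary length of water: {capillary_length_m * 1000:.1f} mm")   # ~2.7 mm
```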

One mechanism is a surface pattern that encourages adjacent droplets to merge together. As they do so, energy is released, which “causes a recoil from the surface, and droplets will actually jump off,” Miljkovic says. That mechanism has been observed before, he notes, but the new work “adds a new chapter to the story. Few researchers have looked at the growth of the droplets prior to the jumping in detail.”

That’s important because even if the jumping effect allows droplets to leave the surface faster than they would otherwise, if their growth lags, you might actually reduce efficiency. In other words, it’s not just the size of the droplet when it gets released that matters, but also how fast it grows to that size.

“This has not been identified before,” Miljkovic says. And in many cases, the team found, “you think you’re getting enhanced heat transfer, but you’re actually getting worse heat transfer.”

In previous research, “heat transfer has not been explicitly measured,” he says, because it’s difficult to measure and the field of condensation with surface patterning is still fairly young. By incorporating measurements of droplet growth rates and heat transfer into their computer models, the MIT team was able to compare a variety of approaches to the surface patterning and find those that actually provided the most efficient transfer of heat.

One approach has been to create a forest of tiny pillars on the surface: Droplets tend to sit on top of the pillars while only locally wetting the surface rather than wetting the whole surface, minimizing the area of contact and facilitating easier release. But the exact sizes, spacing, width-to-height ratios and nanoscale roughness of the pillars can make a big difference in how well they work, the team found.

“We showed that our surfaces improved heat transfer up to 71 percent [compared to flat, non-wetting surfaces currently used only in high-efficiency condenser systems] if you tailor them properly,” Miljkovic says. With more work to explore variations in surface patterns, it should be possible to improve even further, he says.

The enhanced efficiency could also improve the rate of water production in plants that produce drinking water from seawater, or even in proposed new solar-power systems that rely on maximizing evaporator (solar collector) surface area and minimizing condenser (heat exchanger) surface area to increase the overall efficiency of solar-energy collection. A similar system could improve heat removal in computer chips, which is often based on internal evaporation and recondensation of a heat-transfer liquid through a device called a heat pipe.

Chuan-Hua Chen, an assistant professor of mechanical engineering and materials science at Duke University who was not involved in this work, says, “It is intriguing to see the coexistence of both sphere- and balloon-shaped condensate drops on the same structure. … Very little is known at the scales resolved by the environmental electron microscope used in this paper. Such findings will likely influence future research on anti-dew materials and … condensers.”

The next step in the research, underway now, is to extend the findings from the droplet experiments and computer modeling — and to find even more efficient configurations and ways of manufacturing them rapidly and inexpensively on an industrial scale, Miljkovic says.

This work was supported as part of the MIT S3TEC Center, an Energy Frontier Research Center funded by the U.S. Department of Energy.

Contacts and sources:
David L. Chandler, MIT News Office

MIT Research: The High Price Of Losing Manufacturing Jobs

Study: Overseas manufacturing competition hits U.S. regions hard, leaving workers unemployed for years and local economies struggling.

The loss of U.S. manufacturing jobs is a topic that can provoke heated arguments about globalization. But what do the cold, hard numbers reveal? How has the rise in foreign manufacturing competition actually affected the U.S. economy and its workers?

A new study co-authored by MIT economist David Autor shows that the rapid rise in low-wage manufacturing industries overseas has indeed had a significant impact on the United States. The disappearance of U.S. manufacturing jobs frequently leaves former manufacturing workers unemployed for years, if not permanently, while creating a drag on local economies and raising the amount of taxpayer-borne social insurance necessary to keep workers and their families afloat.

Geographically, the research shows, foreign competition has hurt many U.S. metropolitan areas — not necessarily the ones built around heavy manufacturing in the industrial Midwest, but many areas in the South, the West and the Northeast, which once had abundant manual-labor manufacturing jobs, often involving the production of clothing, footwear, luggage, furniture and other household consumer items. Many of these jobs were held by workers without college degrees, who have since found it hard to gain new employment.

"The effects are very concentrated and very visible locally," says Autor, professor and associate head of MIT's Department of Economics. "People drop out of the labor force, and the data strongly suggest that it takes some people a long time to get back on their feet, if they do at all." Moreover, Autor notes, when a large manufacturer closes its doors, "it does not simply affect an industry, but affects a whole locality."

In the study, published as a working paper by the National Bureau of Economic Research, Autor, along with economists David Dorn and Gordon Hanson, examined the effect of overseas manufacturing competition on 722 locales across the United States over the last two decades. This is also a research focus of MIT's ongoing study group about manufacturing, Production in the Innovation Economy (PIE); Autor is one of 20 faculty members on the PIE commission.

The findings highlight the complex effects of globalization on the United States.

"Trade tends to create diffuse beneficiaries and a concentration of losers," Autor says. "All of us get slightly cheaper goods, and we're each a couple hundred dollars a year richer for that." But those losing jobs, he notes, are "a lot worse off." For this reason, Autor adds, policymakers need new responses to the loss of manufacturing jobs: "I'm not anti-trade, but it is important to realize that there are reasons why people worry about this issue."

-- Double trouble: businesses, consumers both spend less when industry leaves

In the paper, Autor, Dorn (of the Center for Monetary and Financial Studies in Madrid, Spain) and Hanson (of the University of California at San Diego) specifically study the effects of rising manufacturing competition from China, looking at the years 1990 to 2007. At the start of that period, low-income countries accounted for only about 3 percent of U.S. manufacturing imports; by 2007, that figure had increased to about 12 percent, with China representing 91 percent of the increase.

The types of manufacturing for export that grew most rapidly in China during that time included the production of textiles, clothes, shoes, leather goods, rubber products — and one notable high-tech area, computer assembly. Most of these production activities involve soft materials and hands-on finishing work.

"These are labor-intensive, low-value-added [forms of] production," Autor says. "Certainly the Chinese are moving up the value chain, but basically China has been most active in low-end goods."

In conducting the study, the researchers found more pronounced economic problems in cities most vulnerable to the rise of low-wage Chinese manufacturing; these include San Jose, Calif., Providence, R.I., Manchester, N.H., and a raft of urban areas below the Mason-Dixon line — the leading example being Raleigh, N.C. "The areas that are most exposed to China trade are not the Rust Belt industries," Autor says. "They are places like the South, where manufacturing was rising, not falling, through the 1980s."

All told, as American imports from China grew more than tenfold between 1991 and 2007, roughly a million U.S. workers lost jobs due to increased low-wage competition from China — about a quarter of all U.S. job losses in manufacturing during the time period.

And as the study shows, when businesses shut down, it hurts the local economy because of two related but distinct "spillover effects," as economists say: The shuttered businesses no longer need goods and services from local non-manufacturing firms, and their former workers have less money to spend locally as well.

A city at the 75th percentile of exposure to Chinese manufacturing, compared to one at the 25th percentile, will have roughly a 5 percent decrease in the number of manufacturing jobs and an increase of about $65 per capita in the amount of social insurance needed, such as unemployment insurance, health care insurance and disability payments.

"People like to think that workers flow freely across sectors, but in reality, they don't," Autor says. At a conservative estimate, that $65 per capita wipes out one-third of the per-capita gains realized by trade with China, in the form of cheaper goods. "Those numbers are really startling," Autor adds.

The study draws on United Nations data on international trade by goods category among developing and developed countries, combined with U.S. economic data from the Census Bureau, the Bureau of Economic Analysis and the Social Security Administration.

-- New policies for a new era?

In Autor's view, the findings mean the United States needs to improve its policy response to the problem of disappearing jobs. "We do not have a good set of policies at present for helping workers adjust to trade or, for that matter, to any kind of technological change," he says.

For one thing, Autor says, "We could have much better adjustment assistance — programs that are less fragmented, and less stingy." The federal government's Trade Adjustment Assistance (TAA) program provides temporary benefits to Americans who have lost jobs as a result of foreign trade. But as Autor, Dorn and Hanson estimate in the paper, in areas affected by new Chinese manufacturing, the increase in disability payments is a whopping 30 times as great as the increase in TAA benefits.

Therefore, Autor thinks, well-designed job-training programs would help the government's assistance efforts become "directed toward helping people reintegrate into the labor market and acquire skills, rather than helping them exit the labor market."

Still, it will likely take more research to get a better idea of what the post-employment experience is like for most people. To this end, Autor, Dorn and Hanson are conducting a new study that follows laid-off manufacturing workers over time, nationally, to get a fine-grained sense of their needs and potential to be re-employed.

"Trade may raise GDP," Autor says, "but it does make some people worse off. Almost all of us share in the gains. We could readily assist the minority of citizens who bear a disproportionate share of the costs and still be better off in the aggregate."



Contacts and sources: