Tuesday, January 31, 2012

Egyptian Mummy And Prostate Cancer: 2nd Oldest Case In The World May Point The Way To A Cure

An international research team in Lisbon, Portugal, has diagnosed the oldest case of prostate cancer in ancient Egypt and the second oldest case in the world. Using high-resolution computerized tomography (CT) scanners, the researchers identified the cancer in a mummy known as M1.

"Cancer is such a hot topic these days; experts are constantly trying to probe in hopes of answering the one question- when and how did the ailment really evolve?” said Salima Ikram; member of the research team and professor of Egyptology at The American University in Cairo (AUC). “Findings such as these bring us one step closer to finding the cause of cancer, and, ultimately, the cure to a disease that has besieged mankind for so long.” 

Salima Ikram 
 
Credit: AUC

The study, published in the International Journal of Paleopathology, used CT scans with a pixel resolution of 0.33 millimeters on three Egyptian mummies from the collection of the National Archaeological Museum in Lisbon. The images revealed several small, round, dense bone lesions located mainly in M1’s pelvis, spine and proximal limbs, indicative of metastatic prostate cancer.

Until recently, researchers believed that the widespread occurrence of carcinogens in food and in the environment was the main cause of cancer in the modern industrial age. However, according to Ikram, “We’re starting to see that the causes of cancer seem to be less environmental, more genetic. Living conditions in ancient times were very different; there were no pollutants or modified foods, which leads us to believe that the disease is not necessarily only linked to industrial factors.”

Ikram suggested that there are more deaths attributable to cancer today simply because people are living longer. “Life expectancy in ancient Egyptian societies ranged from 30 to 40 years, meaning that those afflicted with the disease were probably dying from reasons other than its progression,” she argued.

The earliest detection of prostate cancer in the world came from the 2,700-year-old skeleton of a Scythian king in Russia, leading scientists to suspect that cancer was quite prevalent in the past despite the scarcity of recorded cases.

Contacts and sources:
Rehab Saad El-Domiati
The American University in Cairo


Origin Of Life: New Pathway To Life's Chemical Building Blocks Plausible Says Scripps Research Team

For decades, chemists considered a chemical pathway known as the formose reaction the only route for producing the sugars essential for life to begin, but more recent research has called the plausibility of that thinking into question. Now a group from The Scripps Research Institute has demonstrated an alternative pathway to those sugars, called the glyoxylate scenario, which may push the field of pre-life chemistry past the formose reaction hurdle.

The team is reporting the results of their highly successful experiments online ahead of print in the Journal of the American Chemical Society.

"We were working in uncharted territory," says Ramanarayanan ("Ram") Krishnamurthy, a Scripps Research chemist who led the research, "We didn't know what to expect but the glyoxylate scenario with respect to formation of carbohydrates is not a hypothesis anymore, it's an experimental fact."

Scripps Research Institute investigators Ramanarayanan ("Ram") Krishnamurthy (right) and Vasu Sagi are reporting their latest results in the Journal of the American Chemical Society. 
Credit: Scripps Research Institute

The quest to recreate the chemistry that might have allowed life to emerge on a prehistoric Earth began in earnest in the 1950s. Since that time researchers have focused on a chemical path known as the formose reaction as a potential route from the simple, small molecules that might have been present on the Earth before life began to the complex sugars essential to life, at least life as we know it now.

The formose reaction begins with formaldehyde, thought to be a plausible constituent of a prebiotic world, going through a series of chemical transformations leading to simple and then more complex sugars, including ribose, which is a key building block in DNA and RNA.

But as chemists continued to study the formose reaction they realized that the chemistry involved is quite messy, producing lots of sugars with no apparent biological use and only the tiniest bit of ribose. As such experimental results mounted, the plausibility of the formose reaction as the prebiotic sugar builder came into question. But the problem was that no one had established a reasonable alternative.

A New Pathway
Then in 2007, Albert Eschenmoser, an organic chemist who recently retired from Scripps Research, proposed a new pathway he dubbed the glyoxylate scenario. This involved glyoxylate as an alternative starting point to formaldehyde, and reactions with dihydroxyfumarate (DHF) that Eschenmoser hypothesized could launch a cascade of reactions leading to sugars. Glyoxylate was a good starting point because of the possibility that it could be produced by oligomerization of carbon monoxide under potentially prebiotic conditions.

Eschenmoser and Krishnamurthy began developing the experiments to test the hypothesis. At the time, very little was known about relevant reactions involving DHF, and nothing beyond theory about how it reacted with glyoxylate.

The idea that DHF might be involved in a plausible biosynthetic pathway to sugars (via a decarboxylative conversion to glycolaldehyde, which aldolizes to sugars) dates back almost as far as work on the formose reaction itself, but early experiments proved otherwise, and DHF fell out of focus.

Success

"We were thrown a lot of curve balls we had to really think through," said Krisnamurthy of the years he spent working with postdoctoral fellow Vasu Sagi, who is lead author of the new paper. The team's experiments revealed that under the right conditions, DHF and glyoxylate, when in the presence of a few other plausible prebiotic chemicals including formaldehyde, would produce sugars known as ketoses. Ketoses in turn can be converted to critical sugars, including some essential to forming certain amino acids, the DNA and RNA building blocks such as ribose.

In remarkable contrast to the formose reaction, which might convert only a fraction of a percent of its starting materials into ribose, the experiments Sagi slaved over, sometimes monitoring them 24 hours a day, cleanly converted virtually 100 percent of the glyoxylate and DHF to ketoses.

Such efficiency is so rare in prebiotic chemistry, and was so unexpected in the glyoxylate-dihydroxyfumarate experiments, that the scientists were at first leery of their results. "We had to prove it by repeating the experiments many times," said Sagi, but the results held.

"Prebiotic reactions are usually pretty messy, so when we saw how clean this was we were really pleasantly surprised," said Krishnamurthy.

Interestingly, during the course of the work, Sagi and Krishnamurthy discovered that DHF can react with itself to produce new compounds never before documented, which the group reported separately late last year.

The Rest of the Story

Though the new research soundly proves the plausibility of one of the facets of the glyoxylate scenario, the chemistry involved is only one of three key series of reactions researchers will have to identify in order to complete a viable path from a primordial soup to life's building blocks.

While glyoxylate is a plausible prebiotic component, there's not yet a known prebiotic pathway to DHF, so the Krishnamurthy team is already working to identify possibilities.

A third critical conversion would have to occur after production of ketoses. Right now, the only known paths for the conversion of ketoses to ribose and other critical sugars are transformations by living organisms. Whether and how such conversion might have proceeded before life arose remains an open research question.

This research was funded by the Skaggs Research Foundation, NASA Astrobiology: Exobiology and Evolutionary Biology Program (NNX09AM96G), and jointly supported by the National Science Foundation and the NASA Astrobiology Program under the NSF Center for Chemical Evolution (CHE-1004570).

In addition to Sagi and Krishnamurthy, authors on the paper, titled, "Exploratory Experiments on the Chemistry of the Glyoxylate Scenario: Formation of Ketosugars from Dihydroxyfumarate," were Venkateshwarlu Punna, Fang Huf, and Geeta Meher, all from Scripps Research. For more information, see the study at http://pubs.acs.org/doi/abs/10.1021/ja211383c.

Contacts and sources:
Mika Ono
Scripps Research Institute

Satellite Study Reveals Critical Habitat And Corridors For World's Rarest Gorilla

Conservationists working in Central Africa to save the world's rarest gorilla have good news: the Cross River gorilla has more suitable habitat than previously thought, including vital corridors that, if protected, can help the great apes move between sites in search of mates, according to the North Carolina Zoo, the Wildlife Conservation Society, and other groups.

The newly published habitat analysis, which used a combination of satellite imagery and on-the-ground survey work, will help guide future management decisions for Cross River gorillas living in the mountainous border region between Nigeria and Cameroon.

The Cross River gorilla, the most endangered great ape in Africa, is seen here in Cameroon's Limbe Wildlife Center. Images of wild Cross River gorillas are rare, due to the rugged terrain in which they exist and the great ape's elusive behavior.
 
Credit: Nicky Lankester

The study appears in the online edition of the journal Oryx. The authors include: Richard A. Bergl of the North Carolina Zoo; Ymke Warren (deceased), Aaron Nicholas, Andrew Dunn, Inaoyom Imong, and Jacqueline L. Sunderland-Groves of the Wildlife Conservation Society; and John F. Oates of Hunter College, CUNY.

"We're pleased with our results, which have helped us to identify both new locations where the gorillas live and apparently unoccupied areas of potential gorilla habitat," said Dr. Bergl of the North Carolina Zoo, lead author of the study. "The study is a great example of how scientific research can be directly applied to great ape conservation."

WCS conservationist and co-author Andrew Dunn said: "The good news for Cross River gorillas is that they still have plenty of habitat in which to expand, provided that steps are taken to minimize threats to the population."

Using high-resolution satellite images, the research team mapped the distribution of forest and other land-cover types in the Cross River region. In order to ground truth the land-cover map, field researchers traveled to more than 400 control points to confirm its accuracy. They found that the land-cover rating system had an accuracy rate of 90 percent or higher. The land-cover map was combined with other environmental data to determine the extent of the Cross River gorilla's habitat. The entire Cross River region was divided into 30 x 30 meter pixels, and each pixel was rated in terms of its suitability as gorilla habitat (with steep, forested areas of low human activity receiving a high rating, and lowland areas more significantly impacted by people receiving a low rating). These ratings were translated into a habitat suitability map for the area.
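The rating logic described above can be pictured with a small sketch. The snippet below is a toy illustration only: the rasters, weights and thresholds are invented rather than taken from the study's data or model, and it simply shows how per-pixel terrain, forest-cover and human-activity layers might be combined into a suitability map.

```python
# Toy sketch of a pixel-based habitat suitability rating, loosely inspired by
# the approach described above. The input rasters, weights and class
# thresholds below are hypothetical.
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical 30 m rasters covering a study area (one value per pixel).
slope_deg = rng.uniform(0, 45, size=(200, 200))      # terrain steepness
forest_frac = rng.uniform(0, 1, size=(200, 200))     # fraction of forest cover
human_activity = rng.uniform(0, 1, size=(200, 200))  # roads/settlement pressure

# Score each pixel: steep, forested, low-disturbance pixels rate highest.
suitability = (
    0.4 * (slope_deg / 45.0) +
    0.4 * forest_frac +
    0.2 * (1.0 - human_activity)
)

# Classify into a simple suitability map (0 = low, 1 = medium, 2 = high).
habitat_class = np.digitize(suitability, bins=[0.4, 0.7])
print("high-suitability pixels:", int((habitat_class == 2).sum()))
```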

With the new habitat suitability map to guide them, the team then selected 12 locations possessing all the characteristics of gorilla habitat (mainly forested landscapes far from human settlements) for field surveys. Most of these areas had no previous record of gorillas, but to their surprise, the team found signs of gorilla presence (in the form of gorilla dung and nests) in 10 of the 12 sites, thereby confirming the value of using satellite image analysis to predict suitable habitat and to prioritize areas in which to conduct further surveys.


A recent study by the Wildlife Conservation Society, the North Carolina Zoo, and others used satellite imagery to study the Cross River gorilla's habitat -- the mountainous border region between Nigeria and Cameroon.
 
Credit: Aaron Nicholas/Wildlife Conservation Society.

Overall, the findings of the study represent a significant expansion of known Cross River gorilla range. The area now known to be occupied by gorillas is more than 50 percent larger than had previously been documented. The findings also support recent genetic analyses that suggest a high degree of connectivity between the 11 known locations where gorillas occur.

The study also located parts of the population under threat of isolation through fragmentation. For example, Afi Mountain Wildlife Sanctuary in Nigeria, which contains a significant portion of the Cross River gorilla population, is only tenuously connected to the nearest sub-population of gorillas, with the connection threatened by farmland and other forms of habitat degradation.

"For small populations such as this one, the maintenance of connective corridors is crucial for their long term survival," said WCS researcher Inaoyom Imong. "The analysis is the first step in devising ways to rehabilitate degraded pathways."

Authors of the study will use their findings at the upcoming Cross River gorilla workshop (scheduled for February in Limbe, Cameroon) to help formulate a new 5-year regional plan for the subspecies. "This latest research has greatly expanded our knowledge on Cross River gorilla distribution, which will lead to more effective management decisions," said WCS conservationist and co-author Aaron Nicholas.

Dr. James Deutsch, Executive Director for WCS's Africa Program, said: "Accurately assessing the state of available habitat is a vital foundation for future conservation efforts for the Cross River gorilla. A new action plan for the subspecies will build on the collaborative partnership already underway between Nigeria and Cameroon and ensure a future for this unique primate."

The Cross River gorilla is the rarest of the four subspecies of gorilla, numbering fewer than 300 individuals across its entire range, which is limited to the forested, mountainous terrain of the border region between Nigeria and Cameroon. The subspecies is listed as "Critically Endangered" and is threatened by both habitat disturbance and hunting, as the entire population lives in a region of high human population density and heavy natural resource exploitation.

Conservation work on Cross River gorillas in this region is a priority for several U.S. government agencies, including the U.S. Agency for International Development, U.S. Fish and Wildlife Service, and U.S. Forest Service.

Contacts and sources:
John Delaney
Wildlife Conservation Society

The study was made possible through the generous support of: the Arcus Foundation; Great Ape Conservation Fund; KfW (German Development Bank); Lincoln Park Zoo; National Geographic Conservation Trust; Primate Conservation Inc.; and U.S. Fish and Wildlife Service.

Ultra-Fast Photodetector And Terahertz Generator Using Graphene

Extremely thin, stronger than steel and widely applicable: the material graphene is full of interesting properties, and it is currently a shining star among electrical conductors. Photodetectors made with graphene can process and conduct both light signals and electric signals extremely fast. Upon optical stimulation, graphene generates a photocurrent within picoseconds (1 picosecond = 0.000000000001 seconds).

Until now, none of the available methods were fast enough to measure these processes in graphene. Professor Alexander Holleitner and Dr. Leonhard Prechtel, scientists at the Technische Universitaet Muenchen (TUM), have now developed a method to measure the temporal dynamics of this photocurrent.

A graphene sheet stretches across the small gap between two metallic contacts.
Credit:  Walter Schottky Institute of the TU Muenchen  

Graphene leaves a rather modest impression at first sight. The material comprises nothing but carbon atoms ordered in a mono-layered “carpet”. Yet what makes graphene so fascinating for scientists is its extremely high conductivity. This property is particularly useful in the development of photodetectors. These are electronic components that can detect radiation and transform it into electrical signals.

Graphene’s extremely high conductivity inspires scientists to utilize it in the design of ultra-fast photodetectors. Until now, however, it was not possible to measure the optical and electronic behavior of graphene with respect to time, i.e. how long it takes between the optical stimulation of graphene and the generation of the resulting photocurrent.

Alexander Holleitner and Leonhard Prechtel, scientists at the Walter Schottky Institut of the TU Muenchen and members of the Cluster of Excellence Nanosystems Initiative Munich (NIM), decided to pursue this question. The physicists first developed a method to increase the time resolution of photocurrent measurements in graphene into the picosecond range. This allowed them to detect pulses as short as a few picoseconds. (For comparison: A light beam traveling at light speed needs three picoseconds to propagate one millimeter.)
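As a quick check of the comparison in parentheses above, the travel time follows directly from the speed of light; a minimal sketch:

```python
# Quick check of the comparison above: how long light needs to cross 1 mm.
c = 299_792_458.0        # speed of light in m/s
distance_m = 1e-3        # one millimeter
t_ps = distance_m / c * 1e12
print(f"{t_ps:.2f} ps")  # ~3.34 ps, i.e. roughly three picoseconds
```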

The central element of the inspected photodetectors is freely suspended graphene integrated into electrical circuits via metallic contacts. The temporal dynamics of the photocurrent were measured by means of so-called co-planar strip lines that were evaluated using a special time-resolved laser spectroscopy procedure – the pump-probe technique. A laser pulse excites the electrons in the graphene and the dynamics of the process are monitored using a second laser. With this technique the physicists were able to monitor precisely how the photocurrent in the graphene is generated.

At the same time, the scientists could take advantage of the new method to make a further observation: They found evidence that graphene, when optically stimulated, emits radiation in the terahertz (THz) range. This lies between infrared light and microwave radiation in the electromagnetic spectrum. The special thing about THz radiation is that it displays properties shared by both adjacent frequency ranges: It can be bundled like particle radiation, yet still penetrates matter like electromagnetic waves. This makes it ideal for material tests, for screening packages or for certain medical applications.

The research was funded by the German Research Foundation (DFG), the Excellence Cluster Nanosystems Initiative Munich and the Center for NanoScience (CeNS). Physicists from Universität Regensburg, Eidgenössische Technische Hochschule Zürich, Rice University and Shinshu University also contributed to the publication.

Original publication:

Time-resolved ultrafast photocurrents and terahertz generation in freely suspended graphene
Leonhard Prechtel, Li Song, Dieter Schuh, Pulickel Ajayan, Werner Wegscheider, Alexander W. Holleitner
Nature Communications, DOI: 10.1038/ncomms1656, http://www.nature.com/ncomms/index.html


Contacts and sources:
Dr. Andreas Battenberg
Technische Universitaet Muenchen

New Observations Of Interstellar Matter: Raw Material For The Formation Of New Stars, Planets And Even Human Beings



IBEX has directly sampled multiple heavy elements from the Local Interstellar Cloud for the first time.

Credit: NASA/Goddard Scientific Visualization Studio

A great magnetic bubble surrounds the solar system as it cruises through the galaxy. The sun pumps the inside of the bubble full of solar particles that stream out to the edge until they collide with the material that fills the rest of the galaxy, at a complex boundary called the heliosheath. On the other side of the boundary, electrically charged particles from the galactic wind blow by, but rebound off the heliosheath, never to enter the solar system. Neutral particles, on the other hand, are a different story. They saunter across the boundary as if it weren't there, continuing on another 7.5 billion miles for 30 years until they get caught by the sun's gravity and slingshot around the star.

There, NASA's Interstellar Boundary Explorer lies in wait for them. Known as IBEX for short, this spacecraft methodically measures these samples of the mysterious neighborhood beyond our home. IBEX scans the entire sky once a year, and every February, its instruments point in the correct direction to intercept incoming neutral atoms. IBEX counted those atoms in 2009 and 2010 and has now captured the best and most complete glimpse of the material that lies so far outside our own system.

The results? It's an alien environment out there: the material in that galactic wind doesn't look like the same stuff our solar system is made of.

Neutral atoms from the galactic wind sweep past the solar system's magnetic boundary, the heliosheath, and travel some 30 years into our solar system toward the sun. NASA's Interstellar Boundary Explorer (IBEX) can observe those atoms and provide information about the mysterious neighborhood outside our home. 
Credit: NASA/Goddard Conceptual Image Lab

"We've directly measured four separate types of atoms from interstellar space and the composition just doesn't match up with what we see in the solar system," says Eric Christian, mission scientist for IBEX at NASA's Goddard Space Flight Center in Greenbelt, Md. "IBEX's observations shed a whole new light on the mysterious zone where the solar system ends and interstellar space begins."

More than just helping to determine the distribution of elements in the galactic wind, these new measurements give clues about how and where our solar system formed, the forces that physically shape our solar system, and even the history of other stars in the Milky Way.

NASA's Interstellar Boundary Explorer (IBEX) has found that there's more oxygen in our solar system than there is in the nearby interstellar material. That suggests that either the sun formed in a different part of the galaxy or that outside our solar system life-giving oxygen lies trapped in dust or ice grains unable to move freely in space. 
Credit: NASA/Goddard

In a series of science papers appearing in the Astrophysical Journal on January 31, 2012, scientists report that for every 20 neon atoms in the galactic wind, there are 74 oxygen atoms. In our own solar system, however, for every 20 neon atoms there are 111 oxygen atoms. That translates to more oxygen in any given slice of the solar system than in the local interstellar space.
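The arithmetic behind that comparison is straightforward; the short sketch below simply restates the quoted ratios and shows that the solar system carries roughly 50 percent more oxygen per neon atom than the local interstellar material.

```python
# Oxygen-to-neon ratios quoted in the text.
o_per_ne_interstellar = 74 / 20   # galactic wind
o_per_ne_solar = 111 / 20         # inside the solar system

excess = o_per_ne_solar / o_per_ne_interstellar - 1.0
print(f"~{excess:.0%} more oxygen per neon atom in the solar system")  # ~50%
```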

"Our solar system is different than the space right outside it and that suggests two possibilities," says David McComas the principal investigator for IBEX at the Southwest Research Institute in San Antonio, Texas. "Either the solar system evolved in a separate, more oxygen-rich part of the galaxy than where we currently reside or a great deal of critical, life-giving oxygen lies trapped in interstellar dust grains or ices, unable to move freely throughout space." Either way, this affects scientific models of how our solar system – and life – formed.

Studying the galactic wind also provides scientists with information about how our solar system interacts with the rest of space, which speaks directly to an important IBEX goal. IBEX is classified as a NASA Explorer Mission -- a class of smaller, less expensive spacecraft with highly focused research objectives -- and its main job is to study the heliosheath, the outer boundary of the solar system's magnetic bubble, or heliosphere, where particles from the solar wind meet the galactic wind.

Previous spacecraft have already provided some information about the way the galactic wind interacts with the heliosheath. Ulysses, for one, observed incoming helium as it traveled past Jupiter and measured it traveling at 59,000 miles per hour. IBEX's new information, however, shows the galactic wind traveling not only at a slower speed -- around 52,000 miles per hour -- but from a different direction, most likely offset by some four degrees from previous measurements. Such a difference may not initially seem significant, but it amounts to a full 20% difference in how much pressure the galactic wind exerts on the heliosphere.
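One way to see why those numbers matter: to a first approximation, the ram pressure of a flow scales with its density times the square of its speed. The sketch below uses only the two quoted speeds and assumes an unchanged density, so it ignores the roughly four-degree shift in direction that also feeds into the published figure of about 20 percent.

```python
# Rough ram-pressure comparison, assuming pressure ~ density * speed**2 with
# unchanged density; the direction change is ignored in this toy estimate.
v_ulysses_mph = 59_000.0   # helium speed measured by Ulysses
v_ibex_mph = 52_000.0      # galactic wind speed measured by IBEX

ratio = (v_ibex_mph / v_ulysses_mph) ** 2
print(f"pressure ratio (IBEX / Ulysses speeds): {ratio:.2f}")
# ~0.78, i.e. a bit over a 20% reduction from the speed change alone
```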

The galactic wind streams toward the sun from the direction of Scorpio and IBEX has found that it travels at 52,000 miles an hour. The speed of the galactic wind and its subsequent pressure on the outside of the solar system's boundary affects the shape of the heliosphere as it travels through space.
 Credit: NASA/Goddard Scientific Visualization Studio

"Measuring the pressure on our heliosphere from the material in the galaxy and from the magnetic fields out there," says Christian, "will help determine the size and shape of our solar system as it travels through the galaxy."

These IBEX measurements also provide information about the cloud of material in which the solar system currently resides. This cloud is called the local interstellar cloud, to differentiate it from the myriad of particle clouds throughout the Milky Way, each traveling at different speeds. The solar system and its heliosphere moved into our local cloud at some point during the last 45,000 years.

Since the older Ulysses observation of the galactic wind speed fell between the speeds expected for the local cloud and the adjacent cloud, researchers thought perhaps the solar system didn't lie smack in the middle of this cloud, but might be at the boundary, transitioning into a new region of space. IBEX's results, however, show that we remain fully in the local cloud, at least for the moment.

"Sometime in the next hundred to few thousand years, the blink of an eye on the timescales of the galaxy, our heliosphere should leave the local interstellar cloud and encounter a much different galactic environment," McComas says.

In addition to providing insight into the interaction between the solar system and its environment, these new results also hold clues about the history of material in the universe. While the big bang initially created hydrogen and helium, only supernova explosions at the end of a giant star's life can spread the heavier elements of oxygen and neon through the galaxy. Knowing the amounts of such elements in space can help map how the galaxy has evolved and changed over time.

NASA's Interstellar Boundary Explorer (IBEX) studies the outer boundaries of the solar system where particles from the solar wind collide with particles from the galactic wind. 
Credit: NASA

"This set of papers provide many of the first direct measurements of the interstellar medium around us," says McComas. "We've been trying to understand our galaxy for a long time, and with all of these observations together, we are taking a major step forward in knowing what the local part of the galaxy is like."

Voyager 1 could cross out of our solar system within the next few years. By combining the data from several sets of NASA instruments – Ulysses, Voyager, IBEX and others – we are on the cusp of stepping outside and understanding the complex environment beyond our own frontier for the first time.

The Southwest Research Institute developed and leads the IBEX mission with a team of national and international partners. The spacecraft is one of NASA's series of low-cost, rapidly developed missions in the Small Explorers Program. NASA's Goddard Space Flight Center in Greenbelt, Md., manages the program for the agency's Science Mission Directorate.


Contacts and sources:
Karen C. Fox
NASA's Goddard Space Flight Center, Greenbelt, MD
For more information about the IBEX mission, go to:
http://www.nasa.gov/ibex


TSA Methods Make Air Travel Less Safe: Risk-Based Passenger Screening Could Make Air Travel Safer

Anyone who has flown on a commercial airline since 2001 is well aware of increasingly strict measures at airport security checkpoints. A study by Illinois researchers demonstrates that intensive screening of all passengers actually makes the system less secure by overtaxing security resources.

University of Illinois computer science and mathematics professor Sheldon H. Jacobson, in collaboration with Adrian J. Lee at the Central Illinois Technology and Education Research Institute, explored the benefit of matching passenger risk with security assets. The pair detailed their work in the journal Transportation Science.

Illinois professor Sheldon H. Jacobson developed algorithms to address risk in airline passenger populations to help determine how best to allocate airport security resources.
Photo by L. Brian Stauffer

“A natural tendency, when limited information is available about from where the next threat will come, is to overestimate the overall risk in the system,” Jacobson said. “This actually makes the system less secure by over-allocating security resources to those in the system that are low on the risk scale relative to others in the system.”

When overestimating the population risk, a larger proportion of high-risk passengers are designated for too little screening while a larger proportion of low-risk passengers are subjected to too much screening. With security resources devoted to the many low-risk passengers, those resources are less able to identify or address high-risk passengers. Nevertheless, current policies favor broad screening.

“One hundred percent checked baggage screening and full-body scanning of all passengers is the antithesis of a risk-based system,” Jacobson said. “It treats all passengers and their baggage as high-risk threats. The cost of such a system is prohibitive, and it makes the air system more vulnerable to successful attacks by sub-optimally allocating security assets.”

In an effort to address this problem, the Transportation Security Administration (TSA) introduced a pre-screening program in 2011, available to select passengers on a trial basis. Jacobson’s previous work has indicated that resources could be more effectively invested if the lowest-risk segments of the population – frequent travelers, for instance – could pass through security with less scrutiny since they are “known” to the system.

A challenge with implementing such a system is accurately assessing the risk of each passenger and using such information appropriately. In the new study, Jacobson and Lee developed three algorithms dealing with risk uncertainty in the passenger population. Then, they ran simulations to demonstrate how their algorithms, applied to a risk-based screening method, could estimate risk in the overall passenger population – instead of focusing on each individual passenger – and how errors in this estimation procedure can be mitigated to reduce the risk to the overall system.
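The intuition can be illustrated with a toy Monte Carlo sketch. This is not the authors' algorithm or the TSA's procedure: the passenger count, risk-score distribution, threat threshold and screening budget below are all invented. The point is only that a fixed budget of intensive screening catches far more of the highest-risk passengers when it is allocated by relative risk than when it is spread uniformly.

```python
# Toy Monte Carlo comparison of uniform vs. risk-based screening allocation.
# All numbers below are made up for illustration.
import random

random.seed(1)
N = 100_000                      # passengers
BUDGET = 10_000                  # intensive-screening slots available

# Hypothetical risk scores: a small minority is genuinely higher risk.
passengers = [(i, random.random() ** 4) for i in range(N)]   # skewed scores
threat = {i for i, r in passengers if r > 0.95}              # "true" threats

def detections(selected):
    """Count true threats that received intensive screening."""
    return len(threat & selected)

# Policy 1: uniform screening -- assign the slots at random.
uniform = set(random.sample(range(N), BUDGET))

# Policy 2: risk-based screening -- spend the budget on the highest scores.
ranked = sorted(passengers, key=lambda p: p[1], reverse=True)
risk_based = {i for i, _ in ranked[:BUDGET]}

print("uniform policy detections:   ", detections(uniform))
print("risk-based policy detections:", detections(risk_based))
```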

They found that risk-based screening, such as the TSA's new Pre-Check program, increases overall expected security. Rating each passenger's risk relative to the entire flying population allows more resources to be devoted to those whose relative risk is high.

The paper also discusses scenarios of how terrorists may attempt to thwart the security system – for example, blending in with a high-risk crowd so as not to stand out – and provides insights into how risk-based systems can be designed to mitigate the impact of such activities.

“The TSA’s move toward a risk-based system is designed to more accurately match security assets with threats to the air system,” Jacobson said. “The ideal situation is to create a system that screens passengers commensurate with their risk. Since we know that very few people are a threat to the system, relative risk rather than absolute risk provides valuable information.”

The National Science Foundation and the U.S. Air Force Office of Scientific Research supported this work.


Contacts and sources:
Liz Ahlberg
University of Illinois at Urbana-Champaign

The paper, “Addressing Passenger Risk Uncertainty,” is available online.

Hot Molecule Explains Cold Chemistry Among Interstellar Clouds

Researchers shed light on the mystery surrounding energy-rich molecules in interstellar clouds

Surprisingly, hydrogen cyanide and its far more energetic isomer, hydrogen isocyanide, are present in almost equal amounts in cold interstellar gas clouds. Scientists from the Max Planck Institute for Nuclear Physics have succeeded in explaining how this happens through experiments carried out in the Heidelberg ion storage ring. During interstellar synthesis, hydrogen cyanide forms as a hot hybrid from which the two isomers evolve in about equal quantities.

The detector recently developed by the Max Planck researchers together with colleagues from the Weizmann Institute of Science in Rehovot, which determines both the positions and particle masses of the fragments of molecular dissociation reactions, shortly before its installation in the vacuum system of the Heidelberg ion storage ring. The arrows indicate the trajectories of the incident fragments. The diagram on the right illustrates the determination of the particle masses and points of impact on the detector surface, which consists of a crossed arrangement of silicon strips. The particle mass is given by the pulse height.

 Credit: © MPI for Nuclear Physics 

When stars form from cold gas clouds, the clouds already contain many molecules consisting of important basic elements such as hydrogen, carbon, oxygen and sulfur. Sensitive new observatories enable fingerprints of many of these molecules to be identified in the light and the radio emission of the gas clouds. These spectroscopic observations reveal that the atoms in the interstellar molecules do not always arrange in the energetically most advantageous way.

Some of the observed compounds are found in related forms (isomers), which can arise when individual atoms within a molecule interchange their positions. But such position changes come at the cost of considerable energy, equivalent to temperatures of several thousand degrees.

One of these molecules is hydrogen cyanide or prussic acid (HCN – the hydrogen atom is bound to the carbon atom), whose much more energy-rich isomer hydrogen isocyanide (HNC – the hydrogen atom is bound to the nitrogen atom) is as abundant as hydrogen cyanide itself, although the latter should largely prevail at the low temperatures in open space.

Researchers long suspected that these often highly energetic isomers are ultimately a consequence of the ionizing radiation that permeates space. In fact, a symmetrical precursor, the HCNH+ ion, forms through an intricate chain of reactions. Later, this HCNH+ ion can encounter an electron, which neutralizes it and dissociates it into fragments, releasing energy. In this way, both isomers can be formed.

Scientists at the Max Planck Institute for Nuclear Physics have now accurately measured the properties of this elementary dissociation reaction in the laboratory – under conditions very similar to those found in interstellar clouds. In the Heidelberg ion storage ring, they made electrons and DCND+ ions (variants of HCNH+ with heavy hydrogen, D = deuterium) collide one by one and, moreover, at very low collision energies; in interstellar clouds, these energies correspond to a temperature around minus 260 degrees Celsius.

Using a recently developed large-area detector, the researchers measured both the positions and the particle masses of the fragments D and DCN or DNC; only with this instrument could it be ensured that dissociation into exactly these two particles was selectively observed in the experiment. This method is still unable to distinguish between the two isomers of the product molecule, but it offers the unique advantage that the kinetic energy of the fragments can be determined accurately.

Here the researchers observed kinetic energy releases that were far smaller than expected. The missing amount of energy can only be contained inside the product molecule – thus, as predicted by some theoreticians, the molecule is extremely “hot” corresponding to its high internal excitation energy. This implies, however, that in this strongly vibrating product of a cold reaction, atoms can still change positions easily and frequently.

The molecule formed in interstellar gas clouds can therefore assume both geometric forms while it gradually emits its high internal energy into the environment – like a slowly dimming light bulb. The energy-rich isomer arises here in about half of all cases. Hence – via a long detour, now experimentally demonstrated in the laboratory – the presence of this isomer in interstellar molecular clouds reflects its production mechanism, ultimately owing to ionizing radiation.

Contacts and sources:
Dr. Gertrud Hönes
Max Planck Institute for Nuclear Physics, Heidelberg 

Apl. Prof. Dr. Andreas Wolf
Max Planck Institute for Nuclear Physics, Heidelberg 

Citation: Mario B. Mendes et al., "Cold electron reactions producing the energetic isomer of hydrogen cyanide in interstellar clouds," Astrophysical Journal Letters, January 20, 2012

Study: Vast Majority Of EU Citizens Are Marginalized By Dominance Of English Language

The European Union has 27 member countries and 23 official languages, but its official business is carried out primarily in one language — English. Yet the striking findings of a new study show that barely a third of the EU’s 500 million citizens speak English.

What about the other two-thirds? They are linguistically disenfranchised, say the study’s authors.

For the EU’s non-English speakers, their native languages are of limited use in the EU’s political, legal, communal and business spheres, conclude the study’s authors, economists Shlomo Weber of Southern Methodist University, Dallas, and Victor Ginsburgh of the Free University of Brussels (ULB). Those who are disenfranchised have limited access to EU laws, rules, regulations and debates in the governing body — all of which may violate the basic principles of EU society, the researchers say.

Europe
Credit:  Wikipedia

“Language is the proxy for engagement. People identify strongly with their language, which is integral to culture and traditions,” Weber says. “Language is so explosive; language is so close to how you feel.”

Weber and Ginsburgh base their findings on a new methodology they developed to quantitatively evaluate both costs and benefits of government policies to either expand or reduce diversity. The method unifies previous approaches to measure language diversity’s impact, an area of growing interest to scholars of economics and other social sciences.

“With globalization, people feel like they’ve been left on the side of the road. If your culture, your rights, your past haven’t been respected, how can you feel like a full member of society?” says Weber. “It is a delicate balance. People must decide if they want to trade their languages to increase by a few percentage points the rate of economic growth.”

Methodology can be applied to language diversity in other nations, including the United States
Beyond the EU, the Weber-Ginsburgh methodology can evaluate linguistic policies in other nations, too, including the U.S. It builds on a body of earlier published research by Weber, Ginsburgh and other economists.

“Our analysis offers a formal framework by which to address the merits and costs of the vast number of languages spoken in various countries,” said Weber. “We formally measure linguistic similarities and subsequently the linguistic distances between groups who speak various languages.”

The methodology also can measure the impact of other kinds of diversity, whether animal and plant biodiversity or economic classes of people, say the study’s authors.

They report their findings and present the methodology in their new book, “How Many Languages Do We Need? The Economics of Linguistic Diversity” (Princeton University Press). The research is noted on the web site of the International Monetary Fund in a review by Henry Hitchings.

Quantitative analysis finds English is the language spoken by largest percentage of EU citizens
Previous researchers found that 90 percent of the EU’s official documents are drafted in English and later translated to other languages, often French and sometimes German. Previous research also has documented frustration among EU officials with the political entity’s multitude of languages, as members wonder whether they are being understood.

Against that backdrop, the Weber-Ginsburgh analysis of the EU used official data from a routinely conducted EU survey of member states carried out in 2005 and later. The data came from answers to questions that included: What is your mother tongue? Which languages are you conversant in? How do you rate your fluency on a scale of very good, good or basic?

Weber and Ginsburgh found that of all the languages, English embraces the most EU citizens, followed by German and then French.

English, German and French fall short
Yet those languages fall far short of including all people. The economists found that many EU residents are excluded.

Nearly two-thirds of EU citizens — 63 percent — don’t speak or understand English, while 75 percent don’t readily speak or understand German, and 80 percent don’t speak or understand French.
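The idea of linguistic disenfranchisement can be made concrete with a small sketch. The survey responses below are invented and the calculation is far simpler than the authors' index, but it shows the basic bookkeeping: a respondent counts as excluded if none of a proposed set of working languages is among the languages he or she can use.

```python
# Simplified illustration of a disenfranchisement rate, using made-up survey
# responses (the study used real EU survey data and a more refined index).
survey = [
    {"pl"}, {"en", "de"}, {"fr"}, {"it"}, {"en"},
    {"es"}, {"de"}, {"pt"}, {"en", "fr"}, {"ro"},
]

def disenfranchised_share(working_languages, responses):
    """Fraction of respondents who speak none of the working languages."""
    excluded = sum(1 for langs in responses if not (langs & working_languages))
    return excluded / len(responses)

for choice in [{"en"}, {"en", "de"}, {"en", "de", "fr"}]:
    print(sorted(choice), f"{disenfranchised_share(choice, survey):.0%} excluded")
```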

“English is spoken almost everywhere around the world,” the authors write, “but it is still far from being spoken by almost everyone.” At the same time, many non-native speakers of English feel the onslaught of that language’s global domination, a phenomenon that wasn’t generally foreseen and that evolved only within the last 60 years.

Weber and Ginsburgh discovered one EU age group that is less marginalized by English than other groups — youth ages 15 to 29. Fewer than half the young people — 43 percent — are disenfranchised, the researchers found.

The economists also introduce the concept of “proximity” — the degree to which languages are similar to one another. People who speak similar languages are less disenfranchised from one another, they say. Similarity is a factor of pronunciation, phonetics, syntax, grammar and vocabulary, although the authors caution that even words that seem alike aren’t always related, but instead are merely similar by chance or because languages borrow words.

Language represents identity and culture
Among the world’s 271 nations, more than 6,900 languages are spoken, Weber and Ginsburgh say.

Their research has found that there is no optimal degree of language diversity for a society, but many examples throughout history demonstrate that too much linguistic diversity is expensive, detrimental and often divisive, they say.

“The story of post-colonial Africa — what’s been called Africa’s growth tragedy — offers a painful example of the heavy costs incurred by a multitude of linguistic and ethnic divisions,” Weber says.

Language and cultural differences frequently have played a role in war, underdevelopment, brutal changes of power, poor administration, corruption and slacking economic growth, say the authors. Linguistic divides also impose friction on trade between countries, as well as influence migratory flows, literary translations or votes cast in various contests.

For example, in Sri Lanka two linguistic groups fought a bloody civil war for 25 years, killing tens of thousands of people, note Weber and Ginsburgh.

More recently, the former Belgian Prime Minister became infuriated at a position taken by U.K. Prime Minister David Cameron and decided to vent his ire by hurling the supreme insult: refusing to speak English when addressing the official EU body and opting instead for his native Flemish.

Designating an official language must weigh costs, benefits
Can the EU ever mandate an official language that embraces its 500 million citizens? How can Nigeria manage 527 languages spoken by citizens of that country? Or Cameroon, with its 279? How does democracy function in India, where 30 languages thrive among more than 1 billion native speakers?

About one-third of the world’s nations have met these challenges by legislating official language provisions in their constitutions, the authors say. The official language typically applies to official documents, communication between institutions and citizens and debates in official bodies.

But to scientifically determine an optimal set of core languages, the authors say, nations must weigh the costs of linguistic disenfranchisement against the benefits of standardization.

“History provides many examples of political regimes that have mandated single languages for efficiency or social control reasons, many of which have proved unsustainable in the face of backlash from those disenfranchised linguistically,” Weber says. “At the other end of the spectrum, other countries have permitted, by default or design, linguistic anarchy in which dozens or even hundreds of languages exist — to the detriment of even basic efficiency. ‘How Many Languages Do We Need?’ provides a common-sense argument and quantitative methodology to evaluate both criteria for languages: efficiency and enfranchisement, which are indispensable for sustainable globalization in our fractionalized world.”


France: An example of linguistic diversity handled well
Over the course of human history, has any country handled their linguistic diversity well?

“France,” Weber says. “Two hundred years ago, France had a lot of dialects, and only 3 million of its 28 million people spoke French. That’s only 10 percent of the people. In a bloodless transition the government imposed French as the official language but allowed dialects to flourish.”

Weber is the Robert H. and Nancy Dedman Trustee Professor of Economics at SMU. He is also a PINE Foundation professor of economics at the New Economic School, Moscow.

Ginsburgh is professor of economics emeritus at ULB, a member of the European Center for Advanced Research in Economics and Statistics in Brussels, and a member of the Center for Operations Research and Econometrics, Louvain-la-Neuve, Belgium.

Contacts and sources:
Margaret Allen
Southern Methodist University

Facebook Can Get You Fired: UC Research Reveals The Perils Of Social Networking For School Employees

School administrators are facing a growing dilemma resulting from social networking that goes beyond preventing cyber-bullying among students. They must also balance educators' rights to privacy and free speech against expectations of appropriate behavior for teachers as role models.

Janet Decker, a University of Cincinnati assistant professor in UC's Educational Leadership Program, reveals more on the dilemma in an article published in the January issue of Principal Navigator, a professional magazine by the Ohio Association of Elementary School Administrators.

Janet Decker 
Credit: University of Cincinnati


Decker explains that a large number of educators have been fired for Internet activity. She says that some teachers have been dismissed for behavior such as posting a picture of themselves holding a glass of wine.

"Despite the evolving issues, the courts have not provided extensive guidance for administrators," writes Decker. "Part of the difficulty is that technology advances at a quicker pace than legal precedent, leaving school employees and administrators unsure of their legal responsibilities."

Decker's article highlights cases that have landed in court as a result of school policies on social networking that "were not clear or effective." The article also examines the law surrounding sexual harassment or abuse of students and freedom of speech for public employees and employee privacy.

"In general, it is important to understand that school employees are expected to be role models both inside and outside of school – even while on Facebook," concludes Decker.

Decker's article features the following 10 recommendations as she encourages school administrators to implement technology policies for school employees:

1. Educate! It's not enough to have written policies; schools should also offer professional development about these issues. By doing so, staff is notified about the expectations and they have a chance to digest and ask questions about the content of the policies.

2. Be empathetic in policies and actions. Administrators may wish that the school's computers will only be used for educational purposes; however, an expectation such as this is unrealistic.

3. Create separate student and staff policies. Much of the law pertaining to students and staff differs greatly.

4. Involve staff in policy creation. This process will help school employees comprehend the policies and will also likely foster staff buy-in.

5. Be clear and specific. Policies should include rationales, legal support and commentary with examples.

6. Ensure your policies conform to state and federal law.

7. Include consequences for violations in policies and implement the consequences.

8. Provide an avenue for appeal and attend to employees' due process rights.

9. Implement policies in an effective and non-discriminatory manner.

10. Amend policies as the law evolves. Much of the law related to technology is in flux. What is legal today may not be tomorrow.

Decker has offered professional development for educators across the nation pertaining to social networking and the legal issues surrounding student and teacher speech. Her research and publications focus on legal and policy issues related to special education, charter schools and technology. She teaches courses on education law for the UC College of Education, Criminal Justice, and Human Services (CECH).


Contacts and sources:
Dawn Fuller
University of Cincinnati

Top '5 US Terror Hot Spots' Are Urban Counties, But Rural Areas Not Exempt

N.Y., L.A., Miami, San Francisco, D.C. Top List; Maricopa, Ariz. rising.

Nearly a third of all terrorist attacks from 1970 to 2008 occurred in just five metropolitan U.S. counties, but events continue to occur in rural areas, spurred on by domestic actors, according to a report published today by researchers in the National Consortium for the Study of Terrorism and Responses to Terrorism (START), a Department of Homeland Security Science and Technology Center of Excellence based at the University of Maryland.

The research was conducted at Maryland and the University of Massachusetts-Boston.

The largest number of events clustered around major cities:
  • Manhattan, New York (343 attacks)
  • Los Angeles County, Calif. (156 attacks)
  • Miami-Dade County, Fla. (103 attacks)
  • San Francisco County, Calif. (99 attacks)
  • Washington, D.C. (79 attacks).
While large, urban counties such as Manhattan and Los Angeles have remained hot spots of terrorist activities across decades, the START researchers discovered that smaller, more rural counties such as Maricopa County, Ariz. - which includes Phoenix - have emerged as hot spots in recent years as domestic terrorism there has increased.

The START researchers found that 65 of the nation's 3,143 counties were "hot spots" of terrorism.

They defined a "hot spot" as a county experiencing a greater-than-average number of terrorist attacks, that is, more than six attacks across the entire time period (1970 to 2008).
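The counting rule itself is simple; the sketch below applies it to a short, invented incident list rather than the study's 1970-2008 event database.

```python
# Toy illustration of the hot-spot rule described above: count attacks per
# county and flag counties exceeding the six-attack threshold. The incident
# list is hypothetical.
from collections import Counter

incidents = (
    ["New York (Manhattan), NY"] * 9 +
    ["Los Angeles County, CA"] * 7 +
    ["Lubbock County, TX"] * 3 +
    ["Douglas County, NE"] * 1
)

THRESHOLD = 6  # more than six attacks over the study period
attack_counts = Counter(incidents)
hot_spots = {county: n for county, n in attack_counts.items() if n > THRESHOLD}

print(hot_spots)   # counties flagged as hot spots under this rule
```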

"Mainly, terror attacks have been a problem in the bigger cities, but rural areas are not exempt," said Gary LaFree, director of START and lead author of the new report.

"The main attacks driving Maricopa into recent hot spot status are the actions of radical environmental groups, especially the Coalition to Save the Preserves. So, despite the clustering of attacks in certain regions, it is also clear that hot spots are dispersed throughout the country and include places as geographically diverse as counties in Arizona, Massachusetts, Nebraska and Texas," LaFree added.

Concentration of Fatal Terrorist Attacks in U.S., 1970 - 2008 





TYPES OF ATTACKS: LaFree, a professor of criminology at the University of Maryland, and his co-author Bianca Bersani, assistant professor of sociology at the University of Massachusetts-Boston, also assessed whether certain counties were more prone to a particular type of terrorist attack.

They found that while a few counties experienced multiple types of terrorist attacks, in most, attacks were motivated by a single ideology. For example, Lubbock County, Texas, only experienced extreme right-wing terrorism, while the Bronx, New York, only experienced extreme left-wing terrorism.

TIME TRENDS: LaFree and Bersani also found time trends in terrorist attacks.

"The 1970s were dominated by extreme left-wing terrorist attacks," Bersani said. "Far left-wing terrorism in the U.S. is almost entirely limited to the 1970s with few events in the 1980s and virtually no events after that."

Ethno-national/separatist terrorism was concentrated in the 1970s and 1980s, religiously motivated attacks occurred predominantly in the 1980s, extreme right-wing terrorism was concentrated in the 1990s and single issue attacks were dispersed across the 1980s, 1990s and 2000s, according to the new report.

To define the ideological motivations, LaFree and Bersani used START's Profiles of Perpetrators of Terrorism - United States (Miller, Smarick and Simone, 2011), which briefly describes ideological motivations as:

Extreme Right-Wing: groups that believe that one's personal and/or national "way of life" is under attack and is either already lost or that the threat is imminent (for some the threat is from a specific ethnic, racial, or religious group), and believe in the need to be prepared for an attack either by participating in paramilitary preparations and training or survivalism. Groups may also be fiercely nationalistic (as opposed to universal and international in orientation), anti-global, suspicious of centralized federal authority, reverent of individual liberty, and believe in conspiracy theories that involve grave threat to national sovereignty and/or personal liberty.

Extreme Left-Wing: groups that want to bring about change through violent revolution rather than through established political processes. This category also includes secular left-wing groups that rely heavily on terrorism to overthrow the capitalist system and either establish "a dictatorship of the proletariat" (Marxist-Leninists) or, much more rarely, a decentralized, non-hierarchical political system (anarchists).

Religious: groups that seek to smite the purported enemies of God and other evildoers, impose strict religious tenets or laws on society (fundamentalists), forcibly insert religion into the political sphere (e.g., those who seek to politicize religion, such as Christian Reconstructionists and Islamists), and/or bring about Armageddon (apocalyptic millenarian cults). For example, Jewish Direct Action, Mormon extremist, Jamaat-al-Fuqra, and Covenant, Sword and the Arm of the Lord (CSA) are included in this category.

Ethno-Nationalist/Separatist: regionally concentrated groups with a history of organized political autonomy with their own state, traditional ruler, or regional government, who are committed to gaining or regaining political independence through any means and who have supported political movements for autonomy at some time since 1945.

Single Issue: groups or individuals that obsessively focus on very specific or narrowly-defined causes (e.g., anti-abortion, anti-Catholic, anti-nuclear, anti-Castro). This category includes groups from all sides of the political spectrum.

The complete report, Hot Spots of Terrorism and Other Crimes in the United States, 1970 to 2008, is available online: http://ter.ps/9j.


Contacts and sources:
Neil Tickner
University of Maryland

Short-Term Memory Is Based On Synchronized Brain Oscillations

Scientists have now discovered how different brain regions cooperate during short-term memory

Holding information within one’s memory for a short while is a seemingly simple and everyday task. We use our short-term memory when remembering a new telephone number if there is nothing at hand to write it down, or when trying to find inside a store the beautiful dress we were just admiring in the shop window. Yet, despite the apparent simplicity of these actions, short-term memory is a complex cognitive act that entails the participation of multiple brain regions.

However, whether and how different brain regions cooperate during memory has remained elusive. A group of researchers from the Max Planck Institute for Biological Cybernetics in Tübingen, Germany have now come closer to answering this question. They discovered that oscillations between different brain regions are crucial in visually remembering things over a short period of time.

A monkey has to carry out a classic memory task: the animal is shown two consecutive images and then has to indicate whether the second image was the same as the first one.

 
© Stefanie Liebe, MPI for Biological Cybernetics

It has long been known that brain regions in the frontal part of the brain are involved in short-term memory, while processing of visual information occurs primarily at the back of the brain. However, to successfully remember visual information over a short period of time, these distant regions need to coordinate and integrate information.

To better understand how this occurs, scientists from the Max Planck Institute for Biological Cybernetics, in the department of Nikos Logothetis, recorded electrical activity in monkeys both in a visual area and in the frontal part of the brain. The scientists showed the animals identical or different images separated by short intervals while recording their brain activity; the animals then had to indicate whether the second image was the same as the first one.

The scientists observed that, in each of the two brain regions, brain activity showed strong oscillations in a certain set of frequencies called the theta-band. Importantly, these oscillations did not occur independently of each other, but synchronized their activity temporarily: “It is as if you have two revolving doors in each of the two areas. During working memory, they get in sync, thereby allowing information to pass through them much more efficiently than if they were out of sync,” explains Stefanie Liebe, the first author of the study, conducted in the team of Gregor Rainer in cooperation with Gregor Hörzer from the Technical University Graz. The more synchronized the activity was, the better the animals could remember the initial image. Thus, the authors were able to establish a direct relationship between what they observed in the brain and the performance of the animal.
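To make the idea of synchrony concrete, here is a minimal Python sketch (not the authors' analysis pipeline; the signals and parameter values are made up) that estimates theta-band phase synchronization between two simulated signals using the phase-locking value, which approaches 1 when the phase difference between regions stays constant and 0 when it is random.

```python
# Illustrative sketch only: estimate theta-band phase synchronization between two
# simulated "brain region" signals using the phase-locking value (PLV).
# All parameter values here are hypothetical.
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

fs = 1000.0                      # sampling rate in Hz (assumed)
t = np.arange(0, 2.0, 1.0 / fs)  # two seconds of data

# Two signals sharing a 5 Hz (theta) component with a fixed phase lag, plus noise.
rng = np.random.default_rng(0)
region_a = np.sin(2 * np.pi * 5 * t) + 0.5 * rng.standard_normal(t.size)
region_b = np.sin(2 * np.pi * 5 * t + 0.8) + 0.5 * rng.standard_normal(t.size)

def theta_plv(x, y, fs, band=(4.0, 8.0)):
    """Band-pass both signals in the theta band and return their phase-locking value."""
    nyq = fs / 2.0
    b, a = butter(4, [band[0] / nyq, band[1] / nyq], btype="band")
    phase_x = np.angle(hilbert(filtfilt(b, a, x)))
    phase_y = np.angle(hilbert(filtfilt(b, a, y)))
    # PLV = 1 when the phase difference is constant, ~0 when it is random.
    return np.abs(np.mean(np.exp(1j * (phase_x - phase_y))))

print(f"theta-band PLV: {theta_plv(region_a, region_b, fs):.2f}")
```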

In each of the two brain regions (IPF and V4), brain activity shows strong oscillations in a certain set of frequencies called the theta-band.


© Stefanie Liebe, MPI for Biological Cybernetics

The study highlights how synchronized brain oscillations are important for the communication and interaction of different brain regions. Almost all multi-faceted cognitive acts, such as visual recognition, arise from a complex interplay of specialized and distributed neural networks. How such distributed sites establish relationships, and how they represent and communicate information about external and internal events to produce a coherent percept or memory, is still poorly understood.


Contacts and sources:

ORNL Microscopy Reveals 'Atomic Antenna' Behavior In Graphene

Atomic-level defects in graphene could be a path forward to smaller and faster electronic devices, according to a study led by researchers at the Department of Energy's Oak Ridge National Laboratory (ORNL).

With unique properties and potential applications in areas from electronics to biodevices, graphene, which consists of a single sheet of carbon atoms, has been hailed as a rising star in the materials world. Now, an ORNL study published in Nature Nanotechnology suggests that point defects, composed of silicon atoms that replace individual carbon atoms in graphene, could aid attempts to transfer data on an atomic scale by coupling light with electrons.

Electron microscopy at Oak Ridge National Laboratory has demonstrated that silicon atoms (seen in white) can act like atomic-sized antennae in graphene.
Credit: ORNL

"In this proof of concept experiment, we have shown that a tiny wire made up of a pair of single silicon atoms in graphene can be used to convert light into an electronic signal, transmit the signal and then convert the signal back into light," said coauthor Juan-Carlos Idrobo, who holds a joint appointment at ORNL and Vanderbilt University.
An ORNL-led team discovered this novel behavior by using aberration-corrected scanning transmission electron microscopy to image the plasmon response, or optical-like signals, of the point defects. The team's analysis found that the silicon atoms act like atomic-sized antennae, enhancing the local surface plasmon response of graphene, and creating a prototypical plasmonic device.

"The idea with plasmonic devices is that they can convert optical signals into electronic signals," Idrobo said. "So you could make really tiny wires, put light in one side of the wire, and that signal will be transformed into collective electron excitations known as plasmons. The plasmons will transmit the signal through the wire, come out the other side and be converted back to light."

Although other plasmonic devices have been demonstrated, previous research in surface plasmons has been focused primarily on metals, which has limited the scale at which the signal transfer occurs.

"When researchers use metal for plasmonic devices, they can usually only get down to 5 - 7 nanometers," said coauthor Wu Zhou. "But when you want to make things smaller, you always want to know the limit. Nobody thought we could get down to a single atom level."

In-depth analysis at the level of a single atom was made possible through the team's access to an electron microscope that is part of ORNL's Shared Research Equipment (ShaRE) User Facility.

"It is the one of only a few electron microscopes in the world that we can use to look at and study materials and obtain crystallography, chemistry, bonding, optical and plasmon properties at the atomic scale with single atom sensitivity and at low voltages," Idrobo said. "This is an ideal microscope for people who want to research carbon-based materials, such as graphene."

In addition to its microscopic observations, the ORNL team employed theoretical first-principles calculations to confirm the stability of the observed point defects. The full paper, titled "Atomically Localized Plasmon Enhancement in Monolayer Graphene," is available online here: http://www.nature.com/nnano/journal/vaop/ncurrent/full/nnano.2011.252.html.

Coauthors are ORNL's Jagjit Nanda; and Jaekwang Lee, Sokrates Pantelides and Stephen Pennycook, who are jointly affiliated with ORNL and Vanderbilt. The research was supported by DOE's Office of Science, which also sponsors ORNL's ShaRE User Facility; by the National Science Foundation; and by the McMinn Endowment at Vanderbilt University. The study used resources of the National Energy Research Scientific Computing Center, which is supported by DOE's Office of Science.

Contacts and sources:
Morgan McCorkle
DOE/Oak Ridge National Laboratory

Perfect Carbon Nanotubes Shine Brightest

Rice University researchers show how length, imperfections affect carbon nanotube fluorescence

Carbon nanotubes fluorescing.
Credit: Rice University  

A painstaking study by Rice University has brought a wealth of new information about single-walled carbon nanotubes through analysis of their fluorescence.

The current issue of the American Chemical Society journal ACS Nano features an article about work by the Rice lab of chemist Bruce Weisman to understand how the lengths and imperfections of individual nanotubes affect their fluorescence – in this case, the light they emit at near-infrared wavelengths.

A video produced by the Rice University lab of chemist Bruce Weisman shows a selection of nanotubes fluorescing as they twist and turn in a solution. New work at Rice revealed how the fluorescent properties of specific types of nanotubes are influenced by the length of the tube and any imperfections. Weisman said those properties may be important to medical imaging and industrial applications. 
Credit: Jason Streit/Rice University

The researchers found that the brightest nanotubes of the same length show consistent fluorescence intensity, and the longer the tube, the brighter. "There's a rather well-defined limit to how bright they appear," Weisman said. "And that maximum brightness is proportional to length, which suggests those tubes are not affected by imperfections."

But they found that brightness among nanotubes of the same length varied widely, likely due to damaged or defective structures or chemical reactions that allowed atoms to latch onto the surface.
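As a rough illustration of the relationship described above (purely hypothetical numbers, not data from the study), the short sketch below scores each tube's measured brightness against a ceiling that grows linearly with its length; tubes far below that ceiling would be the ones dimmed by defects.

```python
# Illustrative sketch only: if a nanotube's maximum ("defect-free") brightness scales
# linearly with its length, each measured tube can be scored against that ceiling.
# The slope and the measurements below are made up for illustration.
import numpy as np

slope = 2.0  # assumed brightness units per micrometer for a defect-free tube
lengths_um = np.array([0.5, 1.0, 1.5, 2.0])   # hypothetical tube lengths
brightness = np.array([1.0, 1.9, 1.4, 2.2])   # hypothetical measured peak brightness

ceiling = slope * lengths_um          # brightness expected from an unblemished tube
efficiency = brightness / ceiling     # 1.0 = "full potential", <1.0 = dimmed by defects

for length, eff in zip(lengths_um, efficiency):
    print(f"{length:.1f} um tube emits at {eff:.0%} of its expected maximum")
```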

The study, first reported late last year by Weisman, lead author and former graduate student Tonya Leeuw Cherukuri, and postdoctoral fellow Dmitri Tsyboulski, detailed the method by which Cherukuri analyzed the characteristics of 400 individual nanotubes of a specific physical structure known as (10,2).

"It's a tribute to Tonya's dedication and talent that she was able to make this large number of accurate measurements," Weisman said of his former student.

The researchers applied spectral filtering to selectively view the specific type of nanotube. "We used spectroscopy to take this very polydisperse sample containing many different structures and study just one of them, the (10,2) nanotubes," Weisman said. "But even within that one type, there's a wide range of lengths."

Weisman said the study involved singling out one or two isolated nanotubes at a time in a dilute sample and finding their lengths by analyzing videos of the moving tubes captured with a special fluorescence microscope. The movies also allowed Cherukuri to catalog their maximum brightness.

"I think of these tubes as fluorescence underachievers," he said. "There are a few bright ones that fluoresce to their full potential, but most of them are just slackers, and they're half as bright, or 20 percent as bright, as they should be.

"What we want to do is change that distribution and leave no tube behind, try to get them all to the top. We want to know how their fluorescence is affected by growth methods and processing, to see if we're inflicting damage that's causing the dimming.

"These are insights you really can't get from measurements on bulk samples," he said.

Graduate student Jason Streit is extending Cherukuri's research. "He's worked up a way to automate the experiments so we can image and analyze dozens of nanotubes at once, rather than one or two. That will let us do in a couple of weeks what had taken months with the original method," Weisman said.

The research was supported by the Welch Foundation, the National Science Foundation and Applied NanoFluorescence.

Read the ACS Nano article "How Nanotubes Get Their Glow" here: http://pubs.acs.org/doi/full/10.1021/nn2050328

Read the abstract here: http://pubs.acs.org/doi/abs/10.1021/nn2043516

See a video of fluorescent carbon nanotubes here: http://youtu.be/4ceWLcOMxz0



Contacts and sources:
David Ruth
Rice University

Online News Portals Get Credibility Boost From Trusted Sources

People who read news on the web tend to trust the gate even if there is no gatekeeper, according to Penn State researchers.

When readers access a story from a credible news source they trust through an online portal, they also tend to trust the portal, said S. Shyam Sundar, Distinguished Professor of Communications and co-director of the Media Effects Research Laboratory. Most of these portals use computers, not people, to automatically sort and post stories.

Sundar said this transfer of credibility provides online news portals -- such as Yahoo News and Google News -- with most of the benefits, but few of the costs, associated with online publishing.

"A news portal that uses stories from a credible source gets a boost in credibility and might even make money through advertising," said Sundar. "However, if there is a lawsuit for spreading false information, for example, it's unlikely that the portal will be named in the suit."

Sundar said the flow of credibility did not go both ways. He said that reading a low-credibility story on a high-credibility portal did not make the original source more trustworthy.

The researchers, who reported their findings in Journalism and Mass Communication Quarterly, asked a group of 231 students to read online news stories. After reading the stories, the students rated the credibility of the original source and the portal.

The researchers placed banners from Google News, which served as a high credibility portal, and the Drudge Report, which served as a low-credibility portal, on the pages. They also added banners to identify the New York Times -- the high-credibility source -- and the National Enquirer -- the low-credibility source.

The students were significantly more likely to consider a portal credible if the source of the story was trustworthy. The credibility of the portal suffered if the source lacked trustworthiness.

Sundar said that attention to sources depended on the involvement of the reader. When readers were particularly interested in a story, they tended to more thoroughly evaluate all the sources involved in its production and distribution. Readers who were not interested in the story based their judgments on the credibility of the portal, which is the most immediately visible source.

Sundar, who worked with Hyunjin Kang and Keunmin Bae, both doctoral students in communications, and Shaoke Zhang, doctoral student in information sciences and technology, said that the way credibility is transferred from site to site shows the complexity of the relationship between online news readers and sources.

Evaluating credibility is difficult on the web because there are often chains of news sources for a story, Sundar said. For example, a person may find a story on an online news portal and forward it to a friend by email, and that friend may then post it on a social network. The identity of the original source may or may not be carried along this chain to the final reader.

"With traditional media it's fairly clear who the source is," Sundar said. "But in online media, it gets very murky because there are so many sources."

The Korea Science and Engineering Foundation of South Korea supported this work.

Contacts and sources:
Matt Swayne
Penn State

Protein Study Gives Fresh Impetus In Fight Against Superbugs

Methicillin-resistant Staphylococcus aureus (MRSA) is a bacterium responsible for several difficult-to-treat infections in humans. It is also called multidrug-resistant Staphylococcus aureus and oxacillin-resistant Staphylococcus aureus (ORSA). MRSA is any strain of Staphylococcus aureus that has developed resistance to beta-lactam antibiotics, which include the penicillins (methicillin, dicloxacillin, nafcillin, oxacillin, etc.) and the cephalosporins. The development of such resistance does not cause the organism to be more intrinsically virulent than strains of Staphylococcus aureus that have no antibiotic resistance, but resistance does make MRSA infection more difficult to treat with standard types of antibiotics and thus more dangerous.

Scientists have shed new light on the way superbugs such as MRSA are able to become resistant to treatment with antibiotics.

Researchers have mapped the complex molecular structure of an enzyme found in many bacteria. These molecules – known as restriction enzymes – control the speed at which bacteria can acquire resistance to drugs and eventually become superbugs.

MRSA 
Credit: Wikipedia

The study, carried out by an international team including scientists from the University of Edinburgh, focused on E. coli, but the results would apply to many other infectious bacteria.

After prolonged treatment with antibiotics, bacteria may evolve to become resistant to many drugs, as is the case with superbugs such as MRSA.

Bacteria become resistant by absorbing DNA – usually from other bugs or viruses – which contains genetic information enabling the bacteria to block the action of drugs. Restriction enzymes can slow or halt this absorption process. Enzymes that work in this way are believed to have evolved as a defence mechanism for bacteria.

The researchers also studied the enzyme in action by reacting it with DNA from another organism. They were able to model the mechanism by which the enzyme disables the foreign DNA, while safeguarding the bacteria's own genetic material. Restriction enzymes' ability to sever genetic material is widely applied by scientists to cut and paste strands of DNA in genetic engineering.
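As a simple illustration of that cut-and-paste idea (not the enzyme studied here, whose mechanism is far more elaborate), the sketch below scans a made-up DNA string for the recognition sequence of EcoRI, a textbook restriction enzyme that cuts immediately after the G in GAATTC, and returns the resulting fragments.

```python
# Minimal sketch of the "cut and paste" idea: scan a DNA string for a restriction
# enzyme's recognition sequence and split it at the cut position. EcoRI (GAATTC,
# cutting after the G) is a textbook example; the sequence below is made up.
def digest(dna, site="GAATTC", cut_offset=1):
    """Return the fragments produced by cutting dna at every occurrence of site."""
    fragments, start = [], 0
    pos = dna.find(site)
    while pos != -1:
        fragments.append(dna[start:pos + cut_offset])
        start = pos + cut_offset
        pos = dna.find(site, pos + 1)
    fragments.append(dna[start:])
    return fragments

example = "ATGCGAATTCTTAGCCGAATTCAA"   # hypothetical sequence with two EcoRI sites
print(digest(example))                  # -> ['ATGCG', 'AATTCTTAGCCG', 'AATTCAA']
```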

The study was carried out in collaboration with the Universities of Leeds and Portsmouth, with partners in Poland and France. It was supported by the Biotechnology and Biological Sciences Research Council and the Wellcome Trust, and published in the journal Genes & Development.

Dr David Dryden, of the University of Edinburgh's School of Chemistry, who led the study, said: "We have known for some time that these enzymes are very effective in protecting bacteria from attack by other species. Now we have painted a picture of how this occurs, which should prove to be a valuable insight in tackling the spread of antibiotic-resistant superbugs."

Contacts and sources:
Catriona Kelly
University of Edinburgh

Ancient DNA Holds Clues To Climate Change Adaptation

Thirty-thousand-year-old bison bones discovered in permafrost at a Canadian goldmine are helping scientists unravel the mystery about how animals adapt to rapid environmental change.

The bones play a key role in a world-first study, led by University of Adelaide researchers, which analyses special genetic modifications that turn genes on and off, without altering the DNA sequence itself. These 'epigenetic' changes can occur rapidly between generations – without requiring the time for standard evolutionary processes.

These are thirty-thousand-year-old permafrost bison bones from the Yukon region of Canada.
 
Credit: The University of Adelaide

Such epigenetic modifications could explain how animal species are able to respond to rapid climate change.

In a collaboration between the University of Adelaide's Australian Centre for Ancient DNA (ACAD) and Sydney's Victor Chang Cardiac Research Institute, researchers have shown that it is possible to accurately measure epigenetic modifications in extinct animals and populations.

The team of researchers measured epigenetic modifications in 30,000-year-old permafrost bones from the Yukon region in Canada, and compared them to those in modern-day cattle, and a 30-year-old mummified cow from New Zealand.

Project leader Professor Alan Cooper, Director of ACAD, says: "Epigenetics is challenging some of our standard views of evolutionary adaptation, and the way we think about how animals use and inherit their DNA. In theory, such systems would be invaluable for a wide range of rapid evolutionary adaptation but it has not been possible to measure how or whether they are used in nature, or over evolutionary timescales."

Epigenetics specialist and co-investigator Dr Catherine Suter, from the Victor Chang Institute, has been studying the role of epigenetics in adaptation in laboratory animals. She jumped at the chance to test epigenetic methods in ancient DNA, which had never previously been attempted.

"This is the first step towards testing the idea that epigenetics has driven evolution in natural populations," Dr Suter says.

Professor Cooper says: "The climate record shows that very rapid change has been a persistent feature of the recent past, and organisms would need to adapt to these changes in their environment equally quickly. Standard mutation and selection processes are likely to be too slow in many of these situations."

"Standard genetic tests do not detect epigenetic changes, because the actual DNA sequence is the same," says lead author, ACAD senior researcher Bastien Llamas, an Australian Research Council (ARC) Fellow. "However, we were able to use special methods to show that epigenetic sites in this extinct species were comparable to modern cattle.

"There is growing interest in the potential evolutionary role of epigenetic changes, but to truly demonstrate this will require studies of past populations as they experience major environmental changes," he says.

This work has been published in the online peer-reviewed journal PLoS ONE.


Contacts and sources:
Professor Alan Cooper, University of Adelaide
University of Adelaide

Many Bodies Make 1 Coherent Burst Of Light

In a flash, the world changed for Tim Noe – and for physicists who study what they call many-body problems.

The Rice University graduate student was the first to see, in the summer of 2010, proof of a theory that solid-state materials are capable of producing an effect known as superfluorescence.

That can only happen when "many bodies" – in this case, electron-hole pairs created in a semiconductor – decide to cooperate.

Pumping laser pulses into a stack of quantum wells created an effect physicists had long sought but not seen until now: superfluorescence in a solid-state material. The Rice University lab of physicist Junichiro Kono reported the results in Nature Physics. 
Credit: Tim Noe/Rice University

Noe, a student of Rice physicist Junichiro Kono, and their research team used high-intensity laser pulses, a strong magnetic field and very cold temperatures to create the conditions for superfluorescence in a stack of 15 undoped quantum wells. The wells were made of indium, gallium and arsenic and separated by barriers of gallium arsenide (GaAs). The researchers' results were reported this week in the journal Nature Physics.

Noe spent weeks at the only facility with the right combination of gear to carry out such an experiment, the National High Magnetic Field Laboratory at Florida State University. There, he placed the device in an ultracold (as low as 5 kelvins) chamber, pumped up the magnetic field (which effectively makes the "many body" particles – the electron-hole pairs – more sensitive and controllable) and fired a strong laser pulse at the array.

"When you shine light on a semiconductor with a photon energy larger than the band gap, you can create electrons in the conduction band and holes in the valence band. They become conducting," said Kono, a Rice professor of electrical and computer engineering and in physics and astronomy. "The electrons and holes recombine – which means they disappear – and emit light. One electron-hole pair disappears and one photon comes out. This process is called photoluminescence."

The Rice experiment acted just that way, but pumping strong laser light into the layers created a cascade among the quantum wells. "What Tim discovered is that in these extreme conditions, with an intense pulse of light on the order of 100 femtoseconds (quadrillionths of a second), you create many, many electron-hole pairs. Then you wait for hundreds of picoseconds (mere trillionths of a second) and a very strong pulse comes out," Kono said.

In the quantum world, that's a long gap. Noe attributes that "interminable" wait of trillionths of a second to the process going on inside the quantum wells. There, the 8-nanometer-thick layers soaked up energy from the laser as it bored in and created what the researchers called a magneto-plasma, a state consisting of a large number of electron-hole pairs. These initially incoherent pairs suddenly line up with each other.

"We're pumping (light) to where absorption's only occurring in the GaAs layers," Noe said. "Then these electrons and holes fall into the well, and the light hits another GaAs layer and another well, and so on. The stack just increases the amount of light that's absorbed." The electrons and holes undergo many scattering processes that leave them in the wells with no coherence, he said. But as a result of the exchange of photons from spontaneous emission, a large, macroscopic coherence develops.

Like a capacitor in an electrical circuit, the wells become saturated and, as the researchers wrote, "decay abruptly" and release the stored charge as a giant pulse of coherent radiation.

"What's unique about this is the delay time between when we create the population of electron-hole pairs and when the burst happens. Macroscopic coherence builds up spontaneously during this delay," Noe said.

Kono said the basic phenomenon of superfluorescence has been seen for years in molecular and atomic gases but wasn't sought in a solid-state material until recently. The researchers now feel such superfluorescence can be fine-tuned. "Eventually we want to observe the same phenomenon at room temperature, and at much lower magnetic fields, maybe even without a magnetic field," he said.

Even better, Kono said, it may be possible to create superfluorescent pulses with any desired wavelength in solid-state materials, powered by electrical rather than light energy.

The researchers said they expect the paper to draw serious interest from their peers in a variety of disciplines, including condensed matter physics; quantum optics; atomic, molecular and optical physics; semiconductor optoelectronics; quantum information science; and materials science and engineering.

There's much work to be done, Kono said. "There are several puzzles that we don't understand," he said. "One thing is a spectral shift over time: The wavelength of the burst is actually changing as a function of time when it comes out. It's very weird, and that has never been seen."

Noe also observed superfluorescent emission with several distinct peaks in the time domain, another mystery to be investigated.


Contacts and sources:
David Ruth
Rice University

The paper's co-authors include Rice postdoctoral researcher Ji-Hee Kim; former graduate student Jinho Lee and Professor David Reitze of the University of Florida, Gainesville; researchers Yongrui Wang and Aleksander Wojcik and Professor Alexey Belyanin of Texas A&M University; and Stephen McGill, an assistant scholar and scientist at the National High Magnetic Field Laboratory at Florida State University, Tallahassee.

Support for the research came from the National Science Foundation, with support for work at the National High Magnetic Field Laboratory from the state of Florida.

Read the abstract at http://www.nature.com/nphys/journal/vaop/ncurrent/abs/nphys2207.html
 

Surprise Finding Redraws 'Map' Of Blood Cell Production

A study of the cells that respond to crises in the blood system has yielded a few surprises, redrawing the ‘map’ of how blood cells are made in the body.

The finding, by researchers from the Walter and Eliza Hall Institute, could have wide-ranging implications for understanding blood diseases such as myeloproliferative disorders (which cause excess production of blood cells), and could be used to develop new ways of controlling how blood and clotting cells are produced.

Drs Maria Kauppi (left) and Ashley Ng from the institute's Cancer and Haematology division study blood 'progenitor' cells, which expand and mature in times of stress to replace lost or damaged blood cells.

The research team, led by Drs Ashley Ng and Maria Kauppi from the institute’s Cancer and Haematology division, investigated subsets of blood ‘progenitor’ cells and the signals that cause them to expand and develop into mature blood cells. Their results are published today in the journal Proceedings of the National Academy of Sciences of the United States of America.

Dr Ng describes blood progenitor cells as the ‘heavy lifters’ of the blood system. “They are the targets for blood cell hormones, called cytokines, which Professor Don Metcalf and colleagues have shown to be critical for regulating blood cell production,” Dr Ng said. “In times of stress, such as bleeding, during infection or after chemotherapy, it is really the progenitor cells that respond by replacing lost or damaged blood cells.”

Dr Kauppi said the research team was particularly interested in myeloid progenitor cells, which produce megakaryocytes, a type of bone marrow cell that gives rise to blood-clotting platelets. “We used a suite of cell surface markers specific to these progenitor cells that allowed us to isolate and characterise the cells,” she said.

The researchers were surprised to find that progenitor cells believed only to be able to produce megakaryocytes were also able to develop into red blood cells.

“We were able to clearly demonstrate that these mouse megakaryocyte progenitor cells have the potential to develop into either megakaryocytes or red blood cells in response to cytokines such as thrombopoietin and erythropoietin, which was quite unexpected,” Dr Ng said. “In addition, we discovered that other progenitor populations, thought to only make neutrophils and monocytes [other immune cells], were also capable of making red blood cells and platelets very well. In effect, we will have to redraw the map of how red cells and platelets are made in the bone marrow.”

Dr Kauppi said the researchers found they could regulate whether the progenitor cell became a megakaryocyte or a red blood cell by using different combinations of cytokines. “Now that we have properly identified the major cells and determined how they respond to cytokine signals involved in red blood cell and platelet production, the stage is set for understanding how these progenitors are affected in health and disease,” she said. “We can also better understand, for instance, how genetic changes may lead to the development of certain blood diseases.”

Dr Ng said the findings would also help researchers discover new ways in which the progenitor cells can be controlled.

“This research is the first step in the future development of treatments for patients with blood diseases,” Dr Ng said. “This may occur either by limiting blood cell production when too many cells are being made, as with myeloproliferative disorders, or by stimulating blood production when the blood system is compromised, such as during cancer treatment or infection.”

The research was supported by the National Health and Medical Research Council of Australia, the Leukaemia Foundation, Cure Cancer Australia Foundation, Cancer Council Victoria, Haematology Society of Australia & New Zealand, Amgen, Australian Cancer Research Foundation and the Victorian Government.


Contacts and sources:
Liz Williams
Walter and Eliza Hall Institute