Saturday, December 31, 2011

Changes In The Path Of Brain Development Make Human Brains Unique

How the human brain and human cognitive abilities evolved in less than six million years has long puzzled scientists. A new study conducted by scientists in China and Germany, and published December 6 in the online, open-access journal PLoS Biology, now provides a possible explanation by showing that activity levels of genes in the human brain during development changed substantially compared to chimpanzees and macaques. What’s more, these changes might be caused by a handful of key regulatory molecules called microRNAs.

Credit: CAS

The authors studied gene activity in human, chimpanzee and macaque brains across their lifetimes. Starting from newborns, they investigated two brain regions, the cerebellum, which is responsible for motor activity, and the prefrontal cortex, which has roles in more complex behavior such as social interactions or abstract thinking. They first studied the simple gene activity differences between species that are seen at all ages. Although many genes show such simple differences, there was no disparity in numbers of these differences between the human and the chimpanzee evolutionary lineages. 

Moreover, most of these differences were observed in both of the brain regions studied, and the genes involved are not thought to be specifically involved in brain function. In the opinion of Mehmet Somel (CAS-MPG Partner Institute for Computational Biology (PICB), Shanghai Institutes for Biological Sciences), the lead author of the study, these differences represent evolutionary “white noise” and have little importance for human brain evolution.

The authors then looked for changes in gene activity during development, comparing the activity of genes in newborns and adults. In general, brain developmental patterns tend to be quite similar in humans, other primate species, and even mice. Nevertheless, the authors found that for hundreds of genes, humans display unique developmental patterns, with profiles that were different in shape and/or timing from those found in chimpanzees and macaques. 

Such human-specific developmental gene activity patterns were particularly widespread in the prefrontal cortex, where genes showing human-specific changes outnumbered genes showing chimpanzee-specific changes four to one. Developmental patterns in the cerebellum, by contrast, were much less human-specific. Furthermore, many genes displaying these human-specific patterns in the prefrontal cortex were known to have specific neural functions, implying roles in human cognitive development.

Looking for possible causes of this widespread developmental remodeling in the human prefrontal cortex, the authors stumbled upon an unexpected signal. Developmental patterns of genes that encode microRNAs (tiny but powerful regulators that target many other genes and processes) showed an even greater excess of human-specific changes in the prefrontal cortex than ordinary genes did. Several of these changes in microRNA activity could be directly linked to human-specific changes in the activity of their target genes. Since each microRNA may regulate the activity of hundreds of other genes, this finding provides a possible explanation of how hundreds of genes changed their activity patterns, in a coordinated way, during human brain development.

This result further implies that the evolution of human cognitive abilities might be traced back to a small number of mutations in key developmental regulators. Philipp Khaitovich, the senior author of the study, suggests that "identifying the exact genetic changes that made us think and act like humans might be easier than we previously imagined". That said, it is likely to require much more work, with a focus on the dynamics of brain development and wider use of transgenic mouse, and even primate, models.

Further to this, the authors point out that identification of the key human-specific DNA mutations could help us to determine how close the Neanderthals’ cognitive abilities were to ours. “If Neanderthals’ brain development was similar to that of chimpanzees and macaques, it would be no wonder that they became extinct when confronted by Modern Humans,” says Mehmet Somel.


Contacts and sources:
Philipp Khaitovich
CAS-MPG Partner Institute for Computational Biology, Shanghai Institutes for Biological Sciences,
Chinese Academy of Sciences, Shanghai, China
PLoS Biology



New Report Highlights Need For Action On Health In The Aftermath Of War

Issue of noncommunicable diseases in post-conflict countries must be addressed

Countries recovering from war are at risk of being left to their own devices in tackling noncommunicable diseases (NCDs), leaving an "open door" for exploitation by alcohol, tobacco and food companies, health experts warn.

Writing in the Bulletin of the World Health Organization, Bayard Roberts and Martin McKee, of the London School of Hygiene & Tropical Medicine, and Preeti Patel, of King's College London, argue that the post-conflict environment risks an increase in mental health problems and in other NCDs such as high blood pressure, diabetes and cancer.

After exposure to violent and traumatic events, people may be prone to developing harmful health behaviours, such as excessive drinking and smoking, which exacerbate the problem of NCDs in the long term. Where the authorities lack the will to rebuild the health system, the door is left open for commercial ventures to influence health policy to their advantage.

The authors write: "This toxic combination of stress, harmful health behaviours and aggressive marketing by multinational companies in transitional settings requires an effective policy response but often the state has limited capacity to do this."

Afghanistan has no national policy or strategy on NCDs and, apart from the European Commission, none of its partners has made it a priority to introduce and support one. High blood pressure is largely untreated in Iraq; three times as many people die prematurely from NCDs in Libya as from infectious diseases; and similar patterns can be found in other countries recovering from conflict.

"This policy vacuum provides an open door for multinational companies to influence policies in ways that undermine efforts to control tobacco and alcohol use or improve unhealthy diets in transitional countries," the experts say.

Little attention is paid in reconstruction and humanitarian efforts to helping countries emerging from conflict deal with their present or future burden of NCDs – with the topic virtually ignored during the United Nations high-level meeting on NCDs in September 2011. The authors argue that this gap must be filled, pointing out that the post-conflict period can provide an opportunity to completely rewrite strategies and undertake reforms to better address the health needs of a population and lay the foundations for a more efficient health system.

Dr Roberts, a lecturer in the European Centre on Health of Societies in Transition at LSHTM, says: "While great attention is rightly paid to infectious diseases, noncommunicable diseases should also be given attention – especially as the post-conflict environment can provide the perfect breeding ground for unhealthy activities like smoking, drinking and poor diet. We are making the argument that if the authorities do not step up to lead the way in developing policies which will benefit public health, then they leave the route clear for companies to step in and serve their own interests."

Contacts and sources:
Paula Fentiman
London School of Hygiene & Tropical Medicine

Citation: Noncommunicable diseases and post-conflict countries – Bulletin of the World Health Organization, January 2012. http://www.who.int/bulletin/volumes/90/1/11-098863

First Of NASA's GRAIL Spacecraft Enters Moon Orbit

The first of two NASA spacecraft to study the moon in unprecedented detail has entered lunar orbit.

NASA's Gravity Recovery And Interior Laboratory (GRAIL)-A spacecraft successfully completed its planned main engine burn at 2 p.m. PST (5 p.m. EST) today. As of 3 p.m. PST (6 p.m. EST), GRAIL-A is in a 56-mile (90-kilometer) by 5,197-mile (8,363-kilometer) orbit around the moon that takes approximately 11.5 hours to complete.
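
As a rough cross-check of the quoted figures, the roughly 11.5-hour period follows from the orbit's size via Kepler's third law. The short Python sketch below is illustrative only; the lunar radius and gravitational parameter are standard reference values and are not taken from the NASA release.

```python
import math

# Approximate lunar constants (standard reference values, not from the NASA release)
MU_MOON = 4902.8   # gravitational parameter of the Moon, km^3/s^2
R_MOON = 1737.4    # mean lunar radius, km

# Orbit altitudes quoted in the release: 90 km x 8,363 km
r_perilune = R_MOON + 90.0
r_apolune = R_MOON + 8363.0

# Semi-major axis and period from Kepler's third law: T = 2*pi*sqrt(a^3 / mu)
a = 0.5 * (r_perilune + r_apolune)
period_s = 2.0 * math.pi * math.sqrt(a**3 / MU_MOON)

print(f"Orbital period: {period_s / 3600.0:.1f} hours")  # prints ~11.5 hours
```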

Artist's concept of the GRAIL mission.
Image credit: NASA/JPL-Caltech

"My resolution for the new year is to unlock lunar mysteries and understand how the moon, Earth and other rocky planets evolved," said Maria Zuber, GRAIL principal investigator at the Massachusetts Institute of Technology in Cambridge. "Now, with GRAIL-A successfully placed in orbit around the moon, we are one step closer to achieving that goal."

The next mission milestone occurs tomorrow when GRAIL-A's mirror twin, GRAIL-B, performs its own main engine burn to place it in lunar orbit. At 3 p.m. PST (6 p.m. EST) today, GRAIL-B was 30,018 miles (48,309 kilometers) from the moon and closing at a rate of 896 mph (1,442 kph). GRAIL-B’s insertion burn is scheduled to begin tomorrow at 2:05 p.m. PST (5:05 p.m. EST) and will last about 39 minutes.

"With GRAIL-A in lunar orbit we are halfway home," said David Lehman, GRAIL project manager at NASA's Jet Propulsion Laboratory (JPL) in Pasadena, Calif. "Tomorrow may be New Year's everywhere else, but it's another work day around the moon and here at JPL for the GRAIL team."

Once both spacecraft are confirmed in orbit and operating, science work will begin in March. The spacecraft will transmit radio signals precisely defining the distance between them as they orbit the moon in formation. As they fly over areas of greater and lesser gravity caused by both visible features, such as mountains and craters, and masses hidden beneath the lunar surface, the distance between the two spacecraft will change slightly.

Scientists will translate this information into a high-resolution map of the moon's gravitational field. The data will allow scientists to understand what goes on below the lunar surface. This information will increase knowledge of how Earth and its rocky neighbors in the inner solar system developed into the diverse worlds we see today.

After firing its main engine for 39 minutes, the GRAIL-A spacecraft was captured into lunar orbit.

Credit: NASA

JPL manages the GRAIL mission for NASA's Science Mission Directorate at the agency's headquarters in Washington. The GRAIL mission is part of the Discovery Program managed at NASA's Marshall Space Flight Center in Huntsville, Ala. Lockheed Martin Space Systems in Denver built the spacecraft.

Source:
NASA

For more information about GRAIL, visit: http://www.nasa.gov/grail

 

Friday, December 30, 2011

The Truth About 2012: Doom, Danger, Disasters And Nonsense

Much has been written about the purported end of the world in 2012. None of the rumors are true, but an extraordinary number of people are concerned. In this public talk, NASA helps separate myth from reality.

Speaker: Dr. Don Yeomans, Manager, NASA Near-Earth Object Program Office at JPL

Loch Ness Used To Track The Tilt Of The World

That the rise and fall of the tide is primarily driven by the gravitational pull of the Moon and the Sun is common knowledge, but not all tides are controlled by such a standard mechanism. Researchers working on Loch Ness in Scotland find that rather than being driven directly by this so-called astronomical tide, the loch's tide is controlled by a process known as ocean tidal loading.

Loch Ness
Credit: Wikipedia

Loch Ness lies just 13 kilometers (8 miles) inshore from the North Sea. The astronomical tide redistributes the ocean to such an extent that the changing mass of water along the coast deforms the seafloor. As the ocean tide ebbs and flows, the surface of the Earth rises and falls.

Through a series of pressure sensors distributed throughout Loch Ness that measured the height of the water, and by ruling out other potential sources, Pugh et al. find that this local shift in the shape of the Earth, like a bowl of water on an unstable table, controls the loch's tide.

They find that the tide has a magnitude of 1.5 millimeters (0.06 inches), a measurement made to an accuracy of just 0.1 mm (0.004 in) over the loch's 35 km (22 mi) length. The authors suggest that this sensitivity in measuring the effects of tidal loading surpasses even that possible using Global Positioning System (GPS) receivers. The authors hope that similar experiments conducted at suitable lakes worldwide could be used to better understand oceanic tidal loading.
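
To put those numbers in perspective, a 1.5 mm change in water level over the loch's 35 km length corresponds to a ground tilt of only a few hundredths of a microradian. The following back-of-envelope sketch (not part of the published analysis) converts the quoted figures into tilt angles.

```python
import math

# Figures quoted above: a 1.5 mm tidal signal over the loch's 35 km length
amplitude_m = 1.5e-3    # tidal water-level signal, metres
length_m = 35e3         # length of Loch Ness, metres
accuracy_m = 0.1e-3     # quoted measurement accuracy, metres

tilt_rad = amplitude_m / length_m            # surface tilt (small-angle approximation)
tilt_resolution_rad = accuracy_m / length_m  # smallest resolvable tilt

print(f"Tilt amplitude:  {tilt_rad:.2e} rad "
      f"({math.degrees(tilt_rad) * 3600:.4f} arcseconds)")
print(f"Tilt resolution: {tilt_resolution_rad:.2e} rad")
```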

Source: Journal of Geophysical Research-Oceans, doi:10.1029/2011JC007411, 2011. http://dx.doi.org/10.1029/2011JC007411

Title: Lunar Tides in Loch Ness, Scotland

Authors: David T. Pugh and Philip L. Woodworth: National Oceanography Centre, Liverpool, United Kingdom;

Machiel S. Bos: CIMAR/CIIMAR, University of Porto, Porto, Portugal.

UC3M Collaborates In The Largest Experiment In Real Time On Cooperation In Society

A total of 1,303 high school students in Aragon have participated in an online scientific-social experiment to determine the problems and conflicts arising from cooperation in present-day society. The experiment, organized by the Instituto de Biocomputación y Física de Sistemas Complejos (Institute for Biocomputation and Physics of Complex Systems, BIFI) at the Universidad de Zaragoza, together with the Fundación Ibercivis and Universidad Carlos III de Madrid (UC3M), is the largest of its kind carried out in real time in this field to date.

Credit:  Carlos III University of Madrid 

The study starts from the hypothesis that the structure of the population determines the level of cooperation among its individuals. The experiment made it possible for students from 42 secondary schools to interact, based on the prototypical social conflict known as "The Prisoner's Dilemma". In this game, the greatest combined benefit is obtained when both players collaborate; however, if one player collaborates and the other does not, the defector obtains the larger payoff. This creates a temptation to take advantage of the collaboration of others, but if that tendency spreads, in the end nobody cooperates and nobody benefits.
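
The dilemma can be made concrete with a standard payoff matrix. The payoff values in the sketch below are illustrative placeholders, not the actual payoffs used in the experiment; they only need to respect the usual ordering temptation > reward > punishment > sucker's payoff.

```python
# Illustrative Prisoner's Dilemma payoffs (T > R > P > S); the actual values
# used in the experiment are not given in the press release.
T, R, P, S = 10, 7, 0, -3  # temptation, mutual reward, mutual punishment, sucker

# PAYOFF[(my_move, their_move)] -> my payoff; "C" = cooperate, "D" = defect
PAYOFF = {
    ("C", "C"): R,  # both collaborate: best joint outcome (R + R)
    ("D", "C"): T,  # I defect while the other cooperates: I gain most
    ("C", "D"): S,  # I cooperate while the other defects: I lose out
    ("D", "D"): P,  # nobody cooperates: nobody benefits
}

for my_move in ("C", "D"):
    for their_move in ("C", "D"):
        print(my_move, their_move, "->", PAYOFF[(my_move, their_move)])
```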

Experiment in real time

The significance of this experiment is that it was carried out in real time, over the space of three hours, among students from secondary schools throughout Aragon. It also featured a far higher level of participation than previous studies; the largest experiments of this kind to date had been carried out by groups at Harvard University (120 participants) and UC3M (169 participants). In the latter case, the researchers from the Madrid university concluded that a situation in which the majority of people collaborate is never reached, because a significant portion of people never cooperate, or cooperate only depending on the decisions of their neighbors or their mood at the time. Another notable conclusion is that there are different types of people: those who always try to help their neighbors (around 5 percent), those who never do so (35 percent) and those who cooperate according to their mood or depending on what their neighbors did previously (60 percent).

The presentation and live monitoring of the experiment took place this past December 20 in the Aragonese capital at Espacio Zaragoza Activa, which became a monitoring and real-time visualization room for the results. The event was attended by Francisco Marcellán, Head of the UC3M Mathematics Department, and also featured Miguel Ángel García, Head of Research and Innovation for the Aragon Government; Ricardo Cavero, Head of Science and Technology for the Zaragoza Municipal Government; Maria Luisa Borao, Director of the Ibercaja Cultural Centers in Zaragoza; Alfonso Tarancón, Director of BIFI; and Yamir Moreno, BIFI Scientific Secretary and experiment coordinator, together with the UC3M Full Professors of Mathematics José A. Cuesta and Anxo Sánchez, both from the Grupo Interdisciplinar de Sistemas Complejos (GISC, Interdisciplinary Group of Complex Systems). Professor Moreno presented the preliminary conclusions the following day.

More cooperation among girls

An initial analysis of the preliminary results shows that the level of cooperation differs across certain parameters. For example, in relation to the sex of the participants, girls cooperated 10 percent more than boys. A clear difference was also observed according to the type of secondary school program studied, with students of humanities and social sciences cooperating 4 percent more than those in the scientific-technological track. However, there were no notable differences related to the number of family members of the students (whether they were only children or had siblings), nor according to their geographical origin (rural or urban areas). Overall, a 35 percent rate of cooperation was observed among the participants, meaning that approximately one out of three students cooperated.

These results seem to confirm, according to the researchers, that the structure of the interaction network influences the average level of cooperation. That is, different levels of cooperation were observed in the regular network, in which all users are connected to the same number of classmates/neighbors, and in the heterogeneous, so-called "scale-free" network, in which some people are highly connected (with many neighbors) and others hardly at all. The researchers in charge of the experiment are continuing to extract data for more detailed analysis, which is expected to yield further results for publication in scientific journals.

Two types of tests

The experiment mainly consisted of two types of tests, each using a different network. The first used a regular network, in which all the users were connected to the same number of schoolmates/neighbors. The second test used a heterogeneous network, the so-called "scale-free" type, in which some people are highly connected, that is, have many neighbors, and others have very few. In both cases, the behavior of participants who always interact with the same neighbors is compared with their behavior when the structure of the population is randomly reshuffled after each interaction, so that their neighbors change.
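
For readers who want to reproduce the two topologies, the sketch below builds a regular ring lattice (every node with the same number of neighbours) and a scale-free network with the Python networkx library. The sizes and degrees are arbitrary examples, not the parameters used in the actual experiment.

```python
import networkx as nx

N = 100  # illustrative network size; the experiment involved 1,303 students

# Regular network: every participant has the same number of neighbours
# (a ring lattice with k = 4 neighbours per node, no rewiring).
regular = nx.watts_strogatz_graph(n=N, k=4, p=0.0)

# Heterogeneous "scale-free" network: a few highly connected hubs and many
# poorly connected nodes (Barabasi-Albert preferential attachment).
scale_free = nx.barabasi_albert_graph(n=N, m=2)

for name, g in [("regular", regular), ("scale-free", scale_free)]:
    degrees = [d for _, d in g.degree()]
    print(f"{name}: min degree {min(degrees)}, max degree {max(degrees)}")
```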

Credit: Carlos III University of Madrid

If the hypothesis that the structure of the population determines the level of cooperation is true, different behavior will be observed when the neighbors stay the same and when they change and, what is more, different levels of cooperation will be observed in the regular and heterogeneous networks. If this is so, the hypothesis will be confirmed. If not, the door will be left open to ruling out the hypothesis, making it necessary to search for new alternatives for understanding the central question: the emergence of cooperation.

Contacts and sources: 
Ana Herrera
Carlos III University of Madrid

This study has involved the participation of the following schools and institutes in Aragón. In the capital city of Zaragoza the following institutes participated: Goya, Pedro de Luna, Miralbueno, Ramón Pignatelli, Miguel Servet, Jerónimo Zurita, Miguel Catalán, José Manuel Blecua, Andalán, Pablo Gargallo, Avempace, Francisco Grande Covián, Fundación San Valero, Luis Buñuel, Ramón y Cajal, Pablo Serrano and Azucarera, and the schools: Liceo Europa, Teresiano del Pilar, Sansueña, O.D Santo Domingo de Silos, Sagrado Corazón, Escuelas Pías, The British Institute of Aragon, San Alberto Magno, El Pilar Maristas and La Salle Gran Vía. In other towns in the province the following institutes participated: Pirámide and Sierra de Guara in Huesca, Conde de Aranda in Alagón, Bajo Cinca in Fraga, Salvador Victoria in Monreal del Campo, Matarraña in Valderrobres, Zaurín in Ateca, Biello Aragón in Sabiñánigo, Gallicum in Zuera, Río Arba in Tauste, Rodanas in Épila, Ángel Sanz Briz in Casetas, Valle de Jiloca in Calamocha and Benjamín Jarnés in Fuentes de Ebro and the school Santa Rosa-Altoaragón in Huesca.

The Aragon Government program Ciencia Viva also collaborated as the main organizer for all the participating public schools, together with Hewlett Packard, Obra Social Ibercaja, the Zaragoza Municipal Government, the El Corte Inglés Cultural Area and the Aragon Government as sponsors of the event.

Graphene Offers Protection From Intense Laser Pulses

Researchers from Singapore and the UK have jointly announced a new benchmark in broadband, non-linear optical-limiting behavior using single-sheet graphene dispersions in a variety of heavy-atom solvents and film matrices.

Single graphene sheets, when dispersed and substantially spaced apart in liquid cells or solid film matrices, can exhibit a novel excited-state absorption mechanism that provides highly effective broadband optical limiting well below the onset of microbubble or microplasma formation.

The new optically induced absorption mechanism. [a] Photoexcitation of a dispersed graphene single sheet gives long-lived electron-hole pairs. Further excitation causes the appearance of localized states such as (i) excitons (neutral excited states) or (ii) polarons (charged excited states) due to interactions. [b] For comparison, graphite gives an electron-hole gas that is very short-lived due to fast cooling and recombination.
 
Credit: National University of Singapore

Graphenes are single sheets of carbon atoms bonded into a hexagonal array. In nature, they tend to stack to give graphite.

In a breakthrough, researchers from the National University of Singapore (NUS), DSO National Laboratories and University of Cambridge have developed a method to prevent the re-stacking of these sheets by attaching alkyl surface chains to them, while retaining the integrity of the nano-graphene pockets on the sheets.

This method in turn produced a material that can be processed in solution and dispersed into solvents and film matrices. As a consequence, the researchers observed a new phenomenon: the dispersed graphenes exhibit a giant nonlinear optical-absorption response to intense nanosecond laser pulses over a wide spectral range, with a threshold much lower than that found in carbon black and carbon nanotube suspensions. This set a new record for the energy-limiting onset of 10 mJ/cm² at a linear transmittance of 70%.

The mechanism for this new phenomenon is outlined in Figure 1: the initially delocalized electron-hole gas localizes at high excitation densities, in the presence of heavy atoms, to produce strongly absorbing excitons. The resultant excited-state absorption mechanism can be very effective.

These optical-limiting materials can now be used to protect sensitive sensors and devices from laser damage, and in optical circuits. They can also be used in anti-glare devices.

The principal investigator of the NUS Organic Nano Device Laboratory's graphene team, Professor Lay-Lay Chua who is also from the NUS Department of Chemistry and Department of Physics, says: "We found from ultrafast spectroscopy measurements that dispersed graphene sheets switch their behavior from induced optical transparency which is well-known, to induced optical absorption depending on its environment. This is a remarkable finding that shows graphene can still surprise!"

The principal investigator of the graphene team at DSO National Laboratories, Professor Geok-Kieng Lim who is also an Adjunct Professor at NUS Department of Physics, says: "This is an important first step in the development of practical graphene nano-composite films for applications where the graphene sheets remain fully dispersed. The induced change in their non-linear optical behavior is amazing and highly practical!"


Contacts and sources:
Lay-Lay Chua
National University of Singapore

Citation: 'Giant broadband nonlinear optical absorption response in dispersed graphene single sheets' by Geok-Kieng Lim, Zhi-Li Chen, Jenny Clark, Roland G.S. Goh, Wee-Hao Ng, Hong-Wee Tan, Richard H. Friend, Peter K. H. Ho and Lay-Lay Chua was published on 21 August 2011 in Nature Photonics and is available at www.nature.com/nphoton (doi:10.1038/nphoton.2011.177).

About National University of Singapore (NUS)

A leading global university centred in Asia, the National University of Singapore (NUS) is Singapore's flagship university which offers a global approach to education and research, with a focus on Asian perspectives and expertise.

NUS has 16 faculties and schools across three campuses. Its transformative education includes a broad-based curriculum underscored by multi-disciplinary courses and cross-faculty enrichment. Over 36,000 students from 100 countries enrich the community with their diverse social and cultural perspectives.

NUS has three Research Centres of Excellence (RCE) and 21 university-level research institutes and centres. It is also a partner in Singapore's 5th RCE. NUS shares a close affiliation with 16 national-level research institutes and centres. Research activities are strategic and robust, and NUS is well-known for its research strengths in engineering, life sciences and biomedicine, social sciences and natural sciences. It also strives to create a supportive and innovative environment to promote creative enterprise within its community.

How Do Tsunamis Off The Coast Of Southern Italy Develop?

From late December 2011 until mid-January 2012, earth scientists from the Leibniz Institute of Marine Sciences (IFM-GEOMAR) and from the Cluster of Excellence "The Future Ocean" in Kiel, Germany will examine the causes of earthquakes and tsunamis along the continental margins of the Mediterranean Sea off the coast of southern Italy. A better understanding of these processes should help to improve estimates of the risks posed by such natural hazards in the future.

The research area of the expedition. 
Graphic: S. Krastel, IFM-GEOMAR/future ocean

On the 28th of December 1908, the earth in Italy trembled. Shortly afterwards a tsunami struck the southern coast of Italy. Over 80,000 people died in the region around the seaport town of Messina. However, the Messina earthquake is only one of many examples of natural hazards that occur regularly in this region. According to estimates, around 10 percent of tsunamis worldwide occur in the Mediterranean Sea. The continental margins of the Mediterranean Sea off the coast of southern Italy are situated along tectonic plates that move towards each other.

This results in frequent volcanic eruptions and earthquakes. It is, however, still unclear whether the tsunami of 1908 was triggered by a sudden vertical movement along a major fault during the earthquake or by a giant submarine slide initiated by the earthquake. “To better understand the causes of those catastrophes we aim to examine the region thoroughly and collect data on seismic activities and the structure of the sediment in the area,” says Prof. Dr. Sebastian Krastel from the Leibniz Institute of Marine Sciences (IFM-GEOMAR), outlining the goals of an expedition with the German research vessel METEOR. The expedition takes place in the region from late December 2011 until mid-January 2012. Prof. Dr. Krastel is also head of the research group “Submarine Hazards” of the Cluster of Excellence “The Future Ocean”.

RV METEOR. 
Photo: P. Linke, IFM-GEOMAR

During the expedition the scientists are investigating the sea floor in three major regions: the Strait of Messina, the area off the coast of Sicily and the Gioia Basin. To do so the scientists use a seismic system developed by IFM-GEOMAR that can create 3D images of the structures beneath the seafloor. Additionally, gravity corers will be used to obtain sediment cores. Using the collected data, the main objectives will be to map and characterize volcanic and non-volcanic submarine slides and to identify fractured rocks, which are often associated with earthquakes.

The work done by Prof. Dr. Krastel and his team is supported by Italian cooperation partners, who, for example, provided topographic information on the sea floor in this region collected in the so-called MAGIC project (MArine Geohazards along the Italian Coasts). This information was decisive for the selection of the three research areas.

“There is a common impression that natural hazards such as tsunamis only occur at the other end of the world. However, there is a tremendous hazard potential in Europe that we have to learn more about. After all, 160 million people live along the Mediterranean coast and every year 140 million people visit this area for their holidays,” the geoscientist from Kiel emphasizes.

Expedition at a glance:
FS Meteor expedition: M86/2
Chief scientist: Sebastian Krastel
Expedition period: 27.12.2011-17.01.2012
Starting harbour: Cartagena (Spain)
Research area: Strait of Messina, Coast of Sicily
Final destination: Brindisi (Italy)

Contacts and sources:
Prof. Dr. Sebastian Krastel
Jan Steffen (Public Relations)

Links:
www.ifm-geomar.de Leibniz Institute of Marine Sciences (IFM-GEOMAR)
From January 1, 2012: www.geomar.de GEOMAR | Helmholtz Centre for Ocean Research Kiel
www.futureocean.de Cluster of Excellence "The Future Ocean"
 

Thursday, December 29, 2011

Fastest X-Ray Images Of Tiny Biological Crystals: Protein Nanocrystals Form A Time Switch For Diffraction

An international research team headed by DESY scientists from the Center for Free-Electron Laser Science (CFEL) in Hamburg, Germany, has recorded the shortest X-ray exposure of a protein crystal ever achieved. The incredibly brief exposure time of 0.000 000 000 000 03 seconds (30 femtoseconds) opens up new possibilities for imaging molecular processes with X-rays. This is of particular interest to biologists, but can be employed in many fields, explain lead authors Dr. Anton Barty and Prof. Henry Chapman from the German accelerator centre Deutsches Elektronen-Synchrotron DESY. CFEL is a joint venture of DESY, the Max Planck Society and the University of Hamburg.

The molecular structure of proteins is inferred from measurements of patterns of X-rays scattered from crystals formed from those proteins. The regular array of molecules in the crystal gives rise to the strong peaks needed for measurement, shown here as balls in a three-dimensional space. The size and color of each ball represent the strength of diffraction, which encodes the three-dimensional molecular structure of the protein. Ultra-intense X-rays from a free-electron laser cause the crystal to explode, but not before high-quality data can be recorded. The paper by Barty et al. shows that the explosion simply causes these diffraction peaks to terminate, without compromising the quality of the data. The streaks emanating from each diffraction peak represent the time over which the peak accumulates at the detector, with the high-angle peaks (which encode high-resolution structural data) turning off first.
Credit: Thomas White, CFEL/DESY

From X-ray diffraction the molecular structure of proteins can be determined: the shorter the X-ray pulse and the higher its intensity, the better the structural information gained. With the free-electron laser Linac Coherent Light Source (LCLS) at the US SLAC National Accelerator Laboratory, the research team fired the most intense X-ray beam at a protein crystal to date: the tiny crystal was bombarded with a staggering 100,000 trillion watts per square centimeter; sunlight, for comparison, comes in at a mere 0.1 watts per square centimeter on average. "This way we get the most information out of the smallest crystals," Chapman explains. Having small crystals is important, as many biological substances aren't easily crystallized.

Under illumination from an ultra-intense X-ray pulse from an X-ray free-electron laser, a small molecule is ionized and explodes due to the Coulomb repulsion of the ions. The temperature of the ions (proportional to their velocity distribution) also increases with time, indicated by colours from blue to red. Crystalline samples of such molecules explode in a similar fashion. These crystals diffract the X-rays to give the Bragg peaks shown in the background, from which the molecular structure is determined. During the explosion the atomic disordering turns off these Bragg peaks, so that the diffracted X-ray spots reaching the detector are of shorter duration than the impinging pulse. The high-angle diffraction turns off first (short streaks), whereas the low-angle (and low-resolution) diffraction peaks last longer (long streaks). Even if pulses are much longer than the explosion timescale, the measurement corresponds to the undamaged molecule.
Credit: Jörg Harms, MPSD/University of Hamburg 

The emerging technology of X-ray free-electron lasers (XFELs) promises to deliver scientific breakthroughs in many areas. One of these areas is biology, where ultra-short X-ray pulses of unprecedented power from XFEL sources open up the possibility to obtain the three-dimensional structures of entire classes of proteins that could not be previously measured.

Structural biologists widely rely on the technique of protein X-ray crystallography, where X-rays scattered from protein crystals form a so-called diffraction pattern. Scientists use these patterns to reconstruct a detailed atomic picture of the protein molecule. However, radiation damage is a major concern when working with very intense X-rays. The research team now reports that these crystals produced high-quality diffraction patterns although the crystals’ lifetime in the XFEL beam was much shorter than the XFEL pulses used (Nature Photonics, Advanced Online Publication, 18 December 2011, DOI 10.1038/NPHOTON.2011.297). The use of nanosized crystals also shows how to overcome one of the largest bottlenecks in structural biology, which is the long and often unfruitful process of growing high-quality protein crystals of sufficient size.

Completely vaporized

XFELs produce X-ray pulses that are so intense that any sample in the focused beam is completely vaporised. This explosion begins within several femtoseconds. However, researchers were able to obtain diffraction patterns from protein nanocrystals using significantly longer pulse lengths. “It was not at all obvious how this was possible,” says Henry Chapman from CFEL. “All the theories predicted that we should use pulse lengths of about 10 femtoseconds. But even though we increased the pulse lengths to over 300 femtoseconds and deposited over a hundred times more radiation dose than assumed acceptable, we could still see high-quality diffraction patterns.”

The molecular structure of proteins is inferred by X-ray diffraction from crystals composed of building blocks of identical protein molecules. This image shows the amount of atomic disordering that occurs in such a protein crystal when illuminated by an ultra-intense pulse from an X-ray free-electron laser. The disordering increases with time (depicted by colour changing from blue to red). The crystal becomes an amorphous soup of atoms by the end of the pulse, which no longer gives strong diffraction peaks. The diffraction peaks at high resolution turn off early in the pulse, whereas low-resolution diffraction lasts longer. Even if pulses are much longer than the explosion timescale, the measurement corresponds to the undamaged molecule. 
Credit: Carl Caleman and Anton Barty, CFEL/DESY 

At first, these results appear to be counter-intuitive. "The key to understanding our unexpected results is crystallinity," explains Chapman. In a crystal, the same structural element, the unit cell, repeats many times in several directions. Due to this structure, X-rays going through a crystal do not scatter uniformly in all directions; rather, intense bundles of light emerge from the crystal only at specific angles – forming the diffraction pattern. "If the correlation between unit cells is lost during the explosion of the sample in the XFEL beam, then diffraction no longer occurs," says Chapman. In other words, diffraction terminates once the crystal is destroyed, and any X-rays hitting the sample after this time do not contribute to the diffraction pattern initially recorded. Remarkably, the apparent measurement pulse is shorter than the incident pulse length: it is as if nanocrystals form a time switch that turns diffraction off once enough usable data has been collected.
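
The "time switch" picture can be illustrated with a toy model: as the atoms disorder, the intensity of a Bragg peak at scattering vector q is suppressed roughly by a Debye-Waller-like factor exp(-q²σ²), where σ is the growing root-mean-square atomic displacement. The sketch below, with an assumed linear growth of σ and arbitrary numbers (it is not the simulation used in the paper), shows why high-resolution peaks switch off before low-resolution ones.

```python
import math

def bragg_intensity(q, sigma):
    """Relative Bragg peak intensity for r.m.s. atomic displacement sigma.

    Toy Debye-Waller-like suppression factor exp(-q^2 * sigma^2); not the
    plasma simulation used in the actual study.
    """
    return math.exp(-(q * sigma) ** 2)

def sigma_at(t_fs, rate_nm_per_fs=0.005):
    """Assume (for illustration only) that displacement grows linearly in time."""
    return rate_nm_per_fs * t_fs  # r.m.s. displacement in nm

low_q = 2 * math.pi / 2.0    # peak encoding ~2 nm detail (low angle, low resolution)
high_q = 2 * math.pi / 0.3   # peak encoding ~0.3 nm detail (high angle, high resolution)

for t_fs in (0, 10, 20, 40, 80):
    sigma = sigma_at(t_fs)
    print(f"t = {t_fs:3d} fs: low-res peak {bragg_intensity(low_q, sigma):.2f}, "
          f"high-res peak {bragg_intensity(high_q, sigma):.2f}")
```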

A protein crystal consists of a regularly ordered array of protein molecules. The molecular structure is determined from the pattern of X-ray light (called a diffraction pattern) that is scattered from this periodic array. Under illumination from an ultra-intense X-ray pulse from a free-electron laser the crystal explodes. This begins as an atomic disordering of all the constituent molecules. As this disorder increases in time (which flows down the picture), the crystal loses definition, initially at the highest resolution and later at lower resolution, causing a corresponding termination of the detected diffraction. Even if pulses are much longer than the explosion timescale, the measurement corresponds to the undamaged molecule.
Credit: Carl Caleman, CFEL/DESY 

If the explanation is so straightforward, you may ask, why has this effect not been observed before? Anton Barty from CFEL explains that the mechanisms of X-ray damage at conventional synchrotron and novel XFEL light sources are completely different. “At a synchrotron, everything happens comparatively slowly and there is time for chemical damage to occur. At an XFEL, however, the sample turns into plasma within a few femtoseconds. This is faster than any bond-breaking mechanisms previously seen in synchrotron experiments.” Indeed, during the first phases of the explosion the atoms virtually stay in place, held there by their own inertia, and a high-quality diffraction pattern can be recorded. Theoretical calculations performed by fellow CFEL scientist Carl Caleman agree amazingly well with the experimental observations.

Key technique

The outcome of this research benefits developments in X-ray crystallography, which is a key technique to determine protein structures. Knowledge of these structures is essential for an understanding of life’s functions and malfunctions. For biopharmaceutical companies, for example, X-ray crystallography is an important contributor to drug development. However, certain proteins, such as membrane-embedded proteins, are difficult to crystallize or they only yield crystals too small to investigate at a synchrotron. “Our ultra-fast, ultra-intense XFEL experiments on very small protein crystals show that, in the very near future, we will obtain atomic resolution structures of systems that have been very challenging so far,” says Henry Chapman.

The future is bright. With the European XFEL currently being built in Hamburg, the world’s most powerful XFEL will start its operation in 2015. “With our technique we need about one million X-ray pulses to determine a protein structure,” Chapman says. “Currently, this is a matter of hours. At the European XFEL facility it will be a matter of minutes.”

Animation of atomic displacement in a lysergic diethylamide crystal in an XFEL beam. Displacement based on a plasma simulation. Photon pulse: 2 keV photons, 10^17 W/cm², 70 fs flat top.
Credit: Carl Caleman, CFEL/DESY

About LCLS

The Linac Coherent Light Source is a Department of Energy Office of Science-funded facility located at SLAC National Accelerator Laboratory. LCLS is the world's first hard X-ray free-electron laser, allowing researchers to see atomic-scale detail on ultrafast timescales. The LCLS enables groundbreaking research in physics, chemistry, structural biology, energy science and many other diverse fields.

Contacts and sources:
DESY / Center for Free-Electron Laser Science (CFEL)

Targeted Immune Stimulation Based On DNA Nanotechnology

DNA is usually known as the genetic code for protein synthesis in all living organisms. The use of DNA as a molecular building block, on the other hand, allows for the construction of sophisticated nanoscopic shapes built entirely from DNA.

DNA carrier systems entering a cell
Credit:  Nanosystems Initiative Munich

In particular, the recent invention of the so-called DNA origami method facilitates the fabrication of almost any imaginable 3D shape. Here a phage-based DNA strand is used as a scaffold that is woven into shape by hundreds of short staple oligonucleotides. The outstanding advantage of DNA-based self-assembly is that during a single fabrication process billions of exact copies of the designed DNA nanostructure are produced in parallel.

Now Prof. Tim Liedl, a member of NIM, and his team have developed a DNA origami construct that serves as a carrier system to selectively stimulate immune responses of living cells. Together with the group of Prof. Carole Bourquin from the Klinikum der Universität München (KUM), the biophysicists systematically investigated the immune-stimulatory effect and the potential cytotoxicity of these DNA nanostructures.

Our innate immune system can detect invasive organisms via specific DNA motifs, the so-called CpG sequences (cytosine – phosphate – guanine), which are prevalent in viruses and bacteria. When these sequences are internalized by certain immune cells, they are recognized by endosomal receptors like Toll-like receptor 9 (TLR-9), which subsequently activate the immune system. Toll-like receptors gained particular prominence in 2011, when Bruce Beutler and Jules Hoffmann received the Nobel Prize for their research on this class of receptors.

Verena Schüller from the Liedl group and her colleagues decorated a DNA origami construct with artificial CpG sequences and used it as an efficient, non-toxic carrier system to deliver them into cells. Together with the team of Carole Bourquin, they demonstrated a selective immune-stimulating effect of the DNA complexes by measuring the interleukin secretion of the cells as an indicator of immune activation. In future applications, such artificial nanostructures could act as target-selective delivery vehicles for the development of novel, non-toxic vaccine adjuvants or as carrier systems in tumor immunotherapy.

Contacts and sources:
Dr. Birgit Gebauer
Outreach Manager Nanosystems Initiative Munich
Link to Liedl group

Citation: Cellular Immunostimulation by CpG-Sequence-Coated DNA Origami Structures. Verena Schüller, Simon Heidegger, Nadja Sandholzer, Philipp Nickels, Nina Suharta, Stefan Endres, Carole Bourquin and Tim Liedl. ACS Nano, 2011

Healing Faster With Spider Silk Bandages, The Romans Did It Too

RWTH researchers are developing a bandage made from spider silk to promote faster healing. 

In relation to the diameter of its threads, a spider web is five times stronger than steel. To describe the thread's resilience, biochemist Artem Davidenko from DWI Interactive Materials Research at RWTH says, "A thread with a diameter of 2 centimeters could pull a whole airplane."

Davidenko has worked for three years on a project aimed at creating an effective use of spider silk. In addition to the mechanics of the material, he, Prof. Doris Klee, the project head and Vice-Rector for Human Resources Management and Development, and Prof. Martin Möller, Scientific Director of DWI, are interested in how the material aids in wound healing. Even the Romans covered a wound with silk to decrease the number of days it took to heal.

Spiders are the focus of a research project on aiding in wound healing at DWI Interactive Materials Research at RWTH Aachen University.
Credit:  Photo: Peter Winandy/RWTH

Spider silk consists of proteins, which are generally made up of 20 building blocks, the so-called amino acids. Amino acid chains are, among other things, the foundation for enzymes. Proteins regulate most biochemical reactions, from digestion to muscle movement to cell repair processes. Amino acid chains are also found in antibodies and hormones. Structural proteins are a particular class of proteins that help in cell and tissue construction thanks to their elongated shape. Davidenko illustrates: "They are, for example, the keratin fibers in hair, the actin in muscles, or the collagen in skin cells."

Cannibalism Complicates Biotechnical Production

"Worldwide, there are 40,000 types of spiders, but it takes great effort to commercially produce silk using natural methods. This is due to the cannibalism in most types of spiders. The female sometimes eats the male after mating," explains the researcher. For this reason, the structural proteins in the spider silk have to be made artificially.

At RWTH, work is concentrated on what the silk is made of. A spider is capable of producing seven different types of silk for various uses, from the cocoon on the outside to glue on the inside. "The proteins at the end of the web are particularly intriguing for our project," says Davidenko. The supporting structure is not sticky, is much firmer than the inner components, and is distinguished by its high stability and elasticity. It stretches well without tearing.

Spider silk proteins can be broken down by enzymes, an ideal property for using such proteins as wound coverings. For the biotechnological production of spider silk, genes have to be modified and proteins unified. The adapted genes are placed into host bacteria, and the bacteria then synthesize the protein.

"The particular challenge is not to overstrain the bacteria, since their capacity is limited to 5,000 DNA bases, which encode the genetic material for protein synthesis. If this boundary is exceeded, it can lead to errors," emphasizes Davidenko. The protein can only be used if it "disintegrates" in the wound; the body's own enzymes help with this. Investigations into the solubility, degradability, and release of the spider silk proteins show promising results: natural wound fluid can break down the generated proteins and aid wound healing.

Researchers at DWI are now developing a bandage that has spider silk proteins attached to the side touching the skin.

Contacts and sources:
Dr. Brigitte Küppers
DWI Interactive Materials Research at RWTH Aachen University  

On The Edge Of Friction: Frictionless Surfaces Become Conceivable

Precise insight into how two microscopic surfaces slide over one another could help in the manufacture of low-friction surfaces

The problem exists on both a large and a small scale, and it even bothered the ancient Egyptians. However, although physicists have long had a good understanding of friction in things like stone blocks being hauled by workers to build a pyramid, they have only now been able to explain friction in microscopic dimensions in any detail.

Researchers from the University of Stuttgart and the Stuttgart-based Max Planck Institute for Intelligent Systems arranged an elaborate experiment in which they pulled a layer of regularly ordered plastic spheres over an artificial crystal made of light. This enabled them to observe in detail how the layer of spheres slid over the light crystal. Contrary to what one might imagine, the spheres do not all move in unison. In fact, it's only ever some of them that move, while the others stay where they are. This observation confirms theoretical predictions and also explains why friction between microscopic surfaces depends on their atomic structure.

Friction by region: 

When two microscopic surfaces with the same structure slide over one another, not all particles move at the same time. In fact, the particles in some areas slide (blue spheres), thus distorting their configuration. The other particles (green) stay where they are in the hollows of the surface. 
 
 © Thomas Bohlein/Ingrid Schofron

Friction causes the economy enormous losses, yet without friction absolutely nothing would work: the cost of wear from machine parts rubbing against each other, for instance, is estimated to amount to around eight per cent of German GDP – some 200 billion euros. And that does not even take into account the fact that tectonic plates rubbing together is what causes earthquakes. If a car's tyres or the soles of your shoes did not grip the ground, neither wheels nor feet would be able to move forward. The dominant factors in these examples of friction between large objects have been well understood by physicists for some time now; the countless small irregularities that all surfaces exhibit are instrumental here. They are what lie behind the fact that two large surfaces only touch each other at certain points.

The situation is quite different when two microscopically small surfaces rub against each other. Providing they have been accurately produced, they touch each other with all the atoms of their surface. The researchers from Stuttgart have now observed for the first time how friction takes place on this atomic level. Their experiment also enables them to understand why surfaces with the same structure create more friction when they rub against each other than those with differing structures. "In this way, we are creating the basis for the construction of micro- and nano-machines that are as low in friction as possible," says Clemens Bechinger, Professor at the University of Stuttgart and Fellow at the Max Planck Institute for Intelligent Systems.

Distortions of the surface create movements

Using laser light and electrically charged plastic spheres in a water bath, his team created a two-dimensional model of two surfaces rubbing against each other. As the spheres suspended in the water repel each other electrically, they arrange themselves in a periodically ordered layer. They form a surface. The scientists create the other surface below the layer of spheres using intensive laser beams. They overlap the electromagnetic waves from the beams one above the other so that a light crystal, a type of optical egg carton, is formed. 

"Using a surface created by light has enabled us to observe the processes that take place on rubbing surfaces directly with a camera – for the very first time," says Thomas Bohlein, who conducted the experiment as part of his doctoral studies. "That’s not possible in experiments with three-dimensional objects, because the boundary layer is not visible."

Thomas Bohlein started by precisely calibrating the distance between the hollows in the optical egg carton against the distance between the plastic spheres. One might think that the surfaces would jerkily separate and then snap back into place, one on top of the other, just like two egg cartons would do if you tried to pull one over the other.

But the experiment revealed a totally different mechanism. When the team drew the plastic spheres across the optical surface, not all of the spheres began to slide simultaneously. Instead, particles moved only within certain areas. In these areas, the spheres left their comfortable hollows and also moved slightly closer together. This is possible because the spheres, like the atoms in a surface, do not sit next to each other immovably – they always have a little room to manoeuvre. And the distortions in the layer of spheres or atoms that occur when they are pulled mean that they do not quite fit back into the surface of the optical crystal. This makes it much easier to pull the particles out of their hollows.

Friction is much reduced in surfaces with different structures

As the researchers pull the particle layer, the compressed zones move through the layer of spheres, with only the particles in these zones able to get out of their hollows. "For the overall layer, it is more efficient to let a distortion zone move through the layer successively rather than to move all of the spheres from one hollow to the next simultaneously," says Clemens Bechinger. The compressed areas that migrated towards the pulling force over the optical surface became ever larger as the team pulled the layer of plastic spheres more strongly.

In the next experiment, the Stuttgart-based physicists pushed the hollows in the optical egg carton slightly closer together so that they did not correspond well with the alignment of the plastic spheres from the start. "As a result, fewer particles find a space in a hollow, and the distortion zones move over the surface much more easily," says Thomas Bohlein.

Physicists had already suspected that local distortions – which they call kinks and antikinks – played the crucial role in the friction between microscopic surfaces. "We have observed these changes in the surface experimentally for the first time," says Clemens Bechinger. "As such, we have confirmed the theoretical predictions about the way friction works in atomic dimensions."

Frictionless surfaces become conceivable

However, the scientists in Stuttgart went one step further. Physicists had hardly any idea what happens, in terms of friction, between a crystalline surface and a quasicrystalline surface. Quasicrystals, for whose discovery Dan Shechtman received the Nobel Prize in Chemistry this year, exhibit small areas with a strict order, but this order is not repeated regularly over larger distances, as it is in a true crystal.

Thomas Bohlein formed a quasicrystal beneath the crystalline layer of plastic spheres by again skilfully superimposing the laser beams. The plastic spheres came to rest in the hollows of the quasicrystalline surface only at rare intervals, and the friction was drastically reduced compared with that of two crystalline surfaces. "Our experiment provides the proof that one of the reasons why friction on quasicrystalline surfaces is so low is because the structures are incommensurate," says Thomas Bohlein.
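
The role of commensurability can be illustrated with a toy calculation: slide a perfectly rigid chain of equally spaced particles over a sinusoidal substrate and record the largest total restoring force it feels. With matching spacings the individual pinning forces add up; with an irrational spacing ratio they nearly cancel. The sketch below is a generic, textbook-style illustration of this cancellation with arbitrary parameters; it is not the analysis from the Nature Materials paper.

```python
import math

def max_pinning_force(n_particles, spacing, substrate_period=1.0, samples=2000):
    """Maximum total substrate force on a rigid chain of equally spaced particles.

    Toy estimate of the static friction ("depinning") force: slide the whole
    rigid chain across a sinusoidal substrate and record the largest total
    restoring force. Not the analysis used in the actual experiment.
    """
    best = 0.0
    for s in range(samples):
        shift = s * substrate_period / samples
        total = sum(
            math.sin(2 * math.pi * (i * spacing + shift) / substrate_period)
            for i in range(n_particles)
        )
        best = max(best, abs(total))
    return best

N = 100
golden = (1 + math.sqrt(5)) / 2  # an irrational (incommensurate) spacing ratio

print("commensurate   (spacing = substrate period):", round(max_pinning_force(N, 1.0), 2))
print("incommensurate (spacing = period / golden ratio):",
      round(max_pinning_force(N, 1.0 / golden), 2))
```

Running this, the commensurate chain feels a total pinning force that grows with the number of particles, while the incommensurate chain's pinning force stays of order one, a simple picture of why mismatched structures slide so much more easily.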

The discovery of how friction works on a micro-scale could also have practical consequences. "Above all, the combination of a crystalline and a quasicrystalline surface offers the possibility to reduce the friction in micro- and nano-systems," says Clemens Bechinger. "But it is also conceivable to design surfaces that slide over one another with virtually no friction."

Contact and sources:
Prof. Dr. Clemens Bechinger
Universität Stuttgart
Max Planck Institute for Intelligent Systems, Stuttgart
 
Citation: Thomas Bohlein, Jules Mikhael and Clemens Bechinger. Observation of kinks and antikinks in colloidal monolayers driven across ordered surfaces.
Nature Materials, published online 18 December 2011; DOI: 10.1038/NMAT3204

Scientists Succeed In Making Spinal Cord Transparent

In the event of a spinal cord injury, the long nerve cell filaments, the axons, may become severed. For quite some time now, scientists have been investigating whether these axons can be stimulated to regenerate. Such growth takes place on a scale of only a few millimetres.

To date, changes like this could be determined only by cutting the tissue in question into wafer-thin slices and examining these under a microscope. However, the two-dimensional sections provide only an inaccurate picture of the spatial distribution and progression of the cells. Together with an international team, scientists at the Max Planck Institute for Neurobiology in Martinsried have now developed a new method by virtue of which single nerve cells can be both examined in intact tissue and portrayed in all three dimensions.

A spinal cord as if made of glass: The new method enables scientists to see nerve cells in the intact cellular network.
© MPI of Neurobiology / Ertürk

The spinal cord is the most important pathway for relaying information from the skin, muscles and joints to the brain and back again. Damage to nerve cells in this region usually results in irreversible paralysis and loss of sensation. For many years, scientists have been doing their best to ascertain why nerve cells refuse to regenerate. They search for ways to stimulate these cells to resume their growth.

To establish whether a single cell is growing, the cell must be visible in the first place. Up to now, the procedure has been to cut the area of the spinal cord required for examination into ultra-thin slices. These are then examined under a microscope and the position and pathway of each cell is reconstructed. In exceptional cases, scientists could go to the trouble of first digitizing each slice and then reassembling the images, one by one, to produce a virtual 3D model. 

However, this is a very time-consuming endeavour, requiring days and sometimes even weeks to process the results of just one examination. Even worse, mistakes can easily creep in and falsify the results: The appendages of individual nerve cells might get squashed during the process of slicing, and the layers might be ever so slightly misaligned when set on top of each other. 

As Frank Bradke explains: "Although this might not seem dramatic to begin with, it prevents us from establishing the length and extent of growth of single cells." Bradke and his team at the Max Planck Institute of Neurobiology have investigated the regeneration of nerve cells following injuries to the spinal cord; since July he has been working at the German Centre for Neurodegenerative Diseases (DZNE) in Bonn. "However, since changes on this crucial scale are precisely what we need to see, we worked meticulously until we came up with a better technique", he continues.

The new technique is based on a method known as ultramicroscopy, developed by Hans Ulrich Dodt of the Technical University of Vienna. The Max Planck neurobiologists and an international team of colleagues have now taken this technique a step further. The principle is relatively straightforward: spinal cord tissue is opaque because the water and the proteins it contains refract light differently. The scientists therefore removed the water from a piece of tissue and replaced it with an emulsion that refracts light in exactly the same way as the proteins, leaving them with a completely transparent piece of tissue.

 "It's the same effect as if you were to spread honey onto textured glass", Ali Ertürk, the study's first author adds. The opaque pane becomes crystal clear as soon as the honey has compensated for the surface irregularities.

The new method is a leap forward for regeneration research. By using fluorescent dyes to stain individual nerve cells, scientists can now trace their path from all angles in an otherwise transparent section of spinal cord. This enables them to ascertain once and for all whether or not these nerve cells resumed their growth following injury to the spine, an essential prerequisite for future research.

"The really great thing is the fact that this method can also be easily applied to other kinds of tissue", Frank Bradke relates. For example, the blood capillary system or the way a tumour is embedded in tissue could be portrayed and analysed in 3D.

Contact and sources:
Dr. Katrin Weigmann
Press and Public Relations
German Centre for Neurodegenerative Diseases (DZNE)
Max Planck Institute of Neurobiology

Citation: Ali Ertürk, Christoph P. Mauch, Farida Hellal, Friedrich Förstner, Tara Keck, Klaus Becker, Nina Jährling, Heinz Steffens, Melanie Richter, Mark Hübener, Edgar Kramer, Frank Kirchhoff, Hans Ulrich Dodt and Frank Bradke. 3D imaging of the unsectioned adult spinal cord to assess axon regeneration and glial responses after injury
Nature Medicine, online publication, December 25, 2011

Scientists Control Behavior With Light, Remote Control Of Animals And Insects

Optogenetics – Combination switch turns neurons on and off, Max Planck scientists control neurons using two linked light channels.

Flies that display courtship behaviour at the press of a button, worms made to wriggle by remote control: since the dawn of optogenetics, scientists can turn nerve cells on and off using pulses of light. A research team at the Max Planck Institute of Biophysics in Frankfurt am Main has developed a molecular light switch that makes it possible to control cells more accurately than ever before. The combination switch consists of two different light-sensitive membrane proteins – one for on, the other for off. The method used by the scientists to connect the two components can be used with different protein variants, making it highly versatile.

Molecular combination switch: two light-sensitive membrane proteins - here red and purple - are linked via a connecting piece (green) and anchored in the cell membrane (left). When the cell is illuminated with blue light, the channel lets positively charged ions in. Orange light has the opposite effect, letting negatively charged ions into the cell. The cell is thereby activated or deactivated, respectively (right).

© MPI of Biophysics

Optogenetics is a new field of research that aims to control cells using light. To this end, scientists make use of light-sensitive proteins that occur naturally in certain algae and bacteria. They introduce genes carrying the building instructions for these membrane proteins into the DNA of target cells. Depending on which proteins they use, they can equip cells with on and off switches that react to light of different wavelengths.

For accurate control, it is important that the cell function can be switched off and on equally well. This was exactly the problem until now: when the genes are introduced separately, the cell produces different numbers of copies of each protein and one type ends up dominating.

A group of scientists headed by Ernst Bamberg at the Max Planck Institute of Biophysics has now developed a solution that is both elegant and versatile: they have located the genes for the on and off proteins on the same portion of DNA, along with an additional gene containing the assembly instructions for a connection piece. This interposed protein links the two switch proteins and anchors them firmly in the cell membrane. “In this way, we can ensure that the on and off switches are built into the cell wall side by side, and always in a ratio of 1:1. This allows us to control the cell with great accuracy”, explains Ernst Bamberg.

The combination light switch conceived by the researchers consists of the membrane proteins channelrhodopsin-2 and halorhodopsin. Channelrhodopsin-2 originally comes from the single-celled green alga Chlamydomonas reinhardtii. It reacts to blue light by making the cell membrane permeable to positively charged ions. The resulting influx of ions triggers a nerve impulse that activates the cell. Halorhodopsin, isolated from the archaeon Natronomonas pharaonis, has the opposite effect: when the cell is illuminated with orange light, it allows negatively charged ions in, suppressing nerve impulses.

Since channelrhodopsin-2 and halorhodopsin react to light of different wavelengths, together they comprise a useful tool for switching cells on and off at will. The scientists have shown that the method they used to connect the two molecules is also suitable for use with other proteins. “By linking different proteins as required, we will be able to control cells with much greater accuracy in future”, affirms Bamberg.



Contact and sources:
Prof. Dr. Ernst Bamberg
Max Planck Institute of Biophysics, Frankfurt am Main

Citation:
Sonja Kleinlogel, Ulrich Terpitz, Barbara Legrum, Deniz Gökbuget, Edward S Boyden, Christian Bamann, Philip G Wood & Ernst Bamberg
A gene-fusion strategy for stoichiometric and co-localized expression of light-gated membrane proteins
Nature Methods, Vol. 8(12); DOI:10.1038/NMETH.1766

Peru's Misti Volcano: Understanding The Past To Assess Future Hazards

Misti volcano's last Plinian eruption happened ca. 2 ka (about 2,000 years) ago, emplacing voluminous tephra-fall, pyroclastic-flow, and lahar deposits. Arequipa, located at the foot of the volcano, has a growing population of over 800,000 people. Misti will erupt explosively again, making it important to understand this most recent Plinian eruption.

This Geological Society of America (GSA) Special Paper first provides a detailed description and analysis of the lahar deposits from the 2 ka eruption and the flows that emplaced them.

Credit: GSA

Because Misti is located in an arid region, the authors have also included a detailed discussion of the paleoclimate conditions that provided the water for voluminous mudflows. The authors further delineate the complete eruption sequence for the pyroclastic-flow and tephra-fall deposits, providing a narrative of the eruption progression and dynamics.

Finally, the book discusses the 2 ka eruption in the context of hazards from a future Plinian eruption and provides hazards maps for the different phenomena.

Contacts and sources:
Geological Society of America

Publication title: The 2 ka Eruption of Misti Volcano, Southern Peru—The Most Recent Plinian Eruption of Arequipa’s Iconic Volcano
Authors: Christopher J. Harpel, Shanaka de Silva, and Guido Salas
Publication type: Book (Paperback)
Publication date: 29 December 2011
Number of pages: 70
ISBN number: 978-0-8137-2484-3
Price: USD 40.00

Individual copies of the volume may be purchased through the Geological Society of America online bookstore, http://www.geosociety.org/bookstore/default.asp?oID=0&catID=9&pID=SPE484, or by contacting GSA Sales and Service.

Book editors of earth science journals/publications may request a review copy by contacting April Leo 

Environmental Temperature Has Effects On Sex Determination

Environmental temperature has effects on sex determination. In some species, such as the Atlantic silverside fish, sex determination depends mainly on temperature. In others, sex determination is written into the DNA, yet temperature can still override this genetic ‘instruction’.

Atlantic silverside fish 
Credit: Wikipedia

Previous studies with the European sea bass, a fish whose sex determination depends on a combination of genetic and environmental factors, had shown that, starting from a population with a normal sex ratio (equal proportions of males and females), it was possible to obtain an all-male group simply by increasing the water temperature during a critical period of early development.

The most intriguing observation was that the effect of temperature was greatest at a stage when the gonads had not yet differentiated, or even begun to form. Why this happens, and how temperature overrides the genetic component so early, had until now been a long-standing puzzle.

Now, research led by the Spanish National Research Council (CSIC) has found the answer. The team, led by Francesc Piferrer, a CSIC professor at the Institute of Marine Sciences in Barcelona, describes the mechanism by which increased temperatures trigger silencing of the aromatase gene.

Aromatase is an enzyme that converts androgens into estrogens, which are essential for the development of ovaries in all non-mammalian vertebrates. If there is no aromatase, there are no estrogens, and without estrogens the development of ovaries is not possible. The research, carried out with the contribution of the Centre for Genomic Regulation in Barcelona, is published this week in PLoS Genetics.

Early effects

In the experiment, the scientists exposed two groups of European sea bass larvae to different temperatures, normal and high, during their first weeks of life.

The results show that high temperature increases DNA methylation of the gonadal aromatase promoter (cyp19a), which in turn drives its silencing by inhibiting its transcriptional activation. In the group exposed to high temperature, some genetic females were only partially affected and still developed as females, while others, with the highest levels of DNA methylation, developed as males because aromatase was fully inhibited.
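
The logic of the finding can be caricatured with a toy threshold model, purely illustrative and with invented numbers: higher rearing temperature shifts cyp19a promoter methylation upward, and genetic females whose methylation crosses a silencing threshold lose aromatase activity and develop as males.

import random

random.seed(1)
SILENCING_THRESHOLD = 0.60    # assumed methylation fraction that fully silences cyp19a

def promoter_methylation(high_temperature):
    # Assumed mean methylation fractions; high temperature shifts the distribution up.
    mean = 0.55 if high_temperature else 0.35
    return min(1.0, max(0.0, random.gauss(mean, 0.10)))

def phenotype_of_genetic_female(high_temperature):
    methylation = promoter_methylation(high_temperature)
    return "male" if methylation > SILENCING_THRESHOLD else "female"

for high, label in [(False, "normal temperature"), (True, "high temperature")]:
    masculinized = sum(phenotype_of_genetic_female(high) == "male" for _ in range(1000))
    print(f"{label}: {masculinized / 10:.1f}% of genetic females masculinized")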

This is the first time that an epigenetic mechanism linking an environmental factor to a cellular process involved in sex determination has been described in any animal. Previously, a similar mechanism had been described only in some plants.

As researcher Francesc Piferrer points out, ‘animals are affected very soon, before differences between females and males become visible in histological samples, which happens on the 150th day of life, and even before the gonads start to form, which happens on the 35th day of life’.

This work explains why a rise of just a few degrees in water temperature masculinizes these animals, a finding relevant in the context of global change.

It also explains why many fish raised on farms are male, since farmers rear the larvae in warmer waters to accelerate their growth. Piferrer adds that ‘sex determination by temperature is very common in reptiles. It will be interesting to see if a similar mechanism to the one described exists in this group of vertebrates’.

Contacts and sources:
Centre for Genomic Regulation

Citation: Navarro-Martín L, Viñas J, Ribas L, Díaz N, Gutiérrez A, Di Croce L. “DNA methylation of the gonadal aromatase (cyp19a) promoter is involved in temperature-dependent sex ratio shifts in the European sea bass.” PLoS Genetics, Dec 29 2011.

Are You A Martian? We All Could Be, Scientists Say — And An MIT-Developed Instrument Might Someday Provide The Proof

Are we all Martians? According to many planetary scientists, it's conceivable that all life on Earth is descended from organisms that originated on Mars and were carried here aboard meteorites. If that's the case, an instrument being developed by researchers at MIT and Harvard could provide the clinching evidence.

If it is in fact true that we're related, then a promising strategy for detecting signs of past or present life on Mars would be to search for DNA or RNA, and specifically for particular sequences of these molecules that are nearly universal in all forms of terrestrial life. That's the strategy being pursued by MIT research scientist Christopher Carr and postdoctoral associate Clarissa Lui, working with Maria Zuber, head of MIT's Department of Earth, Atmospheric and Planetary Sciences (EAPS), and Gary Ruvkun, a molecular biologist at the Massachusetts General Hospital and Harvard University, who came up with the instrument concept and put together the initial team. Lui presented a summary of their proposed instrument, called the Search for Extra-Terrestrial Genomes (SETG), at the IEEE Aerospace Conference in March 2011 in Big Sky, Mont.

Are you a Martian?
Graphic: Christine Daniloff

The idea is based on several facts that have now been well established. First, in the early days of the solar system, the climates on Mars and the Earth were much more similar than they are now, so life that took hold on one planet could presumably have survived on the other. Second, an estimated one billion tons of rock have traveled from Mars to Earth, blasted loose by asteroid impacts and then traveling through interplanetary space before striking Earth's surface. Third, microbes have been shown to be capable of surviving the initial shock of such an impact, and there is some evidence they could also survive the thousands of years of transit through space before arriving at another planet.

So the various steps needed for life to have started on one planet and spread to another are all plausible. Additionally, orbital dynamics show that it's about 100 times easier for rocks to travel from Mars to Earth than the other way. So if life got started there first, microbes could have been carried here and we might all be its descendants.

So what?

If we are indeed descended from Martian organisms, there might be important lessons to be learned about our own biological origins by studying biochemistry on our neighbor planet, where biological traces long since erased here on Earth might have been preserved in the Martian deep freeze.

The MIT researchers' device would take samples of Martian soil, isolate any living microbes that might be present, or microbial remnants (which can be preserved for up to about a million years and still contain viable DNA), and separate out the genetic material in order to use standard biochemical techniques to analyze their genetic sequences.

"It's a long shot," Carr concedes, "but if we go to Mars and find life that's related to us, we could have originated on Mars. Or if it started here, it could have been transferred to Mars." Either way, "we could be related to life on Mars. So we should at least be looking for life on Mars that's related to us."

Even a few years ago, that might have seemed like more of a long shot, but recent Mars orbiter and rover missions have clearly shown that Mars once had abundant water, and many of the conditions thought to be needed to support life. And although the surface of Mars today is too cold and dry to support known life forms, there is evidence that liquid water may exist not far below the surface. "On Mars today, the best place to look for life is in the subsurface," Carr says.

So the team has been developing a device that could take a sample of Martian soil from below the surface — perhaps dredged up by a rover equipped with a deep drill — and process it to separate out any possible organisms, amplify their DNA or RNA using the same techniques used for forensic DNA testing on Earth, and then use biochemical markers to search for signs of particular, genetic sequences that are nearly universal among all known life forms.
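
As a loose illustration of what searching for nearly universal sequences means in practice, and not a description of the actual SETG pipeline, the sketch below scans hypothetical amplified reads for a short motif resembling a conserved ribosomal-RNA region, tolerating a couple of mismatches. The motif and the reads are made-up placeholders.

# Hypothetical sketch: scan amplified reads for a short conserved motif,
# allowing a limited number of mismatches. Motif and reads are placeholders.
CONSERVED_MOTIF = "GGATTAGATACCC"   # stand-in for a nearly universal rRNA stretch

def hamming(a, b):
    return sum(x != y for x, y in zip(a, b))

def find_motif(read, motif=CONSERVED_MOTIF, max_mismatches=2):
    # Return start positions where the motif occurs with at most max_mismatches errors.
    k = len(motif)
    return [i for i in range(len(read) - k + 1)
            if hamming(read[i:i + k], motif) <= max_mismatches]

reads = [                                   # made-up reads standing in for sample DNA
    "TTACGGATTAGATACCCTGGTAGTCC",           # exact match
    "TTACGGATAAGATACGCTGGTAGTCC",           # match with two mismatches
    "CCCCCCCCCCCCCCCCCCCCCCCCCC",           # no match
]
for read in reads:
    print(read, "->", find_motif(read))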

The researchers estimate that it could take two more years to complete the design and testing of a prototype SETG device. Although the proposed device has not yet been selected for any upcoming Mars mission, a future mission with a lander or rover equipped with a drill could potentially carry this life-detection instrument.

No instrument has been sent to Mars specifically to look for evidence of life since NASA's twin Viking landers in 1976, which produced tantalizing but ambiguous results. An instrument on the Mars Science Laboratory, to be launched in the fall, will investigate chemistry relevant to life. The instrument from the MIT-Harvard team directly addresses Earth-like molecular biology.

Christopher McKay, an astrobiologist at NASA-Ames Research Center in California who specializes in research related to the possibility of life on Mars, says this work is "very interesting and important." He says, "it is not implausible that life on Mars will be related to life on Earth and therefore share a common genetics. In any case it would be important to test this hypothesis." But he adds that there is another motive for doing this research as well: "From an astronaut health and safety point of view and from a return-sample point of view, there is more to worry about" if there are organisms closely related to us on Mars, since a microbe that is similar is much more likely to be infectious to terrestrial life forms than would a totally alien microbe — so it is very important to be able to detect such life forms if they are present on Mars. In addition, this method could also detect any biological contamination on Mars that has been brought by spacecraft from Earth.

This kind of test is something we have the ability to do, he says, and therefore, although such an experiment has not yet been formally approved, "it seems improbable to me that we will do a serious search for life on Mars and not do this test."

Contacts and sources:
David L. Chandler, MIT News Office

Breast Cancer Survivors Benefit From Practicing Mindfulness-Based Stress Reduction

Women recently diagnosed with breast cancer have higher survival rates than those diagnosed in previous decades, according to the American Cancer Society. However, survivors continue to face health challenges after their treatments end. Previous research reports that as many as 50 percent of breast cancer survivors are depressed. Now, University of Missouri researchers in the Sinclair School of Nursing say a meditation technique can help breast cancer survivors improve their emotional and physical well-being.

Yaowarat Matchim, a former nursing doctoral student; Jane Armer, professor of nursing; and Bob Stewart, professor emeritus of education and adjunct faculty in nursing, found that breast cancer survivors’ health improved after they learned Mindfulness-Based Stress Reduction (MBSR), a type of mindfulness training that incorporates meditation, yoga and physical awareness.

Jane Armer and other MU researchers found that breast cancer survivors’ health improved after they completed mindfulness training that incorporates meditation, yoga and physical awareness.
Credit: University of Missouri 

“MBSR is another tool to enhance the lives of breast cancer survivors,” Armer said. “Patients often are given a variety of options to reduce stress, but they should choose what works for them according to their lifestyles and belief systems.”

The MBSR program consists of group sessions held over a period of eight to ten weeks. During the sessions, participants practice meditation skills, discuss how their bodies respond to stress and learn coping techniques. The researchers found that survivors who learned MBSR lowered their blood pressure, heart rate and respiratory rate. In addition, participants' mood improved, and their level of mindfulness increased after taking the class. Armer says that, for best results, participants should continue practicing MBSR after the class ends to maintain the positive effects.

“Mindfulness-based meditation, ideally, should be practiced every day or at least on a routine schedule,” Armer said. “MBSR teaches patients new ways of thinking that will give them short- and long-term benefits.”

Armer says the non-pharmaceutical approach works best as a complement to other treatment options such as chemotherapy, radiation and surgery.

“Post diagnosis, breast cancer patients often feel like they have no control over their lives,” Armer said. “Knowing that they can control something—such as meditation—and that it will improve their health, gives them hope that life will be normal again.”

The study, “Effects of Mindfulness-Based Stress Reduction (MBSR) on Health Among Breast Cancer Survivors,” was published in the Western Journal of Nursing Research.
Contacts and sources:
Jesslyn Tenhouse
University of Missouri-Columbia

Coal Plants Without Scrubbers Account For A Majority Of U.S. SO2 Emissions

Coal-fired electric power plants make up the largest source of national sulfur dioxide (SO2) emissions. The Cross-State Air Pollution Rule (CSAPR) calls for a 53% reduction in SO2 emissions from the electric power sector by 2014. To meet this goal, plant owners can implement one of, or a combination of, three main strategies: use lower-sulfur coal in their boilers, retire plants without emissions controls, or install emissions control equipment, primarily flue gas desulfurization (FGD) scrubbers. Plants with FGD equipment accounted for 58% of total coal-fired electricity generation in 2010, while producing only 27% of total SO2 emissions.


Source: U.S. Energy Information Administration, based on Form EIA-860, EPA Continuous Emissions Monitoring System, Ventyx Energy Velocity.
Note: Circles denote plants with capacity greater than 25 megawatts. Red circles indicate unscrubbed coal plants, green circles indicate coal plants with scrubbers, and blue circles indicate coal plants that plan to add scrubbers.
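
A quick back-of-the-envelope reading of the 2010 shares quoted above (58% of coal-fired generation from scrubbed plants, but only 27% of the SO2): dividing each group's share of emissions by its share of generation gives a relative emission intensity, which makes the gap between scrubbed and unscrubbed plants explicit. The calculation below is just that arithmetic, treating the quoted figures as shares of coal-plant generation and coal-plant SO2 emissions.

# Relative SO2 emission intensity = share of emissions / share of generation (2010).
scrubbed_generation_share = 0.58
scrubbed_emissions_share = 0.27
unscrubbed_generation_share = 1 - scrubbed_generation_share
unscrubbed_emissions_share = 1 - scrubbed_emissions_share

scrubbed_intensity = scrubbed_emissions_share / scrubbed_generation_share
unscrubbed_intensity = unscrubbed_emissions_share / unscrubbed_generation_share
print(f"scrubbed plants:   {scrubbed_intensity:.2f} x fleet-average SO2 per unit generated")
print(f"unscrubbed plants: {unscrubbed_intensity:.2f} x fleet-average SO2 per unit generated")
print(f"ratio (unscrubbed / scrubbed): {unscrubbed_intensity / scrubbed_intensity:.1f}")

With these shares, unscrubbed plants come out at nearly four times the SO2 per unit of electricity of scrubbed plants.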

SO2 is formed during the combustion of coal. The amount of SO2 produced depends on the sulfur content of the coal burned in a boiler. FGD scrubbers remove the SO2 from a boiler's post-combustion exhaust (flue gas) by passing it through an alkaline solution. This process is also effective in removing acid gases, such as hydrochloric acid. Acid gases are expected to be regulated under EPA's Air Toxics Rule.

FGD scrubber SO2 removal rates vary based on characteristics such as the specific equipment type, its age, and the sulfur content of the coal. New systems can achieve removal efficiencies of up to 98%, according to EPA estimates.

The sulfur content of coal varies by rank. Generally, bituminous coal and lignite have higher sulfur content than subbituminous coal, but this can vary by region. Bituminous coal is concentrated in the eastern half of the U.S., while subbituminous coal is found in the West. Lignite production is concentrated in Texas, Louisiana, and North Dakota.


Source: U.S. Energy Information Administration, based on EPA CEMS 2010 data.
Note: Graph includes generation and emissions from plants with capacity greater than 25 megawatts.

Subbituminous coal has the lowest sulfur content of the three main coal types, so plants that burn it have been less likely to add scrubbers. Of the plants without scrubbers, those burning subbituminous coal generated 69% of the electricity while emitting only 48% of the associated emissions in 2010 (see chart). Even though lignite-burning plants accounted for 16% of SO2 emissions from scrubbed plants in 2010, they generated only 8% of the electricity from scrubbed plants.

I Know Something You Don't Know -- And I Will Tell You!

Many animals produce alarm calls in response to predators, and do so more often when kin or mates are present than when other audience members are. So far, however, there has been no evidence that they take other group members' knowledge state into account.

Researchers from the Max Planck Institute for Evolutionary Anthropology in Leipzig, Germany, and the University of St. Andrews, Great Britain, set up a study with wild chimpanzees in Uganda and found that the chimpanzees were more likely to give alarm calls to a snake in the presence of unaware group members than in the presence of aware ones, suggesting that they recognize knowledge and ignorance in others. Sharing new information with others by means of communication represents a crucial stage in the evolution of language; this study thus suggests that this stage was already present when our lineage split from that of chimpanzees six million years ago.

Chimpanzees fear snakes. This one took refuge on a tree.
 
Credit: Roman Wittig/MPI f. Evolutionary Anthropology

The ability to recognize another individual's knowledge and beliefs may be unique to humankind. Tests of a "theory of mind" in animals have mainly been conducted in captivity and have yielded conflicting results: some non-human primates can read others' intentions and know what others see, but they may not understand that, in others, perception can lead to knowledge. When results are negative, however, the question remains whether chimpanzees really cannot do the task or whether they simply do not understand it. "The advantage of addressing these questions in wild chimpanzees is that they are simply doing what they always do in an ecologically relevant setting", says Catherine Crockford, a researcher at the University of St. Andrews.

Catherine Crockford, Roman Wittig and colleagues set up a study with wild chimpanzees in Budongo Forest, Uganda. They presented them with models of dangerous venomous snakes, two gaboon vipers and one rhinoceros viper. "As these highly camouflaged snakes sit in one place for weeks, it pays for the chimp who discovers it to inform other community members about the danger", says Crockford.

The researchers monitored the behavior of 33 different chimpanzees, each of which saw one of the three snake models, and found that alarm calls were produced more often when the caller was with group members who had either not seen the snake or had not been present when alarm calls were emitted. "Chimpanzees really seem to take another's knowledge state into account and voluntarily produce a warning call to inform the others of a danger that they [the others] do not know about", says Roman Wittig of the Max Planck Institute for Evolutionary Anthropology and the University of St. Andrews. "In contrast, chimpanzees were less likely to inform audience members who already know about the danger."

This study shows not only that these alert calls are produced intentionally, but also that they are produced more often when the audience is ignorant of the danger. "It is as if the chimpanzees really understand that they know something the audience does not AND they understand that by producing a specific vocalization they can provide the audience with that information", concludes Wittig.

Some scientists suggest that providing group members with missing information by means of communication is a crucial stage in the evolution of language: why inform audience members if you do not realize they need the information? Until now it was not clear at what point in hominoid or hominid evolution this stage appeared, and it had been assumed that it more likely emerged during hominid evolution. This study suggests, however, that it was already present when our lineage split from that of chimpanzees six million years ago.


Contacts and sources: