Saturday, June 30, 2012

How Tattoos ‘Move’ With Age, Ink Particles Wander Over The Years

The dyes injected into the skin to create tattoos move with time – permanently altering the look of a given design. In this April's Mathematics Today, Dr Ian Eames, a Reader in Fluid Mechanics at UCL, published a mathematical model that estimates the movement of these ink particles and predicts how specific tattoo designs will look several years in the future.


Credit: University College London

“Tattoos are incredibly popular worldwide with more than a third of 18-25 year olds in the USA sporting at least one design,” says Dr Eames. “A great deal of work has already been done on the short term fate of ink particles in the skin, tracking them over periods of just a few months – but much less is known about how these particles move over longer periods of time.

“This paper provides a mathematical framework enabling us to predict how ink particles move over 20 year periods. It helps pave the way towards assessing whether there are any long-term health implications with tattoos – in addition to giving people an idea of how their chosen design could look several years down the line.”

Tattoo inks are a suspension of particles which are insoluble in water. Heavy metals such as mercury, lead, cadmium, nickel, zinc and iron are used for colours and the tattoos are created by locally puncturing the dermis level of the skin, while simultaneously applying ink.

The damage to the skin leads to an initial immune response and white blood cells arrive to clear the debris. During this process, some of the ink particles are removed from the body via the lymphatic system, while the remainder are engulfed in fibroblast cells and sealed below the surface of the skin. The dispersal of the ink particles occurs over time as the cells which contain them either divide, or die and exit the body.

“Skin type, age, size, exposure to the sunlight and the type of ink which is used all influence how a tattoo disperses with time,” says Dr Eames. “But broadly speaking, what my paper shows is that the small details in a tattoo are lost first, with thicker lines being less affected. Although finely detailed tattoos might look good when they are first done, they tend to lose their definition after 15 years - depending on how fine the lines are.”
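Eames's published model is not reproduced in the article, but the headline behaviour he describes – fine details blurring away while thick lines persist – is exactly what any diffusion process predicts. The sketch below is a toy one-dimensional finite-difference diffusion, not the published model; the grid size, diffusivity and step count are arbitrary illustrative choices. It shows a 1-cell ink line fading far faster than a 5-cell one:

```python
import numpy as np

def diffuse(profile, steps, d=0.2):
    """Explicit finite-difference steps for c_t = D c_xx (stable for d <= 0.5)."""
    c = profile.astype(float).copy()
    for _ in range(steps):
        c[1:-1] += d * (c[2:] - 2 * c[1:-1] + c[:-2])
    return c

x = np.zeros(200)
thin = x.copy();  thin[50] = 1.0         # fine detail: ink in a single cell
thick = x.copy(); thick[140:145] = 1.0   # thick line: ink spread over 5 cells

thin_after = diffuse(thin, 500)
thick_after = diffuse(thick, 500)
# After the same elapsed "time", the thin line's peak intensity has dropped
# several times further than the thick line's: small details are lost first.
print(thin_after.max(), thick_after.max())
```

The same qualitative result holds in two dimensions: the smallest features of a design are the first to lose definition.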

Source: University College London

Astronomy Triple Treat: 3 Eclipses in June and July, 2 Solar and 1 Lunar



Partial Solar Eclipse of June 01
The next partial solar eclipse occurs at the Moon's descending node in Taurus. The event is visible from high latitudes in the Northern Hemisphere.

The eclipse begins at sunrise in Siberia and northern China where the penumbral shadow first touches Earth at 19:25:18 UT. Two hours later, greatest eclipse occurs at 21:16:11 UT. At that time, an eclipse of magnitude 0.601 will be visible from the Arctic coast of western Siberia as the midnight Sun skirts the northern horizon. Although most of Alaska and northern Canada will witness the partial eclipse, the southern limit of the penumbra falls along a curve from south of Fairbanks to central New Brunswick and Nova Scotia.

Reykjavik, Iceland, receives a 0.462 magnitude eclipse just before sunset. Northernmost Norway, Sweden and Finland also get a midnight Sun eclipse, with the event hanging above the northern horizon. The partial eclipse ends at 23:06:56 UT when the penumbra leaves Earth just north of Newfoundland in the Atlantic Ocean.

Eclipse times and local circumstances for major cities in North America, Europe and Asia are given in Table 2. The Sun's altitude, azimuth, the eclipse magnitude and obscuration are given at the instant of maximum eclipse.

This is the 68th eclipse of Saros 118. The family began with a group of 8 partial eclipses from the years 803 to 929. The Saros ends with a small partial eclipse in 2083. Complete details for the entire series of 72 eclipses (in the order: 8 partial, 40 total, 2 hybrid, 15 annular and 7 partial) spanning 1280 years can be found at:

eclipse.gsfc.nasa.gov/SEsaros/SEsaros118.html

Total Lunar Eclipse of June 15
The first lunar eclipse of 2011 occurs at the Moon's ascending node in southern Ophiuchus about 7° west of the Lagoon Nebula (M8). The Moon passes deeply through Earth's umbral shadow during this rather long event. The total phase itself lasts 100 minutes. The last eclipse to exceed this duration was in July 2000. The Moon's contact times with Earth's umbral and penumbral shadows are listed below.

Penumbral Eclipse Begins: 17:24:34 UT
Partial Eclipse Begins: 18:22:56 UT
Total Eclipse Begins: 19:22:30 UT
Greatest Eclipse: 20:12:37 UT
Total Eclipse Ends: 21:02:42 UT
Partial Eclipse Ends: 22:02:15 UT
Penumbral Eclipse Ends: 23:00:45 UT
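The quoted 100-minute totality follows directly from the contact times above. A quick check in plain Python, with the times taken verbatim from the list:

```python
from datetime import datetime

def dur(t1, t2):
    """Duration between two UT timestamps on the same date."""
    fmt = "%H:%M:%S"
    return datetime.strptime(t2, fmt) - datetime.strptime(t1, fmt)

print(dur("19:22:30", "21:02:42"))   # totality: 1:40:12, the "100 minutes"
print(dur("18:22:56", "22:02:15"))   # partial (umbral) start to end: 3:39:19
print(dur("17:24:34", "23:00:45"))   # penumbral start to end: 5:36:11
```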

At the instant of greatest eclipse the umbral eclipse magnitude will reach 1.6998 as the Moon's centre passes within 5.3 arc-minutes of the shadow axis. The Moon's southern limb will lie 54.2 arc-minutes from the edge of the umbra while the northern limb will lie 22.3 arc-minutes from the umbra's edge. Thus, the northern regions of the Moon will probably appear brighter than the southern regions that lie deeper in the shadow. Since the Moon samples a large range of umbral depths during totality, its appearance will change dramatically with time. It is difficult to predict the exact brightness distribution in the umbra, so observers are encouraged to estimate the Danjon value at different times during totality (see Danjon Scale of Lunar Eclipse Brightness). Note that it may also be necessary to assign different Danjon values to different portions of the Moon (i.e. north vs. south).

Nearly 30 years ago (1982 Jul 06), the author watched another total lunar eclipse with the Moon in the same part of the sky, and was amazed at how brilliantly the summer Milky Way glowed, since it had been all but invisible during the partial phases. Observers will have a similar opportunity during June's eclipse. In this case, the totally eclipsed Moon will lie in southern Ophiuchus just 8° northwest of the brightest Sagittarian star clouds. The summer constellations are well placed for viewing, so a number of bright stars can be used for magnitude comparisons with the totally eclipsed Moon.

Antares (mv = +0.92v) is 15° to the west, Shaula (mv = +1.63) is 14° south, Epsilon Sgr (mv = +1.85) is 15° southeast, Arcturus (mv = -0.05) stands 55° to the northwest, and Altair (mv = +0.77) is 46° northeast of the Moon.

Figure 3 shows the path of the Moon through the penumbra and umbra as well as a map of Earth showing the regions of eclipse visibility. The entire event will be seen from the eastern half of Africa, the Middle East, central Asia and western Australia. Observers throughout Europe will miss the early stages of the eclipse because they occur before moonrise. Fortunately, totality will be seen throughout the continent except for northern Scotland and northern Scandinavia. Eastern Asia, eastern Australia, and New Zealand will miss the last stages of eclipse because they occur after moonset. Again, the total phase will be seen from most of these regions. Even observers in eastern Brazil, Uruguay and Argentina will witness totality. However, none of the eclipse will be visible from North America. At mid-eclipse, the Moon is near the zenith for observers from Reunion and Mauritius.

Table 3 lists predicted umbral immersion and emersion times for 20 well-defined lunar craters. The timing of craters is useful in determining the atmospheric enlargement of Earth's shadow (see Crater Timings During Lunar Eclipses).

The June 15 total lunar eclipse is the 34th member of Saros 130, a series of 71 eclipses occurring in the following order: 8 penumbral, 20 partial, 14 total, 22 partial, and 7 penumbral lunar eclipses (Espenak and Meeus, 2009a) spanning 1262 years. Complete details for Saros 130 can be found at:

eclipse.gsfc.nasa.gov/LEsaros/LEsaros130.html



Partial Solar Eclipse of July 01
Just one lunation after the June 01 partial eclipse, the third solar eclipse of the year takes place at the Moon's descending node in western Gemini. This Southern Hemisphere event is visible from a D-shaped region in the Antarctic Ocean south of Africa (Figure 4). Such a remote and isolated path means that it may very well turn out to be the solar eclipse that nobody sees. At greatest eclipse (08:38:23 UT), the magnitude is just 0.097.

This event is the first eclipse of Saros 156. The family will produce 8 partial eclipses, followed by 52 annular eclipses and ending with 9 more partials. Complete details for the entire series of 69 eclipses spanning the years 2011 through 3237 can be found at:

eclipse.gsfc.nasa.gov/SEsaros/SEsaros156.html

Source: NASA

Kids Should Not Consume Energy Drinks, And Rarely Need Sports Drinks, Warns AAP


Sports and energy drinks are heavily marketed to children and adolescents, but in most cases kids don’t need them – and some of these products contain substances that could be harmful to children. 

Variety of energy drinks on fridge display
Image: Wikipedia

In a new clinical report, the American Academy of Pediatrics (AAP) outlines how these products are being misused, discusses their ingredients, and provides guidance to decrease or eliminate consumption by children and adolescents. The report, “Sports Drinks and Energy Drinks for Children and Adolescents: Are They Appropriate?” is published in the June 2011 issue of Pediatrics (published online May 30).

“There is a lot of confusion about sports drinks and energy drinks, and adolescents are often unaware of the differences in these products,” said Marcie Beth Schneider, MD, FAAP, a member of the AAP Committee on Nutrition and co-author of the report. “Some kids are drinking energy drinks – containing large amounts of caffeine – when their goal is simply to rehydrate after exercise. This means they are ingesting large amounts of caffeine and other stimulants, which can be dangerous.”

Sports drinks and energy drinks are different products, said Holly J. Benjamin, MD, FAAP, a member of the executive committee of the AAP Council on Sports Medicine and Fitness, and a co-author of the report. Sports drinks, which contain carbohydrates, minerals, electrolytes and flavoring, are intended to replace water and electrolytes lost through sweating during exercise.

Sports drinks can be helpful for young athletes engaged in prolonged, vigorous physical activities, but in most cases they are unnecessary on the sports field or the school lunchroom.

“For most children engaging in routine physical activity, plain water is best,” Dr. Benjamin said.

“Sports drinks contain extra calories that children don’t need, and could contribute to obesity and tooth decay. It’s better for children to drink water during and after exercise, and to have the recommended intake of juice and low-fat milk with meals. Sports drinks are not recommended as beverages to have with meals.”

Energy drinks contain substances not found in sports drinks that act as stimulants, such as caffeine, guarana and taurine. Caffeine – by far the most popular stimulant – has been linked to a number of harmful health effects in children, including effects on the developing neurologic and cardiovascular systems. Energy drinks are never appropriate for children or adolescents, said Dr. Schneider and Dr. Benjamin. In general, caffeine-containing beverages, including soda, should be avoided.

The report contains tables listing specific products available today and their contents.

“In many cases, it’s hard to tell how much caffeine is in a product by looking at the label,” Dr. Schneider said. “Some cans or bottles of energy drinks can have more than 500 mg of caffeine, which is the equivalent of 14 cans of soda.”
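Dr. Schneider's equivalence is easy to sanity-check; the division below implies roughly 36 mg of caffeine per can of soda, in line with typical colas:

```python
# Figures quoted in the report: an energy drink can exceed 500 mg of caffeine,
# described as the equivalent of 14 cans of soda.
caffeine_energy_drink_mg = 500
cans_of_soda = 14

mg_per_can = caffeine_energy_drink_mg / cans_of_soda
print(round(mg_per_can))   # ~36 mg of caffeine per can
```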

AAP recommendations include:
  • Pediatricians should highlight the difference between sports drinks and energy drinks with patients and their parents, and talk about the potential health risks.
  • Energy drinks pose potential health risks because of the stimulants they contain, and should never be consumed by children or adolescents.
  • Routine ingestion of carbohydrate-containing sports drinks by children and adolescents should be avoided or restricted, because they can increase the risk of overweight and obesity, as well as dental erosion.
  • Sports drinks have a limited function for pediatric athletes; they should be ingested when there is a need for rapid replenishment of carbohydrates and/or electrolytes in combination with water during prolonged, vigorous physical activity.
  • Water, not sports drinks, should be the principal source of hydration for children and adolescents.

Z-Man Technology Enables Soldier Spidermen To Climb Vertical Walls Using Microspines and Magnets

The Z-Man program will develop biologically inspired climbing aids to enable soldiers to scale vertical walls constructed from typical building materials, without using ropes or ladders. Geckos, spiders and small animals are the inspiration behind these climbing aids. These creatures scale vertical surfaces using unique systems that exhibit strong reversible adhesion via van der Waals forces or by hooking into surface asperities.

Z-Man seeks to build synthetic versions of those biological systems, optimize them for efficient human climbing, and use them as novel climbing aids. The overall goal is to enable a soldier to scale a vertical surface while carrying a full combat load using Z-Man technologies.

Credit: DARPA

In 2010, DARPA demonstrated a fully loaded soldier (300 lb) wearing reattachable pads (magnets and microspines) scaling a series of 25-foot walls built from mission-relevant materials using Z-Man technology.

In 2011, DARPA began the transition of Z-Man prototype technologies (magnets and microspines) to the Armed Services.

According to DARPA's 2012 budget, the plans are to integrate nanoparticle-enabled space propulsion technology and Z-Man adhesion technologies for operationally relevant space applications such as orbital debris cleanup and intelligence, surveillance, and reconnaissance (ISR).

Astonishing Flying Microdrones Coming! DARPA UAVForge Challenge for $100,000 Prize to Design Next Generation Small UAVs

Small unmanned aerial vehicles (UAVs) play a critical role in modern military operations. The next generation of these aerial robotic systems needs to have enhanced takeoff and landing capabilities, better endurance, require less support equipment and be adaptable to mission needs in varying conditions.

The Defense Advanced Research Projects Agency (DARPA) and Space and Naval Warfare Systems Center Atlantic (SSC Atlantic) call on innovators of every kind, from scientists and engineers to citizen scientists and dreamers, to collaborate on the UAVForge Challenge and win $100,000 USD.

Credit: DARPA

The UAVForge challenge uses crowdsourcing to build small UAVs through an exchange of ideas and design practices. The goal is to build and test a user-intuitive, backpack-portable UAV that can quietly fly in and out of critical environments to conduct sustained surveillance for up to three hours.

According to Jim McCormick, DARPA program manager, “The UAVForge crowd-sourced approach seeks to capture and mature novel ideas and systems integration methods from communities outside the traditional DoD acquisition process.”

Self-selected teams will participate in a series of peer-reviewed milestones where participant rating will identify the top ten teams that advance to the UAVForge Fly-Off Competition. During the competition, vehicles will be tested in a simulated high-stress surveillance mission.

“This is a fascinating challenge and the solution space is wide open,” explained McCormick. “We’re excited to see what innovative ideas emerge, so we’re trying to give individuals and teams lots of time to develop their concepts prior to the initial design submission date planned for late this fall.”

The winning team will be awarded $100,000 and the opportunity to showcase its design in an overseas military exercise. Additionally, the winning team will work with a government-selected UAV manufacturer to produce a limited quantity of systems for future warfighter experimentation.


This video demonstrates how the Draganflyer X8 from http://www.draganfly.com can fold up to be compact for easy transport.



Quadrocopter Microdrone


HG3 Willy

Visiofly's dirigible airships, equipped for high-definition video and photography, give you an exceptional aerial vantage point. Their technical design makes them extremely simple to put into action. Visiofly airships are so simple to use that you can master the necessary piloting skills in very little time.


Introducing the VideoZoom10x payload for the Aeryon Scout. Targeted at real-time reconnaissance and identification applications, the VideoZoom10x custom payload by Aeryon adds a stabilized ten-times optical zoom video capability to the Scout's family of payloads. At a mere 200 grams, the VideoZoom10x is the world's lightest all-digital gimballed optical zoom camera. The VideoZoom10x produces the highest quality real-time video available in any micro UAV – for example, an operator can determine if someone is holding a gun versus a shovel from a distance of greater than 300 meters.

ArduCopter Prototype#1 Test Fly

Watch ideas take flight and learn more by visiting www.UAVForge.net.

Source: DARPA

Auckland Is Built On Volcanic Field: New Research Reveals "Forgotten" Ancient Eruptions

New research on Auckland’s volcanic field has uncovered a volcano which had been all but forgotten, and this work will better define what is most likely to happen when the next volcano forms in Auckland. The research is part of the Devora project, a seven-year study to better understand the volcanic history of Auckland and help prepare the city for a future eruption.

By analysing records from boreholes drilled for foundations of buildings and roads or for water supply, scientists have been able to identify a previously little-known volcano now hidden beneath the suburb of Grafton, close to the University Medical School.

Geologist Bruce Hayward of Geomarine Research put together the volcanic puzzle by linking lava flows between boreholes. By measuring changes in the thickness of the lava flows and volcanic ash, it is possible to identify a buried volcanic crater.

“The crater is about 1km across and filled with solidified lava flows,” Dr Hayward said.

Auckland is built on the Auckland Volcanic Field, a group of about 50 volcanoes that have erupted over the last 250,000 years. Scientists believe that most of the volcanoes erupted only for a few months or years and then became inactive. However, our knowledge of exactly when each volcano erupted, and how future eruptions might progress is incomplete.

Close inspection of the second oldest geological map of the Auckland Volcanic Field, published by early explorer and geologist Ferdinand von Hochstetter in 1864, shows four volcanic vents in the vicinity of the Domain.

Image showing location and size of buried and 'forgotten' volcanic crater in the central Auckland suburb of Grafton.
Photo: Dr Bruce Hayward, Geomarine Research

“So although this find is exciting, it is clear that Hochstetter recognised the presence of a volcano in this locality before it was covered in houses,” Dr Hayward said.

“It would appear that this Grafton volcano erupted just before the neighbouring Domain Volcano, more than 50,000 years ago.” A thick layer of volcanic ash from the Domain eruptions buried and hid the Grafton Volcano until recent boreholes revealed its full extent and nature.

Over the last few years, geophysicists at The University of Auckland have also been studying the rocks under Auckland using gravity and magnetic measurements. Because lava is very magnetic, airborne surveys of the city have revealed where lava is present underground, even where it is not visible at the surface.

One area of high gravity and magnetism is in the same area that Dr Hayward has identified the new Grafton volcano. Bringing together these different methods has helped to confirm the findings.

Joint Project Leader, Jan Lindsay from The University of Auckland, said the new information shows that we still have a lot to learn about the past volcanic activity in Auckland.

Scientists hope to bring the borehole data from across the city into a central database where it can be used to model where past eruptions have occurred.

The Devora project is led jointly by GNS Science and the University of Auckland in collaboration with Massey University and brings together data and researchers from many different areas of study. The project is funded jointly by the Earthquake Commission, the Auckland Council, the Ministry of Science and Innovation and the University of Auckland. Project Devora, which stands for Determining Volcanic Risk in Auckland, started in late 2008.

Source:
GNS Science, University of Auckland

Stars Help To Track Space Junk

A team of researchers from the Royal Institute and Observatory of the Navy (ROA) in Cádiz (Spain) has developed a method to track the movement of geostationary objects using the position of the stars, which could help to monitor space debris. The technique can be used with small telescopes and in places that are not very dark.

This is the Fabra-ROA Telescope Montsec (TFRM).
Credit: ROA/RACAB

Objects or satellites in geostationary orbit (GEO) can always be found above the same point on the Equator, meaning that they appear immobile when observed from Earth. By night, the stars appear to move around them, a feature that scientists have taken advantage of for decades in order to work out the orbit of these objects, using images captured by telescopes, as long as these images contain stars to act as a reference point.
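That "immobile" appearance pins GEO objects to one specific orbital radius: the one whose period matches Earth's sidereal rotation. As a back-of-envelope check (standard constants, not part of the ROA method), Kepler's third law recovers the familiar ~35,786 km altitude:

```python
import math

GM = 3.986004418e14   # Earth's gravitational parameter, m^3/s^2
T = 86164.1           # one sidereal day, s
R_EARTH = 6378.137    # Earth's equatorial radius, km

# Kepler's third law: a^3 = GM * T^2 / (4 * pi^2)
a_km = (GM * T**2 / (4 * math.pi ** 2)) ** (1 / 3) / 1000
altitude_km = a_km - R_EARTH
print(round(a_km), round(altitude_km))   # ~42164 km radius, ~35786 km altitude
```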

This method was abandoned when satellites started to incorporate transponders (devices that made it possible to locate them using the data from emitted and reflected signals). However, the classic astrometric techniques are now coming back into vogue due to the growing problem of space debris, which is partly made up of the remains of satellites without active transponders.

"Against this backdrop, we developed optical techniques to precisely observe and position GEO satellites using small and cheap telescopes, and which could be used in places that are not particularly dark, such as cities", Francisco Javier Montojo, a member of the ROA and lead author of a study published in the journal Advances in Space Research, tells SINC.

The method can be used for directly detecting and monitoring passive objects, such as the space junk in the geostationary ring, where nearly all communications satellites are located. At low orbits (up to around 10,000 km) these remains can be tracked by radar, but above this level the optical technique is more suitable.

Montojo explains that the technique could be of use for satellite monitoring agencies "to back up and calibrate their measurements, to check their manoeuvres, and even to improve the positioning of satellites or prevent them from colliding into other objects".

"The probability of collisions or interference occurring between objects is no longer considered negligible since the first collision between two satellites, on 10 February 2009, between the American Iridium 33 and the Russian Cosmos 2251", the researcher points out.

Image software and 'double channel'

The team has created software that can precisely locate the centre of the traces or lines that stars leave in images (due to the time exposure of the photographs). The main advantage of the programme is that it "globally reduces" the positions of the object to be followed with respect to the available stellar catalogues. To do this, it simultaneously uses all the stars and all the photographs taken by the telescope's CCD camera on one night. It does not matter if there are not sufficient reference stars in some shots, because they are all examined together as a whole.
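The ROA software itself is not published in the article, but the basic step it describes – finding the centre of a star trail in a CCD frame – can be illustrated with an intensity-weighted centroid on a synthetic image (the frame size, trail brightness and background level are made-up values):

```python
import numpy as np

# Synthetic CCD frame: a star trail 20 pixels long along row 10.
frame = np.zeros((32, 48))
frame[10, 5:25] = 100.0   # uniform trail brightness
frame += 1.0              # flat sky background

# Subtract the background, then take the intensity-weighted centroid.
signal = np.clip(frame - 1.0, 0, None)
ys, xs = np.indices(signal.shape)
total = signal.sum()
cy = (ys * signal).sum() / total
cx = (xs * signal).sum() / total
print(cy, cx)   # the geometric centre of the trail
```

A real pipeline would fit the trail shape and handle noise, but the trail centres recovered this way are what get reduced against the star catalogue to yield the satellite's position.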

Optical observation allows the object to be located at each moment. Using these data and another piece of (commercial) software, it is possible to determine the orbit of the GEO object, in other words to establish its position and speed, as well as to predict its future positions. The method was validated by tracking three Hispasat satellites (H1C, H1D and Spainsat) and checking the results against those of the Hispasat monitoring agency.

"As an additional original application, we have processed our optical observations along with the distances obtained using another technique known as 'double channel' (signals that travel simultaneously between two clocks or oscillators to adjust the time)", says Montojo. The Time Section of the ROA uses this methodology to remotely compare patterns and adjust the legal Spanish time to International Atomic Time.

Incorporating these other distance measurements leads to a "tremendous reduction" in uncertainty about the satellite's position, markedly improving the ability to determine its orbit.

Data from the ROA's veteran telescope in San Fernando (Cádiz) were used to carry out this study, but in 2010 the institution unveiled another, more modern one at the Montsec Astronomical Observatory in Lleida, co-managed by the Royal Academy of Sciences and Arts of Barcelona. This is the Fabra-ROA Telescope at Montsec (TFRM), which makes remote, robotic observations.

"The new telescope has features that are particularly well suited to detecting space junk, and we hope that in the near future it will play an active part in international programmes to produce catalogues of these kinds of orbital objects", the researcher concludes.




Citation: Montojo, F. J.; López Moratalla, T.; Abad, C. "Astrometric positioning and orbit determination of geostationary satellites". Advances in Space Research 47 (6): 1043-1053, 2011. DOI: 10.1016/j.asr.2010.11.025.

Matter-Matter Entanglement At A Distance

European researchers have made new ground in the field of quantum mechanical entanglement of remote quantum systems.

The team, from the Max Planck Institute of Quantum Optics in Germany, was able to demonstrate how two remote atomic quantum systems can be prepared in a shared 'entangled' state. One system is a single atom trapped in an optical resonator; the other is a Bose-Einstein condensate (BEC) consisting of hundreds of thousands of ultracold atoms.

A single atom and a BEC in two separate laboratories serve as nodes in a basic quantum network. To prepare entanglement between these systems, a laser pulse is used to stimulate the atom to emit a single photon which is entangled with the single atom. The photon is used to transport the entanglement through an optical fibre into a neighbouring laboratory. Here, the photon is stored in the BEC. This procedure establishes entanglement between the single atom and the BEC. After some delay, the photon is retrieved from the BEC and the state of the single atom is mapped onto a second photon. The observation of entanglement between these two photons proves that all steps of the experiment were performed successfully.
Credit: Max Planck Institute of Quantum Optics

The hybrid system of two remote, entangled, stationary nodes generated in this study marks a milestone in quantum network development. The work received a boost of EUR 530,000 from the AQUTE ('Atomic quantum technologies') project, funded under the 'Information and communication technologies' Theme of the EU's Seventh Framework Programme (FP7).

It was Albert Einstein who first labelled the quantum mechanical phenomenon of entanglement 'spooky action at a distance' due to its strange consequences. Physicists have been trying for years to develop concepts that could use this phenomenon for practical purposes such as safe data transmission, where the entanglement which is generated in a local process has to be distributed among remote quantum systems.

In addition, such networks might also help in the development of a universal quantum computer in which quantum bits can be exchanged with photons between nodes designed for information storage and processing.

In the quantum mechanical phenomenon of entanglement, two quantum systems are grouped together in such a way that their properties become strictly correlated, which requires the particles to be in close contact. However, for many applications in a quantum network, it is necessary that entanglement is shared between two remote nodes called 'stationary' quantum bits. One way to achieve this is to use photons or 'flying' quantum bits for transporting the entanglement.

In many ways this is similar to classical telecommunication where light is used to transmit information between computers or telephones. In the case of a quantum network, however, this task is much more difficult as entangled quantum states are extremely fragile and can only survive if the particles are well isolated from their environment.

The German team who worked on the study has moved things forward by preparing two atomic quantum systems located in two different laboratories in an entangled state. This can be viewed on the one hand as a single rubidium atom trapped inside an optical resonator formed by two highly reflective mirrors, and on the other hand an ensemble of hundreds of thousands of ultracold rubidium atoms which form a BEC. In a BEC, all particles have the same quantum properties so that they all act as a single 'superatom'.

'A BEC is very well suited as a quantum memory because this exotic state does not suffer from any disturbances caused by thermal motion,' explains Matthias Lettner, one of the study's authors. 'This makes it possible to store and retrieve quantum information with high efficiency and to conserve this state for a long time. The exchange of quantum information between photons and atomic quantum systems requires a strong light-matter interaction. For the single atom, we achieve this by multiple reflections between the two resonator mirrors, whereas for the BEC the light-matter interaction is enhanced by the large number of atoms.'

The overall objectives of the AQUTE project are to develop quantum technologies based on atomic, molecular and optical (AMO) systems for scalable quantum computation and entanglement-enabled technologies like metrology and sensing. In addition, the project hopes to establish and exploit new interdisciplinary connections, coming from AMO physics but also including concepts and experimental settings from solid-state systems. The aim is to reinforce interdisciplinary links at the frontiers of quantum information science and other fields of physics, and to achieve novel hybrid systems that couple physically different quantum degrees of freedom in a coherent way.

Contacts and sources:


Dr. Olivia Meyer-Streng, Press & Public Relations
Max Planck Institute of Quantum Optics

Citation: Lettner, M., et al. (2011) Remote Entanglement between a Single Atom and a Bose-Einstein Condensate. Physical Review Letters.

Thursday, June 21, 2012

Climate Played Big Role In Vikings' Disappearance From Greenland

Greenland's early Viking settlers were subjected to a rapidly changing climate. Temperatures plunged several degrees in a span of decades, according to research from Brown University. A reconstruction of 5,600 years of climate history from lakes near the Norse settlement in western Greenland also shows how climate affected the Dorset and Saqqaq cultures. Results appear in Proceedings of the National Academy of Sciences.

The end of the Norse settlements on Greenland likely will remain shrouded in mystery. While there is scant written evidence of the colony’s demise in the 14th and early 15th centuries, archaeological remains can fill some of the blanks, but not all.

William D'Andrea, right, and Yongsong Huang took cores from two lakes in Greenland to reconstruct 5,600 years of climate history near the Norse Western Settlement.

Credit: William D'Andrea/Brown University

What climate scientists have been able to ascertain is that an extended cold snap, called the Little Ice Age, gripped Greenland beginning in the 1400s. This has been cited as a major cause of the Norse’s disappearance. Now researchers led by Brown University show that the climate turned colder during an earlier span of several decades, setting in motion the end of the Greenland Norse.

The Brown scientists’ finding comes from the first reconstruction of 5,600 years of climate history from two lakes in Kangerlussuaq, near the Norse “Western Settlement.” Unlike ice cores taken from the Greenland ice sheet hundreds of miles inland, the new lake core measurements reflect air temperatures where the Vikings lived, as well as those experienced by the Saqqaq and the Dorset, Stone Age cultures that preceded them.

“This is the first quantitative temperature record from the area they were living in,” said William D’Andrea, the paper’s first author, who earned his doctorate in geological sciences at Brown and is now a postdoctoral researcher at the University of Massachusetts–Amherst. “So we can say there is a definite cooling trend in the region right before the Norse disappear.”

“The record shows how quickly temperature changed in the region and by how much,” said co-author Yongsong Huang, professor of geological sciences at Brown, principal investigator of the NSF-funded project, and D’Andrea’s Ph.D. adviser. “It is interesting to consider how rapid climate change may have impacted past societies, particularly in light of the rapid changes taking place today.”

D’Andrea points out that climate is not the only factor in the demise of the Norse Western Settlement. The Vikings’ sedentary lifestyle, reliance on agriculture and livestock for food, dependence on trade with Scandinavia, and combative relations with the neighboring Inuit are all believed to be contributing factors.

Still, it appears that climate played a significant role. The Vikings arrived in Greenland in the 980s, establishing a string of small communities along Greenland’s west coast. (Another grouping of communities, called the “Eastern Settlement” also was located on the west coast but farther south on the island.) The arrival coincided with a time of relatively mild weather, similar to that in Greenland today. However, beginning around 1100, the climate began an 80-year period in which temperatures dropped 4 degrees Celsius (7 degrees Fahrenheit), the Brown scientists concluded from the lake readings. While that may not be considered precipitous, especially in the summer, the change could have ushered in a number of hazards, including shorter crop-growing seasons, less available food for livestock and more sea ice that may have blocked trade.

“You have an interval when the summers are long and balmy and you build up the size of your farm, and then suddenly year after year, you go into this cooling trend, and the summers are getting shorter and colder and you can’t make as much hay. You can imagine how that particular lifestyle may not be able to make it,” D’Andrea said.

Archaeological and written records show the Western Settlement persisted until sometime around the mid-1300s. The Eastern Settlement is believed to have vanished in the first two decades of the 1400s.

The researchers also examined how climate affected the Saqqaq and Dorset peoples. The Saqqaq arrived in Greenland around 2500 B.C. While there were warm and cold swings in temperature for centuries after their arrival, the climate took a turn for the bitter beginning roughly 850 B.C., the scientists found. “There is a major climate shift at this time,” D’Andrea said. “It seems that it’s not as much the speed of the cooling as the amplitude of the cooling. It gets much colder.”

The Saqqaq exit coincides with the arrival of the Dorset people, who were more accustomed to hunting from the sea ice that would have accumulated with the colder climate at the time. Yet by around 50 B.C., the Dorset culture was waning in western Greenland, despite its affinity for cold weather. “It is possible that it got so cold they left, but there has to be more to it than that,” D’Andrea said.

Contributing authors include Sherilyn Fritz from the University of Nebraska–Lincoln and N. John Anderson from Loughborough University in the United Kingdom. The National Science Foundation funded the work.

Tuesday, June 19, 2012

Finding An Edge Just Got 50,000 Times Easier For A Robot

Determining the boundaries of objects is one of the central problems in computer vision. It's something humans do with ease: We glance out the window and immediately see cars as distinct from the sidewalk and street and the people walking by, or lampposts as distinct from the facades of the buildings behind them. But duplicating that facility in silicon has proven remarkably difficult.


Courtesy of Jason Chang

One of the best ways for a computer to determine boundaries is to make lots of guesses and compare them; the boundaries that most of the guesses agree on are likeliest to be accurately drawn. Until now, that process has been monstrously time consuming. But Jason Chang, a graduate student in the Department of Electrical Engineering and Computer Science, and John Fisher, a principal research scientist at MIT's Computer Science and Artificial Intelligence Lab (CSAIL), have figured out how to make it at least 50,000 times more efficient. Their findings could help improve systems for medical imaging, for tracking moving objects and for 3-D object-recognition, among others.

Courtesy of Jason Chang

One reason that boundary determination — or as it's more commonly known, image segmentation — is such a hard problem is that there's no one right answer. Ask 10 people to trace the boundaries of objects in a digital image, and you'll likely get 10 different responses. "We want an algorithm that's able to segment images like humans do," Chang says. "But because humans segment images differently, we shouldn't come up with one segmentation. We should come up with a lot of different segmentations that kind of represent what humans would also segment."

Populating the field

To generate its set of candidate segmentations, Chang and Fisher's algorithm strikes different balances between two measures of segmentation quality. One measure is the difference between the parts of the image on opposite sides of each boundary. The most obvious way to gauge difference is by color value: A segmentation with blue pixels on one side of the boundary and red pixels on the other would be better than one that featured slightly different proportions of the same 30 shades of blue on both sides. But researchers have devised other, more subtle measures of difference, and Chang and Fisher's algorithm can use any of them.

The other measure of a segmentation's quality is its simplicity. If a computer is trying to segment a street scene that features a car, for instance, you probably don't want it to draw boundaries around every separate gleam of different-colored light on the car's hood. Simplicity and difference in appearance tend to be competing measures: It's easy to maximize color difference, for instance, if you draw a boundary around every pixel that's different from its neighbors, but such a segmentation would be ridiculously complex.

Chang and Fisher's algorithm assigns each segmentation a total score based on both simplicity and difference in appearance. But different segmentations could have roughly equivalent total scores: A segmentation that's a little too complex but has superb color difference, for instance, could have the same score as a segmentation that's pretty good on both measures. Chang and Fisher's algorithm is designed to find candidates with very high total scores. That ensures that none of the candidates will be outrageously bad, but it also makes the computation that much more complicated.
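To make the scoring idea concrete, here is a minimal sketch in Python. The scoring function, the complexity weight and the toy image are all illustrative assumptions, not Chang and Fisher's actual formulation:

```python
# Toy illustration of scoring a segmentation: reward the appearance
# difference between the two regions, penalize boundary complexity.
# All names and weights here are hypothetical, for illustration only.

def segmentation_score(image, mask, complexity_weight=0.5):
    """Score a binary segmentation of a 2D grayscale image (lists of floats).

    Difference term: gap between the mean intensities of the two regions.
    Simplicity term: penalty proportional to the boundary length.
    """
    rows, cols = len(image), len(image[0])
    inside = [image[r][c] for r in range(rows) for c in range(cols) if mask[r][c]]
    outside = [image[r][c] for r in range(rows) for c in range(cols) if not mask[r][c]]
    if not inside or not outside:
        return float("-inf")  # degenerate segmentation: one region is empty
    difference = abs(sum(inside) / len(inside) - sum(outside) / len(outside))
    # Boundary length: count neighboring pixel pairs with different labels.
    boundary = 0
    for r in range(rows):
        for c in range(cols):
            if r + 1 < rows and mask[r][c] != mask[r + 1][c]:
                boundary += 1
            if c + 1 < cols and mask[r][c] != mask[r][c + 1]:
                boundary += 1
    return difference - complexity_weight * boundary

# A 4x4 image: bright left half, dark right half.
img = [[0.9, 0.9, 0.1, 0.1]] * 4
# Candidate A: splits the image down the middle (simple, high contrast).
mask_a = [[True, True, False, False]] * 4
# Candidate B: checkerboard (complex, no contrast between its regions).
mask_b = [[(r + c) % 2 == 0 for c in range(4)] for r in range(4)]

assert segmentation_score(img, mask_a) > segmentation_score(img, mask_b)
```

As the article notes, drawing a boundary around every differing pixel maximizes the difference term but incurs a huge complexity penalty, which is exactly why candidate B scores far worse than candidate A here.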

Other researchers have adopted the same general approach, but to generate their candidates, they adapted algorithms originally designed to find the one segmentation with the highest total score. Chang and Fisher realized that, because they were considering so many suboptimal possibilities anyway, they could use a less precise algorithm that runs much more efficiently. Although it's not essential to their approach that they find the highest-scoring segmentation, it's still likely that the many candidates they produce will include a few that are very close to it.

"There are a lot of competing methodologies out there, so it's hard for me to say that this is going to revolutionize segmentation," says Anthony Yezzi, a professor of electrical and computer engineering at the Georgia Institute of Technology. But, Yezzi says, the way in which Chang and Fisher's algorithm represents images is "an interesting new vehicle that I think could get a lot of mileage, even beyond segmentation." The same technique, Yezzi says, could be applied to problems of object tracking — whether it's the motion of an object in successive frames of video or changes in a tumor's size over time — and pattern matching, where the idea is to recognize the similarity of objects depicted from slightly different angles or under different lighting conditions.

Contacts and sources:
Story by Larry Hardesty, MIT News Office
MIT

Monday, June 18, 2012

Earth: D-Day's Legacy Sands

Next week marks the 67th anniversary of D-Day, when the Allies stormed the beaches at Normandy, France, and changed the face of World War II. Not much evidence of the war remains in Normandy: a few dilapidated relics, a cemetery, a war memorial. But something else was left behind that cannot be seen by the naked eye: shrapnel and iron and glass beads left over from the D-Day invasions in 1944.

"Landing on the coast of France under heavy Nazi machine gun fire are these American soldiers, shown just as they left the ramp of a Coast Guard landing boat." CPhoM. Robert F. Sargent, June 6, 1944. 26-G-2343.

Two geologists visited Omaha Beach in 1988 and collected samples of the sand. Upon returning to their labs, they examined the sand under microscopes and discovered the remnants of the war. As they explain in the June feature "D-Day's Legacy Sands," it is not surprising that shrapnel was initially added to the sands at Omaha Beach, but it is surprising that it has survived this long.

"Crossed rifles in the sand are a comrade's tribute to this American soldier who sprang ashore from a landing barge and died at the barricades of Western Europe." 1944. 26-G-2397

Learn more about Omaha Beach's sand surprises, and read other stories on topics such as what scientists are learning from the Japan and New Zealand earthquakes, what researchers are doing to get ahead of the mysterious disease that's killing bats by the millions, and how NASA's MESSENGER mission to Mercury is bringing much-needed good news to the space agency, all in the June issue. Plus, don't miss the story about the new rover that will be exploring beneath Antarctica's ice.


Contacts and sources:


These stories and many more can be found in the June issue of EARTH, now available digitally (http://www.earthmagazine.org/digital/) or in print on your local newsstands.

For further information on the June featured article, go to http://www.earthmagazine.org/earth/article/451-7db-5-1b .

Keep up to date with the latest happenings in earth, energy and environment news with EARTH magazine, available on local newsstands or online at http://www.earthmagazine.org/. Published by the American Geological Institute, EARTH is your source for the science behind the headlines.

The Mysteries Of Materials And Dynamics In The Earth’s Deep Interior Revealed

Since it is not possible to access and bring up materials from depths greater than 200 km, the deep Earth remains largely a mystery.

Here, Kei Hirose describes the development of an experimental device, the laser-heated diamond-anvil cell, which uses a pair of single-crystal diamonds to generate the high-pressure and high-temperature conditions that exist inside the Earth.

Kei Hirose
Credit: Tokyo Institute of Technology

Hirose and colleagues recently succeeded in producing more than 364 gigapascals and 5000 degrees Celsius inside this device—conditions corresponding to those expected at the center of the Earth.

Recently, this apparatus enabled the important discovery of a new mineral, the so-called ‘post-perovskite’, which forms above 120 gigapascals in the Earth’s lowermost mantle. “Our discovery is the first evidence for materials constituting this enigmatic part of the Earth,” says Hirose. “Its existence enhances the solid-state convection in the mantle and the resulting volcanic activity at the surface.”

The discovery of post-perovskite is a tremendous advance in the understanding of the Earth’s mantle. In the future Hirose and his colleagues plan to investigate properties of the lumps of iron constituting the Earth’s core below the mantle.



Source:
Kei Hirose
Tokyo Institute of Technology Bulletin

Risk Of Blood Clots In Veins Hereditary: VTE 3rd Most Common Cardiovascular Disease

Venous thromboembolism (VTE) is the third most common type of cardiovascular disease after coronary heart disease and stroke. Researchers at the Centre for Primary Health Care Research in Malmö have mapped the significance of hereditary factors for venous thromboembolism in the entire Swedish population by studying the risk of VTE in children of parents with VTE compared with the children of parents who have not had VTE.

“Previously, hereditary factors for venous thromboembolism have only been studied on a small scale. We based our study on the entire Swedish population”, says Bengt Zöller, researcher at the Centre for Primary Health Care Research, Malmö. Using the national multi-generation register and the hospital discharge register, the researchers examined the risk of being affected if one or both parents have had venous thromboembolism. During the period 1987 to 2007, a total of 45,362 people suffered from venous thromboembolism, of whom 4,865 had hereditary VTE and thus a higher risk of being affected.

The study shows that hereditary factors are most significant at a younger age – between 10 and 50 – and in both men and women. The highest relative risk was seen in the 10–19 age group. After the age of 50, other factors appear to play a greater role than heredity. Blood clots in the very young, under the age of 10, are rare, but strangely enough, hereditary factors do not appear to be the most significant in this age group. The highest risk occurs if both parents have had venous thromboembolism.
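The familial risk reported in registry studies of this kind is typically expressed as a relative risk: the incidence among children of affected parents divided by the incidence among children of unaffected parents. A minimal sketch, with invented counts rather than the Swedish registry data:

```python
# Illustrative relative-risk calculation of the kind used in familial-risk
# studies. The counts below are made up for illustration and are NOT the
# actual Swedish registry figures.

def relative_risk(exposed_cases, exposed_total, unexposed_cases, unexposed_total):
    """Risk ratio: incidence among the exposed over incidence among the unexposed."""
    risk_exposed = exposed_cases / exposed_total
    risk_unexposed = unexposed_cases / unexposed_total
    return risk_exposed / risk_unexposed

# Hypothetical cohort: children with an affected parent vs. children without.
rr = relative_risk(exposed_cases=300, exposed_total=100_000,
                   unexposed_cases=150, unexposed_total=200_000)
print(rr)  # ~4: a fourfold risk for the exposed group in this toy data
```

In practice such estimates are computed per age band and sex, which is how the study can report that the 10–19 age group shows the highest relative risk.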

“The findings are an important guide to the importance of hereditary factors for VTE. In conclusion, a parental history of venous thromboembolism is an important risk factor that should be included in the clinical medical history and examination”, says Bengt Zöller.


Credit: Lund University

Citation:
Article: Zöller, B., Li, X., Sundquist, J., Sundquist, K. Parental history and venous thromboembolism: a nationwide study of age-specific and sex-specific familial risks in Sweden. J Thromb Haemost. 2011;9:64-70.

Reviewed by: Ragni, M. V. Coming of Age and Thrombosis: It’s All in the Family. The Hematologist 2011;8:9.
http://www.hematology.org/Publications/Hematologist/2011/6605.aspx

Graphene Can Polarize Light

Publication in Nature Photonics from the OPERA Photonique Department: Graphene can polarize light.

Graphene, an ultra-flat monolayer of carbon atoms in a hexagonal crystal lattice, has attracted a strong wave of research interest due to its unique electrical and photonic properties.

Graphene
Image: Wikipedia

Graphene is the first two-dimensional material in the world; its isolation earned two UK scientists the 2010 Nobel Prize in Physics because it completely changed how we look at materials. Now, Dr. Han Zhang at the Service OPERA-photonique – Applied Science Faculty, ULB – in collaboration with Prof. Loh at the National University of Singapore has demonstrated the world's thinnest polarizer, which relies on the coupling, guiding and polarizing of electromagnetic waves by graphene. They claim that this breakthrough will someday allow integration into all-photonic circuits for high-speed optical communications.

Optical polarizers are elementary components of coherent and quantum optical communications, as they select the polarization state of an optical signal. There is rising demand for high-speed mobile optical communications, which calls for the miniaturization of optoelectronic devices.

However, conventional optical polarizers (sheet, prism and Brewster-angle polarizers) are expensive, bulky, discrete components that may require additional alignment.

Thanks to graphene’s ultra-broadband optical response, induced by its exceptional energy band structure, the demonstrated graphene polarizer shows a very broad operation bandwidth, extending at least from the visible to the mid-infrared. With its combined advantages of low cost (down to several euros), compact footprint, ultra-fast relaxation time and broad operating range, the authors anticipate that this device will enable new architectures for on-chip high-speed optical communications.

In addition to its industrial potential, this research, published in Nature Photonics on May 30th, is of fundamental importance.

It tackles how light propagates along an ultra-thin two-dimensional surface. By virtue of a fibre-based optical channel, researchers can now readily uncover how graphene guides and interacts with electromagnetic waves, with the polarizing effect attributed to the differential attenuation of the two polarization modes.

This new conceptual finding could well lead to new physics, for example localized waves or surface plasmons in the graphene lattice. In the coming years, researchers from the photonics, plasmonics and nano-science communities may find in this graphene polarizer a new testing ground for the ideas and methods they have been pursuing in their own fields, paving the way for all-carbon photonic-plasmonic devices.

Source: Université Libre de Bruxelles

A Sweet Sugary Defense Against Lethal Bacteria

Synthesising a potential vaccine candidate for an antibiotic-resistant pathogen causing infections in hospitalised patients

There is now a promising vaccine candidate for combating the pathogen which causes one of the most common and dangerous hospital infections. An international team of scientists from the Max Planck Institute of Colloids and Interfaces in Potsdam has developed a vaccine based on a carbohydrate against the Clostridium difficile bacterium, which is known to cause serious gastrointestinal diseases mainly in hospitals.

The sugar-based vaccine elicited a specific and effective immune response in mice. Moreover, the scientists have also discovered strong indications that the substance can stimulate the human immune system to form antibodies against the bacterium.

Stimulating the immune system: on the basis of a hexasaccharide, scientists from Potsdam developed a vaccine against the Clostridium difficile bacterium, which causes serious gastrointestinal diseases in hospitals.
Credit: © MPI of Colloids and Interfaces

Infection with the Clostridium difficile bacterium can turn into a life-threatening condition: a highly virulent and antibiotic-resistant strain of this spore-forming pathogen appeared in the USA and certain Western European countries some eight years ago. Since then it has posed a major risk, in particular for hospitalised patients who are being treated with antibiotics or have a weakened immune system, such as cancer or HIV patients.

Whereas no more than four per cent of healthy humans have C. difficile in their gastrointestinal system, the bacterium colonises the intestines of 20 to 40 per cent of hospitalised patients. If other bacteria in the intestinal flora are repressed by antibiotics, the rod-shaped bacterium can reproduce extremely fast. It produces toxins which cause diarrhoea and gastrointestinal inflammation, often with a lethal outcome. Surviving patients require a very costly aftercare. This new, highly virulent pathogen can produce around 20 times more toxins and significantly more spores than previously identified pathogens.

However, a carbohydrate in the bacterial cell wall has now provided the team of scientists led by Peter H. Seeberger at the Max Planck Institute of Colloids and Interfaces in Potsdam with a “point of attack” for a potential vaccine. “Initial testing of the sugar-based antigen synthesised by the team has already produced very promising results”, says Peter H. Seeberger, Director at the Max Planck Institute in Potsdam.

The chemists in the team first developed a synthesis for the essential component of the antigen: the hexasaccharide. To assemble the oligosaccharide, they used four different monosaccharide building blocks. An efficient and convergent approach created the exact molecule with the required arrangement of the monosaccharides. “Synthesizing complex polysaccharides is still a challenge, not least because sugar molecules can bind in several different places”, Peter H. Seeberger says. However, the chemists were able to block other reaction sites so that they could exactly control where the original saccharides bound.

The scientists then conjugated the hexasaccharide to the CRM 197 protein, which is used in many vaccines, because sugar alone, as an antigen, does not elicit an effective immune response. To defend itself successfully against a C. difficile infection, the immune system must also target another antigen. The glycoprotein conjugate triggered a very effective immune response in mice that were injected with the substance three times, at two-week intervals.

“The fact that mice are producing antibodies against the carbohydrates is in itself a success”, Peter H. Seeberger says. “Not all carbohydrates trigger the production of antibodies.” Furthermore, the antibodies produced by the mice bound exclusively to the sugar. Thus, the antigen cannot cause an autoimmune disease.

Additionally, the scientists proved that the antibodies developed against the hexasaccharide are also part of the human immune response; in the stool of hospital patients infected with C. difficile, they found antibodies against the sugar.

“We can therefore expect to see that the human immune system produces antibodies against the sugar when vaccinated”, Seeberger concludes. What is more, “since the natural sugar already elicits the production of a small number of antibodies, we hope that the synthetic glycoprotein conjugate will trigger a more effective response.”

The vaccine candidate must now be subjected to further testing. First, it must be established whether it can effectively prevent infection in animals. “If these tests are successful, it will probably still take one or two years before the vaccine is tested on humans”, explains Peter H. Seeberger.

The vaccine candidate against C. difficile is not the only immunologically effective sugar to come out of Seeberger's laboratory. Together with his colleagues, the chemist is developing sugar-based vaccines against numerous pathogens.

“The current work is therefore also a proof of the progress made in glycochemistry and glycobiology”, according to Seeberger, who was awarded the 2007 Körber European Science Award for his development of a sugar synthesiser.

The number of biological sugar molecules that can be produced by chemists in the laboratory is on the increase, which gives the biologists and medical scientists the opportunity to investigate their specific impacts. This fills Peter H. Seeberger with optimism: “These advances will lead to quantum leaps in related research areas, such as immunology, biology and medicine.”

Contacts and sources:


Citation: Matthias A. Oberli, Marie-Lyn Hecht, Pascal Bindschädler, Alexander Adibekian, Thomas Adam and Peter H. Seeberger: A Possible Oligosaccharide-Conjugate Vaccine Candidate for Clostridium difficile Is Antigenic and Immunogenic
Chemistry & Biology, 26 May 2011; DOI: 10.1016/j.chembiol.2011.03.009


New Malaria Protein Structure Upends Theory Of How Cells Grow And Move

Researchers from the Walter and Eliza Hall Institute have overturned conventional wisdom on how cell movement across all species is controlled, solving the structure of a protein that cuts power to the cell 'motor'. The protein could be a potential drug target for future malaria and anti-cancer treatments.

Researchers (from left) Dr Jake Baum, Mr Wilson Wong and Dr Jacqui Gulbis from the Walter and Eliza Hall Institute in Melbourne, Australia, have upended the theory of how cells grow and move, solving the structure of a protein that cuts power to the cell "motor". The protein could be a potential drug target for future malaria and anti-cancer treatments.
Credit: Walter and Eliza Hall Institute

By studying the structure of actin-depolymerising factor 1 (ADF1), a key protein involved in controlling the movement of malaria parasites, the researchers have demonstrated that scientists' decades-long understanding of the relationship between protein structure and cell movement is flawed.

Dr Jake Baum and Mr Wilson Wong from the institute's Infection and Immunity division and Dr Jacqui Gulbis from the Structural Biology division, in collaboration with Dr Dave Kovar from the University of Chicago, US, led the research, which appears in today's edition of the Proceedings of the National Academy of Sciences USA.

Dr Baum said actin-depolymerising factors (ADFs) and their genetic regulators have long been known to be involved in controlling cell movement, including the movement of malaria parasites and movement of cancer cells through the body. Anti-cancer treatments that exploit this knowledge are under development.

A protein diagram shows the structure of the malaria parasite protein ADF1 (actin-depolymerising factor 1) (left) compared to a human ADF (right). The noticeable lack of the 'finger' in malaria parasite ADF1 led researchers from the Walter and Eliza Hall Institute in Melbourne, Australia, to upend the conventional theory of how the protein controlled cell movement.
Credit: Dr Jake Baum and Mr Wilson Wong, Walter and Eliza Hall Institute.

"ADFs help the cell to recycle actin, a protein which controls critical functions such as cell motility, muscle contraction, and cell division and signaling," Dr Baum said. "Actin has unusual properties, being able to spontaneously form polymers which are used by cells to engage internal molecular motors – much like a clutch does in the engine of your car. A suite of accessory proteins control how the clutch is engaged, including those that dismantle or 'cut' these polymers, such as ADF1.

"For many years research in yeast, plants and humans has suggested that the ability of ADFs to dismantle actin polymers – effectively disengaging the clutch – required a small molecular 'finger' to break the actin in two," Dr Baum said. "However, when we looked at the malaria ADF1 protein, we were surprised to discover that it lacked this molecular 'finger', yet remarkably was still able to cut the polymers. We discovered that a previously overlooked part of the protein, effectively the 'knuckle' of the finger-like protrusion, was responsible for dismantling the actin; we then discovered this 'hidden' domain was present across all ADFs."

Mr Wong said that the Australian Synchrotron was critical in providing the extraordinary detail that helped the team pinpoint the protein 'knuckle'. "This is the first time a 3D image of the ADF protein has been captured in such detail from any cell type," Mr Wong said. "Imaging the protein structure at such high resolution was critical in proving beyond question the segment of the protein responsible for cutting actin polymers. Obtaining that image would have been impossible without the synchrotron facilities."

A protein diagram shows the structure of the malaria parasite protein ADF1 (actin-depolymerising factor 1). The noticeable lack of the 'finger' led researchers from the Walter and Eliza Hall Institute in Melbourne, Australia, to upend the conventional theory of how the protein controlled cell movement.
Credit: Dr Jake Baum and Mr Wilson Wong, Walter and Eliza Hall Institute.

Dr Baum said the new knowledge will give researchers a much clearer understanding of one of the fundamental steps governing how cells across all species grow, divide and, importantly, move. "Knowing that this one small segment of the protein is singularly responsible for ADF1 function means that we need to focus on an entirely new target not only for developing anti-malarial treatments, but also other diseases where potential treatments target actin, such as anti-cancer therapeutics," Dr Baum said. "Malaria researchers are normally used to following insights from other biological systems; this is a case of the exception proving the rule: where the malaria parasite, being so unusual, reveals how all other ADFs across nature work."

More than 250 million people contract malaria each year, and almost one million people, mostly children, die from the disease. The malaria parasite has developed resistance to most of the therapeutic agents available for treating the disease, so identifying novel ways of targeting the parasite is crucial.

Dr Baum said that the discovery could lead to development of drugs entirely geared toward preventing malaria infection, without adverse effects on human cells. "One of the primary goals of the global fight against malaria is to develop novel drugs that prevent infection and transmission in all hosts, to break the malaria cycle," Dr Baum said. "There is a very real possibility that, in the future, drugs could be developed that 'jam' this molecular 'clutch', meaning the malaria parasite cannot move and continue to infect cells in any of its conventional hosts, which would be a huge breakthrough for the field."

This project was funded by the National Health and Medical Research Council (NHMRC).


Contacts and sources: