Tuesday, May 31, 2011

Study: Biodegradable Products May Be Bad For The Environment


 Research from North Carolina State University shows that so-called biodegradable products are likely doing more harm than good in landfills, because they are releasing a powerful greenhouse gas as they break down.

“Biodegradable materials, such as disposable cups and utensils, are broken down in landfills by microorganisms that then produce methane,” says Dr. Morton Barlaz, co-author of a paper describing the research and professor and head of NC State’s Department of Civil, Construction, and Environmental Engineering. “Methane can be a valuable energy source when captured, but is a potent greenhouse gas when released into the atmosphere.”

And the U.S. Environmental Protection Agency (EPA) estimates that only about 35 percent of municipal solid waste goes to landfills that capture methane for energy use. EPA estimates that another 34 percent of landfills capture methane and burn it off on-site, while 31 percent allow the methane to escape.

“In other words,” Barlaz says, “biodegradable products are not necessarily more environmentally friendly when disposed in landfills.”

This problem may be exacerbated by the rate at which these man-made biodegradable materials break down. Federal Trade Commission (FTC) guidelines call for products marked as “biodegradable” to decompose within “a reasonably short period of time” after disposal. But such rapid degradation may actually be environmentally harmful, because federal regulations do not require landfills that collect methane to install gas collection systems for at least two years after the waste is buried. If materials break down and release methane quickly, much of that methane will likely be emitted before the collection technology is installed. This means less potential fuel for energy use, and more greenhouse gas emissions.
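
One way to see why the degradation rate matters is a simple first-order decay model: if a material's methane is generated with rate constant k, the fraction released before a collection system is installed at time t is 1 - e^(-kt). A minimal sketch, with rate constants assumed purely for illustration (they are not figures from the study):

    // Illustrative first-order decay model. The rate constants are assumed
    // for illustration only; they are not figures from the NC State study.
    public class MethaneTiming {
        // Fraction of total methane released before collection starts at tYears.
        static double releasedBefore(double k, double tYears) {
            return 1.0 - Math.exp(-k * tYears); // 1 - e^(-kt)
        }

        public static void main(String[] args) {
            double t = 2.0; // collection may begin two years after burial
            System.out.printf("fast degrader (k=2.0/yr): %.0f%% escapes uncollected%n",
                    100 * releasedBefore(2.0, t)); // ~98%
            System.out.printf("slow degrader (k=0.1/yr): %.0f%% escapes uncollected%n",
                    100 * releasedBefore(0.1, t)); // ~18%
        }
    }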

As a result, the researchers find that a slower rate of biodegradation is actually more environmentally friendly, because the bulk of the methane production will occur after the methane collection system is in place. Some biodegradable products, such as yard-waste bags that are always sent to composting or anaerobic digestion facilities, were not included in the study.

“If we want to maximize the environmental benefit of biodegradable products in landfills,” Barlaz says, “we need to both expand methane collection at landfills and design these products to degrade more slowly – in contrast to FTC guidance.”

The paper, “Is Biodegradability a Desirable Attribute for Discarded Solid Waste? Perspectives from a National Landfill Greenhouse Gas Inventory Model,” was co-authored by Barlaz and NC State Ph.D. student James Levis, and was published online May 27 by the journal Environmental Science & Technology. The research was supported by Procter & Gamble and the Environmental Research and Education Foundation.

Nanoscale Waveguide For Future Photonics

The creation of a new quasiparticle called the "hybrid plasmon polariton" may throw open the doors to integrated photonic circuits and optical computing for the 21st century. Researchers with the U.S. Department of Energy (DOE)'s Lawrence Berkeley National Laboratory (Berkeley Lab) have demonstrated the first true nanoscale waveguides for next generation on-chip optical communication systems.

The hybrid plasmon polariton (HPP) nanoscale waveguide consists of a semiconductor strip separated from a metallic surface by a low dielectric gap. Schematic shows HPP waveguide responding when a metal slit at the guide’s input end is illuminated.
Credit: courtesy of Xiang Zhang group

"We have directly demonstrated the nanoscale waveguiding of light at visible and near infrared frequencies in a metal-insulator-semiconductor device featuring low loss and broadband operation," says Xiang Zhang, the leader of this research. "The novel mode design of our nanoscale waveguide holds great potential for nanoscale photonic applications, such as intra-chip optical communication, signal modulation, nanoscale lasers and bio-medical sensing."

Zhang, a principal investigator with Berkeley Lab's Materials Sciences Division and director of the University of California at Berkeley's Nano-scale Science and Engineering Center (SINAM), is the corresponding author of a paper describing this work, published by Nature Communications under the title "Experimental Demonstration of Low-Loss Optical Waveguiding at Deep Sub-wavelength Scales." Co-authoring the paper with Zhang were Volker Sorger, Ziliang Ye, Rupert Oulton, Yuan Wang, Guy Bartal and Xiaobo Yin.

In this paper, Zhang and his co-authors describe the use of the hybrid plasmon polariton, a quasi-particle they conceptualized and created, in a nanoscale waveguide system that is capable of shepherding light waves along a metal-dielectric nanostructure interface over sufficient distances for the routing of optical communication signals in photonic devices. The key is the insertion of a thin low-dielectric layer between the metal and a semiconductor strip.

This 3-D image overlap of the deep sub-wavelength HPP mode signal (red spot) indicates the devices' potential to create strong light-matter-interaction for compact and highly functional photonic components.
Credit: courtesy of Xiang Zhang group
"We reveal mode sizes down to 50-by-60 square nanometers using Near-field scanning optical microscopy (NSOM) at optical wavelengths," says Volker Sorger a graduate student in Zhang's research group and one of the two lead authors on the Nature Communications paper. "The propagation lengths were 10 times the vacuum wavelength of visible light and 20 times that of near infrared."

The high-technology world is eagerly anticipating the replacement of today's electronic circuits in microprocessors and other devices with circuits based on the transmission of light and other forms of electromagnetic waves. Photonic technology, or "photonics," promises to be superfast and ultrasensitive in comparison to electronic technology.

"To meet the ever-growing demand for higher data bandwidth and lower power consumption, we need to reduce the energy required to create, transmit and detect each bit of information," says Sorger. "This requires reducing physical photonic component sizes down beyond the diffraction limit of light while still providing integrated functionality."

Until recently, the size and performance of photonic devices were constrained by the interference that arises between closely spaced light waves. This diffraction limit results in weak photonic-electronic interactions that can only be avoided through the use of devices much larger than today's electronic circuits. A breakthrough came with the discovery that photons can be coupled with electrons by squeezing light waves through the interface of a metal-dielectric nanostructure whose dimensions are smaller than half the wavelength of the incident photons in free space.

From left, Berkeley Lab's Xiang Zhang, Ziliang Ye and Volker Sorger have demonstrated the first true nanoscale waveguides for next generation on-chip optical communication systems.
Credit: Photo by Roy Kaltschmidt, Berkeley Lab Public Affairs

Directing waves of light across the surface of a metal nanostructure generates electronic surface waves – called plasmons – that roll through the metal's conduction electrons (those loosely attached to molecules and atoms). The resulting interaction between plasmons and photons creates a quasi-particle called a surface plasmon polariton (SPP) that can serve as a carrier of information. Hopes were high for SPPs in nanoscale photonic devices because their wavelengths can be scaled down below the diffraction limit, but problems arose because any light signal loses strength as it passes through the metal portion of a metal-dielectric interface.

"Until now, the direct experimental demonstration of low-loss propagation of deep sub-wavelength optical modes was not realized due to the huge propagation loss in the optical mode that resulted from the electromagnetic field being pushed into the metal," Zhang says. "With this trade-off between optical confinement and metallic losses, the use of plasmonics for integrated photonics, in particular for optical interconnects, has remained uncertain."

To solve the problem of optical signal loss, Zhang and his group proposed the hybrid plasmon polariton (HPP) concept. A semiconductor (high-dielectric) strip is placed on a metal interface, just barely separated by a thin oxide (low-dielectric) layer. This new metal-oxide-semiconductor design results in a redistribution of an incoming light wave's energy. Instead of being concentrated in the metal, where optical losses are high, some of the light wave's energy is squeezed into the low dielectric gap where optical losses are substantially less compared to the plasmonic metal.

"With this design, we create an HPP mode, a hybrid of the photonic and plasmonic modes that takes the best from both systems and gives us high confinement with low signal loss," says Ziliang Ye, the other lead authors of the Nature Communications paper who is also a graduate student in Zhang's research group. "The HPP mode is not only advantageous for down-scaling physical device sizes, but also for delivering novel physical effects at the device level that pave the way for nanolasers, as well as for quantum photonics and single-photon all-optical switches."

The HPP waveguide system is fully compatible with current semiconductor/CMOS processing techniques, as well as with the Silicon-on-Insulator (SOI) platform used today for photonic integration. This should make it easier to incorporate the technology into low-cost, large-scale integration and manufacturing schemes. Sorger believes that prototypes based on this technology could be ready within the next two years and the first actual products could be on the market within five years.

"We are already working on demonstrating an all-optical transistor and electro-optical modulator based on the HPP waveguide system," Sorger says. "We're also now looking into bio-medical applications, such as using the HPP waveguide to make a molecular sensor."


 Contacts and sources:

This research was supported by the National Science Foundation's Nano-Scale Science and Engineering Center.

Lawrence Berkeley National Laboratory addresses the world's most urgent scientific challenges by advancing sustainable energy, protecting human health, creating new materials, and revealing the origin and fate of the universe. Founded in 1931, Berkeley Lab's scientific expertise has been recognized with 12 Nobel prizes. The University of California manages Berkeley Lab for the U.S. Department of Energy's Office of Science. For more, visit www.lbl.gov.


From Seawater To Freshwater With A Nanotechnology Filter

In this month's Physics World, Jason Reese, Weir Professor of Thermodynamics and Fluid Mechanics at the University of Strathclyde, describes the role that carbon nanotubes (CNTs) could play in the desalination of water, providing a possible solution to the problem of the world's ever-growing population demanding more and more fresh drinking water.

Global population projections suggest that worldwide demand for water will increase by a third before 2030.

But with more than a billion people already experiencing drinking-water shortages, and with a potential 3–4 °C increase in temperature and a subsequent redistribution of rainfall patterns, things are likely to get even worse.

CNTs – essentially sheets of one-atom thick carbon rolled into cylinders – have been investigated by Reese and his research group, using computer simulations, as a new way of addressing this challenge and transforming abundant seawater into pure, clean drinking water.

Their technique is based on the process of osmosis – the natural movement of water across a permeable membrane from a region of low solute concentration to a region of high concentration. But just as with most existing water-desalination plants, Reese's technique actually uses the opposite process of "reverse osmosis", in which water is forced to move in the opposite direction, out of the salty water, leaving the salt behind.

One can imagine a large tank of water separated into two sections by a permeable membrane, with one half containing fresh water and the other half containing seawater. Water would naturally move from the freshwater side to the seawater side, diluting the seawater until the concentrations on both sides equalize.

But in reverse osmosis a large amount of pressure is applied to the seawater side of the tank, which reverses the process, making water move into the fresh-water side and leave the salt behind.

Although this process can remove the necessary salt and mineral content from the water, it is incredibly inefficient and producing the high pressures is expensive.
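
The pressures involved can be estimated from the van 't Hoff relation π = icRT, where i is the number of ions per dissolved formula unit, c the molar concentration, R the gas constant and T the temperature. A minimal sketch, assuming a typical textbook seawater salinity of about 0.6 mol/L NaCl:

    // Rough osmotic-pressure estimate for seawater via van 't Hoff: pi = i*c*R*T.
    // The 0.6 mol/L NaCl concentration is an assumed textbook value.
    public class OsmoticPressure {
        public static void main(String[] args) {
            double i = 2.0;   // ions per NaCl unit (Na+ and Cl-)
            double c = 600.0; // mol/m^3, i.e. ~0.6 mol/L
            double R = 8.314; // J/(mol*K)
            double T = 298.0; // K
            double piPa = i * c * R * T; // ~3.0e6 Pa
            System.out.printf("osmotic pressure: ~%.0f bar%n", piPa / 1e5); // ~30 bar
            // A plant must pump the seawater side well above this pressure to get
            // a useful flux, which is why reverse osmosis is so energy-hungry.
        }
    }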

Reese has, however, shown that CNT membranes can realistically be expected to have a water permeability 20 times that of modern commercial reverse-osmosis membranes, greatly reducing the cost and energy required for desalination. Additionally, CNTs are highly efficient at repelling salt ions, all the more so because specific chemical groups can be attached to them to create a specific "gatekeeper" function.

As Reese writes, "The holy grail of reverse-osmosis desalination is combining high water-transport rates with efficient salt-ion rejection. While many questions still remain, the exciting potential of membranes of nanotubes to transform desalination and water-purification processes is clear, and is a very real and socially progressive use of nanotechnology."


Also in the June edition:

Jonathan Mather from Sharp Laboratories Europe describes the technology behind glasses-free 3D TV
Bruce Drinkwater from the University of Bristol, UK, describes how acoustic tweezers may one day allow researchers to make human tissue and fabricate tiny structures by manipulating cells with sound waves

Please mention Physics World as the source of these items and, if publishing online, please include a hyperlink to: http://www.physicsworld.com

 Physics World is the international monthly magazine published by the Institute of Physics. For further information or details of its editorial program, please contact the editor, Dr Matin Durrani, on tel +44 (0)117 930 1002. The magazine's website physicsworld.com is updated regularly and contains physics news, views and resources.

How Vitamins & Minerals Can Prevent Age-Related Diseases

New research in the FASEB Journal demonstrates the need for public health initiatives aimed at identifying, treating and taking seriously modest vitamin and mineral deficiencies

Bethesda, MD—Severe deficiency of the vitamins and minerals required for life is relatively uncommon in developed nations, but modest deficiency is very common and often not taken seriously. New research published online in the FASEB Journal, however, may change this thinking: it examines moderate selenium and vitamin K deficiency to show how damage accumulates over time as a result of vitamin and mineral loss, leading to age-related diseases.

"Understanding how best to define and measure optimum nutrition will make the application of new technologies to allow each person to optimize their own nutrition a much more realistic possibility than it is today." said Joyce C. McCann, Ph.D., a co-author of the study from the Nutrition and Metabolism Center at Children's Hospital Oakland Research Institute in Oakland, California. "If the principles of the theory, as demonstrated for vitamin K and selenium, can be generalized to other vitamins and minerals, this may provide the foundation needed."

McCann and colleagues reached their conclusions by compiling and assessing several general types of scientific evidence. They tested whether selenium-dependent proteins that are essential from an evolutionary perspective are more resistant to selenium deficiency than those that are less essential. They discovered a highly sophisticated array of mechanisms at cellular and tissue levels that, when selenium is limited, protect essential selenium-dependent proteins at the expense of those that are nonessential.

They also found that mutations in selenium-dependent proteins that are lost on modest selenium deficiency result in characteristics shared by age-related diseases including cancer, heart disease, and loss of immune or brain function. Results should inform attempts to locate mechanistic linkages between vitamin or mineral deficiencies and age-related diseases by focusing attention on the vitamin and mineral-dependent proteins that are nonessential from an evolutionary perspective. Such mechanistic linkages are likely to present opportunities for treatment.

"This paper should settle any debate about the importance of taking a good, complete, multivitamin every day," said Gerald Weissmann, M.D., Editor-in-Chief of the FASEB Journal. "As this report shows, taking a multivitamin that contains selenium is a good way to prevent deficiencies that, over time, can cause harm in ways that we are just beginning to understand."


Contacts and sources:

Receive monthly highlights from the FASEB Journal by e-mail. Sign up at http://www.faseb.org/fjupdate.aspx. The FASEB Journal (http://www.fasebj.org) is published by the Federation of American Societies for Experimental Biology (FASEB) and celebrates its 25th anniversary in 2011. Over the past quarter century, the journal has been recognized by the Special Libraries Association as one of the top 100 most influential biomedical journals of the past century and is the most cited biology journal worldwide according to the Institute for Scientific Information.

FASEB comprises 23 societies with more than 100,000 members, making it the largest coalition of biomedical research associations in the United States. FASEB enhances the ability of scientists and engineers to improve—through their research—the health, well-being and productivity of all people. FASEB's mission is to advance health and welfare by promoting progress and education in biological and biomedical sciences through service to our member societies and collaborative advocacy.

Details: Joyce C. McCann and Bruce N. Ames. Adaptive dysfunction of selenoproteins from the perspective of the triage theory: why modest selenium deficiency may increase risk of diseases of aging. FASEB J. 2011 25:1793-1814. doi: 10.1096/fj.11-180885; http://www.fasebj.org/content/25/6/1793.abstract

FRAVE: Flexible Virtual Reality System, No CAVE Needed

Product designers traditionally depend on time-consuming prototype construction; only then are they able to assess the results of their work in a comprehensive manner. In a three-dimensional model world, they are able to do so instantly and can experience how the product fits into its natural surroundings. Design alterations can be visualized immediately, saving time and cutting the costs associated with the development process.

With her cyber-gloves the researcher is able to navigate and to modify superimposed simulation or infrastructural data. Unlike a CAVE, which needs a special building, the FRAVE can be brought to the place where it is needed.
Credit: Andreas Heddergott / TUM

Up to now, the so-called CAVE has been used. This consists of between three and six projection surfaces that create a walk-in space. Video projectors are used to visualize the calculations and applications in real time and in 3D. The nearly-closed space allows for intense immersion in virtual reality.

The FRAVE also offers this degree of so-called immersion. However, it is capable of even more: it can be used in a variety of ways thanks to its flexible, modular structure. "An engineer wants to enter the 3D world to be able to envisage the interior design of a vehicle. A researcher wants to visualize his or her measurement or simulation data, while a manager uses it as a presentation space," explains Dr. Marcus Tönnis, a scientist at the TU München Faculty of Informatics.

The FRAVE is made up of ten plasma screens, each with a screen diagonal of 65 inches, which can be arranged in different ways. When they form a floor and an enclosure on three sides, the user is immersed deep in a virtual explorative world. The screens at the sides can be opened wide, with a tracking system on the screens automatically adapting the image display to the movement of the side sections. The side sections can even be disconnected from the system entirely. "In a meeting, I can simply push the screens up to the table to demonstrate a 3D view. In this way, the system comes to users and not the other way round," says Tönnis.

As the FRAVE consists of end-user devices, it is significantly less expensive than the CAVE, an advantage that could promote more widespread use of virtual reality systems. Another important benefit of the FRAVE is its smaller footprint. Since the CAVE normally uses back projection, a lot of room is needed behind the projection surfaces: it requires at least 8 x 8 x 8 meters, while a space of 3 x 3 x 3 meters is sufficient for the FRAVE, thereby facilitating installation and relocation.

Researchers at the TUM Faculty of Informatics use the FRAVE to view simulation data. For example, the landscape of Saudi Arabia is displayed virtually as part of a project being run in collaboration with the King Abdullah University of Science and Technology (KAUST). Unlike existing virtual globes, like Google Earth, this system is able to show images above and below the earth's surface. As part of another joint research project, the FRAVE is being used to simulate CO2 separation and storage processes in order to optimize crude oil extraction.

Technical data:
Screens: 10 3D full HD (1920x1080) plasma screens Panasonic TX-P65VT20E
Graphic cards: 6 NVidia QuadroPlex 7000 with Fermi architecture for graphics and 6 Tesla C2070 CUDA graphic cards for simulation data
Computer: 6 Dual Quad Core with 24 GB RAM each and an 8 TB hard drive


Less Sleep Lowers Testosterone Levels in Healthy Young Men

Cutting back on sleep drastically reduces a healthy young man's testosterone levels, according to a study published in the June 1 issue of the Journal of the American Medical Association (JAMA).

Eve Van Cauter, PhD, professor in medicine and director of the study, found that men who slept less than five hours a night for one week in a laboratory had significantly lower levels of testosterone than when they had a full night's sleep. Low testosterone has a host of negative consequences for young men, and not just in sexual behavior and reproduction. It is critical for building strength, muscle mass and bone density.

"Low testosterone levels are associated with reduced well being and vigor, which may also occur as a consequence of sleep loss" said Van Cauter.

At least 15% of the adult working population in the US gets less than 5 hours of sleep a night, and suffers many adverse health effects because of it. This study found that skipping sleep reduces a young man's testosterone levels by the same amount as aging 10 to 15 years.

"As research progresses, low sleep duration and poor sleep quality are increasingly recognized as endocrine disruptors," Van Cauter said.

The ten young men in the study were recruited from around the University of Chicago campus. They passed a rigorous battery of tests to screen for endocrine or psychiatric disorders and sleep problems. They were an average of 24 years old, lean and in good health.

For the study, they spent three nights in the laboratory sleeping for up to ten hours, and then eight nights sleeping less than five hours. Their blood was sampled every 15 to 30 minutes for 24 hours during the last day of the ten-hour sleep phase and the last day of the five-hour sleep phase.

The effects of sleep loss on testosterone levels were apparent after just one week of short sleep. Five hours of sleep decreased their testosterone levels by 10% to 15%. The young men had the lowest testosterone levels in the afternoons on their sleep restricted days, between 2 pm and 10 pm.

The young men also self-reported their mood and vigor levels throughout the study. They reported a decline in their sense of well-being as their blood testosterone levels declined. Their mood and vigor fell more every day as the sleep restriction part of the study progressed.

Testosterone levels in men decline by 1% to 2% a year as they age. Testosterone deficiency is associated with low energy, reduced libido, poor concentration, and fatigue.
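
The "aging 10 to 15 years" comparison made earlier follows directly from these two numbers. A quick check, using the lower end of the 1–2 percent annual decline as a simplifying assumption:

    // Quick check of the aging-equivalence claim: a 10-15% drop in testosterone,
    // at a natural decline of ~1% per year, corresponds to 10-15 years of aging.
    public class AgingEquivalent {
        public static void main(String[] args) {
            double annualDeclinePct = 1.0; // lower end of the 1-2%/yr range
            for (double dropPct : new double[]{10, 15}) {
                System.out.printf("%.0f%% drop ~= %.0f years of aging%n",
                        dropPct, dropPct / annualDeclinePct);
            }
        }
    }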

The National Heart, Lung, and Blood Institute funded this study. Additional funding came from the National Institute of Diabetes and Digestive and Kidney Diseases, and the National Institutes of Health. Rachel Leproult, PhD, organized and supervised the experiment which took place in the University of Chicago Clinical Research Center. The health impact of sleep deprivation has been the focus of research conducted by Eve Van Cauter and Rachel Leproult for more than 10 years.

Number of Millionaire Households Jumps by 12.2 Percent; Global Wealth Continues Its Strong Recovery with $9 Trillion Gain


Global Wealth Continues Its Strong Recovery with $9 Trillion Gain, but Pressures on Wealth Managers Persist, Says Study by The Boston Consulting Group

Assets Under Management Rise by 8.0 Percent to Hit a Record $121.8 Trillion; the Number of Millionaire Households Jumps by 12.2 Percent; but Changes in Regulations and Client Behavior Continue to Dampen Wealth Managers’ Results

Credit: The Boston Consulting Group

Propelled by growth in nearly every region, global wealth continued a solid recovery in 2010, increasing by 8.0 percent, or $9 trillion, to a record $121.8 trillion.[1] That level was about $20 trillion above where it stood just two years earlier, during the depths of the financial crisis, according to a new study by The Boston Consulting Group (BCG).

Findings from the study appear in BCG’s eleventh annual Global Wealth report titled Shaping a New Tomorrow: How to Capitalize on the Momentum of Change, which was released today at a press briefing in New York. Among the other key findings:

North America had the largest absolute gain of any regional wealth market in assets under management (AuM), at $3.6 trillion, and the second-highest growth rate, at 10.2 percent. Its $38.2 trillion in AuM made it the world’s richest region, with nearly one-third of global wealth.

In Europe, wealth grew at a below-average rate of 4.8 percent, but the region still had a gain of $1.7 trillion in AuM.

Wealth grew fastest in Asia-Pacific (excluding Japan), at a 17.1 percent rate. In the Middle East and Africa, growth was somewhat above the global average, at 8.6 percent. In Latin America, wealth grew by 8.2 percent. Together, these three regions accounted for 24.4 percent of global wealth in 2010, up from 20.9 percent in 2008.

Wealth declined by 0.2 percent in the Japanese market to $16.8 trillion. As recently as 2008, Japan accounted for more than half of all the wealth in Asia-Pacific. In 2010, it accounted for about 44 percent.

In terms of individual countries, the nations showing the largest absolute gains in wealth were the United States, China, the United Kingdom, and India.

The strong performance of the financial markets accounted for the lion’s share (59 percent) of the growth in AuM. Its impact was amplified by the ongoing reallocation of wealth. From year-end 2008 through 2010, the share of wealth held in equities increased from 29 percent to 35 percent. “During the crisis, cash was king,” said Monish Kumar, a BCG senior partner and a coauthor of the report. “Since then, clients have been steering their assets back into riskier investments.” North America continued to have the highest proportion of wealth held in equities—44 percent, up from 41 percent in 2009.

“The wealth management industry has overcome tremendous adversity over the past several years, and the sustained recovery of global wealth bodes well for its future,” added Kumar, who is the global leader of asset and wealth management at BCG. “But the positive signs should not be misread as a return to normal. A number of disruptive forces, including increased regulatory oversight and changes in client behavior, are rewriting the rules of the game—both literally and figuratively.”

Millionaire Households Grow in Number and Wealth

Millionaire households represented just 0.9 percent of all households but owned 39 percent of global wealth, up from 37 percent in 2009. The number of millionaire households increased by 12.2 percent in 2010 to about 12.5 million.

The United States had by far the most millionaire households (5.2 million), followed by Japan, China, the United Kingdom, and Germany.

Singapore continued to have the highest concentration of millionaire households, with 15.5 percent of all households having at least $1 million in AuM. Switzerland had the highest concentration of millionaire households in Europe and the second-highest overall, at 9.9 percent.

Three of the six densest millionaire populations were in the Middle East—in Qatar, Kuwait, and the United Arab Emirates.

The proportion of wealth owned by millionaire households increased the most in Asia-Pacific, at 2.9 percentage points, followed by North America, at 1.3 percentage points.

The country with the fastest-growing number of millionaire households was Singapore, with 170,000—up nearly a third from 2009.

This year, for the first time, BCG published figures on the countries with the highest number of “ultra-high-net-worth” (UHNW) households, defined as those with more than $100 million in AuM. The United States had the largest number of these super-wealthy households (2,692), while Saudi Arabia had the highest concentration of UHNW households, measured per 100,000 households, at 18, followed by Switzerland (10), Hong Kong (9), Kuwait (8), and Austria (8). China experienced the fastest growth in the number of super-wealthy households, which jumped by more than 30 percent to 393.

Pressures Continue to Mount for Offshore Private Banks

The amount of offshore wealth—defined as assets booked in a country where the investor has no legal residence or tax domicile—increased to $7.8 trillion in 2010, up from $7.5 trillion in 2009. At the same time, however, the percentage of wealth held offshore slipped to 6.4 percent, down from 6.6 percent in 2009. The decline was the result of strong asset growth in countries where offshore wealth is less prominent, such as China, as well as stricter regulations in Europe and North America, which prompted clients to move their wealth back onshore.

“Offshore private banking remains a tumultuous part of the business,” said Anna Zakrzewski, a BCG principal and a coauthor of the report. “The relative importance of offshore centers is changing rapidly. Some are benefiting from continued asset growth, while others are suffering large asset outflows, with wealth being repatriated to onshore banks, transferred to other offshore centers, redirected into nonfinancial investments, or simply spent at a faster rate.”

For most clients, however, the core value proposition of offshore banking remains, Zakrzewski said. “Offshore wealth managers offer a sense of stability and security that these clients cannot find in their home countries. Other clients value the expertise or access to certain investments provided by offshore private banks. To continue to grow, offshore wealth managers will need to adapt to the changes imposed by the push for greater transparency while accentuating their strengths in areas that remain extremely relevant to clients around the world.”

Mixed Results for Wealth Managers

To gauge the performance of wealth managers (both private banks and wealth management units of large universal-banking groups), BCG gathered benchmarking data from 120 wealth-management institutions worldwide. The survey revealed wide variations in margins, cost ratios, and AuM growth across and within regions. On the whole, the industry experienced mixed results. The average pretax profit margin of wealth managers increased by 4 basis points to 23 basis points in 2010. In most regions, however, revenue margins remained lower than they were before the crisis (and in some places continued to decline), while cost-to-income ratios remained higher (and in some places continued to rise).

“In some markets, changes in regulations and client behavior have had a profound impact on wealth managers,” said Peter Damisch, a BCG partner and a coauthor of the report. “Especially in parts of Europe, clients are becoming more price sensitive, demanding more price transparency, and still avoiding higher-margin products.”

He continued, “For most wealth managers, pricing remains a vastly underutilized tool for improving revenue margins. At many wealth-management institutions, pricing strategies are more arbitrary than deliberate and are often decoupled from the services provided to specific client segments. Wealth managers simply cannot afford to overlook the importance of pricing and the need to adapt their pricing strategies and practices to the new realities of wealth management. Smarter pricing models and a more contained approach to discounting will become increasingly critical.”

Outlook

Tjun Tang, another BCG partner who worked on the report, said that the firm expects global wealth to grow at a compound annual rate of 5.9 percent from year-end 2010 through 2015—to about $162 trillion—driven by the performance of the capital markets and the growth of GDP in countries around the world. Wealth will grow fastest in emerging markets. In India and China, for example, it is expected to increase at a compound annual rate of 18 percent and 14 percent, respectively. As a result, the Asia-Pacific region’s share of global wealth (ex Japan) is projected to rise from 18 percent in 2010 to 23 percent in 2015.
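
The $162 trillion figure is consistent with compounding the 2010 total at the stated rate; a one-line sanity check:

    // Sanity check of BCG's projection: $121.8 trillion growing at 5.9% a year
    // from year-end 2010 through 2015.
    public class WealthProjection {
        public static void main(String[] args) {
            double aum2010 = 121.8; // $ trillions
            double aum2015 = aum2010 * Math.pow(1.059, 5); // compound growth
            System.out.printf("projected 2015 AuM: $%.1f trillion%n", aum2015); // ~$162T
        }
    }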

In Japan, the amount of wealth is expected to decrease slightly in 2011 and then grow slowly for several years. The impact of the recent disaster on private wealth is still unclear, but it could put further stress on the growth of AuM in Japan.

“As much as the sustained recovery of global wealth reaffirms wealth management’s place as a relatively stable and attractive part of the financial services world,” Tang said, “it also masks important and lasting changes to the dynamics of this industry. Perhaps more than ever, a wealth manager’s adaptability—its capacity to anticipate and respond to a combination of regulatory, client-driven, and competitive changes—will determine how well it prospers from the continued growth of wealth.”

To request a copy of the report, please email Global.Wealth@bcg.com. For media inquiries or to arrange an interview with one of the authors, please contact Alexandra Corriveau at +1 212 446 3261 or corriveau.alexandra@bcg.com.

[1] Global wealth is defined as total assets under management (AuM) across all households. AuM includes cash deposits, money market funds, listed securities held directly or indirectly through managed investments, and onshore and offshore assets. It excludes wealth attributed to investors’ own businesses, residences, or luxury goods. Unless stated otherwise, AuM figures and percentage changes are based on local AuM totals that were converted to U.S. dollars using a constant year-end 2010 exchange rate for all years. This approach excludes the effects of fluctuating exchange rates.

Source: The Boston Consulting Group

About The Boston Consulting Group

The Boston Consulting Group (BCG) is a global management consulting firm and the world's leading advisor on business strategy. We partner with clients in all sectors and regions to identify their highest-value opportunities, address their most critical challenges, and transform their businesses. Our customized approach combines deep insight into the dynamics of companies and markets with close collaboration at all levels of the client organization. This ensures that our clients achieve sustainable competitive advantage, build more capable organizations, and secure lasting results. Founded in 1963, BCG is a private company with 74 offices in 42 countries.

Why Childhood Obesity? It's So Much More Than What Kids Eat

University of Illinois scientists from a variety of disciplines have teamed up to examine the factors that contribute to childhood obesity. Why? Because individual researchers have found that the problem is too complicated for any of them to tackle alone.

"Our Strong Kids team members are looking at such diverse factors as genetic predisposition, the effect of breastfeeding, how much TV a child watches, and the neighborhood he lives in, among many others," said Kristen Harrison of the U of I's Division of Nutritional Sciences. "It seems like the answer should be simple, just eat less and exercise more, but when you look at the reasons that kids overeat and burn fewer calories, it turns out there are a lot of them."

Harrison and other Strong Kids team members received funding for a three-year longitudinal study and are applying for support to keep the research going. The scientists have collected and analyzed two generations of data on approximately 400 families, and they are beginning a third wave of data collection. Individual studies, including communication professor Harrison's own examination of preschoolers' television viewing and eating habits, are ongoing.

But the first step was developing a model for studying the problem. The team's Six Cs model will examine the problem of childhood obesity from the following angles: cell, child, clan (or family), community, country and culture. A paper detailing their approach appeared in a recent issue of Child Development Perspectives.

"From 30 to 40 percent of the population has a variety of genetic markers that puts them at greater risk for obesity," said professor of nutrition Margarita Teran-Garcia, who is approaching the problem at the cellular level. As a starting point, she is taking saliva samples from preschoolers in the study group to map their genetic susceptibility to obesity.

Child development professor Kelly Bost is looking at the quality of parent-child attachment. "There's evidence that insecure attachment predicts more TV exposure, more consumption of unhealthful foods, and other factors leading to greater obesity," she said.

Kinesiology and community health professor Diana Grigsby-Toussaint is geomapping retail environments in the neighborhoods where the participating families live, looking in detail at what foods are available there. "She's also mapping how much green space is available and how that relates to outdoor play and activity," Harrison said.

Later work will add more puzzle pieces relating to the community and culture components. For example, what's the community BMI and do participants in the study believe that BMI is normal? What's the usual portion size in this culture? Are children urged to take second and third helpings at mealtime?

"Southern U.S. culture, Latin American culture, and the Sam's Club bulk-buying phenomenon are all elements of what we're trying to capture when we talk about culture," Harrison said.

And professor of applied family studies Angela Wiley is collecting data relating to childhood obesity prevention among Mexican immigrant families in the Abriendo Caminos program so the researchers can compare parallel populations across countries.

"Childhood obesity is a puzzle, and at different stages, certain variables drop in or out of the picture. Breastfeeding versus formula feeding is a predictor, but it drops out of the model entirely when you get past babyhood. Vending machines in schools are important later in a child's life, but they weren't important before," she added.

There has been very little transdisciplinary effort to map out how all these factors work together, although research shows that no single factor is the most important, Harrison noted.

"We're each looking at different spheres in the model, but we're also looking at potential interactions. That's one of the exciting things we'll get to do as we move forward," she said.

Co-authors of the paper are Harrison, Kelly K. Bost, Brent A. McBride, Sharon M. Donovan, Diana S. Grigsby-Toussaint, Janet M. Liechty, Angela Wiley, Margarita Teran-Garcia, and Gwen Costa-Jacobsohn, all of the U of I.

Funding was provided by the State of Illinois Council for Food and Agricultural Research and the Illinois Department of Human Services via grants supporting the U of I Strong Kids program.

The Volatile Side Of The Moon Targeted For Landing

Four decades after the first Moon landing, our only natural satellite remains a fascinating enigma. Specialists from Europe and the US have been looking at ESA’s proposed Lunar Lander mission to find out how to seek water and other volatile resources.

This image of the Moon was taken by Rosetta's OSIRIS Narrow Angle Camera (NAC) at 07:36 CET on 13 November 2007, about nine hours after Rosetta's closest approach to Earth during one of its gravity assist manoeuvres.

OSIRIS has been designed to image faint objects, so a neutral density filter was placed in the optical path to reduce the sensitivity of the camera to one fiftieth. The above image was acquired through the far-focus red filter of the camera (750 nanometres).
The Moon seen after the Earth swing-by
Credits: ESA ©2007 MPS for OSIRIS Team MPS/UPD/LAM/IAA/RSSD/INTA/UPM/DASP/IDA

Europe is developing the technology for the Lunar Lander mission, a precursor voyage to the Moon in preparation for human exploration beyond low Earth orbit.

The ESA's lunar lander mission aims to land in the mountainous and heavily cratered terrain of the lunar south pole, possibly in 2018. The region may be a prime location for future human explorers because it offers almost continuous sunlight for power and potential access to vital resources such as water-ice.

New propulsion technologies are one of the key areas of the ‘Phase-B1’ study, now under way under the leadership of EADS Astrium in Bremen, and some of the key technologies will be developed and tested for the first time.

The project will be presented to ESA’s Ministerial Council meeting in 2012 for full approval.

“Our ambition is to see one day a European astronaut working on the Moon,” says ESA’s Bruno Gardini.

Expected to be launched in 2018, the unmanned craft will land near the lunar south pole.

In Bruno’s words, “It is the mission that will provide Europe with the planetary landing technology of the future.”

The mountainous, heavily cratered terrain of the lunar south pole is the mission's target landing region.
Credit: ESA

Specialists, including prestigious scientists who worked on the Apollo programme, recently gathered at ESA’s ESTEC space technology centre in the Netherlands to discuss the mission.

The lander’s scientific payload addresses a number of key aspects of the unique environment on the Moon: radiation, dust and volatiles.

Volatiles, such as water, are delicate chemical components that would simply vanish under certain conditions.

Volatiles may be readily extracted from lunar soil and provide valuable resources such as carbon, nitrogen, phosphorus or sulphur to aid future human exploration.

Like water, these chemical elements have been implanted by billions of years of exposure to the solar wind and are especially likely to be found at the poles.

Bruno Gardini, from ESA, has been involved in the Lunar Lander project since 2005. He envisages a European astronaut working on the Moon.
Bruno Gardini
Credits: ESA

Ground truth

“To analyse the volatiles or the water that is all over the Moon in very small quantities, we have to take samples of the materials we find on the surface and analyse them in situ”, says Bruno.

Recent missions have transformed our view of the Moon. This new era of lunar science was well represented by Colin Pillinger. Having begun his career analysing samples of moon rock for the Apollo programme, he is now professor of planetary science at the Open University in the UK.

Colin Pillinger is a professor at the Open University in the UK. 
Colin Pillinger
Credits: ESA

“We certainly don’t know where the water comes from until we get down there and do more experiments. That’s why the Lunar Lander is so important,” notes Prof. Pillinger.

“I play the devil’s advocate,” says Larry Taylor, from the University of Tennessee, a scientist who guided the US astronauts in the quest for samples on the Moon.

“I’m giving my knowledge about lunar soil, something that I’ve been working on for 40 years. I have a different perspective, so I’m saying to the engineers: are you sure you are going to find this?”

Larry Taylor, from the University of Tennessee, holds a sample of the lunar soil brought back by the Apollo 11 astronauts in 1969.
Larry Taylor
Credit: ESA

Go south

Finding the right landing site is also crucial for science. “You have to go to the exact places where we think these valuable resources might be concentrated,” says Prof. Pillinger.

ESA has selected the South Pole as a landing site for two main reasons. First, there are long periods of illumination that would allow the lander to rely on solar power alone.

Secondly, concludes Bruno, “We will go to a very different place from the equatorial regions explored during the Apollo era, giving the scientists the opportunity to do new experiments and get completely new insights.”

This mosaic of the lunar south pole was obtained with images taken by the Advanced Moon Imaging Experiment (AMIE) on board ESA's SMART-1.

The pictures were taken between Dec 2005 and March 2006, during lunar southern summer. When obtaining the images, SMART-1 was flying over the south pole at a distance of about 500 km, allowing individual images with small-field (about 50 km across) high resolution views (50 m/pixel).

Each individual image includes areas imaged with colour filters and a more exposed area. The differences have been corrected accordingly to obtain this mosaic. The mosaic, composed of about 40 images obtained over more than 30 orbits, covers an area of about 500 by 150 km. The lunar near-side facing Earth is at the top of the map, while the far-side is at the bottom.
Lunar south pole mosaic
Credits: ESA/SMART-1/Space-X (Space Exploration Institute), ESA/SMART-1/ AMIE camera team

Code Green: Energy-Efficient Programming To Curb Computers’ Power Use

Soaring energy consumption by ever more powerful computers, data centers and mobile devices has many experts looking to reduce the energy use of these devices. Most projects so far focus on more efficient cooling systems or energy-saving power modes.

A University of Washington project sees a role for programmers to reduce the energy appetite of the ones and zeroes in the code itself. Researchers have created a system, called EnerJ, that reduces energy consumption in simulations by up to 50 percent, and has the potential to cut energy by as much as 90 percent. They will present the research next week in San Jose at the Programming Language Design and Implementation annual meeting.

Luis Ceze
Credit: University of Washington

“We all know that energy consumption is a big problem,” said author Luis Ceze, a UW assistant professor of computer science and engineering. “With our system, mobile phone users would notice either a smaller phone, or a longer battery life, or both. Computing centers would notice a lower energy bill.”

The basic idea is to take advantage of processes that can survive tiny errors that happen when, say, voltage is decreased or correctness checks are relaxed. Some examples of possible applications are streaming audio and video, games and real-time image recognition for augmented-reality applications on mobile devices.

“Image recognition already needs to be tolerant of little problems, like a speck of dust on the screen,” said co-author Adrian Sampson, a UW doctoral student in computer science and engineering. “If we introduce a few more dots on the image because of errors, the algorithm should still work correctly, and we can save energy.”

The UW system is a general framework that creates two interlocking pieces of code. One is the precise part – for instance, the encryption on your bank account’s password. The other portion is for all the processes that could survive occasional slipups.

The software creates an impenetrable barrier between the two pieces.

“We make it impossible to leak data from the approximate part into the precise part,” Sampson said. “You’re completely guaranteed that can’t happen.”
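
A minimal sketch of this idea in Java, the language EnerJ extends: the @Approx qualifier below is an illustrative stand-in for EnerJ's type annotations (the name and placement here are assumptions, not the project's actual API), and the commented-out assignment marks the kind of flow the checker is meant to reject.

    import java.lang.annotation.*;

    // Illustrative stand-in for an EnerJ-style type qualifier (name assumed).
    @Retention(RetentionPolicy.SOURCE)
    @Target(ElementType.TYPE_USE)
    @interface Approx {}

    public class EnerJSketch {
        // Precise by default: security-critical state must never be approximate.
        static long accountKey = 0xDEADBEEFL;

        // Approximate data: an occasional flipped bit in a pixel is tolerable,
        // so this computation could be shuttled to low-voltage hardware.
        static @Approx int brightness(@Approx int r, @Approx int g, @Approx int b) {
            return (r + g + b) / 3;
        }

        public static void main(String[] args) {
            @Approx int lum = brightness(200, 180, 90);
            // accountKey = lum; // an EnerJ-style checker would reject this:
            //                   // approximate data must not leak into precise state
            System.out.println("approximate luminance: " + lum);
        }
    }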

While computers’ energy use is frustrating and expensive, there is also a more fundamental issue at stake. Some experts believe we are approaching a limit on the number of transistors that can run on a single microchip. The so-called “dark silicon problem” says that as we boost computer speeds by cramming more transistors onto each chip, there may no longer be any way to supply enough power to the chip to run all the transistors.

The UW team’s approach would work like a dimmer switch, letting some transistors run at a lower voltage. Approximate tasks could run on the dimmer regions of the chip.

“When I started thinking about this, it became more and more obvious that this could be applied, at least a little bit, to almost everything,” Sampson said. “It seemed like I was always finding new places where it could be applied, at least in a limited way.”

Researchers would use the program with a new type of hardware in which some transistors run at a lower voltage – the force that pushes electrons through the circuit. This slightly increases the risk of random errors; EnerJ shuttles only approximate tasks to these transistors.

“If you can afford one error every 100,000 operations or so, you can already save a lot of energy,” Ceze said.

Other ways to use hardware to save energy include lowering the refresh rate and reducing the voltage of memory chips.

Simulations of such hardware show that running EnerJ would cut energy by about 20 to 25 percent, on average, depending on the aggressiveness of the approach. For one program the energy saved was almost 50 percent. Researchers are now designing hardware to test their results in the lab.

Today’s computers could also use EnerJ with a purely software-based approach. For example, the computer could round off numbers or skip some extra accuracy checks on the approximate part of the code to save energy – researchers estimate between 30 and 50 percent savings based on software alone.

By combining the software and hardware methods, the researchers believe they could cut power use by about 90 percent.

“Our long-term goal would be 10 times improvement in battery life,” Ceze said. “I don’t think it is totally out of the question to have an order of magnitude reduction if we continue squeezing unnecessary accuracy.”

The program is called EnerJ because it is an extension for the Java programming language. The team hopes to release the code as an open-source tool this summer.

Co-authors of the paper are UW computer science and engineering professor Dan Grossman, postdoctoral researcher Werner Dietl, graduate student Emily Fortuna and undergraduate Danushen Gnanapragasam. Also involved in the research is doctoral student Hadi Esmaeilzadeh.

Lingodroids: Robots Develop Proto-Language, Sounds A Little Like R2-D2

University of Queensland (UQ) postdoctoral research fellow Dr Ruth Schulz and her colleagues have created a pair of robots that have their own language.

Lingodroids
Credit: University of Queensland

The ‘Lingodroids’ are a pair of mobile robots that communicate by developing their own words for places, and for relationships between places based on distance and direction. Each robot in these studies has its own understanding of the layout of the world, based on its unique experiences and exploration of the environment.

Despite having different internal representations of the world, the robots are able to develop a common lexicon for places, and then use simple sentences to explain and understand relationships between places – even places that they could not physically experience, such as areas behind closed doors. By learning the language, the robots are able to develop representations for places that are inaccessible to them, and later, when the doors are opened, use those representations to perform goal-directed behavior.

The language sounds like a sequence of phone tones, which are easy for the robots to produce and hear in a noisy office environment, before being translated into syllables to make it easy for humans to recognise them.

Dr Schulz said that the robots start by playing where-are-we games.

“If they encounter an area that has not yet been named, one will invent a word, such as ‘kuzo’, choosing a random combination of syllables, which it is then able to communicate to other robots it meets, thus defining the name of the place,” she said.

“These words are known as ‘toponyms’ (‘topo’ meaning place and ‘nym’ meaning name).

“The robots then start to play how-far and which-direction games, which enable them to develop relationship words (like English prepositions).”

The resulting language consists of location, distance and direction words, enabling the robots to refer to new places based on their relationship to known locations.

“These languages are very powerful – they are known as ‘generative’ languages because they enable the robots to refer to places they haven't been to or even places that they imagine beyond the edges of their explored world,” Dr Schulz said.

An essential aspect of these games is that the robots develop robust ideas of where a word should be used.

Their understanding of the new language was tested using games in which two robots attempted to meet at a particular toponym, or place name.

If one robot told the other “jaya”, they would independently navigate to where they thought “jaya” was.

When both robots arrived at the same location, the concept “jaya” was consistent between the robots.

After having played hundreds of games to develop their language, the robots agreed upon concepts for toponyms within 0.65 metres, directions within 10 degrees and distances within 0.375 metres.
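
A toy version of the where-are-we game is straightforward to sketch. The syllable inventory, the shared lexicon structure and the reuse of the 0.65-metre agreement radius below are illustrative guesses at the setup, not the team's actual code:

    import java.util.*;

    // Toy sketch of the "where-are-we" game: a robot reaching an unnamed region
    // invents a random syllable word, and both robots adopt it for that place.
    public class WhereAreWeGame {
        static final String[] SYLLABLES = {"ku", "zo", "ja", "ya", "pi", "re"};
        static final Random RNG = new Random();
        // Shared lexicon: toponym -> (x, y) centre of the named region.
        static final Map<String, double[]> lexicon = new HashMap<>();

        static String nameFor(double x, double y) {
            for (Map.Entry<String, double[]> e : lexicon.entrySet()) {
                double[] c = e.getValue();
                if (Math.hypot(c[0] - x, c[1] - y) < 0.65) {
                    return e.getKey(); // an existing toponym covers this spot
                }
            }
            String word = SYLLABLES[RNG.nextInt(SYLLABLES.length)]
                        + SYLLABLES[RNG.nextInt(SYLLABLES.length)]; // e.g. "kuzo"
            lexicon.put(word, new double[]{x, y});
            return word;
        }

        public static void main(String[] args) {
            System.out.println(nameFor(1.0, 2.0)); // invents a new word
            System.out.println(nameFor(1.2, 2.1)); // reuses it: same region
        }
    }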

The robots consist of a mobile platform which is fitted with a camera, laser range finder, sonar for mapping and obstacle avoidance, and a microphone and speakers for audible communication with each other.

Dr Schulz and her colleagues presented their research at the International Conference on Robotics and Automation in Shanghai on Tuesday, May 10, and have since received international coverage.

“We believe that the natural way to communicate with robots will be through human language,” Dr Schulz said.

“The Lingodroids have developed their own proto-language as part of realising this ambition,” she said.

“In the future, our aim is to extend the types of concepts able to be formed by the robots, as well as expand to additional grammatical features of language and to human-robot interaction.”

Source: University of Queensland


The Lingodroids can be seen playing a location language game at www.youtube.com/watch?v=vhUOUaSNP3w

More information about the research can be found at the Lingodroids web site: http://itee.uq.edu.au/~ruth/Lingodroids.htm

Schulz, R., Glover, A., Milford, M., Wyeth, G., & Wiles, J. (2011) Lingodroids: Studies in Spatial Cognition and Language, ICRA 2011, The International Conference on Robotics and Automation, Shanghai, China, May 2011


2 NASA Satellites See Typhoon Songda Weaken And Move Past Japan

NASA's Tropical Rainfall Measuring Mission (TRMM) and Aqua satellites provided forecasters with some insight into the behavior of Super Typhoon Songda over the past weekend. The former super typhoon brought rainfall to parts of Japan over the weekend, and today marine warnings for high surf remain in several sub-prefecture regions as extra-tropical depression Songda's remnants push farther out to sea.

This image of Typhoon Sondga's rainfall was captured by NASA's TRMM satellite on Saturday, May 28. Notice that the outer fringes of the storm brushed by Taiwan (left). The strongest rainfall (about 2 inches/50 mm per hour) appears in red. The yellow and green areas are moderate rainfall falling at a rate between .78 and 1.57 inches (20 and 40 mm) per hour.
Credit: NASA/SSAI, Hal Pierce

The sub-prefecture regions of Nemuro Chiho, Kushiro Chiho and Tokachi Chiho still have high wave advisories in place today, May 31, 2011, from the Japan Meteorological Agency as Songda's remnants continue moving into the open waters of the Northwestern Pacific Ocean.

On May 30, BBC News reported that as Songda continued its northeasterly journey past Japan, work at the Fukushima nuclear plant was suspended until the storm had passed.

By Sunday, May 29 at 1500 UTC (11 a.m. EDT), Songda had weakened to a depression with maximum sustained winds near 30 knots (34 mph/55 kmh). It was located 300 miles (482 km) west-southwest of Yokosuka, Japan, near 34.4 degrees North and 136.6 degrees East, and was moving to the northeast at 26 knots (30 mph/48 kmh).
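(A note on units: the advisories quote wind speeds in knots alongside mph and kmh equivalents. The conversions follow from the standard factors, 1 knot = 1.15078 statute mph = 1.852 km/h; the snippet below is purely illustrative and not part of the original bulletins.)

    # Illustrative only: the standard conversions behind the wind speeds above.
    # 1 knot = 1.15078 statute mph; 1 knot = 1.852 km/h (exact by definition).
    def knots_to_mph(k):
        return k * 1.15078

    def knots_to_kmh(k):
        return k * 1.852

    for k in (30, 75, 115):
        print(f"{k} kt = {knots_to_mph(k):.0f} mph = {knots_to_kmh(k):.0f} kmh")
    # 30 kt = 35 mph = 56 kmh   (the bulletin truncates these to 34 mph/55 kmh)
    # 75 kt = 86 mph = 139 kmh
    # 115 kt = 132 mph = 213 kmh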

Earlier, Songda made landfall over Wakayama prefecture and weakened. It then re-emerged over water and moved east-northeast while transitioning into an extra-tropical storm.

According to the Stars and Stripes newspaper, Kadena Air Base issued an "all clear" on Sunday, May 29 at 7:56 a.m. local Japan time for most areas of Okinawa. Downed power lines were reported at Marine Corps Air Station Futenma.

On Saturday, May 28, Kadena Air Base experienced strong winds and heavy rainfall in a short period of time as Songda moved north-northeast. At 1500 UTC (11 a.m. EDT), Kadena Air Base reported sustained winds of 52 knots (60 mph/96 kmh), gusting to 61 knots (70 mph/113 kmh). Rainfall totals were as much as 21 inches (53 centimeters) in three hours.

At that time, Songda's center was just 60 miles (96 km) west of Kadena Air Base, Japan, so its center did not cross the island. Maximum sustained winds around the low-level center were near 75 knots (86 mph/139 kmh), making it still a Category 1 typhoon, and it was generating very rough seas, with reported wave heights near 37 feet (about 11 meters).

Typhoon Songda's rainfall was captured by NASA's TRMM satellite on Saturday, May 28 at 0613 UTC (2:13 a.m. EDT). At that time, the outer fringes of the storm brushed by Taiwan. The strongest rainfall (about 2 inches/50 mm per hour) remained at sea and was mostly confined to the northwestern quadrant of the storm. Most of the rainfall in the storm was moderate, falling at a rate between 0.78 and 1.57 inches (20 and 40 mm) per hour.

Stars and Stripes newspaper reported that before Songda approached Kadena Air Base, the "Navy's 7th Fleet has moved assets out of port at Yokosuka Naval Base." Those assets included the flagship USS Blue Ridge and four destroyers: Fitzgerald, McCain, Mustin and Curtis Wilbur. The newspaper reported that two vessels stayed in port for maintenance and that others at sea shifted their navigation away from the storm.

On May 27 at 2100 UTC (5 p.m. EDT), Typhoon Songda was 385 miles (619 km) southwest of Kadena Air Base, Japan, and its maximum sustained winds were near 115 knots (132 mph/213 kmh).

The Moderate Resolution Imaging Spectroradiometer (MODIS) instrument onboard NASA's Aqua satellite captured a visible image of Songda on May 27 at 05:10 UTC (1:10 a.m. EDT), when it was a super typhoon offshore from the northern Philippines. At that time, Songda still had an eye.

By 17:11 UTC (1:11 p.m. EDT), another satellite image told a changing story. At that time, an infrared image taken by the Atmospheric Infrared Sounder (AIRS) instrument onboard NASA's Aqua satellite showed Songda's center parallel to the southern tip of Taiwan, but far to the east at sea. The eye was no longer visible in the image, indicating that the storm was weakening. Fortunately, increased wind shear continued to weaken the storm as it approached Kadena Air Base.

One impressive aspect of the infrared imagery was the extent of the clouds connected to Songda. The imagery shows what looks like a tail of clouds extending to the northeast, stretching from Taiwan into northern Japan.

Contacts and sources: 

The Rings of Neptune

In this Voyager wide-angle image, taken on Aug. 23, 1989, the two main rings of Neptune can be clearly seen. In the lower part of the frame, the originally announced ring arc, consisting of three distinct features, is visible. This feature covers about 35 degrees of longitude and has yet to be radially resolved in Voyager images. From higher-resolution images it is known that this region contains much more material than the diffuse belts seen elsewhere in its orbit, which seem to encircle the planet. This is consistent with ground-based observations of stellar occultations, which show the rings to be very broken and clumpy. The more sensitive wide-angle camera also reveals fainter, more widely distributed material. Each of these rings of material lies just outside the orbit of a newly discovered moon. One of these moons, 1989N2, may be seen in the upper right corner.

Voyager wide-angle image of Neptune's two main rings, taken Aug. 23, 1989, with the originally announced three-feature ring arc visible in the lower part of the frame.
Credit: JPL

The moon is streaked by its orbital motion, whereas the stars in the frame are less smeared. The dark areas around the bright moon and star are artifacts of the processing required to bring out the faint rings. This wide-angle image was taken from a range of 2 million kilometers (1.2 million miles) through the clear filter. The Voyager Mission is conducted by JPL for NASA's Office of Space Science and Applications.

Source: JPL
PIA00053 

Spirit's Triumphs on Mars

Reflections on Spirit's Journey

Dangerous Bacteria on Cell Phones of Hospital Patients, Deadly Enough To Kill

It would be a good idea not to borrow a cell phone in a hospital...it could make you sick and possibly even kill you.

Cell phones used by patients and their visitors were twice as likely to contain potentially dangerous bacteria as those of healthcare workers (HCW), according to a study published in the June issue of the American Journal of Infection Control, the official publication of APIC - the Association for Professionals in Infection Control and Epidemiology.

A team of researchers from the Department of Medical Microbiology at Inonu University in Malatya, Turkey, collected swab samples from three parts of cell phones: the keypad, microphone and earpiece. A total of 200 mobile phones (MPs) were cultured for the study, 67 of which belonged to medical employees and 133 to patients, patients’ companions and visitors. The researchers found that 39.6 percent of the patient-group phones and 20.6 percent of HCW phones tested positive for pathogens. Additionally, seven patient phones contained multidrug-resistant (MDR) pathogens, such as methicillin-resistant Staphylococcus aureus (MRSA) and multiply resistant gram-negative organisms, while no HCW phones tested positive for MDR pathogens.
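Those two rates are where the "twice as likely" figure in the lead comes from; a quick illustrative check (our arithmetic, not the paper's):

    # Quick check of the "twice as likely" claim using the rates quoted above.
    patient_rate = 0.396  # patient, companion and visitor phones with pathogens
    hcw_rate = 0.206      # healthcare-worker phones with pathogens

    print(round(patient_rate / hcw_rate, 2))  # 1.92 -> roughly twice the risk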

“The types of bacteria that were found on the patients’ MPs and their resistance patterns were very worrisome,” state the authors. “Some investigators have reported that MPs of medical personnel may be a potential source of bacterial pathogens in the hospital setting. Our findings suggest that mobile phones of patients, patients’ companions and visitors represent higher risk for nosocomial pathogen colonization than those of HCWs. Specific infection control measures may be required for this threat.”

Hospital-acquired infections affect more than 25 percent of admitted patients in developing countries. In U.S. hospitals, they cause 1.7 million infections a year and are associated with approximately 100,000 deaths. It is estimated that one third of these infections could be prevented by adhering to standard infection control guidelines.

Source: Elsevier

Citation: “Do Mobile Phones of Patients, Companions and Visitors Carry Multi Drug Resistant Hospital Pathogens?” appears in the American Journal of Infection Control, Volume 39, Issue 5 (June 2011).

'Dead' Galaxies Aren't So Dead After All, U-M Researchers Find

University of Michigan astronomers examined old galaxies and were surprised to discover that they are still making new stars. The results provide insights into how galaxies evolve with time.

U-M research fellow Alyson Ford and astronomy professor Joel Bregman presented their findings May 31 at a meeting of the Canadian Astronomical Society in London, Ontario.

Using the Wide Field Camera 3 on the Hubble Space Telescope, they saw individual young stars and star clusters in four galaxies that are about 40 million light years away. One light year is about 5.9 trillion miles.

Individual young stars and star clusters in the 'dead' elliptical galaxy Messier 105, detected using the Wide Field Camera 3 (WFC3) on the Hubble Space Telescope (HST). Messier 105 can be seen in the top left corner, in an image from the Sloan Digital Sky Survey (SDSS; Data Release 8). The outlined region in the center of Messier 105 is expanded to reveal Hubble's view of the galaxy's inner region, which is further expanded to unveil several individual young stars and star clusters (denoted by dashed circles; top right). These signposts of recent star formation are unexpected in old, 'dead' galaxies. Data from HST's WFC3 and Advanced Camera for Surveys (ACS) were used in the creation of these images.

"Scientists thought these were dead galaxies that had finished making stars a long time ago," Ford said. "But we've shown that they are still alive and are forming stars at a fairly low level."

Galaxies generally come in two types: spiral galaxies, like our own Milky Way, and elliptical galaxies. The stars in spiral galaxies lie in a disk that also contains cold, dense gas, from which new stars are regularly formed at a rate of about one sun per year.

Stars in elliptical galaxies, on the other hand, are nearly all billions of years old. These galaxies contain stars that orbit every which way, like bees around a beehive. Ellipticals have little, if any, cold gas, and no star formation was known to occur in them.

"Astronomers previously studied star formation by looking at all of the light from an elliptical galaxy at once, because we usually can't see individual stars," Ford said. "Our trick is to make sensitive ultraviolet images with the Hubble Space Telescope, which allows us to see individual stars."

The technique enabled the astronomers to detect star formation at rates as low as one sun every 100,000 years.

Ford and Bregman are working to understand the stellar birth rate and likelihood of stars forming in groups within ellipticals. In the Milky Way, stars usually form in associations containing from tens to 100,000 stars. In elliptical galaxies, conditions are different because there is no disk of cold material to form stars.

"We were confused by some of the colors of objects in our images until we realized that they must be star clusters, so most of the star formation happens in associations," Ford said.

The team's breakthrough came when they observed Messier 105, a normal elliptical galaxy 34 million light years away in the constellation Leo. Though there had been no previous indication of star formation in Messier 105, Ford and Bregman saw a few bright, very blue stars, each resembling a single star 10 to 20 times the mass of the sun.

They also saw objects that aren't blue enough to be single stars, but instead are clusters of many stars. When accounting for these clusters, stars are forming in Messier 105 at an average rate of one sun every 10,000 years, Ford and Bregman concluded. "This is not just a burst of star formation but a continuous process," Ford said.
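The rates quoted in this article are easier to compare when put in the same units; the lines below simply restate them in suns per year (our restatement, not the researchers' figures):

    # Restating the article's star-formation rates in suns per year.
    milky_way = 1.0                # "about one sun per year" in a spiral disk
    m105 = 1 / 10_000              # "one sun every 10,000 years" in Messier 105
    detection_floor = 1 / 100_000  # technique detects "one sun every 100,000 years"

    print(milky_way / m105)        # 10000.0 -> M105 forms stars ~10,000x more slowly
    print(m105 / detection_floor)  # 10.0    -> yet still 10x above the detection floor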

These findings raise new mysteries, such as the origin of the gas that forms the stars.

"We're at the beginning of a new line of research, which is very exciting, but at times confusing," Bregman said. "We hope to follow up this discovery with new observations that will really give us insight into the process of star formation in these 'dead' galaxies."
Contacts and sources:

Stamping Out Low Cost Nanodevices

A simple technique for stamping patterns invisible to the human eye onto a special class of nanomaterials provides a new, cost-effective way to produce novel devices in areas ranging from drug delivery to solar cells.

The technique was developed by Vanderbilt University engineers and described in the cover article of the May issue of the journal Nano Letters.

The new method works with materials that are riddled with tiny voids that give them unique optical, electrical, chemical and mechanical properties. Imagine a stiff, sponge-like material filled with holes that are too small to see without a special microscope.

Vanderbilt graduate student Jason Ryckman demonstrating the operation of a diffraction-based biosensor produced out of a nanoporous material by the new imprinting process.
Credit: Anne Raynor / Vanderbilt University

For a number of years, scientists have been investigating the use of these materials – called porous nanomaterials – for a wide range of applications including drug delivery, chemical and biological sensors, solar cells and battery electrodes. There are nanoporous forms of gold, silicon, alumina, and titanium oxide, among others.

Simple stamping

A major obstacle to using the materials has been the complexity and expense of the processing required to make them into devices.

Now, Associate Professor of Electrical Engineering Sharon M. Weiss and her colleagues have developed a rapid, low-cost imprinting process that can stamp out a variety of nanodevices from these intriguing materials.

"It's amazing how easy it is. We made our first imprint using a regular tabletop vise," Weiss said. "And the resolution is surprisingly good."

The traditional strategies used for making devices out of nanoporous materials are based on the process used to make computer chips. This must be done in a special clean room and involves painting the surface with a special material called a resist, exposing it to ultraviolet light or scanning it with an electron beam to create the desired pattern, and then applying a series of chemical treatments to either engrave the surface or lay down new material. The more complicated the pattern, the longer it takes to make.

About two years ago, Weiss got the idea of creating pre-mastered stamps using the complex process and then using the stamps to create the devices. Weiss calls the new approach direct imprinting of porous substrates (DIPS). DIPS can create a device in less than a minute, regardless of its complexity. So far, her group reports that it has used master stamps more than 20 times without any signs of deterioration.

Process can produce nanoscale patterns

The smallest pattern that Weiss and her colleagues have made to date has features of only a few tens of nanometers, which is about the size of a single fatty acid molecule. They have also succeeded in imprinting the smallest pattern yet reported in nanoporous gold, one with 70-nanometer features.

The first device the group made is a "diffraction-based" biosensor that can be configured to identify a variety of different organic molecules, including DNA, proteins and viruses. The device consists of a grating made from porous silicon treated so that a target molecule will stick to it. The sensor is exposed to a liquid that may contain the target molecule and then is rinsed off. If the target was present, then some of the molecules stick in the grating and alter the pattern of reflected light produced when the grating is illuminated with a laser.
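The release does not spell out the optics, but the readout rests on the textbook grating relation (general physics, not a formula quoted from the paper): a grating of period d sends normally incident light of wavelength \lambda into discrete orders m at angles \theta_m satisfying

    \[
      d \sin\theta_m = m\lambda, \qquad m = 0, \pm 1, \pm 2, \ldots
    \]

Molecules captured in the porous grating change its effective optical thickness, which redistributes intensity among those orders; that redistribution is the change the laser readout detects.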

According to the researchers' analysis, when such a biosensor is made from nanoporous silicon it is more sensitive than those made from ordinary silicon.

The Weiss group collaborated with colleagues in Chemical and Biomolecular Engineering to use the new technique to make nano-patterned chemical sensors that are ten times more sensitive than another type of commercial chemical sensor called Klarite that is the basis of a multimillion-dollar market.

The researchers have also demonstrated that they can use the stamps to make precisely shaped microparticles by a process called "over-stamping," which essentially cuts through the nanoporous layer to free the particles from the substrate. One possible application for microparticles made this way from nanoporous silicon is as anodes in lithium-ion batteries, where they could significantly increase capacity without adding much weight.

Vanderbilt University has applied for a patent on the DIPS method.



Contacts and sources:

The Rap Guide to Evolution Launched

An online series of rap music videos to aid the teaching of evolution in schools launched in London last week.

The videos are based on the successful theatre show 'The Rap Guide to Evolution', which was performed to critical acclaim at the 2009 Edinburgh Fringe Festival. The brainchild of the award-winning performer Baba Brinkman, the show combines the wit, poetry and charisma of an accomplished rapper with the accuracy, knowledge and expertise of an evolutionary scientist.

Drawings of comparative skeletons
Credit: Wellcome Trust

Now, with support from the Wellcome Trust, Baba has split the show into 12 parts, each with its own video addressing a different area of the science behind evolution. The videos will be available for free online from a resource-packed website, and a DVD will be available to schools from autumn 2011.

The launch event takes place tonight at the Prince Charles Cinema in London and will include previews of the videos, as well as performances by Baba and an introduction to the thinking behind the project.

The show owes its origins to Dr Mark Pallen, author of 'The Rough Guide to Evolution', who had seen Brinkman's internationally acclaimed Rap Canterbury Tales and challenged Brinkman to "do for Darwin what he had done for Chaucer". To ensure scientific and historical accuracy, Brinkman consulted Pallen throughout the creative process, making The Rap Guide to Evolution the first peer-reviewed hip-hop show. Pallen has described Brinkman as having "swallowed the idea and turned it into a work of genius".

After the show's success in Edinburgh, a host of science teachers contacted Baba to ask whether The Rap Guide to Evolution was available on DVD for use in the classroom, and thus the seed was sown in Baba's mind to devise a way of taking the material into schools.

On the launch of the videos, Baba said: "The response to the show so far has been overwhelming, but these videos really take it to the next level. I hope educators all over the world find them helpful in overcoming the indifference and hostility that often impede the teaching of evolution, and science in general. Hip-hop music is all about rebellion, and no one's ideas are more revolutionary than Charles Darwin's."

The Rap Guide has been described as "astonishing and brilliant" by the 'New York Times', with 'Science' magazine adding that Baba "marries the fast, complex, literate delivery of Eminem with the evolutionary expertise and confrontational manner of Dawkins".

The videos will be made available for free on the Rap Guide to Evolution website, along with educational resources and a host of bonus features and ongoing updates to facilitate their use by teachers.

Subsequent music videos in the series will be released over the coming months and made available for free online, and Baba will continue to promote their use with a world tour of the live show.

Read an interview with Baba Brinkman on the Wellcome Trust Blog to find out more about how the project came about and his experiences teaching evolution through rap music.