
Monday, June 27, 2011

Particle physic



FOR decades doctors have attacked cancer with drugs that kill malignant cells. Unfortunately, such chemotherapy kills a lot of healthy cells as well. In recent years, the use of drug-carrying particles a few nanometres across has improved matters. Such particles can be tailored to release their payloads only when the surrounding environment indicates that they are near a tumour, thus reducing collateral damage. Even that, however, has not proved perfect. Typically, only about 1% of the drugs packaged up in nanoparticles this way make it to their destination. Sangeeta Bhatia and Geoffrey von Maltzahn of the Massachusetts Institute of Technology hope to change that. As they report in Nature Materials, they believe that by granting nanoparticles the ability to communicate with one another, the success of drug delivery can be increased fortyfold.
Dr Bhatia and Dr von Maltzahn were inspired by one of the body’s natural communications systems: the way that injured tissue calls for help to stem bleeding. They wondered if they might be able to piggyback on this system to deliver drugs to tumours—and they found that they could.
When the body sustains an injury, molecules called notification proteins are produced at the site. These proteins communicate with clotting agents in the blood. In particular they round up cellular fragments called platelets, and also molecules of a soluble protein called fibrinogen, both of which circulate routinely in the bloodstream. The fibrinogen turns into an insoluble, filamentous protein called fibrin, which traps the platelets and causes them to link up into a quilt that helps stop bleeding.
The two researchers wondered if they could subvert this system to gather drug-carrying nanoparticles into the right place. To do so, they realised that they would need two types of nanoparticles. “Signalling nanoparticles” would function like notification proteins, marking the location where action was required. “Receiving nanoparticles” would then be recruited like platelets, but instead of staunching a wound they would deliver the drugs. 
For the signalling nanoparticles the team used tiny golden rods. These tend to collect at the locations of tumours because the blood vessels which serve tumours often have unusual pores in them. These pores are between 100 and 200 nanometres in diameter—perfect for trapping the rods and thus marking the tumour. Once the rods were in place, the team fired a burst of laser light in the general location of the tumour. This light was tuned to be absorbed by gold and the energy in it was thus converted into heat only in places where the rods had accumulated. That damaged the surrounding tissue enough to activate the coagulation system.
The clever bit was that the receiving nanoparticles, which carried the pharmaceutical payload, were doped with protein fragments that bind to fibrin—and thus to the wound-staunching quilts that form when the heat from the nanorods does its work. Only then did they release their cargo.
The result, Dr Bhatia and Dr von Maltzahn report, is a delivery system 40 times more effective than using nanoparticles by themselves. Moreover, in mice, at least, it shrinks cancers more effectively than other nanoparticle-based treatments. Work on men (and women) should follow soon.

The Difference Engine: The beef about corn




IN A surprise U-turn, members of the United States Senate voted 73-27 last week to abolish a 45-cents-a-gallon subsidy for ethanol from corn (ie, maize) that is used for blending with petrol. They also voted to kill the 54-cents-a-gallon import duty on ethanol from abroad. This is the first time in over three decades that the Senate has challenged the sacrosanct $6 billion-a-year tax break for American corn-growers and ethanol producers.

The federal government started subsidising corn-based ethanol back in the late 1970s—in a bid to wean the country off imported oil. As recently as last December, lawmakers voted to extend the ethanol subsidy for yet another year. Since then, two things have happened to make the politicians change their minds.

First, a broad consensus has now formed behind the environmentalists’ view that using home-grown ethanol as a replacement for imported oil squanders far too much energy and water in the process, and is not a particularly good way of reducing greenhouse gases anyway. Indeed, given the intensive use of energy in agribusiness, it is debatable whether replacing petrol with ethanol even breaks even in terms of the “wells-to-wheels” energy consumed, let alone produces a net reduction in carbon emissions.
Besides, even if America’s entire corn crop were to be devoted to ethanol production, it would still only supply 4% of the country’s oil consumption. So much for the argument that home-grown ethanol offers an answer to America’s dependence on foreign oil.

Second, the food industry has gone noisily public about the way the federal government’s corn subsidies—which have encouraged American farmers to devote more and more of their corn crops to ethanol production—have driven up food prices. Last year, 40% of the corn grown in the United States (some five billion bushels) was used for making ethanol. This summer, corn supplies for animal feed are heading for a 15-year low. As a consequence, corn futures have soared to almost $8 a bushel—twice their price a year ago. Consumers counting the cost at the supermarket checkout now know who to blame.

In America, two ethanol-blends of fuel have been approved for use. The most common by far is E10, a blend of petrol containing up to 10% ethanol. In this case, the ethanol is used simply as an oxygenate (ie, an oxygen-rich additive) to reduce the carbon monoxide produced during combustion and to raise the octane rating of the fuel enough to protect the engine from “knocking” under load—a condition caused by the air-fuel mixture in the cylinders exploding prematurely instead of burning smoothly. Previously, MTBE (methyl tertiary-butyl ether) was the oxygenate of choice, but fell out of favour in 2004 when it was found to contaminate ground water.

A less-common blend, a fuel containing 85% ethanol and 15% petrol, is known as E85. This exists thanks to a political ploy designed to help motor manufacturers achieve the Corporate Average Fuel Economy (CAFE) requirement for the fleet of vehicles they sell each year. In 2011, the motor industry has to achieve a fleet-wide average of 30.2mpg (7.8 litres/100km) for all the new cars and 24.1mpg for all the light trucks they sell in America. Under the ethanol fudge, so-called “flex-fuel” vehicles that can run (even if they never do) on E85 as well as petrol are granted a 54% bonus towards their CAFE target. Judging from the limited availability of the blend outside the corn belt, few owners of flex-fuel vehicles ever fill up with E85.
There are good reasons why not. A gallon of pure ethanol contains two-thirds the energy of a gallon of petrol. If a flex-fuel vehicle achieves 30mpg on petrol, switching to pure ethanol would give it 20mpg. In other words, 50% more fuel is needed to travel the same distance. Because E85 has some petrol blended into it, the consumption penalty falls to 25-30%. On a cost-per-mile basis, ethanol fuels like E85—even with their hefty subsidies—are typically 20% more expensive than petrol. Something similar goes for E10, though the penalty is much less.
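The arithmetic behind those penalties is easy to check. Here is a minimal sketch in Python: the two-thirds energy ratio and the 30mpg baseline come from the text, while the pump prices are purely hypothetical.

```python
# Back-of-envelope fuel-economy and cost-per-mile check for ethanol blends.
# Assumption: fuel economy scales with energy content, and ethanol carries
# about two-thirds the energy of petrol per gallon. Prices are hypothetical.

PETROL_MPG = 30.0           # flex-fuel vehicle running on pure petrol
ETHANOL_ENERGY_RATIO = 2/3  # energy per gallon, relative to petrol

def blend_mpg(ethanol_fraction: float) -> float:
    """Fuel economy of a blend, assuming mpg tracks energy per gallon."""
    energy = ethanol_fraction * ETHANOL_ENERGY_RATIO + (1 - ethanol_fraction)
    return PETROL_MPG * energy

for name, frac, price in [("petrol", 0.00, 3.60),
                          ("E10",    0.10, 3.55),
                          ("E85",    0.85, 3.10)]:
    mpg = blend_mpg(frac)
    print(f"{name:6s} {mpg:5.1f} mpg  {100 * price / mpg:5.1f} cents/mile")
```

With these illustrative prices, E85 works out at roughly 21.5mpg (a consumption penalty of about 28%, inside the 25-30% range above) and costs around 20% more per mile than petrol.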
Of course, engines designed specifically to run on ethanol can be as efficient as petrol versions. Ethanol’s higher octane rating (around 96 compared with 91 for premium-grade petrol) allows them to have a higher compression ratio, and thereby deliver more power. Unfortunately, without some special means of altering the compression ratio, such engines would quickly disintegrate if fuelled with petrol. By and large, flex-fuel vehicles sacrifice ethanol’s higher octane rating—and accept its poorer fuel economy—so they can also use widely available petrol.

Apart from cost, there are other reasons why motorists might want to avoid ethanol. A looming one concerns E15, a proposed blend containing 15% ethanol that producers would like to see replace E10. The Environmental Protection Agency (EPA) has given approval for E15 to be used in vehicles built since 2001. The reason for excluding older models is the fear that the stronger ethanol blend could finish off the vehicles' ageing fuel pumps, fuel lines, rubber seals and other parts, causing leaks and possibly fires.

Being hydrophilic, ethanol absorbs far more rust-causing water vapour from the atmosphere than petrol does. It may take years, but steel components that come in continuous contact with ethanol will eventually corrode. In tests carried out by Underwriters Laboratories, a safety-testing facility used widely by industry, that were intended to show E15 was perfectly safe to use at petrol stations, only three of the eight main components in the fuel-dispensing equipment survived the evaluation unscathed.

Even E10 can cause corrosion. Laboratory tests of 70 police cars in Baltimore, taken out of service because of misfiring and lack of power, confirmed that ethanol in the fuel had caused their filters and injectors to become clogged with corrosion debris from the fuel system. Presumably, this is happening all the time to private motorists using E10 in older vehicles, but has so far gone unreported because of the sporadic nature of the incidents.

No surprise, then, that motor manufacturers have been urging the EPA not to allow E15 on the forecourt. The last thing they want is to be hit by a string of warranty claims for corroded fuel systems. And even on cars out of warranty, all it would take would be a handful of leaky fuel lines causing disasters to whip up a fire-storm of product-liability suits. Consumer groups say that if ethanol distillers like Archer Daniels Midland and Cargill are so confident about the safety of E15, they should assume the legal responsibility for any damage it may cause. Naturally enough, such calls have fallen on deaf ears.

The problem is that the Energy Independence and Security Act, passed by Congress in 2007, requires some 36 billion gallons of renewable fuels (the bulk being ethanol made from corn) to be used in vehicles by 2022—nearly three times this year’s requirement of 14 billion gallons. Because motorists across America have started buying far more efficient motor cars, less fuel overall is being consumed. As a result, ethanol blenders are beginning to produce more than the domestic market can absorb. Hence all the lobbying to get a pipeline built to take surplus ethanol from the Midwest to ports on the East Coast—so the subsidised fuel can then be exported to Europe at American taxpayers’ expense.

The EPA’s answer is to expand the domestic market for ethanol. With the stroke of a pen, E15 would magically increase demand by 50%. The House of Representatives has sought to block such moves, citing “important safety issues” concerning E15 that the EPA has failed to address. The House has also voted to stop public money being used to pay for the special blender pumps and tanks needed for E15—something the ethanol lobby has been counting on.

But the victory for energy, environment, food supply and fiscal commonsense remains incomplete. Last week’s vote in the Senate to scrap ethanol subsidies is unlikely to become law. The underlying tax bill to which the amendment was attached does not have a hope of being passed. But the broad bipartisan appetite in Congress for putting a stop to wasteful ethanol subsidies suggests they are most unlikely to be extended when they come up for renewal in December.



Urban brains behave differently from rural ones


A New York state of mind



Shelley contemplates urban decay
“HELL is a city much like London,” opined Percy Bysshe Shelley in 1819. Modern academics agree. Last year Dutch researchers showed that city dwellers have a 21% higher risk of developing anxiety disorders than do their calmer rural countrymen, and a 39% higher risk of developing mood disorders. But exactly how the inner workings of the urban and rural minds cause this difference has remained obscure—until now. A study just published in Nature by Andreas Meyer-Lindenberg of the University of Heidelberg and his colleagues has used a scanning technique called functional magnetic-resonance imaging (fMRI) to examine the brains of city dwellers and country bumpkins when they are under stress.
In Dr Meyer-Lindenberg’s first experiment, participants lying with their heads in a scanner took maths tests that they were doomed to fail (the researchers had designed success rates to be just 25-40%). To make the experience still more humiliating, the team provided negative feedback through headphones, all the while checking participants for indications of stress, such as high blood pressure.
The urbanites’ general mental health did not differ from that of their provincial counterparts. However, their brains dealt with the stress imposed by the experimenters in different ways. These differences were noticeable in two regions: the amygdalas and the perigenual anterior cingulate cortex (pACC). The amygdalas are a pair of structures, one in each cerebral hemisphere, that are found deep inside the brain and are responsible for assessing threats and generating the emotion of fear. The pACC is part of the cerebral cortex (again, found in both hemispheres) that regulates the amygdalas.
People living in the countryside had the lowest levels of activity in their amygdalas. Those living in towns had higher levels. City dwellers had the highest. Not that surprising, to those of a Shelleyesque disposition. In the case of the pACC, however, what mattered was not where someone was living now, but where he or she was brought up. The more urban a person’s childhood, the more active his pACC, regardless of where he was dwelling at the time of the experiment.
The amygdalas thus seem to respond to the here-and-now whereas the pACC is programmed early on, and does not react in the same, flexible way as the amygdalas. Second-to-second changes in its activity might, though, be expected to be correlated with changes in the amygdalas, because of its role in regulating them. fMRI allows such correlations to be measured.
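In practice, that coupling is measured as a correlation between the two regions’ activity traces. A minimal sketch of the computation, in Python with NumPy, using synthetic signals rather than the study’s data:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-ins for two regional fMRI (BOLD) time series, sampled
# every couple of seconds over a scan -- NOT data from the study.
n = 300
pacc = rng.standard_normal(n)
amygdala = 0.6 * pacc + 0.8 * rng.standard_normal(n)  # partly driven by pACC

def coupling(a: np.ndarray, b: np.ndarray) -> float:
    """Pearson correlation between two regional activity time series."""
    return float(np.corrcoef(a, b)[0, 1])

print(f"pACC-amygdala coupling: r = {coupling(pacc, amygdala):.2f}")
```

A healthy regulatory link shows up as a reliably non-zero correlation; the native urbanites were the group in whom this expected correlation broke down.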
In the cases of those brought up in the countryside, regardless of where they now live, the correlations were as expected. For those brought up in cities, however, these correlations broke down. The regulatory mechanism of the native urbanite, in other words, seems to be out of kilter. Further evidence, then, for Shelley’s point of view. Moreover, it is also known that the pACC-amygdala link is often out of kilter in schizophrenia, and that schizophrenia is more common among city dwellers than country folk. Dr Meyer-Lindenberg is careful not to claim that his results show the cause of this connection. But they might.
Dr Meyer-Lindenberg and his team conducted several subsequent experiments to check their findings. They asked participants to complete more maths tests—and also tests in which they mentally rotated an object—while investigators chided them about their performance. The results matched those of the first test. They also studied another group of volunteers, who were given stress-free tasks to complete. These experiments showed no activity in either the amygdalas or the pACC, suggesting that the earlier findings were indeed caused by social stress rather than by mental exertion.
As is usually the case in studies of this sort, the sample size was small (and therefore not as robust as might be desirable) and the result showed an association, rather than a definite, causal relationship. That association is, nevertheless, interesting. Living in cities brings many benefits, but Dr Meyer-Lindenberg’s work suggests that Shelley and his fellow Romantics had at least half a point.

In Search of the Memory Molecule, Researchers Discover Key Protein Complex


The CaMKII molecule has 12 lobes (6 are shown here), each of which has enzymatic activity. This molecule can bind to the NMDA receptor, forming a complex. The number of such complexes at the synapse may increase the amount of memory that can be stored. (Credit: Neal Waxham)
Science Daily — Have a tough time remembering where you put your keys, learning a new language or recalling names at a cocktail party? New research from the Lisman Laboratory at Brandeis University points to a molecule that is central to the process by which memories are stored in the brain.
New research by John Lisman, professor of biology and the Zalman Abraham Kekst chair in neuroscience, helps explain how memories are stored at synapses. His work builds on previous studies showing that changes in the strength of these synapses are critical in the process of learning and memory. A paper published in the June 22 issue of the Journal of Neuroscience describes the new findings.
The brain is composed of neurons that communicate with each other through structures called synapses, the contact points between neurons. Synapses convey electrical signals from the "sender" neuron to the "receiver" neuron. Importantly, a synapse can vary in strength: a strong synapse has a large effect on its target cell, while a weak synapse has little effect.
"It is now quite clear that memory is encoded not by the change in the number of cells in the brain, but rather by changes in the strength of synapses," Lisman says. "You can actually now see that when learning occurs, some synapses become stronger and others become weaker."
But what is it that controls the strength of a synapse?
Lisman and others have previously shown that a particular molecule called Ca/calmodulin-dependent protein kinase II (CaMKII) is required for synapses to change their strength. Lisman's team is now showing that synaptic strength is controlled by the complex of CaMKII with another molecule called the NMDAR-type glutamate receptor (NMDAR). His lab has discovered that the amount of this molecular complex (called the CaMKII/NMDAR complex) actually determines how strong a synapse is, and, most likely, how well a memory is stored.
"We're claiming that if you looked at a weak synapse you'd find a small number of these complexes, maybe one," says Lisman. "But at a strong synapse you might find many of these complexes."
A key finding in their experiment used a procedure that reduced the amount of this complex. When the complex was reduced, the synapse became weaker. This weakening was persistent, indicating that the memory stored at that synapse was erased.
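To make that claim concrete, here is a deliberately crude toy model, not the paper’s, in which a synapse’s strength is simply proportional to its count of CaMKII/NMDAR complexes and a CN19-like agent persistently removes some of them:

```python
# Toy caricature of the idea above: synaptic strength proportional to the
# number of CaMKII/NMDAR complexes. Purely illustrative, with made-up units.

STRENGTH_PER_COMPLEX = 1.0  # arbitrary units (hypothetical)

class Synapse:
    def __init__(self, n_complexes: int):
        self.n_complexes = n_complexes

    @property
    def strength(self) -> float:
        return STRENGTH_PER_COMPLEX * self.n_complexes

    def apply_cn19(self, fraction_dissolved: float) -> None:
        """Dissolving complexes weakens the synapse, and the change persists."""
        self.n_complexes = round(self.n_complexes * (1 - fraction_dissolved))

weak, strong = Synapse(1), Synapse(12)
print(weak.strength, strong.strength)  # 1.0 12.0 -- weak vs strong synapse
strong.apply_cn19(0.75)                # "erase" most of the stored memory
print(strong.strength)                 # 3.0 -- persistently weaker
```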
The experiments were done using small slices of rat hippocampus, the part of the brain crucial for memory storage.
"We can artificially induce learning-like changes in the strength of synapses because we know the firing pattern that occurs during actual learning in an animal," Lisman says.
To prove their hypothesis, he explained, his team first strengthened the synapse, eventually saturating it to the point where no more learning or memory could take place. They then added a chemical called CN19 to the synapse, which they suspected would dissolve the CaMKII/NMDAR complex. As predicted, it did in fact make the synapse weaker, suggesting the loss of memory.
A final experiment, says Lisman, was the most exciting: They started out by making the synapse so strong that it was "saturated," as indicated by the fact that no further strengthening could be induced. They then "erased" the memory with the chemical CN19. If the "memory" was really erased, the synapse should no longer be saturated. To test this hypothesis, Lisman's team again stimulated the synapse and found that it could once again "learn." Taken together, these results demonstrated the ability of CN19 to erase the memory of a synapse -- a critical criterion for establishing that the CaMKII/NMDAR complex is the long-sought memory-storage molecule in the brain.
Lisman's team chose CN19 because previous studies indicated that the chemical could affect the CaMKII/NMDAR complex. They also wanted to show that CN19 would decrease the complex in living cells, and several key control experiments proved this to be the case.
"Most people accept that the change in the synapses that you can see under the microscope is the mechanism that actually occurs during learning," says Lisman. "So this paper will have a lot of impact -- but in science you still have to prove things, so the next step would be to try this in an actual animal and see if we can make it forget something it has previously learned."
Lisman says that if memory is understood at the biochemical level, the impact will be enormous.
"You have to understand how memory works before you can understand the diseases of memory."
Lisman assembled a large team to undertake this complex research. Key collaborators included Magdalena Sanhueza, who once worked with Dr. Lisman at Brandeis, and her student German Fernandez-Villalobos, both now in the Department of Biology at the University of Chile, as well as Ulli Bayer of the Department of Pharmacology at the University of Colorado Denver School of Medicine, who developed the particular form of CN19 that could actually enter neurons.
Others involved include Nikolai Otmakhov and Peng Zhang from Brandeis and Gyulnara Kasumova, who worked in the Lisman laboratory for several years as an undergraduate. An additional group contributing to the work was that of Johannes Hell, professor of pharmacology at the UC Davis School of Medicine. He and his student, Ivar S. Stein, used immunoprecipitation methods to show that CN19 had indeed dissolved the CaMKII/NMDAR complex.

Deep History of Coconuts Decoded: Origins of Cultivation, Ancient Trade Routes, and Colonization of the Americas



Analysis of coconut DNA revealed much more structure than scientists expected given the long history of coconut exploitation by people. Written in the DNA are two origins of cultivation and many journeys of exploration and colonization. (Credit: Kenneth Olsen/WUSTL)
Science Daily — The coconut (the fruit of the palm Cocos nucifera) is the Swiss Army knife of the plant kingdom; in one neat package it provides a high-calorie food, potable water, fiber that can be spun into rope, and a hard shell that can be turned into charcoal. What's more, until it is needed for some other purpose it serves as a handy flotation device.

No wonder people from ancient Austronesians to Captain Bligh pitched a few coconuts aboard before setting sail. (The mutiny on the Bounty is supposed to have been triggered by Bligh's harsh punishment of the theft of coconuts from the ship's store.)

So extensively is the history of the coconut interwoven with the history of people traveling that Kenneth Olsen, a plant evolutionary biologist, didn't expect to find much geographical structure to coconut genetics when he and his colleagues set out to examine the DNA of more than 1,300 coconuts from all over the world.

"I thought it would be mostly a mish-mash," he says, thoroughly homogenized by humans schlepping coconuts with them on their travels.
He was in for a surprise. It turned out that there are two clearly differentiated populations of coconuts, a finding that strongly suggests the coconut was brought under cultivation in two separate locations, one in the Pacific basin and the other in the Indian Ocean basin. What's more, coconut genetics also preserve a record of prehistoric trade routes and of the colonization of the Americas.
The discoveries of the team, which included Bee Gunn, now of the Australian National University in Australia, and Luc Baudouin of the Centre International de Recherches en Agronomie pour le Développement (CIRAD) in Montpellier, France, as well as Olsen, associate professor of biology at Washington University in St. Louis, are described in the June 23 online issue of the journal PLoS ONE.
Morphology a red herring
Before the DNA era, biologists recognized a domesticated plant by its morphology. In the case of grains, for example, one of the most important traits in domestication is the loss of shattering, or the tendency of seeds to break off the central grain stalk once mature.
The trouble was it was hard to translate coconut morphology into a plausible evolutionary history.
There are two distinctively different forms of the coconut fruit, known as niu kafa and niu vai, Samoan names for traditional Polynesian varieties. The niu kafa form is triangular and oblong with a large fibrous husk. The niu vai form is rounded and contains abundant sweet coconut "water" when unripe.
"Quite often the niu vai fruit are brightly colored when they're unripe, either bright green, or bright yellow. Sometimes they're a beautiful gold with reddish tones," says Olsen.
Coconuts have also been traditionally classified into tall and dwarf varieties based on the tree "habit," or shape. Most coconuts are talls, but there are also dwarfs that are only several feet tall when they begin reproducing. The dwarfs account for only 5 percent of coconuts.
Dwarfs tend to be used for "eating fresh," and the tall forms for coconut oil and for fiber.
"Almost all the dwarfs are self fertilizing and those three traits -- being dwarf, having the rounded sweet fruit, and being self-pollinating -- are thought to be the definitive domestication traits," says Olsen.
"The traditional argument was that the niu kafa form was the wild, ancestral form that didn't reflect human selection, in part because it was better adapted to ocean dispersal," says Olsen. Dwarf trees with niu vai fruits were thought to be the domesticated form.
The trouble is it's messier than that. "You almost always find coconuts near human habitations," says Olsen, and "while the niu vai is an obvious domestication form, the niu kafa form is also heavily exploited for copra (the dried meat ground and pressed to make oil) and coir (fiber woven into rope)."
"The lack of universal domestication traits together with the long history of human interaction with coconuts, made it difficult to trace the coconut's cultivation origins strictly by morphology," Olsen says.
DNA was a different story.
Collecting coconut DNA
The project got started when Gunn, who had long been interested in palm evolution, and who was then at the Missouri Botanical Garden, contacted Olsen, who had the laboratory facilities needed to study palm DNA.
Together they won a National Geographic Society grant that allowed Gunn to collect coconut DNA in regions of the western Indian Ocean for which there were no data. She sent snippets of leaf tissue from the center of each coconut tree's crown home in zip-lock bags to be analyzed.
"We had reason to suspect that coconuts from these regions -- especially Madagascar and the Comoros Islands -- might show evidence of ancient 'gene flow' events brought about by ancient Austronesians setting up migration routes and trade routes across the southern Indian Ocean," Olsen says.
Olsen's lab genotyped 10 microsatellite regions in each palm sample. Microsatellites are regions of stuttering DNA where the same few nucleotide units are repeated many times. Mutations pop up and persist pretty easily in these regions because they usually don't affect traits that are important to survival and so aren't selected against, says Olsen. "So we can use these genetic markers to 'fingerprint' the coconut," he says.
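In that scheme, each palm's 'fingerprint' is just its set of repeat counts across the 10 marker loci, and palms can be compared by counting the loci at which they differ. A minimal sketch in Python, with invented numbers:

```python
# Hypothetical microsatellite fingerprints: repeat counts at the same ten
# marker loci for two palms. All numbers are invented for illustration.
palm_a = [12, 9, 15, 7, 21, 11, 18, 6, 14, 10]
palm_b = [12, 11, 15, 8, 21, 11, 16, 6, 14, 10]

def fingerprint_distance(a: list, b: list) -> int:
    """Number of marker loci at which two palms' repeat counts differ."""
    return sum(x != y for x, y in zip(a, b))

print(fingerprint_distance(palm_a, palm_b))  # -> 3 mismatching loci
```

Closely related palms share most of their repeat counts; clustering many such pairwise comparisons is what reveals population structure.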
The new collections were combined with a vast dataset that had been established by CIRAD, a French agricultural research center, using the same genetic markers. "These data were being used for things like breeding, but no one had gone through and systematically examined the genetic variation in the context of the history of the plant," Olsen says.
Two origins of cultivation
The most striking finding of the new DNA analysis is that the Pacific and Indian Ocean coconuts are quite distinct genetically. "About a third of the total genetic diversity can be partitioned between two groups that correspond to the Indian Ocean and the Pacific Ocean," says Olsen.
"That's a very high level of differentiation within a single species and provides pretty conclusive evidence that there were two origins of cultivation of the coconut," he says.
In the Pacific, coconuts were likely first cultivated in island Southeast Asia, meaning the Philippines, Malaysia, Indonesia, and perhaps the continent as well. In the Indian Ocean the likely center of cultivation was the southern periphery of India, including Sri Lanka, the Maldives, and the Laccadives.
The definitive domestication traits -- the dwarf habit, self-pollination and niu vai fruits -- arose only in the Pacific, however, and then only in a small subset of Pacific coconuts, which is why Olsen speaks of origins of cultivation rather than of domestication.
"At least we have it easier than scientists who study animal domestication," he says. "So much of being a domesticated animal is being tame, and behavioral traits aren't preserved in the archeological record."
Did it float or was it carried?
One exception to the general Pacific/Indian Ocean split is the western Indian Ocean, specifically Madagascar and the Comoros Islands, where Gunn had collected. The coconuts there are a genetic mixture of the Indian Ocean type and the Pacific type.
Olsen and his colleagues believe the Pacific coconuts were introduced to the Indian Ocean a couple of thousand years ago by ancient Austronesians establishing trade routes connecting Southeast Asia to Madagascar and coastal east Africa.
Olsen points out that no genetic admixture is found in the more northerly Seychelles, which fall outside the trade route. He adds that a recent study of rice varieties found in Madagascar shows there is a similar mixing of the japonica and indica rice varieties from Southeast Asia and India.
To add to the historical shiver, the descendants of the people who brought the coconuts and rice are still living in Madagascar. The present-day inhabitants of the Madagascar highlands are descendants of the ancient Austronesians, Olsen says.
Much later the Indian Ocean coconut was transported to the New World by Europeans. The Portuguese carried coconuts from the Indian Ocean to the West Coast of Africa, Olsen says, and the plantations established there were a source of material that made it into the Caribbean and also to coastal Brazil.
So the coconuts that you find today in Florida are largely the Indian Ocean type, Olsen says, which is why they tend to have the niu kafa form.
On the Pacific side of the New World tropics, however, the coconuts are Pacific Ocean coconuts. Some appear to have been transported there in pre-Columbian times by ancient Austronesians moving east rather than west.
During the colonial period, the Spanish brought coconuts to the Pacific coast of Mexico from the Philippines, which was for a time governed on behalf of the King of Spain from Mexico.
This is why, Olsen says, you find Pacific type coconuts on the Pacific coast of Central America and Indian type coconuts on the Atlantic coast.
"The big surprise was that there was so much genetic differentiation clearly correlated with geography, even though humans have been moving coconut around for so long."
Far from being a mish-mash, coconut DNA preserves a record of human cultivation, voyages of exploration, trade and colonization.

How Solar Arrays Are Made



A new lab is inventing alternative ways to package and install solar cells.
BY KEVIN BULLIS

Fraunhofer scientist Theresa Christian tests the power output of a solar cell, the basic device within a solar panel that absorbs light and converts it into electricity. The lab doesn’t design solar cells, but building solar panels requires knowing how well they perform, because a panel’s power output is limited by its worst-performing cell.
Credit: Porter Gifford

When the Snake Bites ... Try Ointment

Don't tread on me. An ointment might help people bitten by the eastern brown snake of Australia or other snakes.
Credit: ANT Photo Library /Photo Researchers, Inc.
Time is the foe for people who have been bitten by a venomous snake, but a new study may give them a bit more of it. Researchers have identified an ointment that slows the spread of some kinds of snake venom through the body, potentially giving snakebite victims longer to reach a hospital or clinic.
Although venomous snakes kill only a handful of people in the United States each year, the World Health Organization puts the global toll at about 100,000 people. When some snakes strike, the bulky proteins in their venom don't infiltrate the bloodstream immediately but wend through the lymphatic system to the heart. In Australia, a country slithering with noxious snakes, the recommended first aid for a bite includes tightly wrapping the bitten limb to shut the lymphatic vessels—a method called pressure bandage with immobilization (PBI). The idea is to hamper the venom's spread until the victim can receive antivenom medicine, essentially antibodies that lock onto and neutralize the venom. But PBI is not practical if the bite is on the torso or face, and one study found that even people trained to perform the technique do it right only about half the time. As a result, some people don't get antivenom in time.
So physiologist Dirk van Helden of the University of Newcastle in Australia and colleagues went looking for a chemical method to detain the venom. They settled on an ointment that contains glyceryl trinitrate, the compound better known as nitroglycerin that doctors have used to treat everything from tennis elbow to angina. The ointment, prescribed for a painful condition called anal fissures, releases nitric oxide, causing the lymphatic vessels to clench. The researchers first injected volunteers in the foot with a harmless radioactive mixture that, like some snake toxins, moves through the lymphatic vessels. In control subjects that didn't receive the ointment, the mixture took 13 minutes to climb to the top of the leg. But it required 54 minutes if the researchers immediately smeared the ointment around the injection site, the team reports online today in Nature Medicine.
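For perspective, the slowdown those transit times imply is a one-line calculation (Python; the minutes come from the text):

```python
# Lymphatic transit times reported above, in minutes.
CONTROL_TRANSIT = 13   # radioactive tracer reaches the top of the leg, no ointment
OINTMENT_TRANSIT = 54  # the same journey after the nitroglycerin ointment

print(f"ointment slows lymphatic transit ~{OINTMENT_TRANSIT / CONTROL_TRANSIT:.1f}-fold")
```

That is roughly a fourfold delay in the tracer's journey, though, as the rat experiments below suggest, the survival benefit is smaller.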
To determine whether the ointment improved survival, the researchers injected the feet of anesthetized rats with venom from the eastern brown snake, a cobra relative that is one of Australia's deadliest, and measured how much time elapsed before the rodents stopped breathing. Rats lived about 50% longer if the researchers slathered the rodents' hind limbs with the cream.
Although the team can't specify how many minutes or hours the treatment might buy, the findings suggest that "it gives you time and a half to get help," says van Helden. "I'd prefer that to just time." He says that hikers and people who work in rural areas might consider carrying the cream in case they get bitten when they are far from medical facilities.
The method is "very exciting," says Steven Seifert, medical director of the New Mexico Poison and Drug Information Center in Albuquerque. "It makes sense to try to slow the passage of the venom into the circulation." Medical toxicologist Eric Lavonas, associate director of the Rocky Mountain Poison and Drug Center in Denver, Colorado, is also impressed. "This is really promising," he says. The authors "did the right studies to evaluate this approach."
Still, Seifert and Lavonas question whether such a treatment would do much good in the United States. Australian snakes largely inject neurotoxic venom that spreads through the body and attacks the nervous system, triggering paralysis. The perpetrators of most U.S. snakebites are rattlesnakes, copperheads, and cottonmouths, which inject a different type of venom that mainly destroys the tissue near the bite. But the researchers note that the ointment could prove valuable in many other countries inhabited by dangerous snakes, such as cobras, mambas, and kraits, that produce neurotoxic venom. "If this treatment pans out, it may revolutionize first aid for snakebite in parts of the world where venom causes paralysis," Lavonas says.

Ladybirds fool ants for food



CSIRO   

A predatory ladybird larva (Cryptolaemus montrouzieri) is disguised and protected by its woolly coat of wax filaments.
Image: David Cappaert, Michigan State University
CSIRO research has revealed that the tremendous diversity of ladybird beetle species is linked to their ability to produce larvae which, with impunity, poach members of ‘herds’ of tiny, soft-bodied scale insects from under the noses of the aggressive ants that tend them.

Reconstructing the evolutionary history of ladybird beetles (family Coccinellidae), the researchers found that the ladybirds’ first major evolutionary shift was from feeding on hard-bodied (“armoured”) scale insects to soft-bodied scale insects.

“Soft-bodied scales are easier to eat, but present a whole new challenge,” says Dr Ainsley Seago, a researcher with the CSIRO’s Australian National Insect Collection.

“These soft-bodied sap-feeding insects are tended by ants, which guard the defenceless scales and collect a ‘reward’ of sugary honeydew. The ant tenders aggressively defend their scale insect ‘livestock’ and are always ready to attack any predator that threatens their herd.”

Therein lay the evolutionary problem confronting ladybird beetles, whose larvae were highly vulnerable to ant attack.

To avoid being killed as they poach the ants' scale insects, ladybird larvae evolved to produce two anti-ant defences: an impregnable woolly coat of wax filaments, and glands which produce defensive chemicals. Most of the ladybird family's 6,000 species are found in lineages with one or both of these defences.

“We found that most of ladybird species’ richness is concentrated in groups with these special larval defences,” Dr Seago said.

“These groups are more successful than any other lineage of ladybird beetle. Furthermore, these defences have been ‘lost’ in the few species that have abandoned soft-scale poaching in favour of eating pollen or plant leaves.

“This is an unusual way for diversity to arise in an insect group.

“In most previous research, insect species richness has been linked to co-evolution or adaptive ‘arms races’ with plants.”

This research helps to place Australia’s ladybirds in the evolutionary tree of life for insects, and helps us to understand the complex system of mechanisms by which beetle diversity has arisen.