
Thursday, March 1, 2012

Eight Cars That Will Save You Tons




Looking to save some money every day? When you purchase your next car, make it a fuel-efficient one! Here are the most fuel-efficient models on the market today.
24/7 Wall Street shares…
8. 2012 Honda Insight
>MPG: 42
>Engine type: hybrid
>Car type: Front wheel drive, compact
>Fuel cost per year: $1,329
>Sticker price/base model: $18,350
Two Honda (NYSE: HMC) Insights made the list of fuel-efficient cars — the base Insight and the AV-S7 model. Unlike some of the other cars that get very high gas mileage, the Insight has full seating capacity for five people, which Honda’s smaller hybrid, the CR-Z, does not. The base price of the Honda Insight is well below that of most of the cars on this list, which probably reflects Honda’s attempt to lure hybrid buyers away from the mass market. Honda has added a number of interactive features meant to “engage” the driver in the fuel-monitoring experience. This includes an “ECON Button” that modifies various vehicle systems to help minimize its overall energy use — and maximize fuel efficiency.
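The annual fuel-cost figures quoted for the gasoline hybrids on this list appear to follow the EPA's standard assumption of 15,000 miles driven per year; working backward from the quoted numbers implies a gas price of about $3.72 per gallon (my assumption, consistent with early-2012 prices — the electric entries use electricity rates instead, so this does not apply to them). A minimal Python sketch reproduces the hybrid figures:

```python
# Back-derived formula for the annual fuel-cost figures on this list:
# gallons burned per year (miles / MPG) times price per gallon.
# Both constants are assumptions, not stated in the article.

MILES_PER_YEAR = 15_000       # EPA's standard annual-mileage assumption
PRICE_PER_GALLON = 3.72       # implied early-2012 average gas price

def annual_fuel_cost(mpg: float) -> int:
    """Annual fuel cost in dollars, rounded to the nearest dollar."""
    return round(MILES_PER_YEAR / mpg * PRICE_PER_GALLON)

for name, mpg in [("Honda Insight", 42), ("Camry Hybrid", 41),
                  ("Civic Hybrid", 44), ("Prius", 50)]:
    print(f"{name}: ${annual_fuel_cost(mpg):,}")
```

Run against the list's MPG numbers, this reproduces the quoted costs ($1,329, $1,361, $1,268, and $1,116) to the dollar.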
7. 2012 Toyota Camry Hybrid LE
>MPG: 41
>Engine type: hybrid
>Car type: Front wheel drive, mid-sized
>Fuel cost per year: $1,361
>Sticker price/base model: $29,500
The Camry Hybrid is a good example of a car manufacturer adding a hybrid version to a well-known, established line of gas-driven models. The Camry is Toyota's base four-door sedan, which sells for $21,955. That base model averages only 25 MPG in city driving, with fuel costs of up to $1,993 per year. The hybrid version costs roughly $8,000 more than the base model, so consumers have to weigh the upfront premium against savings on fuel costs over time. Toyota (NYSE: TM) has placed the Camry Hybrid just above the Prius in both size and price.
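The upfront-premium-versus-fuel-savings trade-off can be made concrete with the article's own figures. This is a rough sketch only — it ignores financing, resale value, tax incentives, and future gas-price changes:

```python
# Rough payback estimate for the Camry Hybrid vs. the base Camry, using the
# sticker prices and annual fuel costs quoted above. (The article rounds the
# price gap to $8,000; the quoted stickers put it at $7,545.)

hybrid_premium = 29_500 - 21_955   # hybrid sticker minus base sticker
annual_savings = 1_993 - 1_361     # base fuel cost minus hybrid fuel cost

payback_years = hybrid_premium / annual_savings
print(f"Premium ${hybrid_premium:,}, saving ${annual_savings}/yr "
      f"-> payback in about {payback_years:.1f} years")
```

At the quoted figures the hybrid premium takes roughly 12 years of fuel savings to recover, which is why the trade-off the article mentions is a real one for buyers.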
6. 2012 Lexus CT 200h
>MPG: 42
>Engine type: hybrid
>Car type: Front wheel drive, luxury compact
>Fuel cost per year: $1,329
>Sticker price/base model: $29,120
Toyota’s luxury nameplate, Lexus, is one of a growing number of luxury car lines that have begun to offer hybrids. The fuel-efficiency trend has progressed enough that even some full-sized SUVs now come with hybrid engines. The full-sized flagship Lexus LS comes in a hybrid version, selling for $129,750. The CT, a five-door hatchback, is the brand’s entry-level vehicle. It comes in a base and an “F” series sport model. The base price for the “F” is $37,995. The car goes from zero to 60 in 9.8 seconds. That qualifies as slow for a sports car, but not for one that gets 42 MPG.
5. 2012 Honda Civic Hybrid
>MPG: 44
>Engine type: hybrid
>Car type: Front wheel, four-door sedan
>Fuel cost per year: $1,268
>Sticker price/base model: $24,050
Honda took the highly successful Civic, already known for its fuel efficiency and quality ratings, and added a hybrid engine option to its lineup. The Civic now comes in seven models that range from a $15,805 sedan to the high-end “Si” coupe. Honda will push further into the alternative-engine space with a new Civic Natural Gas model, which has a base price of $26,155. Satellite links and luxury packages can push the price of the Civic Hybrid well over $27,000. Honda has begun to offer attractive financing packages to spur Civic Hybrid sales, including a 0.9% financing option for terms as long as 60 months. This could mean either that Honda cannot sell many of the cars, or that it is willing to invest to take market share from its rivals.
4. 2012 Toyota Prius
>MPG: 50
>Engine type: hybrid
>Car type: Front wheel drive mid-sized
>Fuel cost per year: $1,116
>Sticker price/base model: $23,015
The Toyota Prius has three models among the top 11 most fuel-efficient cars sold in America. The Prius now comes in a base model, a smaller “c” model designed for urban driving, and the Prius V four-door wagon. The Prius is the undisputed king of the alternative-energy car market. The car has gone through three generations of development since it was first sold in Japan in 1997. Toyota also produced an all-electric version last year, when Prius sales passed the three-million mark worldwide. The Prius and the Honda Fit hybrid trade places as the top-selling car each month in Japan; for the full year 2011, the Prius took the top spot with 252,528 units sold.
3. 2012 Azure Dynamics Transit Connect Electric Van
>MPG-equivalent: 62
>Engine type: Electric
>Car type: Front wheel drive van
>Fuel cost per year: $972
>Sticker price/base model: $22,035
Unlike the other vehicles on this list, the Azure Transit Connect is a commercial truck. It was launched by Ford (NYSE: F) and Canadian car-component company Azure in mid-2010. It comes in both a basic and a wagon version. The initial sales goals for the light truck were extremely modest: Reuters reports that sales are expected to be less than 2,000 this year. Like most commercial vans, the Transit Connect has a full rear door, two floor-to-ceiling side doors, and a wheelbase longer than most passenger cars’.
2. 2012 Nissan Leaf
>MPG-equivalent: 99
>Engine type: Electric
>Car type: Front wheel drive, mid-sized
>Fuel cost per year: $612
>Sticker price/base model: $35,200 (editor’s note: does not include tax credit)
The Leaf was a major model launch for Nissan and its partner Renault. Reuters reported at the time the Leaf was first released that “Nissan and Renault are counting on an aggressive push into the nascent electric car market to boost their brand image — much as the Prius hybrid did for Toyota Motor Corp.” The Leaf was originally available in only seven states — Arizona, California, Hawaii, Oregon, Tennessee, Texas and Washington. By July 2011, as more capacity came online, Nissan marketed the Leaf in a number of states. Unlike Mitsubishi, Nissan already has a large presence in the U.S. The company sold 79,313 cars and light trucks in January, up 10% from the same month in 2011. With a nearly 9% share of the American market, Nissan has the dealer network and marketing tools to push the Leaf as a major alternative engine car.
1. 2012 Mitsubishi i-MiEV
>MPG-equivalent: 112
>Engine type: Electric
>Car type: Rear wheel drive subcompact
>Fuel cost per year: $540
>Sticker price/base model: $29,125
Mitsubishi is one of the least successful major car companies offering a fleet of cars and light trucks in the U.S. During January, Mitsubishi sold only 4,711 cars in North America, down 18% from January of last year. The Japanese company has, however, decided to offer the i-MiEV early this year to compete with better-known electric cars like the Chevy Volt and Nissan Leaf. The federal government is so eager to drive the market for fuel-efficient cars that it offers a $7,500 tax credit to people who buy the car. The American Council for an Energy-Efficient Economy recently named it the Greenest Car — the first time an electric vehicle has taken the number one spot.
Get the entire article at 24/7 Wall Street!

Jobs @ Multiple Openings in our PHP Department at iFlair Web Technologies Pvt. Ltd., Ahmedabad.


We have new openings in our PHP Development Department.

Position:

Junior PHP Developer (3 positions), Experience: 6 months to 1.5 years
Senior PHP Developer (3 positions), Experience: 1.5 to 3 years
PHP Team Leader with good communication skills (2 positions), Experience: 3 to 5 years

Desired Profile:

1. Good experience in PHP/MySQL
2. Knowledge of AJAX/JavaScript/JSON/jQuery
3. Strong technical & research skills
4. Knowledge of open-source platforms [WordPress, Joomla, Magento, Drupal, OsCommerce, Zencart] is an added advantage

Please send your latest resume ASAP to php@iflair.com with the following details.

Total Experience:
Current Company :
Experience in PHP:
How long you have been with this company:
Designation:
Current in-hand salary:
Expected Salary:
Required Joining Period:


You can also call 9558803176 to schedule an interview at your earliest convenience.

Company Profile: iFlair Web Technologies Pvt. Ltd. is a leading custom website development and open-source development company. We are an 8-year-old, 100% offshore outsourcing company.
We have completed more than 500 PHP projects with a staff of 45+ employees. We work with custom PHP apps, WordPress, Joomla, Magento, Drupal, OsCommerce, Zencart, mobile development, etc.

Development Center: Nr. Parimal Under Bridge, Ahmedabad.

Have a good day!!!

Epigenetic culprit in Alzheimer's memory decline




Blockade of learning and memory genes may occur early in Alzheimer's disease. In a mouse model of Alzheimer's disease (right), HDAC2 levels in the hippocampus are higher than in the normal mouse hippocampus (left). Credit: Dr. Li-Huei Tsai, Massachusetts Institute of Technology
In a mouse model of Alzheimer's disease, memory problems stem from an overactive enzyme that shuts off genes related to neuron communication, a new study says.
When researchers genetically blocked the enzyme, called HDAC2, they 'reawakened' some of the neurons and restored the animals' cognitive function. The results, published February 29, 2012, in the journal Nature, suggest that drugs that inhibit this particular enzyme would make good treatments for some of the most devastating effects of the incurable neurodegenerative disease.
"It's going to be very important to develop selective chemical inhibitors against HDAC2," says Howard Hughes Medical Institute investigator Li-Huei Tsai, whose team at the Massachusetts Institute of Technology performed the experiments. "If we could delay the cognitive decline by a certain period of time, even six months or a year, that would be very significant."
In every cell, DNA wraps itself around proteins called histones. Chemical groups such as methyl and acetyl can bind to histones and affect DNA expression. HDAC2 is a histone deacetylase, an enzyme that removes acetyl groups from the histone, effectively turning off nearby genes.
In 2007, Tsai's group reported in Nature that this so-called epigenetic change can contribute to cognitive decline. They used a strain of mutant mice developed in her lab called CK-p25, which shows a profound loss of neurons and synapses, the junctions between neurons. The animals also carry the amyloid-beta plaques thought to cause Alzheimer's disease and show impaired learning and memory. When Tsai's team gave the mice drugs that block all HDACs, the animals sprouted more synapses and showed better memory function.
There are 19 known HDACs. In 2009, the researchers found that one of these, HDAC2, can cause a loss of synapses and memory function in normal mice.
The new study pulls from both of these previous findings, investigating HDAC2's effect on CK-p25 mice.
The researchers showed that the mutant animals have an elevated level of HDAC2 in two regions known to be affected in neurodegenerative disease: the hippocampus, important for learning and memory, and part of the temporal lobe called the entorhinal cortex. In these regions, the researchers also found that HDAC2 binds to a host of memory genes and dampens their expression.
Tsai's team then used a technique called RNA interference to silence the expression of HDAC2 in neurons in the hippocampus. Four weeks later, they found a dramatic increase in synaptic density. What's more, when given two different memory tests, the treated animals were indistinguishable from normal controls.
Blocking HDAC2 expression did not change the number of dying neurons. Still, the findings suggest that memory can be improved even in later stages of the disease, Tsai says.
"The neurons that are still alive are essentially zombies: they're not really functioning properly because of the epigenetic blockade," Tsai says. "What we're showing is that, if we can get some of those neurons to wake up, we can get cognitive function to recover to a certain extent."
Using hippocampal neurons grown in culture, Tsai also uncovered a potential mechanism that raises the level of HDAC2 in the first place. She showed that amyloid beta and oxidative stress—both risk factors for Alzheimer's disease—can activate a protein called the glucocorticoid receptor 1. This receptor, in turn, can switch on the runaway expression of HDAC2.
"The striking thing is that amyloid beta has a very, very acute effect in elevating HDAC2 expression, but then the consequences can be very long term," Tsai says. This mechanism could explain why clinical trials of drugs that clear out amyloid beta in people with Alzheimer's haven't worked very well, she says.
Finally, Tsai's team looked at postmortem brain tissue from people who died of Alzheimer's disease. These samples, like those in mice, had elevated levels of HDAC2 in the hippocampus and entorhinal cortex.
The clinical applications of this work are promising, Tsai says, but it's important not to oversell the findings. "While all the data look very promising in animal models, human studies are a completely different ball game," she says. "We need to do clinical trials to see whether this concept holds up."
More information: Graff J et al. "An epigenetic blockade of cognitive functions in the neurodegenerating brain." Nature, February 29, 2012.
Provided by Howard Hughes Medical Institute

"Epigenetic culprit in Alzheimer's memory decline." February 29th, 2012. http://medicalxpress.com/news/2012-02-epigenetic-culprit-memory-decline.html
Posted by
Robert Karl Stonjek

In what ways does lead damage the brain?




Exposure to lead wreaks havoc in the brain, with consequences that include lower IQ and reduced potential for learning. But the precise mechanism by which lead alters nerve cells in the brain has largely remained unknown.
New research led by Tomás R. Guilarte, PhD, Leon Hess Professor and Chair of Environmental Health Sciences at Columbia University Mailman School of Public Health, and post-doctoral research scientist Kirstie H. Stansfield, PhD, used high-powered fluorescent microscopy and other advanced techniques to painstakingly chart the varied ways lead inflicts its damage. They focused on signaling pathways involved in the production of brain-derived neurotropic factor, or BDNF, a chemical critical to the creation of new synapses in the hippocampus, the brain's center for memory and learning.
The study appears online in the journal Toxicological Sciences.
Once BDNF is produced in the nucleus, explains Dr. Stansfield, it is transported as cargo in a railroad-car-like vesicle along a track called a microtubule toward sites of release in the axon and dendritic spines. Vesicle navigation is controlled in part through activation (phosphorylation) of the huntingtin protein, which as its name suggests, was first identified through research into Huntington's disease. By looking at huntingtin expression, the researchers found that lead exposure, even in small amounts, is likely to impede or reverse the train by altering phosphorylation at a specific amino acid.
The BDNF vesicle transport slowdown is just one of a variety of ways that lead impedes BDNF's function. The researchers also explored how lead curbs production of BDNF in the cell nucleus. One factor, they say, may be a protein called methyl CpG binding protein 2, or MeCP2, which has been linked with RETT syndrome and autism spectrum disorders and acts to "silence" BDNF gene transcription.
The paper provides the first comprehensive working model of the ways by which lead exposure impairs synapse development and function. "Lead attacks the most fundamental aspect of the brain—the synapse. But by better understanding the numerous and complex ways this happens we will be better able to develop therapies that ameliorate the damage," says Dr. Guilarte.
Provided by Columbia University

"In what ways does lead damage the brain?." February 29th, 2012. http://medicalxpress.com/news/2012-02-ways-brain.html
Posted by
Robert Karl Stonjek

Scientists develop world's most advanced drug to protect the brain after a stroke





Scientists at the Toronto Western Research Institute (TWRI), Krembil Neuroscience Center, have developed a drug that protects the brain against the damaging effects of a stroke in a lab setting. This drug has been in development for a few years. At this point, it has reached the most advanced stage of development among drugs created to reduce the brain's vulnerability to stroke damage (termed a "neuroprotectant"). More than 1,000 attempts by scientists worldwide to develop such drugs have failed to translate into treatments usable in humans, leaving a major unmet need in stroke treatment. The drug developed by the TWRI team is the first to achieve a neuroprotective effect in the complex brain of primates, in settings that simulate those of human ischemic strokes.
The study, "Treatment of Stroke with a PSD95 inhibitor in the Gyrencephalic Primate Brain", published online today in Nature, shows how the drug, called a "PSD95 inhibitor" prevents brain cell death and preserves brain function when administered after a stroke has occurred.
"We are closer to having a treatment for stroke than we have ever been before," said Dr. Michael Tymianski, TWRI Senior Scientist and the study's lead author. "Stroke is the leading cause of death and disability worldwide and we believe that we now have a way to dramatically reduce its damaging effects."
During a stroke, regions of the brain are deprived of blood and oxygen. This causes a complex sequence of chemical reactions in the brain, which can result in neurological impairment or death. The PSD95 inhibitor published by the Toronto team acts to protect the brain by preventing the occurrence of these neurotoxic reactions.
The study used cynomolgus macaques, which bear genetic, anatomic and behaviour similarities to humans, as an ideal model to determine if this therapy would be beneficial in patients.
Animals that were treated with the PSD95 inhibitor after a stroke had greatly reduced brain damage and this translated to a preservation of neurological function. These improvements were observed in several scenarios that simulated human strokes. Specifically, when the treatment was given either early, or even at 3 hours, after the stroke onset, the animals exhibited remarkable recoveries. Benefits were also observed when the drug therapy was combined with conventional therapies (aimed at re-opening blocked arteries to the brain). Beneficial effects were observed even in a time window when conventional therapies on their own no longer have an effect.
"There is hope that this new drug could be used in conjunction with other treatments, such as thrombolytic agents or other means to restore blood flow to the brain, in order to further reduce the impact of stroke on patients," said Dr. Tymianski. "These findings are extremely exciting and our next step is to confirm these results in a clinical trial."
More information: DOI: 10.1038/nature10841
Provided by University Health Network

"Scientists develop world's most advanced drug to protect the brain after a stroke." February 29th, 2012. http://medicalxpress.com/news/2012-02-scientists-world-advanced-drug-brain.html
Posted by
Robert Karl Stonjek

Study finds new genes that cause Baraitser-Winter syndrome, a brain malformation




Scientists from Seattle Children's Research Institute and the University of Washington, in collaboration with the Genomic Disorders Group Nijmegen in the Netherlands, have identified two new genes that cause Baraitser-Winter syndrome, a rare brain malformation that is characterized by droopy eyelids and intellectual disabilities.
"This new discovery brings the total number of genes identified with this type of brain defect to eight," said William Dobyns, MD, a geneticist at Seattle Children's Research Institute. Identification of the additional genes associated with the syndrome make it possible for researchers to learn more about brain development. The study, "De novo mutations in the actin genes ACTB and ACTG1 cause Baraitser-Winter syndrome," was published online February 26 in Nature Genetics.
The brain defect found in Baraitser-Winter syndrome is a smooth brain malformation or "lissencephaly," as whole or parts of the surface of the brain appear smooth in scans of patients with the disorder. Previous studies by Dr. Dobyns and other scientists identified six genes that cause the smooth brain malformation, accounting for approximately 80% of affected children. Physicians and researchers worldwide have identified approximately 20 individuals with Baraitser-Winter syndrome to date.
While the condition is rare, Dr. Dobyns said the team's findings have broad scientific implications. "Actins, or the proteins encoded by the ACTB and ACTG1 genes, are among the most important proteins in the function of individual cells," he said. "Actins are critical for cell division, cell movement, internal movement of cellular components, cell-to-cell contact, signaling and cell shape," said Dr. Dobyns, who is also a University of Washington professor of pediatrics. "The defects we found occur in the only two actin genes that are expressed in most cells," he said. Gene expression is akin to a "menu" for conditions like embryo development or healing from an injury. The correct combination of genes must be expressed at the right time to allow proper development. Abnormal expression of genes can lead to a defect or malformation.
"Birth defects associated with these two genes also seem to be quite severe," said Dr. Dobyns. "Children and people with these genes have short stature, an atypical facial appearance, birth defects of the eye, and the smooth brain malformation along with moderate mental retardation and epilepsy. Hearing loss occurs and can be progressive," he said.
Dr. Dobyns is a renowned researcher whose life-long work has been to try to identify the causes of children's developmental brain disorders such as Baraitser-Winter syndrome. He discovered the first known chromosome abnormality associated with lissencephaly (Miller-Dieker syndrome) while still in training in child neurology at Texas Children's Hospital in 1983. That research led, 10 years later, to the discovery by Dobyns and others of the first lissencephaly gene known as LIS1.
More information:
"De novo mutations in the actin genes ACTB and ACTG1 cause Baraitser-Winter syndrome": http://www.nature. … ng.1091.html
"Baraitser-Winter syndrome" study slideshow: http://www.flickr. … 29446519959/
"Baraitser-Winter syndrome" studies: "Isolation of a Miller-Dieker lissencephaly gene containing G protein beta-subunit-like repeats" http://www.ncbi.nl … med/8355785; "doublecortin, a Brain-Specific Gene Mutated in Human X-Linked Lissencephaly and Double Cortex Syndrome, Encodes a Putative Signaling Protein" http://www.ncbi.nlm.nih.gov/pubmed/9489700
Provided by Seattle Children's

"Study finds new genes that cause Baraitser-Winter syndrome, a brain malformation." February 29th, 2012. http://medicalxpress.com/news/2012-02-genes-baraitser-winter-syndrome-brain-malformation.html
Posted by
Robert Karl Stonjek

Effects of a concussion may last longer than symptoms, study shows




Director of the UK Concussion Assessment Research Lab Scott Livingston (bottom left) shows the results of MEP testing to UK men's soccer player Marco Bordon. Credit: University of Kentucky Public Relations

A study recently published by the University of Kentucky's Scott Livingston shows that physiological problems stemming from a concussion may continue to present in the patient even after standard symptoms subside.
Currently, concussions are diagnosed and monitored through a patient's self-reported symptoms (including headache, confusion or disorientation, poor concentration, and memory loss) and through computerized neuropsychological testing programs, which measure cognitive abilities including attention and concentration, cognitive processing, learning and memory, and verbal fluency. Post-concussion abnormalities in either of these markers typically return to a normal level within five to 10 days following the injury.
Conducted while he was a graduate student at the University of Virginia, Livingston's study was just published in the February 2012 issue of the Journal of Clinical Neurophysiology. The study used motor-evoked potentials (MEPs) — an electrophysiological measurement that can provide hard evidence for changes in brain function — to determine if any physiological abnormalities followed a similar recovery pattern to self-reported symptoms and neuropsychological testing.
During an MEP test, subjects have electrodes placed on a limb – such as the hand or foot. A magnetic stimulating device is placed over the head, and they receive a brief pulse of magnetic stimulation to the brain. The "reaction time" — the amount of time it takes for the subject's limb to receive the response from the brain after the stimulation — is recorded.
Livingston's study enrolled 18 collegiate athletes — nine who had been concussed within the previous 24 hours, and nine who had not experienced a concussion. Each concussed subject was matched with a non-concussed subject using age, gender, sport, position played, prior concussion history, and history of learning disability or attention deficit-hyperactivity disorder as inclusion criteria.
Subjects were evaluated for evidence of concussion based on self-reported symptoms, computerized neurocognitive test performance, and MEPs for a period of 10 days. Post-concussion symptoms were more frequent and more severe in the immediate timeframe after the injury (24-72 hours) and decreased in the following days. Some subjects reported no symptoms by day 10, though others did not have complete symptom resolution by that time. Neurocognitive deficits followed a similar pattern, greatest just after the injury and returning to normal (or closer to normal) by day 10.
MEPs, however, showed delays in response time and reductions in MEP size that continued up to day 10, with these physiological changes actually increasing as the concussed athletes' symptoms decreased and cognitive functioning improved.

Video: The University of Kentucky's Scott Livingston discusses preseason baseline testing for concussions in athletes. Credit: University of Kentucky Public Relations
Livingston, director of the UK Concussion Assessment Research Lab and assistant professor in the Department of Rehabilitation Sciences, says these findings are significant for both athletes and sports medicine clinicians.
"Further investigation of MEPs in concussed athletes is needed, especially to assess how long the disturbances in physiological functioning continue after those initial ten days post-injury," Livingston said. "But in the meantime, sports medicine personnel caring for concussed athletes should be cautious about relying solely on self-reported symptoms and neurocognitive test performances when making return-to-play decisions."
Livingston's research lab recently began a new program to further study MEPs in athletes pre- and post-concussion. At UK, all athletes who participate in a contact sport — including football, soccer, volleyball, diving, gymnastics, and basketball — are assessed preseason using MEP and neurocognitive testing to establish a baseline measure for each athlete.
If an athlete receives a concussion, he or she will come back to the lab as soon as possible after the injury for follow-up testing. This approach allows researchers to get a clearer idea of the extent of an athlete's injury, Livingston says.
Neurocognitive tests, such as the Immediate Post-Concussion Assessment and Testing (ImPACT)™, are a valuable component of concussion management. While major professional sports organizations like the NFL and NHL, as well as hundreds of colleges, universities, and high schools across the United States follow this standard, UK Athletics did not have a formal, standardized neurocognitive testing protocol in place until last year. The addition of the MEP assessment to preseason testing and post-concussion management is unique — UK is the first and only collegiate athletics program to implement a baseline physiologic measure of brain function.
"No other college or university in the country is currently assessing physiologic brain responses and using this information to determine the extent of the functional brain injury," Livingston said. "This type of information enables us to closely track recovery, which may not correspond to the decrease in concussion symptoms or recovery of memory and other cognitive functions."
Provided by University of Kentucky

"Effects of a concussion may last longer than symptoms, study shows." February 29th, 2012. http://medicalxpress.com/news/2012-02-effects-concussion-longer-symptoms.html
Posted by
Robert Karl Stonjek

Dutch launch mobile mercy killing teams





Six specialised teams, each with a doctor, will criss-cross the Netherlands as of Thursday to carry out euthanasia at the home of patients whose own doctors refuse to do so, a pro-mercy killing group said.
"From Thursday, the Levenseindekliniek (Life-end clinic) will have mobile teams where people who think they comply with the criteria for euthanasia can register," Right-to-die NL (NVVE) spokeswoman Walburg de Jong said.
"If they comply, the teams will carry out the euthanasia at patients' homes should their normal doctors refuse to help them," she said.
Each team, made up of a specially trained doctor and nurse working part time for the group, called the Life-end clinic, will be able to visit patients all over the Netherlands, De Jong said.
The Netherlands became the first country in the world to legalise euthanasia in April 2002 and strict criteria regulates how such mercy killings can be carried out. Patients must be mentally alert when making the request to die.
Patients must also face a future of "unbearable, interminable suffering," and before euthanasia is carried out, both the patient and the doctor -- who has to obtain a second opinion -- must agree there is no cure.
Each euthanasia case is then reported to one of five special commissions, each made up of a doctor, a jurist and an ethical expert charged with verifying that all criteria had been observed.
But the plan, which received the thumbs-up from Dutch Health Minister Edith Schippers in the Dutch parliament, has met with scepticism from one of the Netherlands' largest medical lobbies.
The Royal Dutch Society of Doctors (KNMG) said it doubted whether the "euthanasia doctors" would be able to form a close-enough relationship with a patient to make a correct assessment.
Some 3,100 mercy killings are carried out in the Netherlands each year, said De Jong, adding that the NVVE has already been phoned by 70 potential patients since the plan was announced in early February.
The NVVE said its teams were expected to receive around 1,000 assisted-suicide requests per year.
(c) 2012 AFP
"Dutch launch mobile mercy killing teams." February 29th, 2012. http://medicalxpress.com/news/2012-02-dutch-mobile-mercy-teams.html
Posted by
Robert Karl Stonjek

Dutch launch mobile mercy killing teams





Six specialised teams, each with a doctor, will criss-cross the Netherlands as of Thursday to carry out euthanasia at the home of patients whose own doctors refuse to do so, a pro-mercy killing group said.
"From Thursday, the Levenseindekliniek (Life-end clinic) will have mobile teams where people who think they comply with the criteria for euthanasia can register," Right-to-die NL (NVVE) spokeswoman Walburg de Jong said.
"If they comply, the teams will carry out the euthanasia at patients' homes should their normal doctors refuse to help them," she said.
Each team, made up of a specially trained doctor and a nurse working part time for the Life-end clinic, will be able to visit patients all over the Netherlands, De Jong said.
The Netherlands became the first country in the world to legalise euthanasia, in April 2002, and strict criteria regulate how such mercy killings can be carried out. Patients must be mentally alert when making the request to die.
Patients must also face a future of "unbearable, interminable suffering," and before euthanasia is carried out, both the patient and the doctor -- who must obtain a second opinion -- have to agree that there is no cure.
Each euthanasia case is then reported to one of five special commissions, each made up of a doctor, a jurist and an ethics expert charged with verifying that all criteria have been observed.
But the plan, which received the thumbs-up from Dutch Health Minister Edith Schippers in the Dutch parliament, has met with scepticism from one of the Netherlands' largest medical lobbies.
The Royal Dutch Society of Doctors (KNMG) said it doubted whether the "euthanasia doctors" would be able to form a close-enough relationship with a patient to make a correct assessment.
Some 3,100 mercy killings are carried out in the Netherlands each year, said De Jong, adding that the NVVE has already been phoned by 70 potential patients since the plan was announced in early February.
The NVVE said its teams were expected to receive around 1,000 assisted-suicide requests per year.
(c) 2012 AFP
"Dutch launch mobile mercy killing teams." February 29th, 2012. http://medicalxpress.com/news/2012-02-dutch-mobile-mercy-teams.html
Posted by
Robert Karl Stonjek

Shirdi Sai Baba Aarti (with lyrics)

Tuesday, February 28, 2012

Battery to Take On Diesel and Natural Gas



Battery building: Aquion Energy recently announced plans to retrofit this factory—which used to make Sony televisions—to make large batteries for use with solar power plants.
RIDC Westmoreland

Energy


Aquion Energy says its batteries could make the power grid unnecessary in some countries.

  • By Kevin Bullis
Aquion Energy, a company that's making low-cost batteries for large-scale electricity storage, has selected a site for its first factory and says it's lined up the financing it needs to build it.
The company hopes its novel battery technology could allow some of the world's 1.4 billion people without electricity to get power without having to hook up to the grid.
The site for Aquion's factory is a sprawling former Sony television factory near Pittsburgh. The initial production capacity will be "hundreds" of megawatt-hours of batteries per year—the company doesn't want to be specific yet. It also isn't saying how much funding it's raised or where the money comes from, except to mention that some of it comes from the state of Pennsylvania and none from the federal government.
The first applications are expected to be in countries like India, where hundreds of millions of people in communities outside major cities don't have a connection to the electrical grid or any other reliable source of electricity. Most of these communities use diesel generators for power, but high prices for oil and low prices for solar panels are making it cheaper to install solar in some cases.
To store power generated during the day for use at night, these communities need battery systems that can handle anything from tens of kilowatt-hours to a few megawatt-hours, says Scott Pearson, Aquion's CEO. Such a system could make long-distance transmission lines unnecessary, in much the same way that cell-phone towers have allowed such communities access to cellular service before they had land lines.
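To get a feel for the system sizes Pearson describes, here is a rough sizing sketch; the village size, per-household consumption, and battery parameters are illustrative assumptions, not Aquion figures.

```python
# Back-of-the-envelope sizing for an off-grid solar-plus-storage system.
def storage_needed_kwh(households, kwh_per_household_per_day,
                       night_fraction=0.6, depth_of_discharge=0.9):
    """Battery capacity needed to carry a community through the night.

    night_fraction: share of daily use that falls outside solar hours.
    depth_of_discharge: usable fraction of nameplate battery capacity.
    """
    nightly_demand = households * kwh_per_household_per_day * night_fraction
    return nightly_demand / depth_of_discharge

# A hypothetical 500-household village using 2 kWh per household per day:
capacity = storage_needed_kwh(500, 2.0)
print(f"{capacity:.0f} kWh of storage")  # lands in the tens-of-kWh-to-MWh range cited
```

Scaling the household count up or down moves the answer across the whole tens-of-kilowatt-hours-to-megawatt-hours band the company is targeting.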
Eventually Aquion plans to sell stacks of batteries in countries that have electrical grids. They could provide power during times of peak demand and make up for fluctuations in power that big wind farms and solar power plants contribute to the grid. Those applications require tens to hundreds of gigawatt-hours' worth of storage, so to supply them, Aquion needs to increase its manufacturing capacity. Competing with natural-gas power plants—especially in the United States, where natural gas is so cheap—will mean waiting until economies of scale bring costs down.
The company has said that it initially hopes to make batteries for under $300 per kilowatt-hour, far cheaper than conventional lithium-ion batteries. Lead-acid batteries can be cheaper than Aquion's, but they last only two or three years. Aquion's batteries, which can be recharged 5,000 times, could last for over a decade in situations in which they're charged once a day (the company has tested the batteries for a couple of years so far).
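The trade-off between the two chemistries becomes concrete if each battery's capital cost is spread over the energy it delivers in its lifetime. The Aquion numbers ($300 per kilowatt-hour, 5,000 recharges) are from the article; the lead-acid price and cycle life are illustrative assumptions.

```python
# Spread each battery's capital cost over every kWh it delivers in its lifetime.
def cost_per_kwh_delivered(price_per_kwh_capacity, cycle_life):
    """Capital cost per kWh cycled through the battery over its life."""
    return price_per_kwh_capacity / cycle_life

aquion = cost_per_kwh_delivered(300, 5000)    # article: <$300/kWh, 5,000 recharges
lead_acid = cost_per_kwh_delivered(150, 900)  # assumption: $150/kWh, ~900 daily cycles

print(f"Aquion:    ${aquion:.3f} per kWh cycled")
print(f"Lead-acid: ${lead_acid:.3f} per kWh cycled")
print(f"Daily-cycled lifetime: {5000 / 365:.1f} years")  # 'over a decade'
```

Even at double the sticker price, the longer cycle life can make the per-kilowatt-hour-delivered cost lower, which is the comparison that matters for a battery charged once a day.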
Jay Whitacre, a Carnegie Mellon University professor of materials science and engineering who developed Aquion's technology and founded the company, says the cost will need to drop to less than $200 per kilowatt-hour for grid-connected applications. Reaching this price, and production capacity on the scale of gigawatt-hours, "will take a long time," he says. "But you have to start somewhere."
Whitacre developed the batteries with low cost and durability in mind from the start. In searching for potential electrode materials, he limited himself to cheap, abundant elements, settling on sodium and manganese. He also picked a water-based electrolyte that's safer and cheaper than the organic ones used in lithium-ion batteries. In turn, this allowed him to use cheap manufacturing equipment to make them. To keep costs down, the company is making the batteries with equipment that's normally used to make food or aspirin. Construction on the factory in Pennsylvania will begin immediately, and the first stage is expected to be finished next year.
"Some papers proposing new battery materials look great until you read the fine print about how they're made," Whitacre says. "We focused on manufacturing from the beginning."

Alta Devices: Finding a Solar Solution



Looking to enter a highly ­competitive solar market, Alta Devices hopes to use a combination of technological advances and manufacturing savvy to succeed where many others have crashed and burned.
  • By David Rotman
Suited up: CEO Christopher Norris holds a gallium arsenide wafer used in making Alta’s solar cells. Behind him is a custom-designed reactor used to grow thin layers of the semiconductor. Credit: Gabriela Hasbun
Alta Devices is a small but well-funded startup located in the same nondescript Silicon Valley office building that once served as the headquarters for Solyndra, the infamous solar company that went bankrupt last year after burning through hundreds of millions of dollars in public and venture investments. Whether the location has bad karma is still not clear, jokes Alta's CEO, Christopher Norris. But Norris, a former semiconductor-industry executive and venture capitalist, does know that the fate of his company will hinge on its ability to navigate the risky and expensive process of scaling up its novel technology, which he believes could produce power at a price competitive with fossil-fuel plants and far more cheaply than today's solar modules.
On a table in Alta's conference room, Norris lays out samples of the company's solar cells, flexible black patches encapsulated in clear plastic. They look unremarkable, but that's because the key ingredient is all but invisible: microscopically thin sheets of gallium arsenide. The semiconductor is so good at absorbing sunlight and turning it into electricity that one of Alta's devices, containing an active layer of gallium arsenide only a couple of micrometers thick, recently set a record for photovoltaic efficiency. But gallium arsenide is also extremely expensive to use in solar cells, and thin films of it tend to be fragile and difficult to fabricate. In fact, Alta's innovations lie not in choosing the material—the semiconductor has been used in solar cells on satellites and spacecraft for decades—but in figuring out how to turn it into solar modules cheap enough to be practical for most applications.
The company, which was founded in 2007, is based on the work of two of the world's leading academic researchers in photonic materials. One of them, Eli Yablonovitch, now a professor of electrical engineering at the University of California, Berkeley, developed and patented a technique for creating ultrathin films of gallium arsenide in the 1980s, when he worked at Bell Communications Research. The other, Harry Atwater, a professor of applied physics and materials science at Caltech, is a pioneer in the use of microstructures and nanostructures to improve materials' ability to trap light and convert it into electricity. Andy ­Rappaport, a venture capitalist at August Capital, teamed up with the two scientists to found Alta, recruiting fellow Silicon Valley veteran Bill Joy as an investor and, with the other cofounders, building a management team that included Norris. The goal: to make highly efficient solar cells, and to make them more cheaply than those based on existing silicon technology.
It is at this point that many solar startups have gone wrong, rushing to scale up an innovative technology before understanding its economics and engineering challenges. Instead, Alta spent its first several years in stealth mode, quietly attempting to figure out, as Norris puts it, whether its process for making gallium arsenide solar cells was more than a "science experiment" and could serve as a viable basis for manufacturing.
Flexible power: Alta’s solar cells can be made into bendable sheets. In this sample, a series of solar cells are encapsulated in a roofing material. Credit: Gabriela Hasbun
Remnants of the science experiment are still visible in the modest lab at the back of Alta's offices. Small ceramic pots sit on electric hot plates—relics of the company's early efforts to optimize ­Yablonovitch's technique of "epitaxial liftoff," which uses acids to precisely separate thin films of gallium arsenide from the wafers on which they are grown. Elsewhere in the lab the equipment gets progressively larger and more sophisticated, reflecting the scaling up of the process. Near a viewing window that allows potential investors to peer into the lab without donning clean-room coverings is one of the jewels of the company's development efforts: a long piece of equipment in which batches of samples are processed to create the thin-film solar cells. It's convincing evidence that the early work with pots and hot plates can be transformed into an automated process capable of the yields necessary for real-world manufacturing.
SOLAR LIFTOFF
When Bill Joy, a cofounder of Sun Microsystems and now a leading Silicon Valley venture capitalist, first saw the business plan for what became Alta Devices, he and his colleagues at Kleiner Perkins Caufield & Byers were already looking for high-efficiency thin-film solar technology. Joy keeps a running list—currently about 12 to 15 items long—of desirable technologies that he believes he has "a reasonable chance of finding." Solar cells that are highly efficient in converting sunlight and that can be made cheaply in flexible sheets could provide ways to dramatically lower the overall costs of solar power. Gallium arsenide technology was a natural choice for efficiency, but Alta's economics were what really interested the investors. "Their core competency was how to make it manufacturable," says Joy, who joined Rappaport as an investor within a few months.
Gallium arsenide is a nearly ideal solar material, for a number of reasons. Not only does it absorb far more sunlight than silicon—thin films of it capture as many photons as silicon 100 times thicker—but it's less sensitive to heat than silicon solar cells, whose performance dramatically declines above 25 °C. And gallium arsenide is better than silicon in retaining its electricity-producing abilities in conditions of relatively low light, such as early in the morning or late in the afternoon.
Key to reducing its manufacturing costs is the technique that Yablonovitch helped figure out decades ago. The semiconductor can be grown epitaxially: when thin layers are chemically deposited on a substrate of single-crystal gallium arsenide, each adopts the same single-crystal structure. Yablonovitch found that if a layer of aluminum arsenide is sandwiched between the layers, this can be selectively eaten away with an acid, and the gallium arsenide above can be peeled off. It was an elegant and simple way to create thin films of the material. But the process was also problematic: the single-crystal films easily crack and become worthless. In adapting Yablonovitch's fabrication method, Alta researchers have found ways to create rugged films that aren't prone to cracking. And not only do the thin films use little of the semiconductor material, but the valuable gallium arsenide substrate can be reused multiple times, helping to make the process affordable.
Research by Alta's founding scientists has also led to techniques for increasing the performance of the solar cells. Photovoltaics work because the photons they absorb boost the energy levels of electrons in the semiconductor, freeing them up to flow to metal contacts and create a current. But the roaming electrons can be wasted in various ways, such as in heat. In gallium arsenide, however, the freed electrons frequently recombine with positively charged "holes" to re-create photons and start the process over again. Work done by ­Yablonovitch and Atwater to explain this process has helped Alta design cells to take advantage of this "photon recycling," providing many chances to recapture photons and turn them into electricity.
Thus Alta's efficiency record: its cells have converted 28.3 percent of sunlight into electricity, whereas the highest efficiency for a silicon solar cell is 25 percent, and commonly used thin-film solar materials don't exceed 20 percent. Yablonovitch suggests that Alta has a good chance of eventually breaking 30 percent efficiency and nearing the theoretical limit of 33.4 percent for cells of its type.
The high efficiency, combined with gallium arsenide's ability to perform at relatively high temperatures and in low light, means that the cells can produce two or three times more energy over a year than conventional silicon ones, says Norris. And that, of course, translates directly into lower prices for solar power. Norris says a "not unreasonable expectation" is that the gallium arsenide technology could yield a "levelized cost of energy" (a commonly used industry metric that includes the lifetime costs of building and operating a power plant) of seven cents per kilowatt-hour. At such a price, says Norris, solar would be competitive with fossil fuels, including natural gas; new gas plants generate electricity for around 10 cents per kilowatt-hour. And it would trounce today's solar power, which Norris says costs around 20 cents per kilowatt-hour to generate.
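At utility scale, those per-kilowatt-hour figures translate into large absolute differences. A quick sketch for a hypothetical 10-megawatt plant; the 20 percent capacity factor is an assumption, not a number from the article.

```python
# Annual generating cost for a hypothetical 10 MW plant at a 20% capacity factor.
HOURS_PER_YEAR = 8760
plant_kw = 10_000
annual_kwh = plant_kw * 0.20 * HOURS_PER_YEAR  # 17.52 million kWh/year

costs_per_kwh = {
    "Alta (projected)": 0.07,  # Norris's 'not unreasonable expectation'
    "new natural gas": 0.10,
    "today's solar": 0.20,
}
for name, per_kwh in costs_per_kwh.items():
    print(f"{name}: ${per_kwh * annual_kwh / 1e6:.1f}M per year")
```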
Such numbers are tantalizing. But Norris is quick to bring up another: it costs roughly $1 billion to build a manufacturing facility capable of producing enough solar modules to generate a gigawatt of power, which is roughly the output of several medium-sized power plants. "I don't see any scenario where we would do this on our own," he says.
GHOST OF SOLYNDRA
Silicon Valley has been infatuated with clean tech since the mid-2000s, but it has yet to figure out something crucial: who will supply all the money necessary to scale up energy technologies and build factories to manufacture them? Venture investors might be skilled at picking technologies, but few of them have the deep pockets or the patience required to compete in a capital-intensive business such as the manufacturing of solar modules. The collapse of Solyndra, which built a $733 million factory in Fremont, California, is just the most recent reminder of what can go wrong.
Alta's lead investor Andy Rappaport says he usually stays away from investments in clean tech, including photovoltaics. Many investors in solar, he suggests, have bet that a startup could lower the marginal costs of manufacturing and thus "capture some market share." That's "a recipe for failure," he says, because "you need to spend hundreds of millions to build a factory before you know if you have anything of value." The strategy is especially risky now, because photovoltaics are becoming an increasingly competitive commodity business and prices continue to plummet, creating a moving target for new production. But rather than trying to create value by building manufacturing capacity, Rappaport says, Alta can profit from its intellectual property: "We have said simply and consistently that we can scale capacity faster and build a much stronger company by leveraging partnerships rather than raising and spending our own capital to build factories."
Current investors in Alta include GE, Sumitomo, and Dow Chemical, which recently introduced roofing shingles that incorporate thin-film photovoltaics (see "Can We Build Tomorrow's Breakthroughs?" January/February 2012). Though these companies have invested in several rounds of funding—Alta has so far raised $120 million—eventually Norris would like to see deals, such as licensing agreements or joint ventures, in which manufacturers build capacity to produce Alta's solar cells or use the solar technology in their products. To do that, he says, Alta first needs to "retire the risk" of the production technology, demonstrating to prospective partners that the gallium arsenide solar modules can in fact be produced in an economically competitive way.
Less than a mile from its headquarters, Alta is gutting and renovating a building where Netflix used to warehouse DVDs, turning it into a $40 million pilot facility to test its equipment. Though the facility is far smaller than a commercial solar factory, it is still no small or inexpensive undertaking. Norris warily eyes the new columns required to reinforce the roof, which will need to hold heavy ventilation and emission-control equipment. But the Alta CEO becomes more buoyant as he approaches the nearly completed back section of the facility. There, in several white rooms, are the large custom-designed versions of the lab apparatus used to make the solar cells.
Whether Alta succeeds will depend chiefly on how well these manufacturing inventions perform. The cost of the pilot facility might pale next to the price tag for a commercial-scale solar factory, but it is still a critical investment for the startup. And even as Alta is busily trying to get the facility up and running by the end of the year, Norris says, it is taking a deliberate, methodical approach to the process of scaling up. That contrasts sharply with earlier solar startups that spent hundreds of millions in venture investments to build factories as fast as possible. But Alta's cautious approach should not be confused with a lack of ambition. The goal, says Norris, is to make this a "foundational, transformative technology."
David Rotman is Technology Review's editor. 

Foundation Medicine: Personalizing Cancer Drugs


Foundation Medicine is offering a test that helps oncologists choose drugs targeted to the genetic profile of a patient's tumor cells. Has personalized cancer treatment finally arrived?

  • By Adrienne Burke
It's personal now: Alexis Borisy (left) and Michael Pellini lead an effort to make DNA data available to help cancer patients. Credit: Christopher Harting
Michael Pellini fires up his computer and opens a report on a patient with a tumor of the salivary gland. The patient had surgery, but the cancer recurred. That's when a biopsy was sent to Foundation Medicine, the company that Pellini runs, for a detailed DNA study. Foundation deciphered some 200 genes with a known link to cancer and found what he calls "actionable" mutations in three of them. That is, each genetic defect is the target of anticancer drugs undergoing testing—though not for salivary tumors. Should the patient take one of them? "Without the DNA, no one would have thought to try these drugs," says Pellini. 
Starting this spring, for about $5,000, any oncologist will be able to ship a sliver of tumor in a bar-coded package to Foundation's lab. Foundation will extract the DNA, sequence scores of cancer genes, and prepare a report to steer doctors and patients toward drugs, most still in early testing, that are known to target the cellular defects caused by the DNA errors the analysis turns up. Pellini says that about 70 percent of cases studied to date have yielded information that a doctor could act on—whether by prescribing a particular drug, stopping treatment with another, or enrolling the patient in a clinical trial.
The idea of personalized medicine tailored to an individual's genes isn't new. In fact, several of the key figures behind Foundation have been pursuing the idea for over a decade, with mixed success. "There is still a lot to prove," agrees Pellini, who says that Foundation is working with several medical centers to expand the evidence that DNA information can broadly guide cancer treatment.
Foundation's business model hinges on the convergence of three recent developments: a steep drop in the cost of decoding DNA, much new data about the genetics of cancer, and a growing effort by pharmaceutical companies to develop drugs that combat the specific DNA defects that prompt cells to become cancerous. Last year, two of the 10 cancer drugs approved by the U.S. Food and Drug Administration came with a companion DNA test (previously, only one drug had required such a test). So, for instance, doctors who want to prescribe Zelboraf, Roche's treatment for advanced skin cancer, first test the patient for the BRAF V600E mutation, which is found in about half of all cases.
About a third of the 900 cancer drugs currently in clinical trials could eventually come to market with a DNA or other molecular test attached, according to drug benefits manager Medco. Foundation thinks it makes sense to look at all relevant genes at once—what it calls a "pan-cancer" test. By accurately decoding cancer genes, Foundation says, it uncovers not only the most commonly seen mutations but also rare ones that might give doctors additional clues. "You can see how it will get very expensive, if not impossible, to test for each individual marker separately," Foundation Medicine's COO, Kevin Krenitsky, says. A more complete study "switches on all the lights in the room."
So far, most of Foundation's business is coming from five drug companies seeking genetic explanations for why their cancer drugs work spectacularly in some patients but not at all in others. The industry has recognized that drugs targeted to subsets of patients cost less to develop, can get FDA approval faster, and can be sold for higher prices than traditional medications. "Our portfolio is full of targets where we're developing tests based on the biology of disease," says Nicholas Dracopoli, vice president for oncology biomarkers at Janssen R&D, which is among the companies that send samples to Foundation. "If a pathway isn't activated, you get no clinical benefit by inhibiting it. We have to know which pathway is driving the dissemination of the disease."
Cancer is the most important testing ground for the idea of targeted drugs. Worldwide spending on cancer drugs is expected to reach $80 billion this year—more than is spent on any other type of medicine. But "the average cancer drug only works about 25 percent of the time," says Randy Scott, executive chairman of the molecular diagnostics company Genomic Health, which sells a test that examines 16 breast-cancer genes. "That means as a society we're spending $60 billion on drugs that don't work."
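Scott's $60 billion figure follows directly from the two numbers he cites:

```python
# Randy Scott's arithmetic: projected spend times the average non-response rate.
total_spend_billion = 80   # worldwide cancer-drug spending expected this year
response_rate = 0.25       # 'the average cancer drug only works about 25 percent of the time'
wasted_billion = total_spend_billion * (1 - response_rate)
print(f"${wasted_billion:.0f} billion spent on drugs that don't work")
```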
Analyzing tumor DNA is also important because research over the past decade or so has demonstrated that different types of tumors can have genetic features in common, making them treatable with the same drugs. Consider Herceptin, the first cancer drug approved for use with a DNA test to determine who should receive it (there is also a protein-based test). The FDA cleared it in 1998 to target breast cancers that overexpress the HER2 gene, a change that drives the cancer cells to multiply. The same mutation has been found in gastric, ovarian, and other cancers—and indeed, in 2010 the drug was approved to treat gastric cancer. "We've always seen breast cancer as breast cancer. What if a breast cancer is actually like a gastric cancer and they both have the same genetic changes?" asks Jennifer Obel, an oncologist in Chicago who has used the Foundation test.
The science underlying Foundation Medicine had its roots in a 2007 paper published by Levi Garraway and Matthew Meyerson, cancer researchers at the Broad Institute, in Cambridge, Massachusetts. They came up with a speedy way to find 238 DNA mutations then known to make cells cancerous. At the time, DNA sequencing was still too expensive for a consumer test—but, Garraway says, "we realized it would be possible to generate a high-yield set of information for a reasonable cost." He and Meyerson began talking with Broad director Eric Lander about how to get that information into the hands of oncologists.
In the 1990s, Lander had helped start Millennium Pharmaceuticals, a genomics company that had boldly promised to revolutionize oncology using similar genetic research. Ultimately, Millennium abandoned the idea—but Lander was ready to try again and began contacting former colleagues to "discuss next steps in the genomics revolution," recalls Mark Levin, who had been Millennium's CEO.
Levin had since become an investor with Third Rock Ventures. Money was no object for Third Rock, but Levin was cautious—diagnostics businesses are difficult to build and sometimes offer low returns. What followed was nearly two years of strategizing between Broad scientists and a parade of patent lawyers, oncologists, and insurance experts, which Garraway describes as being "like a customized business-school curriculum around how we're going to do diagnostics in the new era."
In 2010, Levin's firm put $18 million into the company; Google Ventures and other investors have since followed suit with $15.5 million more. Though Foundation's goals echo some of Millennium's, its investors say the technology has finally caught up. "The vision was right 10 to 15 years ago, but things took time to develop," says Alexis Borisy, a partner with Third Rock who is chairman of Foundation. "What's different now is that genomics is leading to personalized actions."
One reason for the difference is the falling cost of acquiring DNA data. Consider that last year, before his death from pancreatic cancer, Apple founder Steve Jobs paid scientists more than $100,000 to decode all the DNA of both his cancerous and his normal cells. Today, the same feat might cost half as much, and some predict that it will soon cost a few thousand dollars.
So why pay $5,000 to know the status of only about 200 genes? Foundation has several answers. First, each gene is decoded not once but hundreds of times, to yield more accurate results. The company also scours the medical literature to provide doctors with the latest information on how genetic changes influence the efficacy of specific drugs. As Krenitsky puts it, data analysis, not data generation, is now the rate-limiting factor in cancer genomics.
Although most of Foundation's customers to date are drug companies, Borisy says the company intends to build its business around serving oncologists and patients. In the United States, 1.5 million cancer cases are diagnosed annually. Borisy estimates that Foundation will process 20,000 samples this year. At $5,000 per sample, it's easy to see how such a business could reward investors. "That's ... a $100-million-a-year business," says Borisy. "But that volume is still low if this truly fulfills its potential."
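Borisy's revenue estimate, and the headroom implied by the US caseload, check out with a few lines of arithmetic:

```python
# Borisy's estimate: samples processed times price per sample.
samples_this_year = 20_000
price_per_sample = 5_000
revenue = samples_this_year * price_per_sample
print(f"${revenue / 1e6:.0f} million per year")

# The share of US diagnoses tested, given 1.5 million cases annually:
us_cases_per_year = 1_500_000
print(f"{samples_this_year / us_cases_per_year:.1%} of US diagnoses tested")
```

Testing barely more than one percent of newly diagnosed US patients already yields a $100-million-a-year business, which is the "potential" Borisy is pointing at.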
Pellini says Foundation is receiving mentoring from Google in how to achieve its aim of becoming a molecular "information company." It is developing apps, longitudinal databases, and social-media tools that a patient and a doctor might use, pulling out an iPad together to drill down from the Foundation report to relevant publications and clinical trials. "It will be a new way for the world to look at molecular information in all types of settings," he says.
Several practical obstacles stand in the way of that vision. One is that some important cancer-related genes have already been patented by other companies—notably BRCA1 and BRCA2, which are owned by Myriad Genetics. These genes help repair damaged DNA, and mutations in them increase the risk of breast or ovarian cancer. Although Myriad's claim to a monopoly on testing those genes is being contested in the courts and could be overturned, Pellini agrees that patents could pose problems for a pan-cancer test like Foundation's. That's one reason Foundation itself has been racing to file patent applications as it starts to make its own discoveries. Pellini says the goal is to build a "defensive" patent position that will give the company "freedom to operate."
Another obstacle is that the idea of using DNA to guide cancer treatment puts doctors in an unfamiliar position. Physicians, as well as the FDA and insurance companies, still classify tumors and drug treatments anatomically. "We're used to calling cancers breast, colon, salivary," says oncologist Thomas Davis, of the Dartmouth-Hitchcock Medical Center, in Lebanon, New Hampshire. "That was our shorthand for what to do, based on empirical experience: 'We tried this drug in salivary [gland] cancer and it didn't work.' 'We tried this one and 20 percent of the patients responded.'"
Now the familiar taxonomy is being replaced by a molecular one. It was Davis who ordered DNA tests from several companies for the patient with the salivary-gland tumor. "I got bowled over by the amount of very precise, specific molecular information," he says. "It's wonderful, but it's a little overwhelming." The most promising lead that came out of the testing, he thinks, was evidence of overactivity by the HER2 gene—a result he says was not picked up by Foundation but was found by a different test. That DNA clue suggests to him that he could try prescribing Herceptin, the breast-cancer drug, even though evidence is limited that it works in salivary-gland cancer. "My next challenge is to get the insurance to agree to pay for these expensive therapies based on rather speculative data," he says.
Insurance companies may also be unwilling to pay $5,000 for the pan-cancer test itself, at least initially. Some already balk at paying for well-established tests, says Christopher-Paul Milne, associate director of the Tufts Center for the Study of Drug Development, who calls reimbursement "one of the biggest impediments to personalized medicine." But Milne predicts that it's just a matter of time before payers come around as the number of medications targeted to people's DNA grows. "Once you get 10 drugs that require screening, or to where practitioners wouldn't think about using a drug without screening first, the floodgates will open," he says. "Soon, in cancer, this is the way you will do medicine."
Adrienne Burke was founding editor of Genome Technology magazine and is a contributor to Forbes.com and Yahoo Small Business Advisor.

The fallout of Rupee depreciation and fuel price increase
Subsidies? Well, yes, but make them smart subsidies

Nigeria: Subsidising the neighbours
Nigeria’s Central Bank Governor, Sanusi Lamido Sanusi, had a problem. In a recent live interview with Al Jazeera TV, he said that his country had to raise the retail prices of all petroleum products to match rising international prices, despite the violent and massive public protests against the move. Nigeria simply could not afford to subsidise those products any longer, even though it is a leading petroleum-producing country and a net exporter of the commodity.
“We had fixed the retail prices of petroleum products in Nigeria when the international price of crude oil was $ 50 a barrel,” he said. “But the international prices are around $ 110 a barrel now and we still supply petroleum products at the original prices.” Then he came to his problem, a problem he raised as an economist and not as a politician. “This was a huge burden on the Nigerian government’s budget because that money could have been used for developing the basic infrastructure of Nigeria, which is now far from desired. But the major problem was something else. In all the neighbouring countries, where petroleum prices were at the current international level, it became a very profitable business for some groups to buy those products at the subsidised prices in Nigeria, smuggle them across the border and sell them at a price still lower than the international prices. So the demand for petroleum products in Nigeria was higher than its normal level. The government’s subsidy requirements were therefore higher than they would otherwise have been. Worst of all, the Nigerian government was subsidising the petroleum consumers of the neighbouring countries as well.”
Governor Sanusi’s explanation was the government’s official justification for raising the retail prices of petroleum products, and its response to the massive and bloody protests that had brought Nigeria’s economy to a halt.
Sri Lanka: Making money by selling subsidised rice
Nigeria is not alone in this predicament. In every country where unmanaged and uncontrolled subsidies have been extended by governments in good public spirit, there have been similar unintended consequences and wastage of the scarce resources of the countries concerned. In Sri Lanka, prior to 1977, every citizen of the country, irrespective of income level, was supplied with two measures of rice, equal to about two kilograms, at a highly subsidised price. The government’s objective in doing so was noble: eliminate hunger among the poor by supplying them with their staple food. But the system engendered a thriving underground market in which traders bought the subsidised rice from the franchise holders, for a payment, and supplied it to hotels in the city, where rice commanded a high price in the open market. So both the poor and the rich got the opportunity to earn an additional income through the subsidy scheme, which economists call ‘earning a rent’, meaning that the income was earned not by making a worthwhile contribution to the economy but by exploiting the prevailing control and regulatory systems. The government wanted people to consume rice; instead, they made money out of it and spent it on other needs.
Fishermen’s demand: Reduce prices or no fishing anymore
After fuel prices were increased by a significant margin in Sri Lanka recently, there were spontaneous agitations calling for their reduction to the original levels. Fishermen on the north-western coast of Sri Lanka refused to accept the subsidy the government offered them, fearing that it was only a temporary measure to placate them and would not be continued. They refused to take their boats to sea until the government met their demand. Similar demands were made by others such as three-wheeler taxi drivers, lorry owners and school van operators. This was not what they were supposed to do: if their costs had increased due to the fuel price hike, they should have raised the prices of their own services to compensate for the cost increases. The objective of the government’s fuel price increase was to discourage the excessive use of petroleum products by both consumers and producers and thereby check the growth of the country’s fuel bill. Economists call this allowing a ‘pass-through’ of the price increase to the rest of the economy, forcing everyone to make a painful but necessary adjustment to their consumption patterns. This adjustment is exactly what is needed in Sri Lanka today, where everyone, including the government, is notorious for consuming beyond their means. The private bus operators and the Ceylon Electricity Board in the very first instance, and lorry owners subsequently, made this pass-through by raising bus fares, electricity tariffs and lorry hire charges, respectively, in line with the increase in fuel prices.
A price pass-through is not necessarily inflationary
Many have feared that such a pass-through will raise the cost of living and contribute to high future inflation. This argument is both right and ill-conceived. It is right because a fuel price increase raises consumer prices and shrinks the basket of goods and services that a person can buy out of a given income. It in fact subjects the poor and those earning fixed incomes to untold misery, because they are now forced to tighten their belts beyond the minimum consumption needed to maintain themselves and their families. So the painful adjustment that the price increase expects everyone to make falls squarely on them. However, the argument about long-term inflation is ill-conceived, because the initial increase in the cost of living will fizzle out fairly soon if the central bank does not increase the money supply and allow the economy to raise its aggregate demand through a liberal credit expansion. All other groups will be forced to accommodate the initial price increase through an improvement in productivity in the long run. This process, which should occur naturally in the economy as per the objective of increasing fuel prices, will be short-circuited if the central bank allows credit to expand liberally, ostensibly to fast-track economic growth beyond the country’s true growth potential. Such a liberal credit expansion, in the opinion of economists, is a subsidy extended by the central bank to the economy, which a central bank is neither capable of providing nor supposed to provide. It is not capable of doing so because money creation alone cannot induce people to work hard and deliver prosperity; it is not supposed to do so because it delays the needed adjustment and makes the central bank’s own life painful.
Offer smart subsidies to the poor
Economists generally agree that the ultra-poor and the vulnerable need support in such an event to wade through the adjustment process successfully. Such support is generally delivered to them through a subsidy scheme: not just a general subsidy system, but a ‘smart subsidy system’.
The general subsidy schemes are not endorsed by economists due to several weaknesses inherent in them. A smart subsidy system is one that eliminates or minimises those inherent weaknesses.
What are the weaknesses in general subsidy systems?
Avoid unintended consequences
First, they generate unintended consequences throughout the economy. A subsidy is not free; it is paid for by someone else in the economy. For instance, if the government extended a cash subsidy to fishermen or, as they demanded, reduced fuel prices to their original levels without a compensating reduction in costs at the Ceylon Petroleum Corporation or the Indian Oil Company, those suppliers would make losses. The Indian Oil Company can close shop and go home if its losses mount. But the Petroleum Corporation, being a public monopoly, cannot do so easily. It could for some time continue to operate by financing its losses through borrowing from banks, especially the state banks. But sooner or later, the losses would become too heavy even for the banks to bear. At that stage, the losses have to be borne by ‘someone else’, and that someone else in this case is the government, as the Corporation’s owner. The government has to pay for those losses by increasing taxes, cutting expenses elsewhere, printing money or increasing public debt. In the past, the government did so by taking over the debt of the Petroleum Corporation through the issue of Treasury bonds to the two state banks, a combination of printing money (because it increases net credit to the government by banks) and raising public debt (because it adds to the country’s total public debt). Whatever the method of financing, it is an unintended consequence: in the first instance it reduces the capacity of the state banks to grant credit to other customers, and when the government takes over the debt later, it causes future inflation. A central bank cannot be happy about the Petroleum Corporation running at a loss, its losses being financed through bank borrowing or, eventually, those losses being taken over by the government.
The Moral Hazard Problem: Give subsidies and corrupt them too
Second, subsidies lead to what is known as the ‘moral hazard problem’. Moral hazard is simply a situation where, when one party is supported by another without a commensurate sacrifice of its own, the supported party has no incentive to work towards standing on its own feet one day and, lacking that capacity, becomes an ever greater burden on the supporter. A good example is a child whose homework assignments are done by his mother out of pure love for him. The child never learns to do them himself and, as a result, the mother has to continue doing his assignments at great burden to herself. This problem is not new and has long been known. As preached by the Buddha in the Chakkavatthi Seehanada Sutra in the Dheegha Nikaya, a king, on being advised by his ministers that people in his kingdom had resorted to theft and looting because they did not have enough money to undertake their own enterprises, gave them, out of sympathy, free capital from his treasury. After some time, the people realised that they could get more money from the king if they engaged in more theft and looting. So, instead of subsiding, theft and looting proliferated in his kingdom. Uncontrolled subsidies, instead of helping people, perpetuate their misery and become a huge burden on the subsidy provider.
Adverse selection: Bad guys get together to be selected
Third, subsidies generate another problem, called ‘adverse selection’, by prompting people to flock to collect a subsidy even though they may not deserve it. If fishermen are given a ration of fuel, many will join the bandwagon of fishermen to claim the benefit of the subsidy for themselves. The wrong signal given by the subsidy gathers an undeserving group together, and the government’s selection of that group for the subsidy works against the government right from the beginning, because the government comes under constant pressure from the group for the enhancement and continuation of the subsidy. The subsidy is paid with good intentions, but it does not generate the expected results because it is used by people who have been adversely selected.
Have a timeline for exiting the subsidy
Fourth, subsidies, once granted, tend to become a permanent feature of the system, and at a later date, when the subsidy is no longer necessary, it is difficult for the government to withdraw it. A good example is the fertiliser subsidy granted to farmers in Sri Lanka. The subsidy’s advocates justified it on the ground that it is not a waste of resources but an investment, because it helps farmers raise output by reducing their cost of production. However, the unlimited and untargeted subsidy ballooned when international fertiliser prices ballooned from 2007. The fertiliser subsidy, which stood at around Rs 3.6 billion in 2004, shot up to Rs 11 billion in 2007 and further to Rs 26 billion in 2008, and it has remained around that level since then, draining a significant portion of the country’s scarce resources. Hence, even if the government wanted to curtail or eliminate the subsidy now, it could not do so without facing severe resistance from the recipients. A smart subsidy would have thought this problem through at the outset and introduced a timeline for exiting the subsidy once it had delivered the desired results.
Have a fall back strategy to hang on when things go wrong
Fifth, subsidies can go wrong due to factors beyond the control of the authorities. One possibility is cost escalation due to an increase in international prices. Sri Lanka’s fertiliser subsidy is a casualty of exactly this adverse development. When the open and unrestricted fertiliser subsidy was designed in 2005, the international prices of petroleum products, including fertilisers, were at an affordable level, and the authorities could therefore safely speak of the fertiliser subsidy as ‘an investment’, because the total cost was to rise only from Rs 3.6 billion in 2004 to Rs 4.2 billion in 2005. This marginal increase was within the resources of the government. However, the year 2006 ended with a total subsidy bill of Rs 12 billion due to cost escalation. The authorities still stuck to the original plan and continued with their so-called ‘investment’ in the agriculture sector. But in 2008, fertiliser prices increased sharply, raising the cost of the subsidy to Rs 26 billion, and in 2009 to Rs 27 billion. When such mishaps occur, there should be a ‘fall-back strategy’ designed and built into the subsidy model so that the authorities can easily move out of the subsidy to a better alternative. No such strategy existed in the fertiliser subsidy, and as a result the subsidy has remained above Rs 25 billion every year since 2008.
Offer smart subsidies
So, subsidies can be extended to help vulnerable groups in the case of a sudden price increase driven by international prices. But those subsidies should necessarily be smart subsidies: clearly targeted at the beneficiary groups, designed to avoid unintended consequences, free from moral hazard and adverse selection problems, carrying an announced timeline for exit once the desired results have been attained, and equipped with a clearly thought-out fall-back strategy in the event of the subsidy becoming untenable.
General subsidies with no smartness built into them are doomed to failure, however much they are desired by politicians and the public at large.
(W.A. Wijewardena can be reached at waw1949@gmail.com)

A Guide to Integrated Urban Flood Risk Management for the 21st Century


Cities and Flooding


KEY FINDINGS
  • Floods are the most frequent of all natural disasters, causing widespread devastation, economic damage and loss of human lives.
  • The East Asia and Pacific region is particularly vulnerable: in the past 30 years, floods in Asia accounted for about 40% of the worldwide total.
  • Urban flooding is becoming increasingly costly and difficult to manage as low- and middle-income countries in the region transition to largely urban societies, with a greater concentration of people and assets in urban centers.
  • In addition to direct economic damage, floods have long-term consequences such as loss of educational opportunities, disease and reduced nutrition, which may erode development goals.
  • Rapid urbanization creates poorer neighborhoods which lack adequate housing, infrastructure and services, making the poor, especially women and children, more vulnerable to floods.
  • The most effective way to manage flood risk is to take an integrated approach which combines structural and non-structural measures.
  • This includes:
    • Building drainage channels and floodways;
    • Incorporating “urban greening” such as wetlands and environmental buffers;
    • Creating flood warning systems; and
    • Land use planning for flood avoidance.
  • The key is getting the balance right, because current risks may change in the future as the effects of urbanization and climate change accelerate, requiring flexible solutions.
  • Various aspects of the impact of these measures need to be considered, including environmental degradation, biodiversity, equity, social capital and other potential trade-offs.
  • Successful flood risk management requires robust decision making, with greater coordination between different levels of government, public sector agencies, civil society, and the educational and private sectors, among others.
  • Tools such as flood hazard maps, along with simulation and visualization techniques, can help decision makers better understand flood hazards, predict outcomes and assess costs.
  • Communication also plays a significant role in raising awareness and reinforcing preparedness. The guidebook warns that less severe disasters can be forgotten in less than three years.
  • As flood risk cannot be eliminated entirely, planning for a speedy recovery is also necessary, using reconstruction as an opportunity to build safer and stronger communities that can better withstand flooding in the future.