Friday, August 12, 2011
Pushing Light Beyond Light Speed
by Kate McAlpine
Speedy. Without special materials, the peak of a pulse stays at the same location in the pulse, but fast light materials can shift the peak forward, making it appear to travel faster than light speed.
Credit: Adapted from D. J. Gauthier and R. W. Boyd, Photonics Spectra, 41 (January 2007)
Warp speed is still out of reach for spaceships, but two new experiments have pushed a pulse of light beyond the speed limit of 300,000 kilometers per second set down in Einstein's theory of special relativity. Although physicists had pulled off similar feats before, two teams now report ways to beat down the loss of light so that much more of it appears to break the universal speed limit. Don't worry, the tricky experiments don't really violate relativity. But the techniques might, in principle, slightly speed up optical communications.
Here's how you make a pulse of light appear to travel faster than light speed. The pulse can be thought of as a kind of rogue wave of electromagnetic radiation zipping through space so that if you made a graph of the pulse's intensity, it would start at zero, smoothly climb to a single peak, and then decline back toward zero. But such a peak can also be thought of as a collection of waves with a range of wavelengths, all continuously oscillating up and down and piled on top of one another. At the center of the pulse, the various waves all line up and reinforce each other (see figure). In contrast, near the leading and trailing edges of the pulse, the different waves get out of sync and cancel one another out.
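To see this superposition picture in action, here is a minimal numerical sketch (my own illustration, not from the article, using arbitrary units and a Gaussian spectrum chosen purely for convenience): summing many waves with nearby frequencies produces exactly this kind of single, smooth peak.

```python
import numpy as np

# A minimal sketch (not from the article): build a pulse by summing many
# cosine waves with nearby frequencies. At t = 0 every component has phase
# zero, so they reinforce; away from the center their phases drift apart
# and they cancel, leaving a single smooth peak.
t = np.linspace(-50, 50, 2001)                   # time axis, arbitrary units
freqs = np.linspace(0.9, 1.1, 101)               # a narrow band of frequencies
weights = np.exp(-((freqs - 1.0) / 0.05) ** 2)   # Gaussian spectrum

pulse = sum(w * np.cos(2 * np.pi * f * t) for f, w in zip(freqs, weights))
pulse /= pulse.max()                             # normalize so the peak is 1

print(f"peak at t = {t[np.argmax(pulse)]:.2f}")  # 0.00, where all waves align
print(f"amplitude near the edge (t = 40): {abs(pulse[np.argmin(np.abs(t - 40))]):.3f}")  # ~0, the waves cancel
```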
Now suppose you run the light pulse through a special material that slows down some wavelengths of light more than others. That can change the way the waves line up and, counterintuitively, shift the spot at which the various waves reinforce each other forward, making the peak appear to jump forward faster than light. The peak can even appear to emerge from the back of the material before it enters the front. None of this violates relativity, however, as that would require the individual waves to run faster than light speed. More generally, physicists now interpret relativity to mean that information cannot be transmitted faster than light. And it's the unvarying speed of the first overlapping light waves, not the exact position of the pulse's peak, that determines the ultimate rate at which information can flow.
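And here is the second half of the trick, again as a toy sketch rather than either team's actual model: giving each frequency component a slightly different phase delay, the lowest-order effect of an anomalously dispersive medium, moves the point where the components line up, so the reconstructed peak appears earlier in time even though no individual wave travels any faster.

```python
import numpy as np

# Toy continuation of the sketch above (not either paper's model): apply a
# phase that varies linearly with frequency across the pulse's band and
# watch the peak advance, with no component moving faster.
t = np.linspace(-50, 50, 2001)
dt = t[1] - t[0]
f0, sigma = 1.0, 0.05
pulse = np.exp(-(np.pi * sigma * t) ** 2) * np.cos(2 * np.pi * f0 * t)

spectrum = np.fft.fft(pulse)
f = np.fft.fftfreq(len(t), d=dt)

advance = 5.0  # envelope advance to impose, arbitrary time units
# Phase linear in (|f| - f0); the sign(f) factor keeps the output real.
H = np.exp(2j * np.pi * np.sign(f) * (np.abs(f) - f0) * advance)
shifted = np.fft.ifft(spectrum * H).real

print(f"original peak at t = {t[np.argmax(np.abs(pulse))]:.2f}")    # 0.00
print(f"shifted peak at t  = {t[np.argmax(np.abs(shifted))]:.2f}")  # -5.00, earlier
```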
Vitaliy Lomakin of the University of California, San Diego, and colleagues at the Public University of Navarre in Pamplona, Spain, put this idea into practice by sending microwaves into a 35-micrometer-thick, holey sheet of copper sandwiched between two 0.79-millimeter-thick Teflon discs. As a consequence of this design, the peak of a microwave pulse can emerge from the other side of the device even before it enters the metal sandwich.
But the metal doesn't naturally let much light through. That's where the Teflon comes in. The two layers maintain the brightness and direction of the waves while the metal's pattern of holes builds up the signal in these strong waves. Whereas previous experiments might have seen less than 1% of the light pulse break the cosmic speed limit, the interaction between the metal and Teflon allowed the team to send 10% through about 100 picoseconds early, an advance soon to be described in Physical Review B. "This is achieved with a remarkably thin structure that can be fabricated easily in a wide spectrum from microwaves to visible light," Lomakin says.
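A quick back-of-the-envelope comparison (my own arithmetic, using only the dimensions and timing quoted above) shows just how dramatic a 100-picosecond advance is for such a thin device:

```python
# Back-of-envelope check (not from the paper): how far does light travel in
# vacuum during the reported 100 ps advance, compared to the device itself?
c = 299_792_458                   # speed of light in vacuum, m/s
advance = 100e-12                 # reported advance, s
thickness = 35e-6 + 2 * 0.79e-3   # copper sheet plus two Teflon discs, m

print(f"light path in 100 ps: {c * advance * 1e3:.1f} mm")  # ~30 mm
print(f"device thickness:     {thickness * 1e3:.2f} mm")    # ~1.6 mm
# The advance dwarfs the vacuum transit time through the structure, which
# is why the peak can appear on the far side before it enters the front.
```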
Li Zhan and colleagues at Shanghai Jiao Tong University in China say they could make a pulse arrive even earlier with optical fibers, which are already used for high-speed data communications. This team sent an infrared light signal clockwise through a loop of optical fiber and measured it at two sensors, one near the point where the light entered the fiber and the other 10 meters on. Ordinarily, the signal passing through the silica fiber would show up in the first sensor and then reach the second sensor 48.6 nanoseconds later. However, Zhan and his colleagues managed to speed the signal up so much that it arrived at the second sensor 221.2 nanoseconds before it reached the first one.
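Two sanity checks on those numbers (my own arithmetic, not from the paper): the ordinary 48.6-nanosecond transit time implies a group index typical of silica fiber, and the observed arrival corresponds to a total advance of 269.8 nanoseconds relative to the normal arrival time:

```python
# Sanity checks on the reported timings (my own arithmetic, not the paper's).
c = 299_792_458      # speed of light in vacuum, m/s
L = 10.0             # fiber between the two sensors, m
t_normal = 48.6e-9   # ordinary delay over those 10 m, s

n_group = c * t_normal / L
print(f"implied group index: {n_group:.3f}")   # ~1.457, typical of silica

# The sped-up signal hit the second sensor 221.2 ns BEFORE the first, so
# relative to its normal arrival time the peak moved up by:
total_advance_ns = 48.6 + 221.2
print(f"total advance vs normal arrival: {total_advance_ns:.1f} ns")  # 269.8
```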
In this experiment, the optical fiber itself played a role similar to that of the holey plate. To speed up the light signal, a second light wave ran counterclockwise through the optical fiber. The presence of that additional light changed the speed at which light waves of different wavelength zip along to modify the alignment of the waves. Typically, this second wave absorbs so much of the signal light that pushing the peak ahead even 1 nanosecond reduces the peak's intensity by about 20%. By contrast, the researchers managed to push the light forward by 211.3 nanoseconds before losing that much light, they report in a paper in press at Physical Review Letters.
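To appreciate that loss figure, consider what the typical trade-off would do over an advance this large (my own extrapolation from the numbers quoted, assuming the roughly 20-percent-per-nanosecond loss simply compounds):

```python
# If each nanosecond of advance cost ~20% of the peak, as is typical, the
# surviving fraction after a 211.3 ns advance would compound to nothing.
naive_survival = 0.8 ** 211.3
print(f"naive surviving fraction: {naive_survival:.1e}")   # ~3e-21

# The team instead reports ~20% total loss over the full 211.3 ns advance,
# roughly a 200-fold gain in advance per unit of loss.
```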
Although information can't really travel faster than light speed, Zhan argues that communications could make small gains in the speed at which signals are detected. The receivers in optical communication systems also react to the peak of a pulse, not its leading edge. Pushing the peak closer to that edge might save only a few hundred nanoseconds, but Zhan says it could one day make the difference in the whirlwind world of high-speed stock exchange.
"This may be true, but they have not done the experiment," says Daniel Gauthier, a specialist in fast light at Duke University in Durham, North Carolina. Based on his research and that of others, he expects the pulse's information-carrying shape will be mangled in the process. Günter Nimtz, an expert in faster-than-light-speed phenomena at the University of Cologne in Germany, agrees that changes to the pulse shape would be problematic, but he suggests that with some knowledge about the wavelengths in the pulse, and the material it moves through, the receiving end could recover the information.
'Serial Killer' Immune Cells Put Cancer in Remission
by Sarah C.P. Williams
Fighter. T cells, like the one shown here, help the immune system kill invaders to the body, such as bacteria.
Credit: © NIBSC / Photo Researchers, Inc.
The patient was dying of leukemia. One hundred seventy out of every 200 cells in his bone marrow had a cancer-causing mutation, and his lymph nodes were swelling, a sign the cancer was getting worse. He'd already been on multiple courses of chemotherapy, but his disease showed few signs of improvement. Then, in July 2010, he enrolled in a clinical trial for an experimental treatment, designed to turn his own immune cells against his cancer. Months later, all signs of leukemia had vanished, his physicians report today.
"The surprise was how well this worked in clearing so much tumor so rapidly," says immunologist Bruce Levine of the University of Pennsylvania, one of the scientists who created the new treatment.
The cancer therapy, described today in two papers in Science Translational Medicine and The New England Journal of Medicine, is based on the idea that the human body is already primed to fight abnormal, dangerous cells. T cells, frontline defenders of the immune system, recognize cells that don't belong in the body—such as bacteria—and set off a cascade of events to kill them. Scientists studying cancer have long grappled with how to make these T cells kill tumor cells. But they've never before found the perfect combination of factors to turn T cells into cancer killers.
Levine and his colleagues designed a new gene that can be inserted into T cells to trick them into attacking cancerous B cells, the cause of chronic lymphocytic leukemia (CLL). The new gene encodes a receptor that, on one end, can bind to a molecule that's unique to cancerous B cells. The other end of the receptor sets off a chain reaction when such a B cell is bound, eventually leading the T cell to destroy the cancerous cell. "Essentially, we're converting T cells that would normally recognize other types of cells to be tumor specific," Levine says.
In the initial clinical trial, the researchers tested their method in three patients with CLL. They took a sample of each patient's T cells and added the new gene to the cells, using the same procedure and new gene in each case, while keeping the T cells from each patient separate. Once the T cells contained the new, tumor-specific receptors, the scientists infused the T cells back into the blood of each patient.
All three patients are now in remission. On average, the modified T cells killed over a kilogram of tumor cells in each patient. For every modified T cell infused into the patients' blood, at least 1000 tumor cells were killed, leading the researchers to dub the T cells "serial killers." Moreover, after 12 months, blood tests revealed that the patients still had copies of the modified T cells circulating in their bloodstream, still able to kill cancer cells. And although the data span only a year, Levine says the effects are likely to last even longer.
"The power of these T cells," Levine says, "is that you don't have to keep giving them like you do with chemotherapy drugs. They are a dynamic, living, dividing drug, and you can administer them once and they survive and multiply, continuing to protect against cancer."
The new therapy isn't just a potential boon for CLL patients, says oncologist Walter Urba of Providence Cancer Center in Portland, Oregon. The success of the clinical trial could translate to other cancer types. "You can now try to switch this receptor to recognize a different target," says Urba, who specializes in cancer immunotherapy. "Let's make a breast cancer-specific receptor, and a prostate cancer-specific receptor, and a colon cancer-specific receptor. The potential here is huge to apply this to different tumor types."
Still, Urba cautions that the current results are based on a small sample size. More work is needed to verify that the treatment is broadly successful in all CLL patients and that stray cancer cells don't eventually mutate so that they avoid displaying the molecule targeted by the T cells.
"This is really exciting," he says. "The results are very promising. But it's also important to remember that this is just a couple of patients and they've only been followed for a short time. We need to see more studies now."
Study finds multiple sclerosis genes
THE UNIVERSITY OF MELBOURNE
Previous research has shown an increased risk of multiple sclerosis from Vitamin D deficiency. Image: io_nia/iStockphoto

In one of the largest human genetic studies ever undertaken, scientists have identified the major common genetic variants that contribute to the cause of the devastating neurological disease multiple sclerosis (MS). The results of the study are published today in the journal Nature.

They represent years of work by the International Multiple Sclerosis Genetics Consortium (IMSGC), involving more than 250 researchers in 15 countries, including the University of Melbourne and the Florey Neuroscience Institutes. Australian scientists played a significant role, and more than 1000 Australians with MS contributed DNA samples.

The study confirmed the presence of up to 57 MS genes, in a remarkable pattern showing that the reason some people get MS and others don’t is largely due to subtle, inherited differences in immune function. It points to a pivotal role for T cells – the ‘orchestra leaders’ of the immune system – and makes it clear that MS is primarily an immunologic disease.

Professor Trevor Kilpatrick, Director of the Melbourne Neuroscience Institute at the University of Melbourne, together with his research team and colleagues at the Florey Neuroscience Institutes, provided a strong cohort of patients to the internationally led research.

“Our assembled cohort of thousands of patients is a huge resource for international research efforts such as this. Our Victorian team and colleagues also have the expertise to assist in investigating the significance of this discovery.

“The next step is rigorous assessment of hundreds of patients to see how these newly identified genes function and contribute to the development of MS. This study has already commenced,” he said.

The Australian and New Zealand contribution was led by Professor Graeme Stewart, a clinical immunologist at the Westmead Millennium Institute, University of Sydney. It involved a consortium of 18 researchers from five states and New Zealand, in a group called ANZgene.

“Discovering so many new leads is an enormous step towards understanding the cause of MS,” Professor Stewart said. “Most importantly, for people with MS, these genes also strengthen the case for immunologic treatments currently in clinical trials and point to new therapeutic approaches.”

Previous Australian research has suggested a link between Vitamin D deficiency and an increased risk of multiple sclerosis, and the ANZgene consortium identified a vitamin D gene on chromosome 12. The international study has now identified a second vitamin D gene, providing insight into the link between genetic and environmental risk factors.

Multiple Sclerosis Research Australia (MSRA), together with the Australian government, has funded MS genetic research over the past ten years. MSRA’s Executive Director, Jeremy Wright, welcomed the breakthrough announcement.

“This is a terrific milestone which brings welcome new hope to people with MS and great credit to the researchers. The ANZgene group played a significant role in this international effort and has put Australia in the front line of potential new findings in both the diagnosis and treatment of MS,” Mr Wright said.
Antibody linked to infertility
RENEE SIZER, SCIENCENETWORK WA
A number of women with infertility of unknown cause have antibodies to ZP3. Image: nazdravie/iStockphoto

A UWA study has identified an antibody as a potential factor in the autoimmune disease associated with female infertility. The study, originally aimed at producing a contraceptive for mice, uncovered new insight into the causes of ovarian disease.

Researchers at UWA’s School of Biomedical, Biomolecular and Chemical Sciences found that an antibody to the glycoprotein zona pellucida 3 (ZP3) greatly depleted follicles within the ovary, causing complete infertility in mice. ZP3 is a vital component of ovarian follicles, supporting their development and enabling fertilization to occur.

Mice were given a genetically modified virus expressing ZP3, which triggered an immune response to the protein and the subsequent production of antibodies. The presence of the antibodies resulted in reduced fertility within 14 days and complete infertility within 21 days.

Research head Dr Megan Lloyd says these results show it is possible that antibodies to ZP3 contribute to ovarian damage in humans.

“In humans there are a number of women who have infertility of unknown cause, and a proportion of these women do have antibodies to ZP3,” Dr Lloyd says. “The importance of these antibodies is really difficult to determine, because usually if you have infertility you’re probably quite a way through the disease process already.”

Dr Lloyd says further research into the early stages of the autoimmune disease is hard because diagnosis often occurs after the disease has done its damage.

“What I would hope is that we can get more information about the genesis of autoimmune disease and what actually goes wrong within the ovary in those early times,” Dr Lloyd says. “We are hoping to look at the type of antibody that’s produced and what happens to that antibody, and whether it induces other parts of the immune system to act within the ovary.”

Dr Lloyd says she is interested to know why ovarian follicles are lost in the early stages of autoimmune disease, and whether the loss reflects an acceleration of normal bodily processes or an abnormal process altogether.

The research paper, ‘Immunoglobulin to zona pellucida 3 mediates ovarian damage and infertility after contraceptive vaccination in mice’, won the medical category of UWA’s best published article in groundbreaking research for 2011. Dr Lloyd has worked in this area since beginning her PhD in 1998 and has partnered with research teams from UWA and the University of Adelaide.
Logging out: carbon credits in
THE AUSTRALIAN NATIONAL UNIVERSITY
By reducing the harvesting of native forests, Australia can generate a substantial quantity of carbon credits. Image: szefei/iStockphoto

If Australia stopped logging native forests it would meet almost half of its five per cent carbon emission reduction target for 2020, according to an expert from ANU.

According to Andrew Macintosh, Associate Director of the ANU Centre for Climate Law and Policy, stopping native forest harvesting would generate enough carbon credits during the period 2013-2020 to meet 45 per cent of Australia’s abatement task.

In a new report, Potential carbon credits from reducing native forest harvesting in Australia, Mr Macintosh compared four possible approaches: harvesting at 2002-2009 average levels, keeping harvesting at 2010 levels (30 per cent below the 2002-2009 average), reducing harvesting by 50 per cent, and a complete end to logging.

“The results of the study suggest that by reducing the harvesting of native forests, Australia could generate a substantial quantity of carbon credits,” said Mr Macintosh. “The most significant of these is putting a halt to logging. Stopping native harvesting altogether would yield 38 megatonnes of CO2 credits each year – providing almost half of Australia’s abatement task under a five per cent reduction target to 2020.

“Reducing logging by 50 per cent would generate enough carbon credits to meet 22 per cent of the task. Even if harvesting rates were kept at 2010 levels, Australia could generate enough carbon credits to meet 14 per cent of its 2013-2020 abatement task.”

Mr Macintosh said there was significant public interest in identifying cost-effective ways of reducing emissions that may not be captured by the Government’s proposed carbon pricing scheme.

“The carbon pricing scheme and accompanying Carbon Farming Initiative are likely to be the main drivers of change in the Australian economy. However, there is space for complementary policies that capture cheap abatement opportunities that might not be realised through carbon markets. The native forest sector is one area where these additional opportunities exist,” said Mr Macintosh.

“And if the Opposition wins the next election, and the carbon pricing scheme is abolished, it will be essential that cheap abatement opportunities in the forest sector are realised.

“Despite this, it should be emphasised that the results of this study do not imply that reducing native forest harvesting is necessarily the cheapest way to reduce emissions. Further research is now required to evaluate this issue, and to analyse the budgetary implications of abatement in this area.”

Copies of the report are available at http://law.anu.edu.au/cclp/Index.asp
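As a rough consistency check on those figures (my own arithmetic, not from the report), the scenarios scale in proportion to an implied abatement task of roughly 84 megatonnes of CO2 a year:

```python
# Consistency check (my own arithmetic, not from the report): if stopping
# logging yields 38 Mt CO2 of credits a year and that covers 45% of the
# 2013-2020 abatement task, the implied task is about 84 Mt a year.
credits_full_stop = 38.0   # Mt CO2 per year, complete end to logging
share_full_stop = 0.45     # fraction of the abatement task covered

task_per_year = credits_full_stop / share_full_stop
print(f"implied abatement task: ~{task_per_year:.0f} Mt CO2 per year")  # ~84

# The other scenarios scale roughly in proportion:
for label, share in [("50% harvest cut", 0.22), ("hold at 2010 levels", 0.14)]:
    print(f"{label}: ~{share * task_per_year:.0f} Mt CO2 per year of credits")
```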
Dark beer has more iron than pale beer
Posted by Biomechanism
A team of researchers from the University of Valladolid (Spain) has analysed 40 brands of beer, discovering that dark beer has more free iron than pale and non-alcoholic beers. Iron is essential to the human diet, but also helps oxidise the organic compounds that give these beverages stability and flavour.
According to the analysis carried out by the University of Valladolid (UVa) on 40 beers from five continents, dark beers have an average free iron content of 121 ppb (parts per billion), compared with 92 ppb in pale beers and 63 ppb in non-alcoholic beers.
“Although these quantities are very small, the differences are apparent and could be due to the production processes or raw materials used in manufacturing,” stated Carlos Blanco, professor of Food Technology at UVa and co-author of the study.
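To put "very small" in perspective, here is a rough conversion (my own, assuming a 330-millilitre serving and a ballpark adult iron requirement of about 8 milligrams a day, both of which are my assumptions rather than figures from the study):

```python
# Scale check (my own, not from the study): how much iron is actually in a
# glass of the darkest beer? 1 ppb in a water-like liquid is ~1 microgram/L.
iron_ppb = 121     # average free iron in dark beer, from the study
serving_l = 0.33   # a typical bottle, litres (my assumption)
rda_mg = 8.0       # rough adult daily iron requirement, mg (my assumption)

iron_mg = iron_ppb * 1e-3 * serving_l     # ppb -> mg/L, times volume
print(f"iron per bottle: {iron_mg:.3f} mg")               # ~0.04 mg
print(f"fraction of daily need: {iron_mg / rda_mg:.2%}")  # ~0.5%
```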
The study, published in the Journal of the Science of Food and Agriculture, indicates that higher iron content in dark beer could be explained by the malt and hop extracts used to produce it.
However, pale beer production includes a filtering stage that uses diatomaceous earth. This sedimentary rock, formed from the remains of micro-algae, is a porous material used to clarify and lighten the beer; it also traps iron, lowering its concentration.
Non-alcoholic beer undergoes vacuum evaporation to remove the alcohol. This operation also removes iron ions, which are carried off along with the volatile molecules.
The study examined 17 Spanish beer brands and 23 from other countries, with 28 pale, 6 dark and 6 non-alcoholic beers. The beers with the highest iron content were a dark Spanish beer (165 ppb) and a dark Mexican beer (130 ppb). Those that had the lowest levels of iron were from The Netherlands and Ireland (41 ppb and 47 ppb, respectively).
Measuring the levels of iron and other metals in beer is not only important because they are essential to the human diet, but also because of their relevance in the brewing process. Levels of metals in beer can determine its organoleptic characteristics, stability and quality.
The researchers have validated the technique they developed to analyse iron (differential pulse adsorptive stripping voltammetry), which they describe as "an ultra-sensitive, selective, rapid, reliable and cost-effective method". The team has also recently applied an 'electronic tongue' for the first time to quantify the degree of bitterness and alcohol in beer.
___________
References:
Sancho, Daniel; Blanco, Carlos A.; Caballero, Isabel; Pascual, Ana. "Free iron in pale, dark and alcohol-free commercial lager beers". Journal of the Science of Food and Agriculture 91(6):1142-7, 2011. doi:10.1002/jsfa.4298.