
Sunday, May 15, 2011

Kids’ skin infections on the rise

University of Otago
More kids are admitted to hospitals every year with serious skin infections.
Image: LeventKonuk/iStockphoto
Serious skin infection rates in New Zealand children have increased markedly over the last two decades according to new research from the University of Otago, Wellington.

More than 100 children a week are now being admitted to New Zealand hospitals for treatment of skin infections with most needing intravenous antibiotics and one-third requiring surgery.

The study by Associate Professor Michael Baker, Dr Cathryn O’Sullivan and colleagues has been published in the international journal Epidemiology and Infection. For the first time it comprehensively details the high rate of serious skin infections amongst New Zealand children.

“It’s a distressing picture for our children,” says Associate Professor Baker. “We already had high rates of these infections compared to other similar countries. This research shows a large rise in children being admitted to hospital every year with serious skin infections like cellulitis, abscesses and impetigo.”

The fundamental finding of this new study is that serious skin infections, caused mainly by the bacteria Staphylococcus aureus and Streptococcus pyogenes, have almost doubled since 1990, from 298 cases per 100,000 to 547.

There is now an average of 4,450 overnight hospital admissions a year for children 0-14 years of age, plus a further 850 children admitted as day patients.
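A quick back-of-the-envelope check, sketched in Python purely for illustration, shows how these figures hang together: the rise from 298 to 547 cases per 100,000 is a factor of roughly 1.8 ("almost doubled"), and about 5,300 admissions a year works out at just over 100 a week, matching the figure quoted earlier. Only numbers already given in the article are used.

```python
# Quick arithmetic check using only the figures quoted in the article.
rate_1990, rate_now = 298, 547                 # cases per 100,000
print(f"increase factor: {rate_now / rate_1990:.2f}")   # ~1.84, i.e. "almost doubled"

overnight, day_cases = 4450, 850               # hospital admissions per year
print(f"admissions per week: {(overnight + day_cases) / 52:.0f}")   # ~102 a week
```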

“This burden of disease is important for several reasons. Firstly, these infections are very distressing for the children affected. The average length of hospital stay is three to four days. Two-thirds of these children need intravenous antibiotics, and one-third need surgical drainage under general anaesthetic.”

“Secondly, these infections should be highly preventable, particularly with early primary care treatment by GPs.”

“Thirdly, skin infections are filling up hospital wards and reducing their capacity to treat other serious surgical conditions. The direct cost to DHBs is around $15 million a year, so this is a major cost to the health system.”

The research also makes the point that serious skin infections are only the ‘tip of the iceberg’ as they do not take account of the thousands of other cases which do not result in hospitalisation. In addition to the 4,450 overnight admissions and 850 day cases admitted to hospital, an estimated 60,000 children visit GPs every year for treatment of skin infections.

Other key findings in this study are:
  • Boys have a significantly greater risk of infection than girls
  • Incidence is highest in pre-school children, with children under five years having more than double the rate of 5-9 year olds.
  • The rate of serious infections is almost three times higher for Maori children and over four times higher for Pacific children compared with other ethnicities.
  • Incidence of infection increases markedly with socio-economic deprivation. The rate for children from the most deprived areas is 4.3 times greater than those from the least deprived neighbourhoods.
  • Serious skin infection rates are more than 1.5 times higher in North Island DHBs than in South Island DHBs.
Although this study did not examine reasons for the increase in serious skin infections, some of the factors may be linked to barriers in accessing primary healthcare including cost. Factors relating to socio-economic deprivation may include access to adequate hot water for washing, diet and nutrition, and household crowding.

Associate Professor Baker says this latest study fits with previous epidemiological research by the University of Otago, Wellington which has shown a marked increase in rates of hospitalisation for infectious diseases in NZ, along with rising inequalities. However the exact causes of the increased rates are still not known. Much of these increases happened during the 1990s when income inequalities were also rising.

“There’s an urgent need for action to prevent serious skin infections in children. More research is essential so we can identify the causes of this health problem, introduce preventative measures and improve early treatment,” says Associate Professor Baker.

Climate change impacts: the next decade


Projecting the future, even only a decade ahead, can be achieved by extrapolating current trends and/or using insight concerning the fundamental processes of the involved systems. Either way, uncertainties will exist.

This is certainly true with climate change and the likely emerging impacts of that change. Nevertheless, foresight has enormous potential benefits for seizing opportunities and avoiding pitfalls. In the end, it is about the management of risk, albeit recognising that in some cases anticipation of outcomes will turn out to be useful, if not essential, while in others, in the light of experience, it may be seen as having been a waste of effort.

Both the changes that have occurred to the global and regional climate over recent decades and our theoretical understanding of the climate system make it likely that for the next decade the trend towards warmer global average temperatures will continue. There will be year-to-year variations in average temperatures and even more so in climatic parameters at the regional level. The natural climate system is variable and that variability will continue.

The challenge through this coming decade will be to cope with both the variability and the change with as little impact on human and natural systems as possible.

The past century has already seen an inexorable increase in the pressure of the high pressure ridge that lies over the southern half of Australia. There is growing observational evidence that this reflects a strengthening of the Hadley circulation, the movement of warm tropical air pole-wards in the upper atmosphere to the mid latitudes where it descends and is responsible for the aridity across these latitudes in both hemispheres.

These observations agree with many theoretical models of the climate, and are implicated in the long string of low rainfall years in the south west of Western Australia since the 1970s and in the Murray Darling and Victoria over the past decade or so. It is probable that this trend will continue with concomitant impacts on water supplies, power generation, potable water use, agricultural production and natural ecosystems.

In this regard, conflict over the use of a diminishing resource, as already apparent in the Murray Darling Basin, is likely to only grow.

Through this next decade we may also see some of the first signs of other climate impacts in Australia, including more extreme sea-level events associated with both higher sea levels and more intense storms. Exposures around the national coastline, including sandy beaches and the major cities, will occur with little predictability in terms of exact timing, but consistent with a steadily changing frequency of such events. Similarly, it is likely there will be a change in the frequency of occasions conducive to bush fires.

Lower water availability will demand engineering responses: pipelines, desalination, dams, and ground-water options. These options will likely expose sectoral differences and needs across the economy, and conflicting purposes. In addition, there will be a need for ongoing improvement in reducing human demand for water. It is likely this will evoke serious rethinking of long-held views about such things as regional development, the role and nature of agriculture in the economy, trading as a market force in managing diminishing resources, and ownership of natural resources including water – as well as natural ecosystems and their component species.

At all times knowledge will be accumulating in terms of our theoretical understanding of the climate system and the systems dependent on the state of the climate. Part of this will be observed impacts across the world with a likely ongoing loss of water from the major glaciers (currently contributing around a millimeter of sea-level rise per year), a non-zero chance of the entire loss of sea ice in the Arctic during the summer and the concomitant efforts by nations of that region to claim ownership of resources that become more readily accessible – already involving Canada, China, Norway, Russia and the United States.

It is likely that such pressure on international relationships and national security will not be confined to the polar regions. There will be ongoing evidence of change to ecosystems in migration, plant and animal behaviour, breeding times, etc. The impact on island nations of our region will grow in profile. Together, these observations will provide local stimuli for both adaptive and mitigative responses.

A consequence of both the observation of change and theoretical understanding may be that the magnitude of the risks associated with climate change will become more apparent, demanding stronger actions. For example, a warming target of 2°C may come to be viewed as an unacceptable risk, albeit perceived differently by different countries and sectors of the economy, heightening efforts to pursue a 350 ppm global concentration target.

While this may be driven by the falling water availability in some countries, it is possible that the currently poorly appreciated risk to natural ecosystems will become more apparent, both from an ecosystem services and a planetary stewardship point of view.

Land management

This perspective may highlight the need for addressing methodologies for not only limiting the emissions of carbon, but tackling the task of removing greenhouse gases already in the atmosphere through land management technologies.

It will raise serious consideration of the possible need for geo-engineering of the climate system itself. Companies already exist around the world to invest in such technologies and reap the benefits of a future price on carbon. Such technologies vary from relatively small-scale land management projects, to global-scale engineering efforts to modify the energy budget of the planet.

The essential development over the next decade will be the formulation of a shared global view on appropriate research protocols and national actions in geo-engineering that truly reflect the very serious potential danger of some of these technologies – and the potential dangers of narrowly focussed researchers or nations acting according to their own interests rather than those of the wider global community.

A drive towards a low-carbon future has ramifications for energy sourcing, production and infrastructure – and for investment in existing energy generation methods. But it will open up enormous opportunities for new businesses in low-carbon emission and energy-efficiency technologies.

This transition has begun, but the next decade will see it intensify. Australia may have missed some of these opportunities, but many are still available for relatively early movers. The changes will be seen in a revolutionary move towards electric-drive vehicles, decentralisation of power supplies, diversification of electricity generation options such as geothermal, solar and wind, and the development and deployment of energy storage systems, smart grids and energy management systems.

Huge improvements

Above all we will see huge improvements in the energy efficiency of homes, commercial buildings, industrial processes and transport. This will create issues that will need resolution – such as the impact of inevitably higher energy costs on disadvantaged members of the community, the position of disadvantaged companies and industrial sectors, and the role of more controversial energy sources such as nuclear.

The climate change issue is about more than just whether the climate is changing and how it may physically impact on our societies. It is also about why the issue exists and why it is that managing the issue has, so far, been difficult. The connection to the drivers of change – human behaviour and societal institutions – has yet to be seriously explored (despite some early signs in the literature). This is likely to change through this decade.

Climate change results from the way we source and use energy and this in turn reflects our affluence, what we perceive as success and progress, livability, acceptable lifestyles and our cultures. It reflects our population size, our attitudes to immigration, and the nature of the way we build cities and communities and manage the land. It highlights the diverse methods we have for dealing with threats, such as avoidance, denial, resignation – to name a few “coping” mechanisms – and the barriers that exist for the incorporation of expert advice from all manner of experts into policy formulation.

In particular it highlights the sectoralisation of our communities, through the disciplinary base of knowledge generation, the targeted efforts of companies and the departmentalisation of governments, each tending to work against holistic considerations in policy formation and decision making. It stems from the way social institutions have evolved and how these, including our governance, financial, economic, and cultural systems, have countenanced the underpinning causes of climate change.

Climate change may indeed be illustrative of the non-strategic nature of social evolution – its development in largely incremental steps with little control imposed from longer-term strategic aspirations and needs especially from a society-wide perspective.

Through this decade we may find that the climate change issue becomes much more of a reflection on where this relatively directionless evolution has led us, its strengths and its non-sustainable weaknesses.

This will challenge our notions of the rationality of our decisions, the largely unconscious drivers of our aspirations and needs – and how fundamental to dealing with all issues of sustainability is a new focus on where we are directed.

Waking up to the dangers of radiation

By Lyn McLean

Would you be willing to take a drug that had not been trialed before its release on the market? Would you take the drug if manufacturers assured you that it was ‘safe’ on the basis that it did not cause shocks, excessive heat or flashes of light in the eye? What if others who’d taken it developed problems ranging from headaches to life-threatening diseases?

Finally, would you give it to your children to take?

As ridiculous as this scenario may sound, the truth is that most people receive potentially harmful exposures like this every day – not necessarily from a drug – but from a risk of an entirely different sort.

The risk is electromagnetic pollution – the invisible emissions from all things electric and electronic. It is emitted by power lines, household wiring, electrical appliances and equipment, computers, wireless networks, mobile and cordless phones, mobile phone base stations, TV and radio transmitters and so on.

As engineers compete to develop an ever-diversifying range of radiating technologies to seduce a generation of addicts, and thereby ensure a lucrative return, there is an implicit assumption that these technologies are safe. They comply with international standards, we are told. But there the illusion of safety ends.

Sadly compliance with international standards is no more a guarantee of safety than being born rich is a guarantee of happiness.

For such standards protect only against a very few effects of radiation, and short-term effects at that (such as shocks, heating and flashes of light in the retina). They fail entirely to protect against the effects of long-term exposure, which, of course, is the sort of exposure you and I receive if we use a mobile or cordless phone every day, live near a high-voltage power line, use a wireless internet computer, or live under the umbrella of a mobile phone base station, TV, radio or satellite transmitter. In short, we’re all exposed.

Regulating to protect only against some of the effects of radiation is a bureaucratic nonsense. It’s like regulating a car’s airbags and not its brakes. It’s like regulating the colour of a pill and not its contents. It’s every bit as meaningless to public health protection.

Particularly when long-term exposure to electromagnetic radiation has been convincingly linked to problems such as leukemia, Alzheimer’s disease, brain tumours, infertility, genetic damage and cancerous effects, headaches, depression, sleep problems, reduced libido, irritability and stress.

Short-term protection is a short-sighted approach to public health protection. It may guarantee safety of the politicians as far as the next election. It may guarantee protection of a manufacturer as far as its next annual profit statement. But it does not guarantee the safety of the users of this technology, particularly those children who are powerless to make appropriate choices about technology and manage their exposure, who are more vulnerable to its emissions and who have a potential lifetime of exposure.

History is replete with examples of innovations that seemed like a good idea at the time but which eventually caused innumerable problems – to users, to manufacturers and to the public purse. Tobacco, asbestos and lead are but a few.

The risk is that electromagnetic pollution is a public health disaster unfolding before our eyes. By failing to implement appropriate standards; by ignoring signs of risk from science; by failing to ensure addictive technologies are safe before they’re released onto the market – our public health authorities have abrogated their responsibilities and chosen to play Russian roulette with our health.

It’s a gamble that not everyone assumes willingly.

Lyn McLean is author of The Force: living safely in a world of electromagnetic pollution published by Scribe Publications in February.

Why is no one talking about safe nuclear power?


By Julian Cribb
This article was first published in the Canberra Times.
In the wake of the Fukushima nuclear disaster, the most extraordinary thing is the lack of public discussion and the disturbing policy silence - here and worldwide - over safe nuclear energy.

Yes, it does exist.

There is a type of nuclear reactor which cannot melt down or blow up, and does not produce intractable waste, or supply the nuclear weapons cycle. It's called a thorium reactor or sometimes, a molten salt reactor - and it is a promising approach to providing clean, reliable electricity wherever it is needed.

It is safe from earthquake, tsunami, volcano, landslide, flood, act of war, act of terrorism, or operator error. None of the situations at Fukushima, Chernobyl or Three Mile Island could render a thorium reactor dangerous. Furthermore thorium reactors are cheap to run, far more efficient at producing electricity, easier and quicker to build and don't produce weapons grade material.

The first thorium reactor was built in 1954, a larger one ran at Oak Ridge in the United States from 1964 to 1969, and a commercial-scale plant operated in the 1980s - so we are not talking about radical new technology here. Molten salt reactors have been well understood by nuclear engineers for two generations.

They use thorium as their primary fuel source, an element four times more abundant in the Earth's crust than uranium, and in which Australia, in particular, is richly-endowed. Large quantities of thorium are currently being thrown away worldwide as a waste by-product of sand mining for rare earths, making it very cheap as a fuel source.

Unlike Fukushima, these reactors don't rely on large volumes of cooling water which may be cut off by natural disaster, error or sabotage. They have a passive (molten salt) cooling system which cools naturally if the reactor shuts down. There is no steam pressure, so the reactor cannot explode like Chernobyl did or vent radioactivity like Fukushima. The salts are not soluble and are easily contained, away from the public and environment. This design makes thorium reactors inherently safe, whereas the world's 442 uranium reactors are inherently risky (although the industry insists the risks are very low).

They produce a tenth the waste of conventional uranium reactors, and it is much less dirty, only having to be stored for three centuries or so, instead of tens of thousands of years.

Furthermore, they do not produce plutonium and it is much more difficult and dangerous to make weapons from their fuel than from uranium reactors.

An attractive feature is that thorium reactors are ''scalable'', meaning they can be made small enough to power an aeroplane or large enough to power a city, and mass produced for almost any situation.

Above all, they produce no more carbon emissions than are required to build them or extract their thorium fuel. They are, in other words, a major potential source of green electricity. According to researcher Benjamin Sovacool, there have been 99 accidents in the world's nuclear power plants from 1952 to 2009. Of these, 19 have taken human life or caused over $100 million in property damage.

Such statistics suggest that mishaps with uranium power plants are unavoidable, even though they are comparatively rare. (And, it must be added, far fewer people die from nuclear accidents than die from gas-fired, hydroelectric or coal-fired power generation.)

But why have most people never heard of thorium reactors? Why is there not active public discussion of their pros and cons compared with uranium, solar, coal, wind, gas and so on? Why is the public, and the media especially, apparently in ignorance of the existence of a cheap, reliable, clean and far less risky source of energy? Above all - apart from one current trial of a 200MW unit by Japan, Russia and the US - why is almost nobody seeking to commercialise this proven source of clean energy? The situation appears to hold a strong analogy with the stubborn refusal of the world's oil and motor vehicle industries for more than 70 years to consider any alternative to the petrol engine, until quite recently.

Industries which have invested vast sums in commercialising or supplying a particular technology are always wary of alternatives that could spell its demise and will invest heavily in the lobbying and public relations necessary to ensure the competitor remains off the public agenda.

It is one of the greatest of historical ironies that the world became hooked on the uranium cycle as a source of electrical power because those sorts of reactors were originally the best way to make weapons materials, back in the '50s and '60s. Electricity was merely a by-product. Today, the need is for clean power rather than weapons, and Fukushima is a plain warning that it is high time to migrate to a safer technology. Whether or not it ever adopts nuclear electricity, Australia will continue to be a prominent player as a source of fuel to the rest of the world - be it uranium or thorium.

So why this country is not doing leading-edge research and development for the rapid commercialisation of safe nuclear technology is beyond explanation. There is good money to be made both in extracting thorium and in exporting reactors (we bought our most recent one from Argentina).

As a science writer, I do not argue the case for thorium energy over any other source, but it must now be seriously considered as an option in our future energy mix. Geoscience Australia estimates Australia has 485,000 tonnes of thorium, nearly a quarter of the total estimated world reserves. Currently they are worthless but there is a strong argument to invest some of our current coal and iron ore prosperity in developing a new safe, clean energy source for our own and humanity's future.
Julian Cribb is a Canberra science writer.

How to measure learning


By Belinda Probert

The new Tertiary Education Quality and Standards Agency will not be fully operational until 2012.

Understandably, universities want to make sure it will focus adequate attention on the risky bits of the industry while not strangling it with red tape.
But perhaps more significant in the longer run will be the way it implements one of the most radical recommendations from the Bradley review, namely that universities report on direct measures of learning outcomes.

Earlier attempts to measure the quality of university teaching relied on indicators that had little research-based validity, leading to rankings that were uniformly rejected by the sector.

Six months ago the Bradley-inspired Department of Education, Employment and Workplace Relations' discussion paper on performance indicators proposed that cognitive learning outcomes ideally would include discipline-specific measures as well as measures of higher-order generic skills such as communication and problem-solving so valued by employers.

As recently suggested by Richard James, from the Centre for the Study of Higher Education at the University of Melbourne, the public has a right to know not just whether groups of graduates met a threshold standard but also whether their skills were rated good or excellent (HES, July 7).

The difficulty with his seemingly sensible suggestion is that there is almost no data on what students are actually learning.

Even the toughest accreditation criteria focus on inputs such as hours in class, words written, content covered, credit points earned and the status of teachers.
Tools such as the Course Experience Questionnaire and the increasingly popular Australian Survey of Student Experience provide data that can be used to good effect by academics with a serious interest in pedagogy. None of these measures learning, however.

Nearly every Australian university proclaims a set of graduate attributes that includes communication, problem-solving and teamwork.

But none defines the standards to be achieved or the method by which they will be assessed, despite pilgrimages to Alverno, the tiny private US college that knows how to do this.

And it would probably be unwise to hold our collective breath until the Organisation for Economic Co-operation and Development completes its Assessment of Higher Education Learning Outcomes feasibility study.

Does the absence of agreed measures and standards mean TEQSA should abandon this key Bradley recommendation and resort to input measures of the kind used to allocate the Learning and Teaching Performance Fund, together with some kind of graduate skills test?

If we agree with Bradley that learning is what we should be measuring, then what we have called Design for Learning at La Trobe University may be of help. Like most universities we have agreed on six graduate capabilities that all undergraduate programs should develop. But we also have agreed they will be defined in appropriate discipline or field-specific terms and be assessed against agreed standards of student achievement.

To develop these explicit standards of achievement, academic staff in each faculty are looking at real examples of student work, to define not just the standards but the indicators, measures and procedures for gathering and evaluating evidence of student learning. This is relatively straightforward for writing or quantitative reasoning, but it is not so easy when it comes to problem solving or teamwork, which may look rather different for physiotherapists and engineers.

We are not asking for spurious degrees of fine judgment (is this worth 64 or 65 marks), but for robust definitions that allow an evidence-based, university-wide judgment that the student has produced work that is good enough, better than good enough or not yet good enough.

If we expect students to demonstrate these capabilities at graduation, then we also have a responsibility to show where, in any particular course of study, they are introduced, developed, assessed and evaluated.

Most such capabilities require development across several years and are not skills that can be picked up in a single subject. Nor is there any point telling students they are not good enough if you cannot show them where and when they will have the opportunity to improve their capabilities.

For these reasons we need to be able to assess and provide feedback very early on in the course (in a cornerstone), somewhere towards the middle, as well as at the end, in a capstone experience.

It would be a lost opportunity and a backward step if TEQSA concludes that measuring student learning is too difficult and resorts to the suggested generic graduate skills assessment test -- which measures little of value about what students have learned -- or relies on students' assessments of their generic skills as captured in the CEQ. Students' assessments of their capabilities are no substitute for the skilled independent assessment, against explicit standards, of academic staff.

Would it not be better if TEQSA gave universities the opportunity to develop explicit, not minimum, standards for student learning, defined through their chosen institutional graduate capabilities?

Such a first step also would provide the foundation for setting measurable targets for improving this learning and would support the government's goal of encouraging diversity of institutional mission, by requiring not only explicitness of purpose but also of standards.

Having defined and mapped where La Trobe's capabilities are developed and assessed across the curriculum, we expect to be able to set targets for improvement, such as increasing the percentage of our graduates who meet the better-than-good-enough standard by improving the design of particular programs of study.

Or we may plan to raise the bar for what constitutes good enough by evaluating and revising parts of the curriculum.

In a diversified sector the standards chosen will vary from university to university but, once developed, the potential for benchmarking is obvious.

Nano-vaccine beats cattle virus

The University of Queensland
Bovine Viral Diarrhoea Virus is the industry's most devastating virus.
Image: Mikedabell/iStockphoto
A world-first cattle vaccine based on nanotechnology could provide protection from the Bovine Viral Diarrhoea Virus (BVDV), which costs the Australian cattle industry tens of millions of dollars in lost revenue each year.

The new BVDV vaccine, which consists of a protein from the virus loaded onto nanoparticles, has been shown to produce an immune response against the industry's most devastating virus.

A group of Brisbane scientists has shown that the BVDV nanoformulation can be successfully administered to animals without the need for any additional helping agent, making a new ‘nanovaccine' a real possibility for Australian cattle industries.

Scientists Dr Neena Mitter and Dr Tim Mahony from the Queensland Alliance for Agriculture and Food Innovation (QAAFI), a UQ institute recently established in partnership with the Queensland Department of Employment, Economic Development and Innovation (DEEDI), partnered with nanotechnology experts Professor Max Lu and Associate Professor Shizang Qiao from the UQ Australian Institute for Bioengineering and Nanotechnology (AIBN) to develop the vaccine.

Dr Neena Mitter said the multidisciplinary team applied the latest in nanotechnology to develop a safe and effective vaccine that has the potential to be administered more readily and cost effectively than traditional vaccines by using nanoparticles as the delivery vehicles.

“The vaccine is exciting as it could feasibly enable better protection against the virus, can be stored at room temperature and has a long shelf life,” said Dr Mitter.

According to Dr Mahony, BVDV is of considerable concern with regard to the long-term profitability of cattle industries across Australia. Cattle producers can experience productivity losses of between 25 and 50 per cent following discovery of BVDV in previously uninfected herds.

“In Queensland alone the beef cattle industry is worth approximately $3.5 billion per year and the high-value feedlot sector experiences losses of over $60 million annually due to BVDV-associated illness,” he said.

Further trials of the nanovaccine will now be conducted with plans to develop a commercial veterinary product in the near future.

The white, green and black of energy

By Vikki McLeod

With the inclusion of a National White Certificate Scheme in the Coalition’s CPRS amendments, we need to ask: what is it, and what is it good for?
Australia’s stationary energy sector is responsible for more than 50 per cent of Australia’s greenhouse emissions. Government policy to transition our energy sector from carbon high to carbon lite is the key to protecting both our economy and the environment.
Internationally there is consensus: the least-cost and most economically secure path to a sustainable energy future is aggressive energy efficiency (the white), a permanent shift to renewable energy (the green) and strategic use of fossil fuels (the black). But this is not currently the direction the energy market is taking us.
The energy market was deregulated in the 1990s, before greenhouse abatement was a priority. It is a commodity market, and generators and retailers profit by selling more energy (either green or black). The challenge is to “decouple” energy sales from energy services.
Energy sales and energy use are growing at about 2 per cent per annum. Sure, this reflects economic and population growth, but it is also growth in energy waste. Australia is at the bottom of the class when it comes to energy-efficient economies. We could learn from California, which has maintained high levels of economic growth while stabilising growth in energy use.
A compounding problem is that our growth in renewable energy generation is much less than 2 per cent. Consequently, the additional growth in demand is not even being met by green generation but by black generation. So despite almost ten years of a green target, renewable energy is losing market share.
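To see why slower renewable growth means a shrinking market share, here is a minimal illustrative sketch in Python. The 2 per cent growth in demand comes from the article; the starting renewable share and its 1 per cent growth rate are assumed purely for illustration and are not figures from the article.

```python
# Illustrative only: demand grows 2% a year (from the article) while renewable
# output grows more slowly (assumed 1% a year from an assumed 8% share).
# Renewables' market share falls even though their absolute output rises.
demand = 100.0       # arbitrary index of total electricity demand
renewables = 8.0     # assumed starting renewable output (hypothetical)
for year in range(1, 11):
    demand *= 1.02
    renewables *= 1.01
    print(f"year {year:2d}: renewable share = {renewables / demand:.1%}")
```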
Without aggressively pursuing energy efficiency, the renewable energy target will continue to chase a receding target and there is also a risk the oldest and dirtiest coal-fired power stations may remain in operation even with a CPRS. So we could end up with the same level of emissions and the same generation mix but just be paying more for it.
Aggressive energy efficiency is the rationale behind the White Certificate Scheme (WCS). A WCS is the “white” policy patch on the energy market, one that would allow energy retailers to make a profit from energy efficiency. Energy efficiency becomes another commodity to be sold and marketed to clients (householders, businesses, commercial properties and industry). The other difference with a WCS is that inefficient appliances must be retired: that second fridge belching away in the garage, which we kid ourselves is keeping the beer cold, will have to be unplugged and go.
WCS had its genesis in Australia with the New South Wales Greenhouse Abatement Scheme in 2003 and a strengthened WCS was proposed as part of the COAG endorsed National Framework for Energy Efficiency in 2004. While the recommendation was for a national scheme this was not supported by the Howard government. The South Australian, New South Wales and Victorian governments went ahead with state-based schemes as energy security measures. A national WCS would be an opportunity to harmonise the state schemes but also an opportunity to include Queensland which is struggling with large growth in energy demand.
The green, black and white markets are distinct and not fungible. The black ETS market has carbon intensity measured at the smoke stack; the white energy efficiency market is measured at the meter. The green renewable energy market is carbon neutral generation and includes commercially competitive renewable energy technologies such as wind, hydro and solar. Each market has its own cost curve and technologies.
A WCS is also an opportunity to help the ailing Renewable Energy Target, which is currently experiencing a market price collapse. The current REC price is $28: enough to deliver investment in solar water heaters but not enough for more expensive renewable energy such as wind, solar thermal or geothermal. Water heating was a contentious inclusion in the green market. It has long been argued that water heaters are an energy efficiency measure. A better outcome would be to take water heaters out of the green market and include them in the white market (and building codes).
With the time frame we have to decarbonise our energy sector we need to push on each of the three policy fronts - the green, black and white - at the same time.
Other governments have taken this approach. The European Union, the United States and the United Kingdom are just a few examples:
  • The European Union, through its “20, 20, 20 by 2020” targets: a 20 per cent reduction in greenhouse emissions, a 20 per cent increase in renewable energy and a 20 per cent improvement in energy efficiency by 2020.
  • The USA, with the California loading order and the Waxman-Markey Bill.
  • The UK, with its recent Energy White Paper.
Vikki McLeod is an engineer and independent energy and carbon consultant who was responsible for the original policy design of the National Energy Efficiency Target for the COAG-endorsed National Framework on Energy Efficiency. She is a former Senior Adviser to Senator Lyn Allison, who tabled a private member’s Bill for a white certificate scheme, the “National Market Driven Energy Efficiency Target Bill 2007”, which has been re-tabled by the Australian Greens as the “Safe Climate (Energy Efficiency Target) Bill 2009”.

Alternatives to urban water restrictions (Science Alert)


A Better Way to Teach?

Any physics professor who thinks that lecturing to first-year students is the best way to teach them about electromagnetic waves can stop reading this item. For everybody else, however, listen up: A new study shows that students learn much better through an active, iterative process that involves working through their misconceptions with fellow students and getting immediate feedback from the instructor.
The research, appearing online today in Science, was conducted by a team at the University of British Columbia (UBC), Vancouver, in Canada, led by physics Nobelist Carl Wieman. First at the University of Colorado, Boulder, and now at an eponymous science education initiative at UBC, Wieman has devoted the past decade to improving undergraduate science instruction, using methods that draw upon the latest research in cognitive science, neuroscience, and learning theory.
In this study, Wieman trained a postdoc, Louis Deslauriers, and a graduate student, Ellen Schelew, in an educational approach, called “deliberate practice,” that asks students to think like scientists and puzzle out problems during class. For 1 week, Deslauriers and Schelew took over one section of an introductory physics course for engineering majors, which met three times that week for 1 hour each. A tenured physics professor continued to teach another large section using the standard lecture format.
The results were dramatic: After the intervention, the students in the deliberate practice section did more than twice as well on a 12-question multiple-choice test of the material as did those in the control section. They were also more engaged—attendance rose by 20% in the experimental section, according to one measure of interest—and a post-study survey found that nearly all said they would have liked the entire 15-week course to have been taught in the more interactive manner.
“It’s almost certainly the case that lectures have been ineffective for centuries. But now we’ve figured out a better way to teach” that makes students active participants in the process, Wieman says. Cognitive scientists have found that “learning only happens when you have this intense engagement,” he adds. “It seems to be a property of the human brain.”
The “deliberate practice” method begins with the instructor giving students a multiple-choice question on a particular concept, which the students discuss in small groups before answering electronically. Their answers reveal their grasp of (or misconceptions about) the topic, which the instructor deals with in a short class discussion before repeating the process with the next concept.
While previous studies have shown that this student-centered method can be more effective than teacher-led instruction, Wieman says this study attempted to provide “a particularly clean comparison ... to measure exactly what can be learned inside the classroom.” He hopes the study persuades faculty members to stop delivering traditional lectures and “switch over” to a more interactive approach. More than 55 courses at Colorado across several departments now offer that approach, he says, and the same thing is happening gradually at UBC. Deslauriers says that the professor whose students fared worse on the test initially resisted the findings, “but this year, after 30 years of teaching, he’s learning how to transform his course.”
Jere Confrey, an education researcher at North Carolina State University in Raleigh, said the value of the study goes beyond the impressive exam results. “It provides evidence of the benefits of increasing student engagement in their own learning,” she says. “It’s not just gathering data that matters but also using it to generate relevant discussion of key questions and issues.”

Mice Reject Reprogrammed Cells

Scientists have high hopes that stem cells called induced pluripotent stem (iPS) cells can be turned into replacement tissues for patients with injury or disease. Because these cells are derived from a patient’s own cells, scientists had assumed that they wouldn’t be rejected—a common problem with organ transplants. But a new study suggests that the cells can trigger a potentially dangerous immune reaction after all.
To make iPS cells, scientists use a technique called cellular reprogramming. By activating a handful of genes, they turn the developmental clock backward in adult cells, converting them into an embryolike state. The reprogrammed cells become pluripotent, which means they have the ability to differentiate into all of the body’s cell types. Scientists are already using these iPS cells to study diseases and test drugs.
Induced pluripotent stem cells have a couple of advantages over embryonic stem (ES) cells. They don’t require the use of embryos, so they avoid some of the ethical and legal issues that have complicated research with embryonic stem cells. They also allow researchers to make genetically matched cell lines from patients. Many scientists have assumed that would provide a source of transplantable cells that wouldn’t require the immune system to be suppressed to avoid rejection, as is necessary with organ transplants.
That assumption might not be correct, however. Immunologist Yang Xu of the University of California, San Diego, and his colleagues tested what happened to several kinds of pluripotent cells when they were transplanted into genetically matched mice. Inbred mouse strains are the genetic equivalent of identical twins, and they can serve as organ donors for each other without any immune suppression. The researchers used two popular inbred strains, called B6 and 129, for their experiments.
When the researchers implanted ES cells from a B6 mouse embryo into a B6 mouse, the cells formed a typical growth, called a teratoma, which is a mixture of differentiating cell types. (Teratoma formation is a standard test of ES and iPS cells’ pluripotency.) ES cells from a 129 mouse, on the other hand, were unable to form teratomas in B6 mice because the animals’ immune systems attacked the cells, which they recognized as foreign.
The researchers then implanted iPS cells made from B6 mouse cells into B6 mice. To their surprise, many of the cells failed to form teratomas at all—similar to what the researchers saw when they transplanted ES cells from one mouse strain to another. The teratomas that did grow were soon attacked by the recipient’s immune system and were rejected, the team reports online today in Nature. The immune response “is the same as that triggered by organ transplant between individuals,” Xu says.
The immune reaction was less severe when the researchers used iPS cells made with a newer technique. The new method ensures that the added genes that trigger reprogramming turn off after they’ve done their job. But the reaction didn’t go away completely. The researchers showed that the iPS cell teratomas expressed high levels of certain genes that could trigger immune cells to attack. That is probably due to incomplete reprogramming that leaves some genes misexpressed, Xu says.
The results add to a series of findings that iPS cells differ in subtle but potentially important ways from ES cells. George Daley, a stem cell scientist at Children’s Hospital Boston, says the new study is “fascinating,” but he doesn’t think immune rejection will be an insurmountable problem for iPS cells. Once iPS cells have differentiated into the desired tissue type, they may not express the problematic genes, he notes. And dozens of labs are working on ways to improve the reprogramming process so that the stray gene expression is eliminated. In principle, he says, “we should be able to make iPS cells that are the same as ES cells.”
In the meantime, both Xu and Daley say the results underscore the need to continue work with ES cells so that researchers can fully understand—and try to overcome—the differences. “It’s a reminder that we can’t dismiss ES cells,” Daley says.

Artificial grammar learning reveals inborn language sense, study shows



Parents know the unparalleled joy and wonder of hearing a beloved child's first words turn quickly into whole sentences and then babbling paragraphs. But how human children acquire language, which is so complex and has so many variations, remains largely a mystery. Fifty years ago, linguist and philosopher Noam Chomsky proposed an answer: Humans are able to learn language so quickly because some knowledge of grammar is hardwired into our brains. In other words, we know some of the most fundamental things about human language unconsciously at birth, without ever being taught.
Now, in a groundbreaking study, cognitive scientists at The Johns Hopkins University have confirmed a striking prediction of the controversial hypothesis that human beings are born with knowledge of certain syntactical rules that make learning human languages easier.
"This research shows clearly that learners are not blank slates; rather, their inherent biases, or preferences, influence what they will learn. Understanding how language is acquired is really the holy grail in linguistics," said lead author Jennifer Culbertson, who worked as a doctoral student in Johns Hopkins' Krieger School of Arts and Sciences under the guidance of Geraldine Legendre, a professor in the Department of Cognitive Science, and Paul Smolensky, a Krieger-Eisenhower Professor in the same department. (Culbertson is now a postdoctoral fellow at the University of Rochester.)
The study not only provides evidence remarkably consistent with Chomsky's hypothesis but also introduces an interesting new approach to generating and testing other hypotheses aimed at answering some of the biggest questions concerning the language learning process.
In the study, a small, green, cartoonish "alien informant" named Glermi taught participants, all of whom were English-speaking adults, an artificial nanolanguage named Verblog via a video game interface. In one experiment, for instance, Glermi displayed an unusual-looking blue alien object called a "slergena" on the screen and instructed the participants to say "geej slergena," which in Verblog means "blue slergena." Then participants saw three of those objects on the screen and were instructed to say "slergena glawb," which means "slergenas three."
Although the participants may not have consciously known this, many of the world's languages use both of those word orders: that is, in many languages adjectives precede nouns, and in many, nouns are followed by numerals. However, very rarely are both of these rules used together in the same human language, as they are in Verblog.
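To make the two ordering rules concrete, here is a small illustrative Python sketch. The words and glosses come from the article; the helper function itself is hypothetical and not part of the study.

```python
# Illustrative only: compose Verblog-style phrases using the two rules the
# article describes (adjectives precede nouns; numerals follow nouns).
def verblog_phrase(noun, adjective=None, numeral=None):
    words = []
    if adjective:
        words.append(adjective)   # adjective goes before the noun
    words.append(noun)
    if numeral:
        words.append(numeral)     # numeral goes after the noun
    return " ".join(words)

print(verblog_phrase("slergena", adjective="geej"))  # "geej slergena" = "blue slergena"
print(verblog_phrase("slergena", numeral="glawb"))   # "slergena glawb" = "slergenas three"
```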
As a control, other groups were taught different made-up languages that matched Verblog in every way but used word order combinations that are commonly found in human languages.
Culbertson reasoned that if knowledge of certain properties of human grammars, such as where adjectives, nouns and numerals should occur, is hardwired into the human brain from birth, the participants tasked with learning alien Verblog would have a particularly difficult time, which is exactly what happened.
The adult learners who had had little to no exposure to languages with word orders different from those in English quite easily learned the artificial languages that had word orders commonly found in the world's languages but failed to learn Verblog. It was clear that the learners' brains "knew" in some sense that the Verblog word order was extremely unlikely, just as predicted by Chomsky a half-century ago.
The results are important for several reasons, according to Culbertson.
"Language is something that sets us apart from other species, and if we understand how children are able to quickly and efficiently learn language, despite its daunting complexity, then we will have gained fundamental knowledge about this unique faculty," she said. "What this study suggests is that the problem of acquisition is made simpler by the fact that learners already know some important things about human languages-in this case, that certain words orders are likely to occur and others are not."
This study was done with the support of a $3.2 million National Science Foundation grant called the Integrative Graduate Education and Research Traineeship grant, or IGERT, a unique initiative aimed at training doctoral students to tackle investigations from a multidisciplinary perspective.
According to Smolensky, the goal of the IGERT program in Johns Hopkins' Cognitive Science Department is to overcome barriers that have long separated the way that different disciplines have tackled language research.
"Using this grant, we are training a generation of interdisciplinary language researchers who can bring together the now widely separated and often divergent bodies of research on language conducted from the perspectives of engineering, psychology and various types of linguistics," said Smolensky, principal investigator for the department's IGERT program.
Culbertson used tools from experimental psychology, cognitive science, linguistics and mathematics in designing and carrying out her study.
"The graduate training I received through the IGERT program at Johns Hopkins allowed me to synthesize ideas and approaches from a broad range of fields in order to develop a novel approach to a really classic question in the language sciences," she said.
Provided by Johns Hopkins University
"Artificial grammar learning reveals inborn language sense, study shows." May 13th, 2011. http://medicalxpress.com/news/2011-05-artificial-grammar-reveals-inborn-language.html
Comment: If Verblog is contrived by English speakers then it will unwittingly incorporate elements of English syntax and grammar, not those of German, Hopi or Swahili. What they have discovered is the 'language-specific' grammar formed during the earliest years of life, not the innate form (assuming there is one), which would lie below that level and not be specific to any language.
Posted by
Robert Karl Stonjek

On Vaikasi Visakam, let us chant OmSivaSivaOm


by Keyem Dharmalingam on Sunday, 15 May 2011 at 12:05

The coming Monday, 16.5.2011, and Tuesday, 17.5.2011, fall on the pournami (full moon) tithi. Since the Vaikasi pournami runs from Monday evening until about 5 pm on Tuesday evening, Monday night, 16.5.11, should be taken as the pournami. Moreover, two pournamis fall in this month of Vaikasi: another pournami comes at the end of Vaikasi, but it does not fall on the Visakam star; it falls on the Kettai star. The first pournami is therefore Vaikasi Visakam. On the pournami, let us sit at any Amman shrine from 9 pm to midnight (going without food during the day if possible), or for at least an hour, and chant OmSivaSivaOm. Each OmSivaSivaOm chant will give us the benefit of chanting one crore times; and if we chant holding a five-faced rudraksham in each hand, a single OmSivaSivaOm chant gives the benefit of chanting 100 crore times.

Even so, why are our legitimate requests or wishes not fulfilled quickly?

In this birth we are experiencing the sins and karmas of at least our past seven births, and the merit as well. Because the account of sin and karma is the larger, we suffer. In the Kali Yuga, chanting the divine name is the best way to dissolve this karmic account. It is simple and easy. Besides, only we ourselves can dissolve our own karma; no one else can dissolve it for us!!!

Therefore our count of OmSivaSivaOm chants must pass one lakh. If we chant OmSivaSivaOm twice a day, one hour at a time, we will have chanted it about 400 times a day (whether we keep count or not). Once our count of the OmSivaSivaOm mantra passes five thousand, we will begin to notice small wonders. If two weeks of practice can give us this experience, why shouldn't we try? ...Thanks, website: OmSivaSivaOm

Earthquake in southern Spain: more than 10 dead!


Indonesia deports an illegal people smuggler; Australia makes the arrest!


US immigration law must be reformed, says Obama!


The Schengen visa is creating a crisis for European Union countries!


Congress routed; Thangabalu resigns!


DMK defeat: party worker commits suicide by self-immolation!


Jayalalithaa to be sworn in as Chief Minister at 12.15 tomorrow!


IMF chief arrested on sexual assault charge!
