Sunday, May 15, 2011

Learning to understand others' actions

Clare Press 1,2,*, Cecilia Heyes 3 and James M. Kilner 1

Author affiliations:
1. Wellcome Trust Centre for Neuroimaging, Institute of Neurology, University College London, 12 Queen Square, London WC1N 3BG, UK
2. School of Psychology and Clinical Language Sciences, University of Reading, Whiteknights, Reading RG6 6AL, UK
3. All Souls College and Department of Experimental Psychology, University of Oxford, High Street, Oxford OX1 4AL, UK
* Author for correspondence (c.m.press@reading.ac.uk).

Abstract

Despite nearly two decades of research on mirror neurons, there is still much debate about what they do. The most enduring hypothesis is that they enable ‘action understanding’. However, recent critical reviews have failed to find compelling evidence in favour of this view. Instead, these authors argue that mirror neurons are produced by associative learning and therefore that they cannot contribute to action understanding. The present opinion piece suggests that this argument is flawed. We argue that mirror neurons may both develop through associative learning and contribute to inferences about the actions of others.

1. Introduction

Mirror neurons, which have been discovered in the premotor area F5 [1] and inferior parietal lobule, area PF [2] of macaque monkeys, discharge not only when the monkey executes an action of a certain type (e.g. precision grip), but also when it observes the experimenter performing the same action. A number of neuroimaging studies have provided evidence that a similar system also exists in humans (e.g. [3]). A matter of much debate is whether activity in the so-called ‘mirror neuron system’ (MNS) reflects neural processes engaged in ‘action understanding’, that is, inferences about the goals and intentions driving an observed action. It has been suggested that mirror neurons are simply the result of learned sensorimotor associations, as proposed in the associative sequence learning (ASL) model [4,5], and that this ontogeny is inconsistent with a role in understanding the actions of others [6,7]. In contrast, we argue that mirror neurons may develop through associative learning and subsequently contribute to action understanding.

2. ASL model

The ASL model [4,5] proposes that the mirror properties of the MNS emerge through sensorimotor associative learning. Under this hypothesis, we are not born with an MNS. Rather, experience in which observation of an action is correlated with its execution establishes excitatory links between sensory and motor representations of the same action. We have abundant experience of matching relationships between observed and executed actions during our lives [8]. Following such experience, observation of an action is sufficient to activate its motor representation. Therefore, representations that were originally motor become ‘mirror’ (activated when observing and executing the same action, figure 1).
Figure 1.
Associative sequence learning. Before learning, sensory neurons (S1, S2 and Sn) which are responsive to different high-level visual properties of an observed action are weakly and unsystematically connected (dashed arrows) to some motor neurons (M1, M2 and Mn), which discharge during the execution of actions. The kind of learning that produces mirror neurons occurs when there is correlated (i.e. contiguous and contingent) activation of sensory and motor neurons that are each responsive to similar actions.
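To make the learning scheme in figure 1 concrete, here is a minimal sketch in Python (an illustration of ours, not part of the ASL papers) of how correlated sensory and motor activation could turn weak, unsystematic links into 'mirror' links, assuming a simple Hebbian update; the unit labels follow figure 1, and the learning rate, decay and trial counts are purely illustrative.

    import numpy as np

    # Minimal sketch of associative sequence learning (ASL): weights from sensory
    # units (S1..Sn) to motor units (M1..Mn) strengthen when the two are active
    # together (contiguous, contingent experience). All numbers are illustrative.
    n_actions = 3
    rng = np.random.default_rng(0)
    W = rng.uniform(0.0, 0.1, size=(n_actions, n_actions))  # weak, unsystematic links

    def train(W, seen, executed, lr=0.05, trials=200):
        """Correlated experience: observing `seen` while executing `executed`."""
        for _ in range(trials):
            s = np.zeros(n_actions); s[seen] = 1.0        # sensory activation
            m = np.zeros(n_actions); m[executed] = 1.0    # motor activation
            W = W + lr * np.outer(m, s)                   # Hebbian strengthening
            W = W * 0.99                                  # mild decay bounds the weights
        return W

    # 'Matching' experience: we usually see an action while performing the same one.
    for a in range(n_actions):
        W = train(W, seen=a, executed=a)

    # After learning, observation alone is enough to drive the matching motor unit.
    s = np.zeros(n_actions); s[1] = 1.0                   # observe action 1
    print(W @ s)                                          # motor unit 1 now responds most strongly

Reversing the seen/executed pairing in the training loop would, in this toy model, produce the 'counter-mirror' links discussed below.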
If the ASL model is correct, mirror neurons do not have an ‘adaptive function’, they did not evolve ‘for’ action understanding or to meet the demands of any other cognitive task [5]. However, as a by-product of associative learning, mirror neurons could still be recruited in the course of development to play some part in a variety of cognitive tasks. Therefore, according to the ASL model, they could be useful without being essential, and without their utility explaining their origins. Specifically, mirror neurons could play a part in action understanding even if this functional role was not favoured by natural selection in the course of phylogenetic evolution.
So why has the ASL hypothesis been interpreted as evidence against a functional role of mirror neurons in action understanding? Hickok [6] argued that some of the evidence that has been published in support of ASL is inconsistent with the hypothesis that the MNS is involved in action understanding. The studies in question require participants to observe actions while systematically executing non-matching actions, and subsequently record indices of MNS functioning. The rationale for these experiments assumes that, if the MNS develops through associative learning, then experiences that differ from those typically encountered during life should reconfigure the MNS and change the way it operates. Consistent with this prediction, it has been found that training in which participants are required to perform index finger actions when they see little finger actions, and vice versa, results in activation of primary motor cortical representations of the index finger when passively observing little finger actions, and activation of representations of the little finger when observing index finger actions [9,10]. Catmur et al. [11] demonstrated that such training effects are likely to be mediated by cortical circuits that overlap with areas of the MNS. They required one group of participants to lift their hand when they saw a hand lift, and to lift their foot when they saw a foot lift (matching group). Another group was required to lift their hand when they saw a foot lift, and to lift their foot when they saw a hand lift (non-matching group). Following such training, voxels in premotor and inferior parietal cortices that responded more when observing hand than foot actions in the matching group responded more to foot than hand actions in the non-matching group. This finding suggests that, following non-matching training, observation of hand actions activates motor representations of foot actions. Similar ‘counter-mirror’ training effects have also been observed in behavioural paradigms (e.g. [12,13], see also [14,15] for ‘logically related’ activations that may have been generated through naturally occurring non-matching experience).
Hickok [6] argued that these studies provide evidence that mirror neurons cannot underlie action understanding. Embracing the idea that counter-mirror training reconfigures the MNS—making it responsive to the sight of one action and the execution of a different action—he reasoned that, if the MNS contributes to action understanding, this reconfiguration should have an impact on action understanding. However, he considered that participants who showed counter-mirror activation (e.g. stronger activation of the index finger muscle during observation of little than of index finger movement) ‘presumably did not mistake the perception of index finger movement for little finger movement and vice versa’ ([6], p.1236). The key word here is ‘presumably’. Neither the focal study by Catmur et al. [9], nor any other study, has examined the effects of counter-mirror training on indices of action understanding.

3. Predictive coding and action understanding

The aim of the predictive coding (PC) account [16,17] was to answer the question ‘if mirror neurons enable the observer to infer the intention of an observed action, how might they do this’? In many accounts of the MNS, it is assumed that mirror neurons are driven by the sensory data and that when the mirror neurons discharge, the action is ‘understood’. However, within this scheme mirror neurons could only enable action understanding if there was a one-to-one mapping between the sensory stimulus and the intention of the action. This is not the case. If you see someone in the street raise their hand, they could be hailing a taxi or swatting a wasp. The context must establish which intention is more likely to drive an action. Consistent with the PC account, the empirical evidence does not support the view that mirror neurons are driven solely by sensory data from focal action stimuli. For example, Umilta et al. [18] found that neurons in F5, which fire both when the monkey executes and observes grasping actions, also fired when the monkey observed the experimenter's grasping action disappear behind a screen. That is, the premotor neurons represented a grasping action in its entirety, but where the grasping phase was not actually seen. Therefore, mirror neurons could not be driven entirely by the focal stimulus input. The PC account provides a framework that resolves these issues.
The essence of the PC account is that, when we observe someone else executing an action, we use our own motor system to generate a model of how we would perform that action to understand it [19,20]. PC enables inference of the intentions of an observed action by assuming that the actions are represented at several different levels [21] and that these levels are organized hierarchically such that the description of one level will act as a prior constraint on sub-ordinate levels. These levels include: (i) the intention level that defines the long-term desired outcome of an action, (ii) the goal level that describes intermediate outcomes that are necessary to achieve the long-term intention, (iii) the kinematic level that describes, for example, the shape of the hand and the movement of the arm in space and time. Therefore, to understand the intentions or goals of an observed action, the observer must be able to represent the observed movement at either the goal level or the intention level, having access only to a visual representation of the kinematic level.
PC proposes that contextual cues generate a prior expectation about the intention of the person we are observing. In the above example of the hand-raising action, these cues could be the presence of a taxi or wasp, or a facial expression. On the basis of these intentions, we can generate a prior expectation of the person's intermediate goals. Given their intermediate goals, we can predict the perceptual kinematics. Backward connections convey the prediction to the lower level where it is compared with the representation at this sub-ordinate level to produce a prediction error. This prediction error is then sent back to the higher level, via forward connections, to update the representation at this level (figure 2). By minimizing the prediction error at all the levels of action representation, the most likely cause of the action, at both the intention and the intermediate goal level, will be inferred. Thus, the PC process uses information, supplied by the MNS, about which goals are most likely, given a certain intention, and which kinematics are most likely, given a certain goal, to test hypotheses about the observed actors' intentions.
Figure 2.
Predictive coding. Each level of the hierarchy predicts representations in the level below, via backward connections. These predictions are compared with the representations at the sub-ordinate level to produce a prediction error. This prediction error is then sent back to the higher level, via forward connections, to update the representation. By minimizing the prediction error at all the levels of the MNS, the most likely cause of the action will be inferred. Dotted line, prediction error; thick line, prediction.
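As a concrete illustration of the scheme in figure 2, the following minimal Python sketch (ours, not drawn from the PC papers) settles a three-level action hierarchy by minimizing prediction error; the linear mappings standing in for 'which goals follow from an intention' and 'which kinematics follow from a goal', together with the dimensions and step size, are assumptions made purely for the example.

    import numpy as np

    # Minimal sketch of predictive coding over a three-level action hierarchy
    # (intention -> goal -> kinematics). The linear generative mappings G_ig and
    # G_gk stand in for learned knowledge of which goals follow from an intention
    # and which kinematics follow from a goal; their values are illustrative only.
    rng = np.random.default_rng(1)
    G_ig = 0.5 * rng.normal(size=(4, 2))      # intention features -> predicted goal features
    G_gk = 0.5 * rng.normal(size=(6, 4))      # goal features -> predicted kinematic features

    observed_kinematics = rng.normal(size=6)  # visual input at the lowest level
    intention = np.zeros(2)                   # prior expectation supplied by context
    goal = np.zeros(4)
    step = 0.05

    for _ in range(2000):
        # Backward connections: each level predicts the representation below it.
        predicted_goal = G_ig @ intention
        predicted_kin = G_gk @ goal
        # Prediction errors at each level.
        err_goal = goal - predicted_goal
        err_kin = observed_kinematics - predicted_kin
        # Forward connections: errors update the representations above them.
        goal += step * (G_gk.T @ err_kin - err_goal)
        intention += step * (G_ig.T @ err_goal)

    # After settling, `goal` and `intention` are the most likely causes of the
    # observed kinematics under this toy generative model.
    print(np.round(goal, 2), np.round(intention, 2))

Minimizing the two error terms jointly is, on the PC account, what the backward and forward connections of the MNS do when an observed movement is 'understood'.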
The assumptions of the PC model are consistent with those of ASL. If both models are correct, the MNS develops through associative learning and subsequently supports inferences about the goals and intentions driving others' actions. Therefore, it remains an open and important empirical question whether any intervention that systematically changes the MNS has correlated effects on action understanding.

4. Conclusion

PC and ASL accounts of the MNS address different questions and offer compatible answers. The PC account considers the requirements that are necessary to enable goal or intention inference during action observation. It assumes that the sensorimotor connection strengths have been learned, but does not propose a mechanism by which these are learned. ASL provides an associative mechanism for such learning. Although ASL does not provide a mechanistic account of how such learning could enable action understanding, it allows for the possibility that the MNS, once acquired, could support such functions. In other words, the MNS could enable inferences about the intentions of others, even if this function is not an evolutionary adaptation. Therefore, if both the PC and ASL hypotheses are correct, we learn, via the principles specified in associative learning theory, to predict others' intentions using our own motor systems.

Acknowledgements

C.P. was funded by an Interdisciplinary Postdoctoral Fellowship awarded by the MRC and ESRC. J.M.K. was funded by the Wellcome Trust. C.H. is a Senior Research Fellow of All Souls College, University of Oxford.

This is an open-access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

A Moon on Fire

Credit: Xianzhe Jia/ U. of Michigan and Krishan Khurana/UCLA

The jovian moon Io harbors a globe-girdling pool of molten rock beneath its volcano-riddled surface. That’s the conclusion of a reanalysis of decade-old data from the Galileo spacecraft that once orbited Jupiter, reported online today in Science. Theoreticians had long predicted that Jupiter’s massive gravity must raise tides in Io that knead its solid but still malleable rock to produce heat until at least part of the interior melts. And planetary geologists had seen signs in the moon’s surface lavas that indicate that its 100 known volcanic hot spots are fed by a deep magma “ocean.” But high-flying volcanic debris frustrated space physicists’ attempts to use Jupiter’s powerful magnetic field as a probe of Io’s interior. Now researchers report that they have finally sorted through the interference to reveal a magnetic signature that Io could only produce if it contains an electrically conductive layer of magma—or crystal-laden magma mush—50 kilometers or more thick (thin orange layer) beneath its rocky crust. The find is reminiscent of the solar system’s earliest days, when most large, rocky bodies sported a magma ocean until they cooled down.

The beginnings of the brain



Figure 1: mES cells cultured in serum (left) are exposed to diverse factors that generally inhibit neural development. However, forced overexpression of Zfp521 in serum-exposed mES cells strongly induces production of Sox1 (green) and N-cadherin (red), two proteins closely associated with neural differentiation (right). Blue stain indicates cell nuclei. Credit: Reproduced from Ref. 1, © 2011 Daisuke Kamiya et al.
All of the tissues and organs of the body arise from one of three embryonic precursors: the ectoderm, mesoderm and endoderm. The ectoderm contributes to several tissues, including the nervous system and the skin, but some studies have suggested that development into neurons requires nothing more than the absence of specific inhibitory signals. 
This phenomenon has led biologists to formulate what is called the ‘neural default model’. “The simplest interpretation of the neural default model is that the neural fate is a ‘left-over’ choice, passively determined by the elimination of other pathways of differentiation,” explains Yoshiki Sasai of the RIKEN Center for Developmental Biology in Kobe. This model fails to address the identities of the factors that actively drive neuronal development, but new findings from Sasai and colleagues have spotlighted a single protein that appears to set this process into motion.
His team had previously designed a culture system that promotes neural differentiation of mouse embryonic stem (mES) cells (Ref. 2), and they used this technique to identify genes that are specifically switched on in these cells. They identified one intriguing candidate, Zfp521, which activated several other genes involved in neural development, even when the mES cells were cultured in the presence of factors that would normally curb this process (Fig. 1).
When Sasai and colleagues examined expression in developing mouse embryos, they noted that the spatial and temporal distribution of Zfp521 activity closely mirrored known sites of neural differentiation. Likewise, early stage mouse embryos injected with mES cells in which Zfp521 expression was abrogated largely failed to incorporate these cells into the developing nervous system. By systematically identifying the genes whose expression is disrupted in the absence of Zfp521, the researchers were able to determine that this gene acts as a driver for the maturation of ectodermal cells into neuroectoderm, the developmental stage that immediately precedes formation of actual neural progenitors.
“The most important message of this study is that the neural fate is acquired by an active determination process,” says Sasai. Understanding how this developmental switch works could ultimately provide scientists with a powerful tool for efficiently transforming human stem cells into mature nervous tissue suitable for experimental use or even transplantation, although it remains to be determined whether human ES cells obey the exact same principles. “We have preliminary data showing a conserved essential role for Zfp521 in both species,” says Sasai, “but we need to analyze the similarities and differences in greater depth.”
More information: Kamiya, D., et al. Intrinsic transition of embryonic stem-cell differentiation into neural progenitors. Nature 470, 503–509 (2011).
Watanabe, K., et al. Directed differentiation of telencephalic precursors from embryonic stem cells. Nature Neuroscience 8, 288–296 (2005).
Provided by RIKEN
"The beginnings of the brain." May 13th, 2011. http://medicalxpress.com/news/2011-05-brain.html
Posted by
Robert Karl Stonjek

New test may help distinguish between vegetative and minimally conscious state



(PhysOrg.com) -- In a new study published in Science, researchers from the University of Liege in Belgium, led by Dr. Melanie Boly, share the discovery of a new test that could aid physicians in differentiating between vegetative and minimally conscious states in patients with brain damage. It is currently difficult to distinguish patients in a vegetative state, who lack cognitive function yet display wakefulness, from those in a minimally conscious state. The issue recently gained public attention in the Terri Schiavo case in Florida. Schiavo had been in a vegetative state for 15 years and on life support before a judge issued a court order to take her off life support. The question of brain function and the possibility of recovery was central to the court battle, and a test like this could aid physicians in making that determination.
Using an electroencephalogram to record brain activity, Boly and her team studied 43 subjects: 22 healthy individuals and 21 brain-damaged patients, ranging in age from 16 to 83. Of the brain-damaged patients, 13 were in a minimally conscious state and 8 were in a vegetative state.

The subjects were played a series of tones varying in pitch. Because a change in pitch is a surprising event, the temporal cortex of the brain sends the frontal cortex a message prompting it to consider a reaction. This occurred in all of the subjects, regardless of the level of brain damage. After the frontal cortex receives the message, it should send a message back to the temporal cortex. While this return message did occur in the healthy subjects and the minimally conscious patients, those in a vegetative state showed no such backwards communication.
Combined with the Coma Recovery Scale, an assessment currently administered to determine the level of consciousness, Boly hopes the new test will allow physicians to determine a patient's actual level of consciousness more accurately.
More information: "Preserved feedforward but impaired top-down processes in the vegetative state" Boly M, Garrido MI, Gosseries O, Bruno MA, Boveroux P Schnakers C, Massimini M, Litvak V, Laureys S, Friston K, Science 13 May 2011. DOI: 10.1126/science.1202043
ABSTRACT
Frontoparietal cortex is involved in the explicit processing (awareness) of stimuli. Frontoparietal activation has also been found in studies of subliminal stimulus processing. We hypothesized that an impairment of top-down processes, involved in recurrent neuronal message-passing and the generation of long-latency electrophysiological responses, might provide a more reliable correlate of consciousness in severely brain-damaged patients, than frontoparietal responses. We measured effective connectivity during a mismatch negativity paradigm and found that the only significant difference between patients in a vegetative state and controls was an impairment of backward connectivity from frontal to temporal cortices. This result emphasizes the importance of top-down projections in recurrent processing that involve high-order associative cortices for conscious perception.
© 2010 PhysOrg.com
"New test may help distinguish between vegetative and minimally conscious state." May 13th, 2011. http://medicalxpress.com/news/2011-05-distinguish-vegetative-minimally-conscious-state.html
Posted by
Robert Karl Stonjek

The protein that makes us remember pain


(PhysOrg.com) -- New research by scientists in Arizona in the US has demonstrated that an enzyme makes the body remember and remain sensitive to pain after an injury has healed.
Research in 2006 by Professor Todd C. Sacktor of the State University of New York Downstate Medical Center found that the protein kinase M zeta (PKMzeta) appears at the synapses (gaps between neurons) and must be continually recreated there. If it disappears, so do memories of the pain. Sacktor's team were able to irreversibly erase memories of pain in rats by using a chemical called zeta-inhibiting peptide (ZIP), which inhibits PKMzeta. In later research they showed that extra PKMzeta boosted old memories in the brains of rats.
Now new research by Marina Asiedu and Dipti Tillu and colleagues from the University of Arizona Medical School has shown that PKMzeta is also responsible for the lingering pain and sensitivity felt after an injury. The researchers knew that when pain is experienced the neurons carrying the pain signals develop stronger connections, especially in the dorsal horn section of the spinal cord. The same thing happens in the brain when we learn something new, and so they decided to test the hypothesis that PKMzeta is involved in both processes.
The team injected mice in the paw with Interleukin-6 (IL-6), a protein that produces mild swelling and makes the paw more sensitive for up to three days. They later injected prostaglandin E2 (PGE2) into the paw, and the mice reacted to the chemical, but only if they had previously been injected with IL-6. If the mice were injected with ZIP at the same time as IL-6 or up to three days afterwards, their paws never became more sensitive to PGE2, indicating they had not developed a memory for the pain. When they injected a protein that mimics PKMzeta, the sensitivity returned.
Researchers in Korea made similar discoveries for chronic pain in research published in 2010. Dr Xiang-Yao Li and colleagues found that PKMzeta creates memories in chronic pain caused by nerve damage, and in this case the protein acts on the anterior cingulate cortex (ACC) of the brain. An injection of ZIP was found to ease the pain, but only for a few hours and not permanently.
If the protein kinase M zeta produces the same effects in humans, new treatments could be developed that target PKMzeta to treat severe or chronic pain, and conditions such as central neuropathic pain syndrome, in which people retain the memory of a pain long after the injury has healed. PKMzeta may also play a role in other conditions such as addictions and post traumatic stress disorder.
More information: Spinal Protein Kinase M ζ Underlies the Maintenance Mechanism of Persistent Nociceptive Sensitization, The Journal of Neuroscience, 4 May 2011, 31(18): 6646-6653; doi:10.1523/JNEUROSCI.6286-10.2011
Abstract
Sensitization of the pain pathway is believed to promote clinical pain disorders. We hypothesized that the persistence of a sensitized state in the spinal dorsal horn might depend on the activity of protein kinase M ζ (PKMζ), an essential mechanism of late long-term potentiation (LTP). To test this hypothesis, we used intraplantar injections of interleukin-6 (IL-6) in mice to elicit a transient allodynic state that endured ∼3 d. After the resolution of IL-6-induced allodynia, a subsequent intraplantar injection of prostaglandin E2 (PGE2) or intrathecal injection of the metabotropic glutamate receptor 1/5 (mGluR1/5) agonist DHPG (dihydroxyphenylglycol) precipitated allodynia and/or nocifensive responses. Intraplantar injection of IL-6 followed immediately by intrathecal injection of a PKMζ inhibitor prevented the expression of subsequent PGE2-induced allodynia. Inhibitors of protein translation were effective in preventing PGE2-induced allodynia when given immediately after IL-6, but not after the initial allodynia had resolved. In contrast, spinal PKMζ inhibition completely abolished both prolonged allodynia to hindpaw PGE2 and enhanced nocifensive behaviors evoked by intrathecal mGluR1/5 agonist injection after the resolution of IL-6-induced allodynia. Moreover, spinal PKMζ inhibition prevented the enhanced response to subsequent stimuli following resolution of hypersensitivity induced by plantar incision. The present findings demonstrate that the spinal cord encodes an engram for persistent nociceptive sensitization that is analogous to molecular mechanisms of late LTP and suggest that spinally directed PKMζ inhibitors may offer therapeutic benefit for injury-induced pain states.
via Discover
© 2010 PhysOrg.com
"The protein that makes us remember pain." May 13th, 2011. http://medicalxpress.com/news/2011-05-protein-pain.html
Posted by
Robert Karl Stonjek

A giant interneuron for sparse coding



A single "giant", non-spiking, GABAergic interneuron (right, labelled by intracellular injection of fluorescent dye) forms an all-to-all negative feedback loop with a population of about 50,000 Kenyon cells, principal neurons of the mushroom bodies, a structure involved in olfactory memory in the insect brain. This normalizing feedback loop serves to ensure relatively constant sparseness of mushroom body output across varying input strengths. Sparseness is an important feature of sensory representations in areas involved in memory formation. Credit: MPI for Brain Research A single interneuron controls activity adaptively in 50,000 neurons, enabling consistently sparse codes for odors.
The brain is a coding machine: it translates physical inputs from the world into visual, olfactory, auditory, tactile perceptions via the mysterious language of its nerve cells and the networks which they form. Neural codes could in principle take many forms, but in regions forming bottlenecks for information flow (e.g., the optic nerve) or in areas important for memory, sparse codes are highly desirable. Scientists at the Max Planck Institute for Brain Research in Frankfurt have now discovered a single neuron in the brain of locusts that enables the adaptive regulation of sparseness in olfactory codes. This single giant interneuron tracks in real time the activity of several tens of thousands of neurons in an olfactory centre and feeds inhibition back onto all of them, so as to maintain their collective output within an appropriately sparse regime. In this way, representation sparseness remains steady as input intensity or complexity varies.
Signals from the world (electromagnetic waves, pressure, chemicals, etc.) are converted to electrical activity in sensory neurons and processed by neuronal networks in the brain. Insects sense smells via their antennae. Odours are detected by sensory neurons there, and olfactory data are then sent to and processed by the antennal lobes and a region of the brain known as the mushroom bodies. Neurons in the antennal lobes tend to be "promiscuous": odours are thus represented by specific combinations of neuronal activity. Neurons in the mushroom bodies, known as Kenyon cells, however, respond with great specificity and thus extremely rarely. In addition, they generally respond with fewer than three electrical impulses when stimulated with the right odour. This "sparse coding" strategy has the advantage that it simplifies the task of storing odour representations in memory.
Surprisingly, each Kenyon cell is connected on average to half of all possible presynaptic neurons in the antennal lobes. So how do the Kenyon cells manage to respond only extremely rarely, and with a sparseness that varies little over large ranges of stimulation conditions? Gilles Laurent of the Max Planck Institute for Brain Research and his group found that a single giant interneuron plays a key role. Along with colleagues in his lab (formerly at Caltech) and Great Britain, he has discovered that this neuron, with its extensive arbour, is activated by the entire Kenyon cell population and in turn inhibits them all back. "The giant interneuron and the Kenyon cells form a simple negative feed-back loop: the more strongly it is activated by the Kenyon cell population, the more strongly it curtails their activity in return", explains Laurent. The interneuron itself does not generate any action potentials, but inhibits Kenyon cells via nonspiking and graded release of the neurotransmitter GABA (gamma aminobutyric acid). This smooth, graded property enables this giant interneuron to do a kind of real-time, population averaging, thus carrying out an operation that might otherwise require the involvement of hundreds or thousands of individual spiking neurons.
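The normalizing loop described above can be captured in a few lines. The sketch below is our illustration rather than the authors' model: a single graded inhibitory unit reads out the summed Kenyon cell (KC) activity and feeds the same inhibition back to every cell, so the fraction of active cells stays roughly constant as input strength varies; the cell count follows the article, while the drive distribution, gain and threshold are arbitrary.

    import numpy as np

    # Minimal sketch of an all-to-all normalizing feedback loop: one graded
    # (non-spiking) inhibitory unit tracks total Kenyon cell (KC) activity and
    # subtracts a proportional inhibition from every KC. Parameters are arbitrary.
    rng = np.random.default_rng(2)
    n_kc = 50_000
    drive = rng.lognormal(mean=0.0, sigma=1.0, size=n_kc)  # feed-forward drive per KC

    def kc_output(input_strength, gain=0.002, threshold=1.0):
        """Find, by bisection, the inhibition level at which the loop balances."""
        ff = drive * input_strength
        lo, hi = 0.0, gain * np.maximum(ff - threshold, 0.0).sum()
        for _ in range(60):
            inh = 0.5 * (lo + hi)
            total = np.maximum(ff - threshold - inh, 0.0).sum()
            if gain * total > inh:   # feedback exceeds inhibition: inhibition must rise
                lo = inh
            else:
                hi = inh
        return np.maximum(ff - threshold - 0.5 * (lo + hi), 0.0)

    for s in (1.0, 3.0, 10.0):
        with_fb = np.mean(kc_output(s) > 0)
        without = np.mean(drive * s > 1.0)
        print(f"input x{s:>4}: {with_fb:.1%} of KCs active with feedback, "
              f"{without:.1%} without")

With the feedback in place, only a few per cent of the model KCs respond at every input strength; without it, the active fraction balloons as the input grows.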
The effectiveness of the giant interneuron is such that it can actually turn off the Kenyon cell population completely. But the research team also discovered that the giant interneuron is, in turn, controlled by another inhibitory neuron. "This allows the network activity to be potentiated or attenuated, and the sensitivity of this feedback loop to be adjusted", says Gilles Laurent. This is an important feature for brain regions such as the mushroom bodies, which are responsible not only for olfactory processing, but also for learning and memory. Mushroom bodies are where smells can be associated with other sensory modalities, enabling the formation of complex representations.
The scientists' findings show how massive negative feed-back loops can be formed in neuronal networks and what roles they can play. In vertebrates, the piriform cortex, part of the olfactory cortical complex, sits in a position equivalent to the mushroom bodies. "It is very likely that mammals have similar all-to-all control mechanisms in cortical and other circuits. They might not consist of single interneurons, however, but rather of populations of inhibitory neurons with means to couple their responses and actions", surmises Laurent. "Insect brains never cease to give us insights about neural computation, and to put elegant solutions right on our laps, if we know where to look and are a bit lucky."
More information: Normalization for sparse encoding of odours by a wide-field interneuron, Maria Papadopoulou, Stijn Cassenaer, Thomas Nowotny, Gilles Laurent, Science, 6 May 2011. DOI: 10.1126/science.1201835
ABSTRACT
Sparse coding presents practical advantages for sensory representations and memory storage. In the insect olfactory system, the representation of general odors is dense in the antennal lobes but sparse in the mushroom bodies, only one synapse downstream. In locusts, this transformation relies on the oscillatory structure of antennal lobe output, feed-forward inhibitory circuits, intrinsic properties of mushroom body neurons, and connectivity between antennal lobe and mushroom bodies. Here we show the existence of a normalizing negative-feedback loop within the mushroom body to maintain sparse output over a wide range of input conditions. This loop consists of an identifiable “giant” nonspiking inhibitory interneuron with ubiquitous connectivity and graded release properties.
Provided by Max-Planck-Gesellschaft
"A giant interneuron for sparse coding." May 13th, 2011. http://medicalxpress.com/news/2011-05-giant-interneuron-sparse-coding.html
Comment:
There are two ways that the amount of information processed can be controlled: constant resolution and constant density. A visual analogue can demonstrate this for us.
Constant resolution requires ever greater density of information as the amount of information increases. Consider a nice clear photograph of a face, taken with a 10 megapixel camera. Now we take a picture of a scene in which that face appears, say in a grandstand containing 5,000 people. For the face to be retained at 10 megapixels, what resolution camera do you now require? The answer is that it would have to be on the order of 100 times higher because, when you blow the picture up so that just the face is seen, as in the first shot, we are looking at only a tiny area of the CCD.
Now consider constant density. The first photograph is the same as before, but when we now photograph the grandstand the density of information remains the same, and the detail of the face falls precipitously ~ you'd be hard pressed even to recognise the face in that huge crowd.
Thus constant density (photograph size) retains the same size picture regardless of what is photographed, whereas constant resolution (constant for every object in the scene) changes the amount of information when there is more detail.
Note that for the ten megapixel camera mentioned earlier, the resolution of a face will fall as the person you are photographing moves ever further from your camera (or as you zoom out).
A further dimension that occurs in practice is a change in the set density of the constant density model, e.g. when you are very tired the set point falls, and when you are distracted the density of information from any given modality (e.g. the senses) also falls. Concentrating on something allows the density to rise, and so the resolution of the thing concentrated on also rises.
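A rough back-of-envelope version of the point above (the 10-megapixel figure is the commenter's; the assumption that the face fills about 1% of the wide crowd shot is ours, purely for illustration):

    # Rough arithmetic behind the constant-resolution case (illustrative numbers only).
    face_pixels_needed = 10e6          # keep the face itself at 10 megapixels
    face_fraction_of_frame = 0.01      # assume the face fills ~1% of the wide crowd shot
    sensor_pixels_needed = face_pixels_needed / face_fraction_of_frame
    print(f"{sensor_pixels_needed / 1e6:.0f} megapixel sensor needed")  # 1000 MP, i.e. ~100x

    # Constant density: the sensor stays at 10 MP, so the face's share shrinks instead.
    face_pixels_at_constant_density = 10e6 * face_fraction_of_frame
    print(f"face resolved at only {face_pixels_at_constant_density / 1e3:.0f} kilopixels")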
Posted by
Robert Karl Stonjek

MIND

In current English usage the word "mind" means something entirely
subjective. This usage is comparatively recent, probably not more than
about 400 years old. The ancient Greek word "nous" is often translated as
"mind" but this is inaccurate. "Nous" meant something better conveyed
as "intellect" (that which thinks) but that automatically implied the
objective part of the psyche. There simply is no equivalent in Ancient
Greek for our use of the word "mind".

According to Joe Sachs in his enlightening translation of "On the
Soul" (Green Lion Press, 2001), Aristotle uses over two dozen
words for 'thinking' - one primary, the "energeia nous" (often
translated as "actual mind" or "active mind" but far better as
"being-at-work thinking") - and many degradations and broadenings from
this. Degradations is an accurate word because the energeia nous alone
is permanent and true, in Aristotle's book. I say these things because
Sachs's description of Aristotle is very similar to my learning from
Steiner. Sachs writes on pages 201-2:

<< thinking (noein, noesis) This is Aristotle's broadest word for
thinking of any kind, from the contemplative act that merges with the
thing it thinks (429b 3-7, 430a 19-20 431b 17), through all the ways
of dividing up and putting back together those intelligible wholes
(430b 1-4), to mere imagining (427a 27-28); but it is also used in its
most governing sense for the primary kind of thinking that underlies
them all (430a 25), as a synonym for contemplation (theoria, an
intellectual _seeing - MMcC) ... Modern philosophers such as Descartes
and Locke homogenize the objects of all these into the contents of
consciousness or "ideas in the mind". >>

It is this sense of the mind as a dogmatic abstraction from its
concrete reality that I struggle to overcome in myself. Essentially my
whole world view turns upon a single observation, one which it takes a
certain effort to make: namely that thinking (in the primary sense
given above) is an entirely self-sustaining essence. It does not
require me nor anyone else but, rather I and all others exist and know
we exist through it. I insist that is an observation, an experience
and not therefore a matter of faith or belief. Unlike sense perception
which gives us observations for which we ourselves make no special
effort, this one requires that we do. Yet without it each of us is
trapped in our single world views and cannot appreciate philosophy as
a whole, comprising all world views, each with its own time and place.
That thinking is a self-sustaining essence ought to be the fundamental
proposition of all philosophy and it is so entirely irrespective of
the world view.

To move back to what you wrote, I had described mind as a potential (from
Sachs I would use the better word potency, a sort of inner force or
energy) and you thought it might be like a reservoir.

Valtermar:
<< From your description, I gather you take the word "mind" as
representing something like a "container" where memories are stored in
an organized way. It starts as an empty reservoir (the central "dot"
alone) and it grows as experiences are registered there in an
associative way. >>

Perhaps if I'd thought of the word "potency" then the spatial metaphor
of a reservoir might not have been so seductive. In one way it has its
clarity, but the sense of movement and action is important to me, and
so I am uncomfortable that the image of a reservoir does not convey
what I intend. Yet I agree that, in at least a one-sided way, it has
merit.

Again the idea of the relation between the mind and brain as similar
to software and hardware has many strengths, yet there is something
which disturbs me and I have returned to this time and again without
ever becoming clear just what it is. For one thing, software and hardware
each require a designer, and usually they are separate people.
Evolution gives us the appearance of design without a designer, but I
do not see how it divides into two in the manner needed to create a
mind and a brain. It is a matter I need to think on again.

The nature of knowledge is the most central question of all.

Best Wishes
Maurice

U.S. Government Backs Concentrated Photovoltaics

Big solar: Massive solar panels like the 24-meter-wide ones shown here will be installed at a 30-megawatt solar farm being supported by the U.S. Department of Energy.
Credit: Amonix



A 30-megawatt plant will be one of the largest to use the technology.
A relatively new type of solar power called concentrated photovoltaic (CPV) technology is getting a $90.6 million boost in the form of a conditional loan guarantee from the U.S. Department of Energy. The government backing will help with financing for a 30-megawatt facility near Alamosa, Colorado, which will be one of the largest concentrated-photovoltaics plants ever built.
The project is part of a surge in photovoltaic projects in the United States over the last few years. A total of 878 megawatts' worth of solar panels were installed last year, up from just 79 megawatts in 2005. This year total installation is expected to double 2010 levels, according to the Solar Energy Industries Association. The industry is starting to approach the scale of the wind industry, which saw over 5,000 megawatts of capacity installed last year (down from over 10,000 the year before).
Concentrated photovoltaics is different from concentrated solar power, which is also known as solar thermal. In solar thermal plants, mirrors and lenses concentrate sunlight to generate the temperatures needed to produce steam that drives a turbine and generator.
In CPV, arrays of lenses are used to focus sunlight onto small solar cells. The concentrated light improves the efficiency of the cells and reduces the amount of expensive solar cell material needed to produce a given  amount of electricity. Amonix, the company that will be supplying the concentrated photovoltaic systems for the project, says its system can generate twice as much power per acre as conventional solar panel technology. It uses 23.5-meter-wide panels with more than 1,000 pairs of lenses and solar cells on each. The panels are mounted on tracking systems that keep the lenses pointed within 0.8 degrees of the angle of the sun throughout the day, to ensure that light falls on the system's 0.7-square-centimeter solar cells.
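As a back-of-envelope sense of why concentration saves cell material: the 0.7-square-centimeter cell size is quoted in the article, but the lens aperture below is a purely hypothetical figure, not an Amonix specification.

    # Back-of-envelope geometric concentration for one lens/cell pair. The cell
    # area comes from the article; the lens aperture is a hypothetical assumption.
    lens_aperture_cm2 = 30 * 30        # assume a ~30 cm x 30 cm Fresnel lens
    cell_area_cm2 = 0.7                # cell size quoted in the article
    concentration = lens_aperture_cm2 / cell_area_cm2
    print(f"geometric concentration ~{concentration:.0f}x")  # ~1300x for these numbers
    # The same sunlight is collected, but the expensive cell covers only
    # 1/concentration of the collecting area -- the material saving behind CPV.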

CPV accounts for a small part of the solar market now—just 0.1 percent. That's largely because it's newer than ordinary photovoltaic technology and has been more expensive; it's more complex, since the lenses have to precisely track the sun. Lowering the cost of CPV will require scaling up. The biggest CPV plants built so far have been in the range of one or two megawatts, while the largest flat-panel plants are 85 and 92 megawatts.
Some analysts expect the CPV market to more than double every year through 2015 as more companies scale up production. At least one other company, Soitec, is planning a 200-megawatt CPV plant in the next few years.

Kids’ skin infections on the rise

University of Otago
More kids are admitted to hospitals every year with serious skin infections. Image: LeventKonuk/iStockphoto
Serious skin infection rates in New Zealand children have increased markedly over the last two decades according to new research from the University of Otago, Wellington.

More than 100 children a week are now being admitted to New Zealand hospitals for treatment of skin infections with most needing intravenous antibiotics and one-third requiring surgery.

The study by Associate Professor Michael Baker, Dr Cathryn O’Sullivan and colleagues has been published in the international journal Epidemiology and Infection. For the first time it comprehensively details the high rate of serious skin infections amongst New Zealand children.

“It’s a distressing picture for our children,” says Associate Professor Baker. “We already had high rates of these infections compared to other similar countries. This research shows a large rise in children being admitted to hospital every year with serious skin infections like cellulitis, abscesses and impetigo.”

The fundamental finding of this new study is that serious skin infections, caused mainly by the bacteria Staphylococcus aureus and Streptococcus pyogenes, have almost doubled since 1990, from 298 cases per 100,000 children to 547 cases.

There is now an average of 4,450 overnight hospital admissions a year for children 0-14 years of age, plus a further 850 children admitted as day patients.
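A quick arithmetic check on the figures quoted above (all inputs are taken from the article itself):

    # Consistency check on the admission and rate figures quoted in the article.
    overnight, day_cases = 4450, 850
    print(f"{(overnight + day_cases) / 52:.0f} admissions per week")  # ~102: "more than 100 a week"

    rate_1990, rate_now = 298, 547     # serious skin infections per 100,000 children
    print(f"{rate_now / rate_1990:.2f}-fold increase")                # ~1.84: "almost doubled"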

“This burden of disease is important for several reasons. Firstly, these infections are very distressing for the children affected. The average length of hospital stay is three to four days. Two-thirds of these children need intravenous antibiotics, and one-third need surgical drainage under general anaesthetic.”

“Secondly, these infections should be highly preventable, particularly with early primary care treatment by GPs.”

“Thirdly, skin infections are filling up hospital wards and reducing their capacity to treat other serious surgical conditions. The direct cost to DHBs is around $15 million a year, so this is a major cost to the health system.”

The research also makes the point that serious skin infections are only the ‘tip of the iceberg’ as they do not take account of the thousands of other cases which do not result in hospitalisation. In addition to the 4,450 overnight admissions and 850 day cases admitted to hospital, an estimated 60,000 children visit GPs every year for treatment of skin infections.

Other key findings in this study are:
  • Boys have a significantly greater risk of infection than girls.
  • Incidence is highest in pre-school children, with children under five years having more than double the rate of 5-9 year olds.
  • The rate of serious infections is almost three times higher for Maori children and over four times higher for Pacific children compared with other ethnicities.
  • Incidence of infection increases markedly with socio-economic deprivation. The rate for children from the most deprived areas is 4.3 times greater than those from the least deprived neighbourhoods.
  • Serious skin infection rates are more than 1.5 times higher in North Island DHBs than in South Island DHBs.
Although this study did not examine reasons for the increase in serious skin infections, some of the factors may be linked to barriers in accessing primary healthcare including cost. Factors relating to socio-economic deprivation may include access to adequate hot water for washing, diet and nutrition, and household crowding.

Associate Professor Baker says this latest study fits with previous epidemiological research by the University of Otago, Wellington, which has shown a marked increase in rates of hospitalisation for infectious diseases in NZ, along with rising inequalities. However, the exact causes of the increased rates are still not known. Much of this increase happened during the 1990s, when income inequalities were also rising.

“There’s an urgent need for action to prevent serious skin infections in children. More research is essential so we can identify the causes of this health problem, introduce preventative measures and improve early treatment,” says Associate Professor Baker.

Climate change impacts: the next decade


Projecting the future, even only a decade ahead, can be achieved by extrapolating current trends and/or using insight concerning the fundamental processes of the involved systems. Either way, uncertainties will exist.

This is certainly true with climate change and the likely emerging impacts of that change. Nevertheless, foresight has enormous potential benefits for seizing opportunities and avoiding pitfalls. In the end, it is about the management of risk, albeit recognising that in some cases anticipation of outcomes will turn out to be useful, if not essential, while in others, in the light of experience, it may be seen as having been a waste of effort.

Both the changes that have occurred to the global and regional climate over recent decades and our theoretical understanding of the climate system make it likely that for the next decade the trend towards warmer global average temperatures will continue. There will be year-to-year variations in average temperatures and even more so in climatic parameters at the regional level. The natural climate system is variable and that variability will continue.

The challenge through this coming decade will be to cope with both the variability and the change with as little impact on human and natural systems as possible.

The past century has already seen an inexorable increase in the pressure of the high pressure ridge that lies over the southern half of Australia. There is growing observational evidence that this reflects a strengthening of the Hadley circulation, the movement of warm tropical air pole-wards in the upper atmosphere to the mid latitudes where it descends and is responsible for the aridity across these latitudes in both hemispheres.

These observations agree with many theoretical models of the climate, and are implicated in the long string of low rainfall years in the south west of Western Australia since the 1970s and in the Murray Darling and Victoria over the past decade or so. It is probable that this trend will continue with concomitant impacts on water supplies, power generation, potable water use, agricultural production and natural ecosystems.

In this regard, conflict over the use of a diminishing resource, as already apparent in the Murray Darling Basin, is likely to only grow.

Through this next decade we may also see some of the first signs of other climate impacts in Australia, including more extreme sea-level events associated with both higher sea levels and more intense storms. Exposure around the national coastline, including sandy beaches and the major cities, will occur with little predictability in terms of exact timing, but will be consistent with a steadily changing frequency of such events. Similarly, it is likely there will be a change in the frequency of conditions conducive to bush fires.

Lower water availability will demand engineering responses: pipelines, desalination, dams, and ground-water options. These options will likely expose sectoral differences, competing needs across the economy and conflicting purposes. In addition, there will be a need for ongoing reductions in human demand for water. It is likely this will evoke serious rethinking of long-held views about such things as regional development, the role and nature of agriculture in the economy, trading as a market force in managing diminishing resources, and ownership of natural resources including water – as well as natural ecosystems and their component species.

At all times knowledge will be accumulating in terms of our theoretical understanding of the climate system and the systems dependent on the state of the climate. Part of this will be observed impacts across the world with a likely ongoing loss of water from the major glaciers (currently contributing around a millimeter of sea-level rise per year), a non-zero chance of the entire loss of sea ice in the Arctic during the summer and the concomitant efforts by nations of that region to claim ownership of resources that become more readily accessible – already involving Canada, China, Norway, Russia and the United States.

It is likely that such pressure on international relationships and national security will not be confined to the polar regions. There will be ongoing evidence of change to ecosystems in migration, plant and animal behaviour, breeding times, etc. The impact on island nations of our region will grow in profile. Together, these observations will provide local stimuli for both adaptive and mitigative responses.

A consequence of both the observation of change and theoretical understanding may be that the magnitude of the risks associated with climate change will become more apparent, demanding stronger actions. For example, a warming target of 2°C may come to be viewed as an unacceptable risk, albeit perceived differently by different countries and sectors of the economy, heightening efforts to pursue a 350 ppm global concentration target.

While this may be driven by falling water availability in some countries, it is possible that the currently poorly appreciated risk to natural ecosystems will become more apparent, both from an ecosystem-services and a planetary-stewardship point of view.

Land management

This perspective may highlight the need for methodologies not only for limiting carbon emissions, but also for removing greenhouse gases already in the atmosphere through land management technologies.

It will raise serious consideration of the possible need for geo-engineering of the climate system itself. Companies already exist around the world to invest in such technologies and reap the benefits of a future price on carbon. Such technologies vary from relatively small-scale land management projects, to global-scale engineering efforts to modify the energy budget of the planet.

The essential development over the next decade will be the formulation of a shared global view on appropriate research protocols and national actions in geo-engineering that truly reflect the very serious potential danger of some of these technologies – and the potential dangers of narrowly focussed researchers or nations acting according to their own interests rather than those of the wider global community.

A drive towards a low-carbon future has ramifications for energy sourcing, production and infrastructure – and for investment in existing energy generation methods. But it will also open up enormous opportunities for new businesses in low-emission and energy-efficiency technologies.

This transition has begun, but the next decade will see it intensify. Australia may have missed some of these opportunities, but many are still available for relatively early movers. The changes will be seen in a revolutionary move towards electric-drive vehicles, decentralisation of power supplies, diversification of electricity generation options such as geothermal, solar and wind, and the development and deployment of energy storage systems, smart grids and energy management systems.

Huge improvements

Above all, we will see huge improvements in the energy efficiency of homes, commercial buildings, industrial processes and transport. This will create issues that will need resolution – such as the impact of inevitably higher energy costs on disadvantaged members of the community, the position of disadvantaged companies and industrial sectors, and the role of more controversial energy sources such as nuclear.

The climate change issue is about more than just whether the climate is changing and how it may physically impact on our societies. It is also about why the issue exists and why it is that managing the issue has, so far, been difficult. The connection to the drivers of change – human behaviour and societal institutions – has yet to be seriously explored (despite some early signs in the literature). This is likely to change through this decade.

Climate change results from the way we source and use energy and this in turn reflects our affluence, what we perceive as success and progress, livability, acceptable lifestyles and our cultures. It reflects our population size, our attitudes to immigration, and the nature of the way we build cities and communities and manage the land. It highlights the diverse methods we have for dealing with threats, such as avoidance, denial, resignation – to name a few “coping” mechanisms – and the barriers that exist for the incorporation of expert advice from all manner of experts into policy formulation.

In particular it highlights the sectoralisation of our communities, through the disciplinary base of knowledge generation, the targeted efforts of companies and the departmentalisation of governments, each tending to work against holistic considerations in policy formation and decision making. It stems from the way social institutions have evolved and how these, including our governance, financial, economic, and cultural systems, have countenanced the underpinning causes of climate change.

Climate change may indeed be illustrative of the non-strategic nature of social evolution – its development in largely incremental steps with little control imposed from longer-term strategic aspirations and needs especially from a society-wide perspective.

Through this decade we may find that the climate change issue becomes much more of a reflection on where this relatively directionless evolution has led us, its strengths and its non-sustainable weaknesses.

This will challenge our notions of the rationality of our decisions and the largely unconscious drivers of our aspirations and needs – and show how fundamental a new focus on where we are headed is to dealing with all issues of sustainability.

Waking up to the dangers of radiation

By Lyn McLean

Would you be willing to take a drug that had not been trialed before its release on the market? Would you take the drug if manufacturers assured you that it was ‘safe’ on the basis that it did not cause shocks, excessive heat or flashes of light in the eye? What if others who’d taken it developed problems ranging from headaches to life-threatening diseases?

Finally, would you give it to your children to take?

As ridiculous as this scenario may sound, the truth is that most people receive potentially harmful exposures like this every day – not necessarily from a drug – but from a risk of an entirely different sort.

The risk is electromagnetic pollution – the invisible emissions from all things electric and electronic. It is emitted by power lines, household wiring, electrical appliances and equipment, computers, wireless networks, mobile and cordless phones, mobile phone base stations, TV and radio transmitters and so on.

As engineers compete to develop an ever-diversifying range of radiating technologies to seduce a generation of addicts, and thereby ensure a lucrative return, there is an implicit assumption that these technologies are safe. They comply with international standards, we are told. But there the illusion of safety ends.

Sadly compliance with international standards is no more a guarantee of safety than being born rich is a guarantee of happiness.

For such standards protect only against a very few effects of radiation, and short-term effects at that (such as shocks, heating and flashes of light in the retina). They fail entirely to protect against the long-term effects of radiation which, of course, is the sort of radiation that you and I are exposed to if we use a mobile or cordless phone every day, live near a high voltage power line, use a wireless internet computer, or live under the umbrella of a mobile phone base station, TV or radio or satellite transmitter. In short, we’re all exposed.

Regulating to protect only against some of the effects of radiation is a bureaucratic nonsense. It’s like regulating a car’s airbags and not its brakes. It’s like regulating the colour of a pill and not its contents. It’s every bit as meaningless to public health protection.

This is particularly so when long-term exposure to electromagnetic radiation has been convincingly linked to problems such as leukemia, Alzheimer’s disease, brain tumours, infertility, genetic damage, cancer, headaches, depression, sleep problems, reduced libido, irritability and stress.

Short-term protection is a short-sighted approach to public health protection. It may guarantee safety of the politicians as far as the next election. It may guarantee protection of a manufacturer as far as its next annual profit statement. But it does not guarantee the safety of the users of this technology, particularly those children who are powerless to make appropriate choices about technology and manage their exposure, who are more vulnerable to its emissions and who have a potential lifetime of exposure.

History is replete with examples of innovations that seemed like a good idea at the time but which eventually caused innumerable problems – to users, to manufacturers and to the public purse. Tobacco, asbestos and lead are but a few.

The risk is that electromagnetic pollution is a public health disaster unfolding before our eyes. By failing to implement appropriate standards; by ignoring signs of risk from science; by failing to ensure addictive technologies are safe before they’re released onto the market – our public health authorities have abrogated their responsibilities and chosen to play Russian roulette with our health.

It’s a gamble that not everyone assumes willingly.

Lyn McLean is author of The Force: living safely in a world of electromagnetic pollution published by Scribe Publications in February.

Why is no one talking about safe nuclear power?


By Julian Cribb

This article was first published in the Canberra Times.

In the wake of the Fukushima nuclear disaster, the most extraordinary thing is the lack of public discussion and the disturbing policy silence - here and worldwide - over safe nuclear energy.

Yes, it does exist.

There is a type of nuclear reactor which cannot melt down or blow up, and does not produce intractable waste, or supply the nuclear weapons cycle. It's called a thorium reactor or, sometimes, a molten salt reactor - and it is a promising approach to providing clean, reliable electricity wherever it is needed.

It is safe from earthquake, tsunami, volcano, landslide, flood, act of war, act of terrorism, or operator error. None of the situations at Fukushima, Chernobyl or Three Mile Island could render a thorium reactor dangerous. Furthermore thorium reactors are cheap to run, far more efficient at producing electricity, easier and quicker to build and don't produce weapons grade material.

The first thorium reactor was built in 1954, a larger one ran at Oak Ridge in the United States from 1964-69, and a commercial-scale plant operated in the 1980s - so we are not talking about radical new technology here. Molten salt reactors have been well understood by nuclear engineers for two generations.

They use thorium as their primary fuel source, an element four times more abundant in the Earth's crust than uranium, and in which Australia, in particular, is richly-endowed. Large quantities of thorium are currently being thrown away worldwide as a waste by-product of sand mining for rare earths, making it very cheap as a fuel source.

Unlike Fukushima, these reactors don't rely on large volumes of cooling water which may be cut off by natural disaster, error or sabotage. They have a passive (molten salt) cooling system which cools naturally if the reactor shuts down. There is no steam pressure, so the reactor cannot explode like Chernobyl did or vent radioactivity like Fukushima. The salts are not soluble and are easily contained, away from the public and environment. This design makes thorium reactors inherently safe, whereas the world's 442 uranium reactors are inherently risky (although the industry insists the risks are very low).

They produce a tenth the waste of conventional uranium reactors, and it is much less dirty, only having to be stored for three centuries or so, instead of tens of thousands of years.

Furthermore, they do not produce plutonium and it is much more difficult and dangerous to make weapons from their fuel than from uranium reactors.

An attractive feature is that thorium reactors are ''scalable'', meaning they can be made small enough to power an aeroplane or large enough to power a city, and mass produced for almost any situation.

Above all, they produce no more carbon emissions than are required to build them or extract their thorium fuel. They are, in other words, a major potential source of green electricity. According to researcher Benjamin Sovacool, there have been 99 accidents in the world's nuclear power plants from 1952 to 2009. Of these, 19 have taken human life or caused over $100 million in property damage.

Such statistics suggest that mishaps with uranium power plants are unavoidable, even though they are comparatively rare. (And, it must be added, far fewer people die from nuclear accidents than die from gas-fired, hydroelectric or coal-fired power generation.)
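To put those figures in rough perspective, here is a minimal back-of-the-envelope calculation using only the numbers quoted above (the 57-year span is simply 2009 minus 1952; nothing else is assumed):

```python
# Back-of-the-envelope accident rates from the Sovacool figures cited above
# (illustrative only; uses nothing beyond the numbers quoted in the article).
accidents = 99          # accidents at nuclear power plants, 1952-2009
serious = 19            # of those, caused deaths or > $100 million in damage
years = 2009 - 1952     # span covered by the figures

print(f"All accidents:     {accidents / years:.2f} per year")
print(f"Serious accidents: {serious / years:.2f} per year "
      f"(roughly one every {years / serious:.1f} years)")
```

On those figures, there has been on average one serious accident somewhere in the world roughly every three years.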

But why have most people never heard of thorium reactors? Why is there not active public discussion of their pros and cons compared with uranium, solar, coal, wind, gas and so on? Why is the public, and the media especially, apparently in ignorance of the existence of a cheap, reliable, clean and far less risky source of energy? Above all - apart from one current trial of a 200MW unit by Japan, Russia and the US - why is almost nobody seeking to commercialise this proven source of clean energy? The situation appears to hold a strong analogy with the stubborn refusal of the world's oil and motor vehicle industries for more than 70 years to consider any alternative to the petrol engine, until quite recently.

Industries which have invested vast sums in commercialising or supplying a particular technology are always wary of alternatives that could spell its demise and will invest heavily in the lobbying and public relations necessary to ensure the competitor remains off the public agenda.

It is one of the greatest of historical ironies that the world became hooked on the uranium cycle as a source of electrical power because those sorts of reactors were originally the best way to make weapons materials, back in the '50s and '60s. Electricity was merely a by-product. Today, the need is for clean power rather than weapons, and Fukushima is a plain warning that it is high time to migrate to a safer technology. Whether or not it ever adopts nuclear electricity, Australia will continue to be a prominent player as a source of fuel to the rest of the world - be it uranium or thorium.

So why this country is not doing leading-edge research and development for the rapid commercialisation of safe nuclear technology is beyond explanation. There is good money to be made both in extracting thorium and in exporting reactors (we bought our most recent one from Argentina).

As a science writer, I do not argue the case for thorium energy over any other source, but it must now be seriously considered as an option in our future energy mix. Geoscience Australia estimates Australia has 485,000 tonnes of thorium, nearly a quarter of the total estimated world reserves. Currently they are worthless but there is a strong argument to invest some of our current coal and iron ore prosperity in developing a new safe, clean energy source for our own and humanity's future.
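As a quick sanity check on the "nearly a quarter" claim, the implied world total can be recovered from the Geoscience Australia figure quoted above; a minimal sketch, assuming the share is taken as exactly 25 per cent:

```python
# Implied world thorium reserves from the figures quoted above (illustrative only).
australia_tonnes = 485_000   # Geoscience Australia estimate for Australia
share_of_world = 0.25        # "nearly a quarter" of estimated world reserves

implied_world_total = australia_tonnes / share_of_world
print(f"Implied world reserves: about {implied_world_total / 1e6:.1f} million tonnes")
```

That works out to an implied world total of roughly 1.9 million tonnes.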
Julian Cribb is a Canberra science writer.

How to measure learning


By Belinda Probet

The new Tertiary Education Quality and Standards Agency (TEQSA) will not be fully operational until 2012.

Understandably, universities want to make sure it will focus adequate attention on the risky bits of the industry while not strangling it with red tape.
But perhaps more significant in the longer run will be the way it implements one of the most radical recommendations from the Bradley review, namely that universities report on direct measures of learning outcomes.

Earlier attempts to measure the quality of university teaching relied on indicators that had little research-based validity, leading to rankings that were uniformly rejected by the sector.

Six months ago the Bradley-inspired Department of Education, Employment and Workplace Relations' discussion paper on performance indicators proposed that cognitive learning outcomes would ideally include discipline-specific measures as well as measures of the higher-order generic skills, such as communication and problem-solving, that are so valued by employers.

As recently suggested by Richard James, from the Centre for the Study of Higher Education at the University of Melbourne, the public has a right to know not just whether groups of graduates met a threshold standard but also whether their skills were rated good or excellent (HES, July 7).

The difficulty with his seemingly sensible suggestion is that there is almost no data on what students are actually learning.

Even the toughest accreditation criteria focus on inputs such as hours in class, words written, content covered, credit points earned and the status of teachers.
Tools such as the Course Experience Questionnaire and the increasingly popular Australian Survey of Student Experience provide data that can be used to good effect by academics with a serious interest in pedagogy. None of these measures learning, however.

Nearly every Australian university proclaims a set of graduate attributes that includes communication, problem-solving and teamwork.

But none defines the standards to be achieved or the method by which they will be assessed, despite pilgrimages to Alverno, the tiny private US college that knows how to do this.

And it would probably be unwise to hold our collective breath until the Organisation for Economic Co-operation and Development completes its Assessment of Higher Education Learning Outcomes feasibility study.

Does the absence of agreed measures and standards mean TEQSA should abandon this key Bradley recommendation and resort to input measures of the kind used to allocate the Learning and Teaching Performance Fund, together with some kind of graduate skills test?

If we agree with Bradley that learning is what we should be measuring, then what we have called Design for Learning at La Trobe University may be of help. Like most universities we have agreed on six graduate capabilities that all undergraduate programs should develop. But we also have agreed they will be defined in appropriate discipline or field-specific terms and be assessed against agreed standards of student achievement.

To develop these explicit standards of achievement, academic staff in each faculty are looking at real examples of student work, to define not just the standards but the indicators, measures and procedures for gathering and evaluating evidence of student learning. This is relatively straightforward for writing or quantitative reasoning, but it is not so easy when it comes to problem solving or teamwork, which may look rather different for physiotherapists and engineers.

We are not asking for spurious degrees of fine judgment (is this worth 64 or 65 marks?), but for robust definitions that allow an evidence-based, university-wide judgment that the student has produced work that is good enough, better than good enough or not yet good enough.

If we expect students to demonstrate these capabilities at graduation, then we also have a responsibility to show where, in any particular course of study, they are introduced, developed, assessed and evaluated.

Most such capabilities require development across several years and are not skills that can be picked up in a single subject. Nor is there any point telling students they are not good enough if you cannot show them where and when they will have the opportunity to improve their capabilities.

For these reasons we need to be able to assess and provide feedback very early on in the course (in a cornerstone), somewhere towards the middle, as well as at the end, in a capstone experience.

It would be a lost opportunity and a backward step if TEQSA concludes that measuring student learning is too difficult and resorts to the suggested generic graduate skills assessment test -- which measures little of value about what students have learned -- or relies on students' assessments of their generic skills as captured in the CEQ. Students' assessments of their capabilities are no substitute for the skilled independent assessment, against explicit standards, of academic staff.

Would it not be better if TEQSA gave universities the opportunity to develop explicit, not minimum, standards for student learning, defined through their chosen institutional graduate capabilities?

Such a first step also would provide the foundation for setting measurable targets for improving this learning and would support the government's goal of encouraging diversity of institutional mission, by requiring not only explicitness of purpose but also of standards.

Having defined and mapped where La Trobe's capabilities are developed and assessed across the curriculum, we expect to be able to set targets for improvement, such as increasing the percentage of our graduates who meet the better-than-good-enough standard by improving the design of particular programs of study.

Or we may plan to raise the bar for what constitutes good enough by evaluating and revising parts of the curriculum.

In a diversified sector the standards chosen will vary from university to university but, once developed, the potential for benchmarking is obvious.

Nano-vaccine beats cattle virus

The University of Queensland
Image: Bovine Viral Diarrhoea Virus is the industry's most devastating virus. (Mikedabell/iStockphoto)
A world-first cattle vaccine based on nanotechnology could provide protection from the Bovine Viral Diarrhoea Virus (BVDV), which costs the Australian cattle industry tens of millions of dollars in lost revenue each year.

The new BVDV vaccine, which consists of a protein from the virus loaded onto nanoparticles, has been shown to produce an immune response against the industry's most devastating virus.

A group of Brisbane scientists has shown that the BVDV nanoformulation can be successfully administered to animals without the need for any additional helping agent, making a new ‘nanovaccine’ a real possibility for Australian cattle industries.

Scientists Dr Neena Mitter and Dr Tim Mahony from the Queensland Alliance for Agriculture and Food Innovation (QAAFI), a UQ institute recently established in partnership with the Queensland Department of Employment, Economic Development and Innovation (DEEDI), partnered with nanotechnology experts Professor Max Lu and Associate Professor Shizang Qiao from the UQ Australian Institute of Bioengineering & Nanotechnology (AIBN) to develop the vaccine.

Dr Neena Mitter said the multidisciplinary team applied the latest in nanotechnology to develop a safe and effective vaccine that has the potential to be administered more readily and cost effectively than traditional vaccines by using nanoparticles as the delivery vehicles.

“The vaccine is exciting as it could feasibly enable better protection against the virus, can be stored at room temperature and has a long shelf life,” said Dr Mitter.

According to Dr Mahony, BVDV is of considerable concern with regard to the long-term profitability of cattle industries across Australia. Cattle producers can experience productivity losses of between 25 and 50 per cent following discovery of BVDV in previously uninfected herds.

“In Queensland alone the beef cattle industry is worth approximately $3.5 billion per year and the high-value feedlot sector experiences losses of over $60 million annually due to BVDV-associated illness,” he said.

Further trials of the nanovaccine will now be conducted with plans to develop a commercial veterinary product in the near future.

The white, green and black of energy

By Vikki McLeod

With the inclusion of a National White Certificate Scheme in the coalition's CPRS amendments, we need to ask: what is it, and what is it good for?
Australia’s stationary energy sector is responsible for more than 50 per cent of Australia’s greenhouse emissions. Government policy to transition our energy sector from carbon high to carbon lite is the key to protecting both our economy and the environment.
Internationally there is consensus: the least-cost and most economically secure path to a sustainable energy future is aggressive energy efficiency (the white), a permanent shift to renewable energy (the green) and strategic use of fossil fuels (the black). But this is not currently the direction the energy market is taking us.
The energy market was deregulated in the 1990s, before greenhouse abatement was a priority. It is a commodity market, and generators and retailers profit by selling more energy (either green or black). The challenge is to “decouple” energy sales from energy services.
Energy sales and energy use are growing at about 2 per cent per annum. Sure, this reflects economic and population growth, but it is also growth in energy waste. Australia is at the bottom of the class when it comes to energy-efficient economies. We could learn from California, which has maintained high levels of economic growth while stabilising energy growth.
A compounding problem is that our growth in renewable energy generation is much less than 2 per cent. Consequently, the additional growth in demand is being met not by green generation but by black generation. So despite almost ten years of a green target, renewable energy is losing market share.
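The arithmetic behind that claim is worth making explicit. The sketch below is illustrative only: the 2 per cent growth in energy sales comes from the article, while the 1 per cent renewable growth rate and the 10 per cent starting share are assumed numbers chosen purely to show the effect.

```python
# Illustrative only: why renewables lose market share when total demand
# grows faster than renewable generation.
total_demand = 100.0      # arbitrary starting units of annual energy sales
renewables = 10.0         # assumed starting renewable share of 10%
demand_growth = 0.02      # ~2% per annum growth in energy sales (from the article)
renewable_growth = 0.01   # assumed growth in renewable generation, below 2%

for year in range(1, 11):
    total_demand *= 1 + demand_growth
    renewables *= 1 + renewable_growth
    if year in (1, 5, 10):
        print(f"Year {year:2d}: renewable share = {renewables / total_demand:.1%}")
```

Under these assumptions the renewable share drifts from 10 per cent down to about 9.1 per cent over a decade, even though renewable output grows every year: almost all of the extra demand is met by black generation.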
Without aggressively pursuing energy efficiency, the renewable energy target will keep chasing a receding goal, and there is also a risk that the oldest and dirtiest coal-fired power stations may remain in operation even with a CPRS. So we could end up with the same level of emissions and the same generation mix, but just be paying more for it.
Aggressive energy efficiency is the rationale behind the White Certificate Scheme (WCS). A WCS is the “white” policy patch on the energy market, which would allow energy retailers to make a profit from energy efficiency. Energy efficiency becomes another commodity to be sold and marketed to clients (householders, businesses, commercial properties and industry). The other difference with a WCS is that inefficient appliances must be retired: that second fridge belching away in the garage, which we kid ourselves is keeping the beer cold, will have to be unplugged and go.
WCS had its genesis in Australia with the New South Wales Greenhouse Abatement Scheme in 2003, and a strengthened WCS was proposed as part of the COAG-endorsed National Framework for Energy Efficiency in 2004. While the recommendation was for a national scheme, this was not supported by the Howard government. The South Australian, New South Wales and Victorian governments went ahead with state-based schemes as energy security measures. A national WCS would be an opportunity to harmonise the state schemes, and also an opportunity to include Queensland, which is struggling with large growth in energy demand.
The green, black and white markets are distinct and not fungible. The black ETS market has carbon intensity measured at the smoke stack; the white energy efficiency market is measured at the meter. The green renewable energy market is carbon neutral generation and includes commercially competitive renewable energy technologies such as wind, hydro and solar. Each market has its own cost curve and technologies.
WCS is also an opportunity to help the ailing Renewable Energy Target, which is currently experiencing a market price collapse. The current REC price is $28: enough to deliver investment in solar water heaters but not enough for the more expensive renewable energy of wind, solar thermal or geothermal. Water heating was a contentious inclusion in the green market. It has long been argued that water heaters are an energy efficiency measure. A better outcome would be to take water heaters out of the green market and include them in the white market (and building codes).
With the time frame we have to decarbonise our energy sector we need to push on each of the three policy fronts - the green, black and white - at the same time.
Other governments have taken this approach. The European Union, the United States and the United Kingdom are just a few examples:
  • The European Union, through its “20, 20, 20 by 2020” targets: a 20 per cent reduction in greenhouse emissions, a 20 per cent increase in renewable energy and a 20 per cent improvement in energy efficiency by 2020.
  • The USA, with the California loading order and the Waxman-Markey Bill.
  • The UK, with its recent Energy White Paper.
Vikki McLeod is an engineer and independent energy and carbon consultant who was responsible for the original policy design of the National Energy Efficiency Target for the COAG-endorsed National Framework for Energy Efficiency. She was formerly a Senior Adviser to Senator Lyn Allison, who tabled a private member's Bill for a white certificate scheme, the “National Market Driven Energy Efficiency Target Bill 2007”, which has been re-tabled by the Australian Greens as the “Safe Climate (Energy Efficiency Target) Bill 2009”.

Alternatives to urban water restrictions (Science Alert)


A Better Way to Teach?

Any physics professor who thinks that lecturing to first-year students is the best way to teach them about electromagnetic waves can stop reading this item. For everybody else, however, listen up: A new study shows that students learn much better through an active, iterative process that involves working through their misconceptions with fellow students and getting immediate feedback from the instructor.
The research, appearing online today in Science, was conducted by a team at the University of British Columbia (UBC), Vancouver, in Canada, led by physics Nobelist Carl Wieman. First at the University of Colorado, Boulder, and now at an eponymous science education initiative at UBC, Wieman has devoted the past decade to improving undergraduate science instruction, using methods that draw upon the latest research in cognitive science, neuroscience, and learning theory.
In this study, Wieman trained a postdoc, Louis Deslauriers, and a graduate student, Ellen Schelew, in an educational approach, called “deliberate practice,” that asks students to think like scientists and puzzle out problems during class. For 1 week, Deslauriers and Schelew took over one section of an introductory physics course for engineering majors, which met three times that week for 1 hour each. A tenured physics professor continued to teach another large section using the standard lecture format.
The results were dramatic: After the intervention, the students in the deliberate practice section did more than twice as well on a 12-question multiple-choice test of the material as did those in the control section. They were also more engaged—attendance rose by 20% in the experimental section, according to one measure of interest—and a post-study survey found that nearly all said they would have liked the entire 15-week course to have been taught in the more interactive manner.
“It’s almost certainly the case that lectures have been ineffective for centuries. But now we’ve figured out a better way to teach” that makes students an active participant in the process, Wieman says. Cognitive scientists have found that “learning only happens when you have this intense engagement,” he adds. “It seems to be a property of the human brain.”
The “deliberate practice” method begins with the instructor giving students a multiple-choice question on a particular concept, which the students discuss in small groups before answering electronically. Their answers reveal their grasp of (or misconceptions about) the topic, which the instructor deals with in a short class discussion before repeating the process with the next concept.
While previous studies have shown that this student-centered method can be more effective than teacher-led instruction, Wieman says this study attempted to provide “a particularly clean comparison ... to measure exactly what can be learned inside the classroom.” He hopes the study persuades faculty members to stop delivering traditional lectures and “switch over” to a more interactive approach. More than 55 courses at Colorado across several departments now offer that approach, he says, and the same thing is happening gradually at UBC. Deslauriers says that the professor whose students fared worse on the test initially resisted the findings, “but this year, after 30 years of teaching, he’s learning how to transform his course.”
Jere Confrey, an education researcher at North Carolina State University in Raleigh, says the value of the study goes beyond the impressive exam results. “It provides evidence of the benefits of increasing student engagement in their own learning,” she says. “It’s not just gathering data that matters but also using it to generate relevant discussion of key questions and issues.”

Mice Reject Reprogrammed Cells

Scientists have high hopes that stem cells called induced pluripotent stem (iPS) cells can be turned into replacement tissues for patients with injury or disease. Because these cells are derived from a patient’s own cells, scientists had assumed that they wouldn’t be rejected—a common problem with organ transplants. But a new study suggests that the cells can trigger a potentially dangerous immune reaction after all.
To make iPS cells, scientists use a technique called cellular reprogramming. By activating a handful of genes, they turn the developmental clock backward in adult cells, converting them into an embryolike state. The reprogrammed cells become pluripotent, which means they have the ability to differentiate into all of the body’s cell types. Scientists are already using these iPS cells to study diseases and test drugs.
Induced pluripotent stem cells have a couple of advantages over embryonic stem (ES) cells. They don’t require the use of embryos, so they avoid some of the ethical and legal issues that have complicated research with embryonic stem cells. They also allow researchers to make genetically matched cell lines from patients. Many scientists have assumed that would provide a source of transplantable cells that wouldn’t require the immune system to be suppressed to avoid rejection, as is necessary with organ transplants.
That assumption might not be correct, however. Immunologist Yang Xu of the University of California, San Diego, and his colleagues tested what happened to several kinds of pluripotent cells when they were transplanted into genetically matched mice. Inbred mouse strains are the genetic equivalent of identical twins, and they can serve as organ donors for each other without any immune suppression. The researchers used two popular inbred strains, called B6 and 129, for their experiments.
When the researchers implanted ES cells from a B6 mouse embryo into a B6 mouse, it formed a typical growth, called a teratoma, which is a mixture of differentiating cell types. (Teratoma formation is a standard test of ES and iPS cells’ pluripotency.) ES cells from a 129 mouse, on the other hand, were unable to form teratomas in B6 mice because the animals’ immune systems attacked the cells, which they recognized as foreign.
The researchers then implanted iPS cells made from B6 mouse cells into B6 mice. To their surprise, many of the cells failed to form teratomas at all—similar to what the researchers saw when they transplanted ES cells from one mouse strain to another. The teratomas that did grow were soon attacked by the recipient’s immune system and were rejected, the team reports online today in Nature. The immune response “is the same as that triggered by organ transplant between individuals,” Xu says.
The immune reaction was less severe when the researchers used iPS cells made with a newer technique. The new method ensures that the added genes that trigger reprogramming turn off after they’ve done their job. But the reaction didn’t go away completely. The researchers showed that the iPS cell teratomas expressed high levels of certain genes that could trigger immune cells to attack. That is probably due to incomplete reprogramming that leaves some genes misexpressed, Xu says.
The results add to a series of findings that iPS cells differ in subtle but potentially important ways from ES cells. George Daley, a stem cell scientist at Children’s Hospital Boston, says the new study is “fascinating,” but he doesn’t think immune rejection will be an insurmountable problem for iPS cells. Once iPS cells have differentiated into the desired tissue type, they may not express the problematic genes, he notes. And dozens of labs are working on ways to improve the reprogramming process so that the stray gene expression is eliminated. In principle, he says, “we should be able to make iPS cells that are the same as ES cells.”
In the meantime, both Xu and Daley say the results underscore the need to continue work with ES cells so that researchers can fully understand—and try to overcome—the differences. “It’s a reminder that we can’t dismiss ES cells,” Daley says.

Artificial grammar learning reveals inborn language sense, study shows



Parents know the unparalleled joy and wonder of hearing a beloved child's first words turn quickly into whole sentences and then babbling paragraphs. But how human children acquire language – which is so complex and has so many variations – remains largely a mystery. Fifty years ago, linguist and philosopher Noam Chomsky proposed an answer: Humans are able to learn language so quickly because some knowledge of grammar is hardwired into our brains. In other words, we know some of the most fundamental things about human language unconsciously at birth, without ever being taught.
Now, in a groundbreaking study, cognitive scientists at The Johns Hopkins University have confirmed a striking prediction of the controversial hypothesis that human beings are born with knowledge of certain syntactical rules that make learning human languages easier.
"This research shows clearly that learners are not blank slates; rather, their inherent biases, or preferences, influence what they will learn. Understanding how language is acquired is really the holy grail in linguistics," said lead author Jennifer Culbertson, who worked as a doctoral student in Johns Hopkins' Krieger School of Arts and Sciences under the guidance of Geraldine Legendre, a professor in the Department of Cognitive Science, and Paul Smolensky, a Krieger-Eisenhower Professor in the same department. (Culbertson is now a postdoctoral fellow at the University of Rochester.)
The study not only provides evidence remarkably consistent with Chomsky's hypothesis but also introduces an interesting new approach to generating and testing other hypotheses aimed at answering some of the biggest questions concerning the language learning process.
In the study, a small, green, cartoonish "alien informant" named Glermi taught participants, all of whom were English-speaking adults, an artificial nanolanguage named Verblog via a video game interface. In one experiment, for instance, Glermi displayed an unusual-looking blue alien object called a "slergena" on the screen and instructed the participants to say "geej slergena," which in Verblog means "blue slergena." Then participants saw three of those objects on the screen and were instructed to say "slergena glawb," which means "slergenas three."
Although the participants may not have consciously known this, many of the world's languages use both of those word orders – that is, in many languages adjectives precede nouns, and in many, nouns are followed by numerals. However, very rarely are both of these rules used together in the same human language, as they are in Verblog.
As a control, other groups were taught different made-up languages that matched Verblog in every way but used word order combinations that are commonly found in human languages.
Culbertson reasoned that if knowledge of certain properties of human grammars – such as where adjectives, nouns and numerals should occur – is hardwired into the human brain from birth, the participants tasked with learning alien Verblog would have a particularly difficult time, which is exactly what happened.
The adult learners who had had little to no exposure to languages with word orders different from those in English quite easily learned the artificial languages that had word orders commonly found in the world's languages but failed to learn Verblog. It was clear that the learners' brains "knew" in some sense that the Verblog word order was extremely unlikely, just as predicted by Chomsky a half-century ago.
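The design is easier to see with the four possible word-order combinations laid out side by side. The sketch below is illustrative only: the vocabulary items are the ones quoted above, the Verblog combination follows the article's description, and the labelling of the remaining combinations is an assumption for illustration rather than a description of the study's actual conditions.

```python
# Illustrative only: the four adjective/noun and noun/numeral orderings,
# built from the Verblog vocabulary quoted in the article.
adjective, noun, numeral = "geej", "slergena", "glawb"   # "blue", the alien object, "three"

# Verblog pairs adjective-before-noun with numeral-after-noun, a combination
# the article describes as very rare; the other labels are assumptions.
orders = {
    ("Adj-N", "Num-N"): "common (English-like)",
    ("Adj-N", "N-Num"): "rare (Verblog)",
    ("N-Adj", "Num-N"): "assumed less common",
    ("N-Adj", "N-Num"): "assumed common",
}

for (adj_order, num_order), status in orders.items():
    adj_phrase = f"{adjective} {noun}" if adj_order == "Adj-N" else f"{noun} {adjective}"
    num_phrase = f"{numeral} {noun}" if num_order == "Num-N" else f"{noun} {numeral}"
    print(f"{adj_order} + {num_order}: '{adj_phrase}', '{num_phrase}'  -> {status}")
```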
The results are important for several reasons, according to Culbertson.
"Language is something that sets us apart from other species, and if we understand how children are able to quickly and efficiently learn language, despite its daunting complexity, then we will have gained fundamental knowledge about this unique faculty," she said. "What this study suggests is that the problem of acquisition is made simpler by the fact that learners already know some important things about human languages-in this case, that certain words orders are likely to occur and others are not."
This study was done with the support of a $3.2 million National Science Foundation grant called the Integrative Graduate Education and Research Traineeship grant, or IGERT, a unique initiative aimed at training doctoral students to tackle investigations from a multidisciplinary perspective.
According to Smolensky, the goal of the IGERT program in Johns Hopkins' Cognitive Science Department is to overcome barriers that have long separated the way that different disciplines have tackled language research.
"Using this grant, we are training a generation of interdisciplinary language researchers who can bring together the now widely separated and often divergent bodies of research on language conducted from the perspectives of engineering, psychology and various types of linguistics," said Smolensky, principal investigator for the department's IGERT program.
Culbertson used tools from experimental psychology, cognitive science, linguistics and mathematics in designing and carrying out her study.
"The graduate training I received through the IGERT program at Johns Hopkins allowed me to synthesize ideas and approaches from a broad range of fields in order to develop a novel approach to a really classic question in the language sciences," she said.
Provided by Johns Hopkins University
"Artificial grammar learning reveals inborn language sense, study shows." May 13th, 2011. http://medicalxpress.com/news/2011-05-artificial-grammar-reveals-inborn-language.html
Comment: If Verblog is contrived by English speakers then it will unwittingly incorporate elements of English syntax and grammar, not those of German, Hopi or Swahili. What they have discovered is the 'language-specific' grammar formed during the earliest years of life and not the innate form, assuming that there is one, which would lie below that level and not be specific to any language.
Posted by Robert Karl Stonjek

Let us chant OmSivaSivaOm on Vaikasi Visakam


by Keyem Dharmalingam on Sunday, 15 May 2011 at 12:05

The full moon (pournami) tithi falls on the coming Monday 16.5.2011 and Tuesday 17.5.2011. Since the Vaikasi pournami runs from Monday evening until about 5 pm on Tuesday evening, the night of Monday 16.5.11 should be taken as the pournami. In addition, two pournamis fall in this month of Vaikasi: another comes at the end of the month, but it falls on the Kettai star rather than the Visaka star, so the first pournami is Vaikasi Visakam. On the pournami, sit at any Amman shrine from 9 pm to midnight (without eating anything during the day, if possible), or for at least an hour, and chant OmSivaSivaOm. Each OmSivaSivaOm chant will give us the benefit of chanting one crore (ten million) times; moreover, chanting with a five-faced rudraksha held in each hand gives a single OmSivaSivaOm chant the benefit of chanting 100 crore times.

Even so, why are our legitimate requests or desires not fulfilled quickly?

In this birth we are experiencing the sins and karmas committed in at least our past seven births, and the merit too. We suffer because the account of sin, or karma, is the larger one. In the Kali yuga, chanting the divine name is the most suitable way to dissolve this karmic account; it is simple and easy. Besides, only we can dissolve our own karma; no one else can dissolve it for us!

Therefore our count of OmSivaSivaOm chants must exceed one lakh (100,000). If we chant OmSivaSivaOm twice a day, an hour at a time, we will have chanted it only about 400 times a day (whether we keep count or not). Once our OmSivaSivaOm count passes five thousand, we will begin to notice small miracles. If two weeks of practice can bring this experience, why shouldn't we try it too? ...Thanks to the website: OmSivaSivaOm

Earthquake in southern Spain – more than 10 killed!

Indonesia deports illegal people-smuggler – Australia arrests him!

US immigration law must be reformed – Obama!

Schengen visa creates a crisis for European Union countries!

Crushing defeat for Congress – Thangabalu resigns!

DMK defeat – party worker dies by self-immolation!

Jayalalithaa to be sworn in as Chief Minister tomorrow at 12.15!

IMF chief arrested on sexual assault charges!

OM (aum)

Lord Shiva ( Awesome Bhajan ) *****

Om Namah Shivaya (DHUN) (Must Listen)

Shiv chalisa