
Wednesday, March 21, 2012

What You Don't Know About Kaizen Budgeting May Shock You



Do you know which budgeting method is used by one of the most successful retail chains in the world? If your answer is Wal-Mart, you are correct. Wal-Mart uses Kaizen budgeting to cut its expenses and maximize its earnings. The Kaizen budgeting process helps Wal-Mart reduce its overhead by improving efficiency and workflow through the steady improvement of its processes and work, and it is a good example of Kaizen budgeting in practice. The approach has a few drawbacks, but compared with the alternatives it is a gem of a method: it increases the productivity and efficiency of both employees and the organization as a whole.
"Kaizen" is a Japanese term meaning "continuous improvement." In the business world it usually refers to making small improvements throughout the organization; the improvements accumulate little by little until they make a huge difference. Every part of the company, from marketing to manufacturing, should constantly be seeking ways to make its work and processes easier, more efficient, and more profitable. Doing so increases revenue and decreases costs, giving the company an edge over its competitors.
While the concept of Kaizen is sound, a company cannot rely on an idea alone as a driving force. This is where the Kaizen budgeting approach comes into play, as a budgetary framework that drives the company forward by building expected improvements into the numbers. Kaizen budgeting assumes that improvements will be made, so each successive period is allocated less money than the one before it: as the implemented improvements increase efficiency, the level of funding required falls. Allocating less money to each period also enforces the improvements, because a department that fails to improve will exceed its budget.
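The period-by-period logic can be sketched numerically. The function and figures below are illustrative assumptions for the sake of an example, not Wal-Mart's actual budgeting numbers:

```python
# Toy Kaizen budgeting schedule: each period's allocation is reduced by an
# assumed continuous-improvement rate relative to the previous period.
# The base cost and improvement rate are invented for illustration.
def kaizen_budget(base_cost, improvement_rate, periods):
    """Return per-period budget allocations, each (1 - improvement_rate)
    times the previous one, reflecting expected efficiency gains."""
    return [round(base_cost * (1 - improvement_rate) ** p, 2)
            for p in range(periods)]

# A department budgeted at $100,000 with a 2% expected improvement per month:
print(kaizen_budget(100_000, 0.02, 6))  # allocations decline about 2% each month
```

A department that actually achieves the assumed improvement stays within budget each month; one that does not will overrun, which is exactly the enforcement mechanism described above.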
Everyone working in a Kaizen business should be able to think differently and look for ways to achieve their objectives. The whole team will find it easier to progress and learn new things as they build improvement into their working habits to meet their own and the company's requirements for efficiency and output. This management process is a little like a muscle that burns more fat the stronger it grows: workers, like muscles, must be well maintained and nurtured if the company is to keep progressing quickly.
Kaizen programs often include a fund used to reward employees who make significant contributions to the workplace. The fund has a dual purpose: motivating employees to come up with ideas, and helping to identify those employees who are not contributing effectively. The overall effect is to make the company healthier, stronger, more profitable, and more efficient.
Businesses large and small can benefit from this solid business tool. For proof, look at Japan's successful business sector, or at the ever-expanding Wal-Mart Corporation. Companies that apply Kaizen with patience, attentiveness, and a creative spirit increase their profit margins and efficiency, reduce their expenditure and waste, motivate and stimulate their workers, and strengthen the organization. It is a jewel of a strategy, and one not to be missed.

Better Organic Electronics: Researchers Show the Way Forward for Improving Organic and Molecular Electronic Devices



Electron diffraction patterns provide a wealth of information about the morphology, structure, and quality of monolayer organic thin films. (Credit: Berkeley Lab’s Molecular Foundry)

Science Daily  — Future prospects for superior new organic electronic devices are brighter now thanks to a new study by researchers with the U.S. Department of Energy (DOE)'s Lawrence Berkeley National Laboratory (Berkeley Lab). Working at the Lab's Molecular Foundry, a DOE nanoscience center, the team has provided the first experimental determination of the pathways by which electrical charge is transported from molecule-to-molecule in an organic thin film. Their results also show how such organic films can be chemically modified to improve conductance.

"We have shown that when the molecules in organic thin films are aligned in particular directions, there is much better conductance," says Miquel Salmeron, a leading authority on nanoscale surface imaging who directs Berkeley Lab's Materials Sciences Division and who led this study. "Chemists already know how to fabricate organic thin films in a way that can achieve such an alignment, which means they should be able to use the information provided by our methodology to determine the molecular alignment and its role on charge transport across and along the molecules. This will help improve the performances of future organic electronic devices."
Salmeron and Shaul Aloni, also of the Materials Sciences Division, are the corresponding authors of a paper in the journal Nano Letters that describes this work. The paper is titled "Electron Microscopy Reveals Structure and Morphology of One Molecule Thin Organic Films." Other co-authors were Virginia Altoe, Florent Martin and Allard Katan.
Organic electronics, also known as plastic or polymer electronics, are devices that utilize carbon-based molecules as conductors rather than metals or semiconductors. They are prized for their low costs, light weight and rubbery flexibility. Organic electronics are also expected to play a big role in molecular computing, but to date their use has been hampered by low electrical conductance in comparison to metals and semiconductors.
"Chemists and engineers have been using their intuition and trial-and-error testing to make progress in the field but at some point you hit a wall unless you understand what is going on at the molecular level, for example, how electrons or holes flow through or across molecules, how the charge transport depends on the structure of the organic layers and the orientation of the molecules, and how the charge transport responds to mechanical forces and chemical inputs," Salmeron says. "With our experimental results, we have shown that we can now provide answers for these questions."
In this study, Salmeron and his colleagues used electron diffraction patterns to map the crystal structures of molecular films made from monolayers of short versions of commonly used polymers containing long chains of thiophene units. They focused specifically on pentathiophene butyric acid (5TBA) and two of its derivatives (D5TBA and DH5TBA) that were induced to self-assemble on various electron-transparent substrates. Pentathiophenes -- molecules built from thiophene rings of four carbon atoms and one sulfur atom -- are members of a well-studied and promising family of organic semiconductors.
Obtaining structural crystallographic maps of monolayer organic films using electron beams posed a major challenge, as Aloni explains.
"These organic molecules are extremely sensitive to high-energy electrons," he says. "When you shoot a beam of high-energy electrons through the film, it immediately affects the molecules. Within a few seconds we no longer see the signature intermolecular alignment in the diffraction pattern. Despite this, when applied correctly, electron microscopy becomes an essential tool that can provide unique information on organic samples."
Salmeron, Aloni and their colleagues overcame the challenge through the combination of a unique strategy they developed and a transmission electron microscope (TEM) at the Molecular Foundry's Imaging and Manipulation of Nanostructures Facility. Electron diffraction patterns were collected as a parallel electron beam was scanned over the film, then analyzed by computer to generate structural crystallographic maps.
"These maps contain uncompromised information of the size, symmetry and orientation of the unit cell, the orientation and structure of the domains, the degree of crystallinity, and any variations on the micrometer scale," says first author Altoe. "Such data are crucial to understanding the structure and electrical transport properties of the organic films, and allow us to track small changes driven by chemical modifications of the support films."
In their paper, the authors acknowledge that to gain structural information they had to sacrifice some resolution.
"The achievable resolution of the structural map is a compromise between sample radiation hardness, detector sensitivity and noise, and data acquisition rate," Salmeron says. "To keep the dose of high-energy electrons at a level the monolayer film could support and still be able to collect valuable information about its structure, we had to spread the beam to a 90 nanometer diameter. However, a fast and direct control of the beam position, combined with the use of fast and ultrasensitive detectors, should allow for the use of smaller beams with a higher electron flux, resulting in better than 10 nanometer resolution."
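The trade-off Salmeron describes is simple geometry: for a fixed electron current, the dose delivered per unit area falls with the square of the beam diameter. The helper below is a hypothetical illustration of that scaling, not code or figures from the study:

```python
# For a fixed beam current and dwell time, the electron dose per unit area
# scales as 1 / (beam diameter)^2, so a wider beam damages a
# radiation-sensitive film less. Purely illustrative scaling sketch.
def dose_reduction_factor(narrow_nm, wide_nm):
    """Factor by which areal dose drops when the beam is spread from
    narrow_nm to wide_nm at constant total current."""
    return (wide_nm / narrow_nm) ** 2

print(dose_reduction_factor(10, 90))  # -> 81.0: spreading 10 nm -> 90 nm cuts areal dose 81x
```

This is why shrinking the beam back toward 10 nm, as the authors hope to do, requires faster and more sensitive detectors: the film can only tolerate the higher areal dose for a much shorter exposure.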
While the combination of organic molecular films and substrates in this study conducts electrical current via electron holes (positively charged vacancies left by missing electrons), Salmeron and his colleagues say their structural mapping can also be applied to materials whose conductance is electron-based.
"We expect our methodology to have widespread applications in materials research," Salmeron says.
Aloni and Altoe say this methodology is now available at the Imaging and Manipulation of Nanostructures Facility for users of the Molecular Foundry.
This research was supported by the DOE Office of Science.

New Method for Cleaning Up Nuclear Waste




Science Daily  — While the costs associated with storing nuclear waste and the possibility of it leaching into the environment remain legitimate concerns, they may no longer be obstacles on the road to cleaner energy.

If one considers that the radionuclide technetium-99 (99Tc) is present in the nuclear waste at most storage sites around the world, the scale of the problem becomes clear. More than 436 nuclear power plants operate in 30 countries; that is a lot of nuclear waste. In fact, approximately 305 metric tons of 99Tc were generated by nuclear reactors and weapons testing from 1943 through 2010, and its safe storage has been an issue for decades.

A new paper by researchers at the University of Notre Dame, led by Thomas E. Albrecht-Schmitt, professor of civil engineering and geological sciences and concurrent professor of chemistry and biochemistry, showcases Notre Dame Thorium Borate-1 (NDTB-1), a crystalline compound that can be tailored to safely absorb radioactive ions from nuclear waste streams. Once captured, the radioactive ions can be exchanged for higher-charged species of a similar size, recycling the material for reuse.
"The framework of the NDTB-1 is key," says Albrecht-Schmitt. "Each crystal contains a framework of channels and cages featuring billions of tiny pores, which allow for the interchange of anions with a variety of environmental contaminants, especially those used in the nuclear industry, such as chromate and pertechnetate."
Albrecht-Schmitt's team has completed successful laboratory studies using the NDTB-1 crystals, during which they removed approximately 96 percent of the 99Tc. Additional field tests conducted at the Savannah River National Laboratory in Aiken, S.C., and discussed in the paper have shown that the Notre Dame compound successfully removes 99Tc from nuclear waste and also exhibits positive exchange selectivity for greater efficiency.

Greenhouse Gas Can Find a Home Underground



The MIT researchers analyzed several specific deep saline aquifers in the United States. They determined their potential storage capacity by analyzing how the liquified gas would spread from a particular set of deep injection wells. (Credit: Michael Szulczewski, of the Juanes Research Group, MIT) 

Science Daily  — A new study by MIT researchers shows there is enough capacity in deep saline aquifers in the United States to store at least a century's carbon dioxide emissions from the nation's coal-fired power plants. Though questions remain about the economics of systems to capture and store such gases, this study addresses a significant issue that has overshadowed such proposals.

The MIT team's analysis -- led by Ruben Juanes, the ARCO Associate Professor in Energy Studies in the Department of Civil and Environmental Engineering, and part of the doctoral thesis work of graduate students Christopher MacMinn PhD '12 and Michael Szulczewski -- is published this week in the Proceedings of the National Academy of Sciences.
Coal-burning power plants account for about 40 per cent of worldwide carbon emissions, so climate change "will not be addressed unless we address carbon dioxide emissions from coal plants," Juanes says. "We should do many different things", such as developing new, cleaner alternatives, he says, "but one thing that's not going away is coal" because it's such a cheap and widely available power source.
Efforts to curb greenhouse gases have primarily focused on the search for practical, economical sources of clean energy, such as wind or solar power. But human emissions are now so vast that many analysts think it's unlikely that these technologies alone can solve the problem. Some have proposed systems for capturing emissions, mainly carbon dioxide from burning fossil fuels, then compressing and storing the waste in deep geological formations. This approach is known as carbon capture and storage, or CCS.
Deep saline aquifers are one of the most promising places to store the gas: more than half a mile below the surface, far below the freshwater sources used for human consumption and agriculture. But estimates of the capacity of such formations in the United States have ranged from enough to store just a few years' worth of coal-plant emissions to thousands of years' worth.
The reason for the vast disparity in estimates is twofold. First, because deep saline aquifers have no commercial value, there needs to be more exploration to determine their extent. Second, the fluid dynamics of how concentrated, liquefied carbon dioxide would spread through such formations is very complex and challenging to model. Most analyses have simply estimated the overall volume of the formations without considering the dynamics of how the CO2 would infiltrate them.
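A "volume only" estimate of the sort the MIT team improves on can be sketched as a back-of-envelope calculation. Every parameter below is an assumed, illustrative figure, not a value from the PNAS study:

```python
# Naive volumetric CO2 storage estimate: capacity = bulk volume * porosity
# * storage efficiency * CO2 density. All inputs are illustrative
# assumptions; the MIT study's point is that this static approach ignores
# the dynamics of how injected CO2 actually spreads.
def co2_capacity_gigatonnes(area_km2, thickness_m, porosity, efficiency,
                            co2_density_kg_m3=700.0):
    volume_m3 = area_km2 * 1e6 * thickness_m      # aquifer bulk volume
    pore_m3 = volume_m3 * porosity * efficiency   # usable pore space
    return pore_m3 * co2_density_kg_m3 / 1e12     # kg -> gigatonnes

# e.g. a 100,000 km^2 aquifer, 100 m thick, 20% porosity, 2% efficiency:
print(co2_capacity_gigatonnes(100_000, 100, 0.20, 0.02))  # -> 28.0 Gt
```

Small changes to the assumed porosity or storage-efficiency factor swing the answer by orders of magnitude, which is one reason published capacity estimates have ranged from a few years' worth of emissions to thousands.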
The MIT team modelled how the carbon dioxide would percolate through the rock, accounting not only for the ultimate capacity of the formations but also the rate of injection that could be sustained over time. "The key is capturing the essential physics of the problem," Szulczewski says, "but simplifying it enough so it could be applied to the entire country." That meant looking at the details of trapping mechanisms in the porous rock at a scale of microns and then applying that understanding to formations that span hundreds of miles.
"We started with the full complicated set of equations for the fluid flow and then simplified it," MacMinn says. Other estimates have tended to oversimplify the problem, "missing some of the nuances of the physics," he says. While this analysis focused on the United States, MacMinn says similar storage capacities likely exist worldwide.
Howard Herzog, a senior research engineer with the MIT Energy Initiative and a co-author of the PNAS paper, says this study "demonstrates that the rate of injection of CO2 into a reservoir is a critical parameter in making storage estimates."
When liquefied carbon dioxide is dissolved in salty water, the resulting fluid is denser than either of the constituents, so it naturally sinks. It's a slow process, but "once the carbon dioxide is dissolved, you've won the game," Juanes says, because the dense, heavy mixture would rarely return to the atmosphere.
While this study did not address the cost of CCS systems, many analysts have concluded that they could add 15 to 30 per cent to the cost of coal-generated electricity, and would only be viable if a carbon tax or a limit on carbon emissions were implemented.
While uncertainties remain, "I really think CCS has a role to play," Juanes says. "It's not an ultimate salvation; it's a bridge, but it may be essential because it can really address the emissions from coal and natural gas."

Red meat could help mental health



DEAKIN UNIVERSITY   



Deakin University health researchers have found that eating less than the recommended amount of red meat is related to depression and anxiety in women.

Associate Professor Felice Jacka and colleagues from Deakin's Barwon Psychiatric Research Unit based at Barwon Health investigated the relationship between the consumption of beef and lamb and the presence of depressive and anxiety disorders in more than 1000 women from the Geelong region. The results are published in the current edition of the journal Psychotherapy and Psychosomatics.

“We had originally thought that red meat might not be good for mental health, as studies from other countries had found red meat consumption to be associated with physical health risks, but it turns out that it may be quite important,” Associate Professor Jacka said.

“When we looked at women consuming less than the recommended amount of red meat in our study, we found that they were twice as likely to have a diagnosed depressive or anxiety disorder as those consuming the recommended amount.

“Even when we considered the overall healthiness of the women’s diets and other factors such as their socioeconomic status, physical activity levels, smoking, weight and age, the relationship between low red meat intake and mental health remained.

“Interestingly, there was no relationship between other forms of protein, such as chicken, pork, fish or plant-based proteins, and mental health.

“Vegetarianism was not the explanation either. Only 19 women in the study were vegetarians, and the results were the same when they were excluded from the study analyses.”
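The "twice as likely" comparison is an odds-style statistic. With invented counts (the article does not give the study's actual numbers), it can be reproduced like this:

```python
# Hypothetical 2x2 table, with counts invented purely for illustration:
#                        disorder   no disorder
# low red-meat intake        40         160
# recommended intake         20         180
def odds_ratio(a, b, c, d):
    """Odds of the disorder in the exposed group divided by the odds
    in the comparison group."""
    return (a / b) / (c / d)

print(round(odds_ratio(40, 160, 20, 180), 2))  # -> 2.25, roughly "twice the odds"
```

In the actual study this ratio would also be adjusted for diet quality, socioeconomic status, and the other factors the researchers mention, typically via logistic regression.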

Associate Professor Jacka said that it didn’t seem to be a good idea to overeat red meat either.

“We found that regularly eating more than the recommended amount of red meat was also related to increased depression and anxiety,” she said.

Given the results of this study, Associate Professor Jacka believes following the recommended weekly intake of red meat could boost our mental health.

“We already know that the overall quality of your diet is important to mental health. But eating a moderate amount of lean red meat, which is roughly 3-4 small, palm-sized serves a week, may also be important,” she said.

Associate Professor Jacka also suggests sticking with grass-fed meats whenever possible.

“We know that red meat in Australia is a healthy product as it contains high levels of nutrients, including the omega-3 fatty acids important to mental and physical health. This is because cattle and sheep in Australia are largely grass-fed. In many other countries, the cattle are kept in feedlots and fed grains rather than grass. This results in a much less healthy meat with more saturated fat and fewer healthy fats.”
Editor's Note: Original news release can be found here.

Sea levels could rise 22 metres



VICTORIA UNIVERSITY OF WELLINGTON   


Even if we manage to limit global warming to 2°C, as the Intergovernmental Panel on Climate Change recommends, future generations could face sea levels 12 to 22 metres higher than present, according to new research.

The research was published today in the journal Geology by Professor Ken Miller of Rutgers University (New Jersey) and an international team including New Zealander Professor Tim Naish from Victoria University of Wellington.

The researchers studied sediment cores in Virginia in the United States, Enewetak Atoll in the Pacific and the Whanganui region of New Zealand.

They investigated the late Pliocene epoch — 2.7 million to 3.2 million years ago — which is the last time the carbon dioxide level in the atmosphere was at its current level, and atmospheric temperatures were two degrees higher than they are now.

"We know that global sea levels at this time were higher than present, but estimates have varied from five to over 40 metres higher," says Professor Naish. 

He says the team analysed the position of the sea level 3 million years ago and concluded that it was extremely likely — with 95 percent confidence — that sea level peaked 10 to 30 metres above present, with a best estimate of 22 metres.

"Whanganui holds one of the world’s best geological archives of global sea-level during the warm climate of the Pliocene and is a key data set in this new study," says Professor Naish, who has been conducting research there for the last 20 years.

Professor Naish also led an international team to Antarctica as part of the ANDRILL Project to drill beneath the floor of the Ross Sea in 2006 and discovered that the Antarctic ice sheets retreated significantly during the Pliocene epoch.

"What we’re seeing is that the evidence of Antarctic ice sheet collapse is consistent with evidence for sea-level rise in this new study," says Professor Naish.

Professor Ken Miller, who led the study, says that sea-level rise would take time.

"You don’t need to sell your beach real estate yet, because melting of these large ice sheets will take from centuries to a few thousand years," he says.

"The current trajectory for the 21st century global rise of sea level is 2 to 3 feet (0.8 to 1 metre) due to warming of the oceans, partial melting of mountain glaciers, and partial melting of Greenland and Antarctica."

Still, says Professor Naish, the study highlights the sensitivity of the earth's large ice sheets to temperature change, and shows that the natural state of the earth under the level of carbon dioxide already attained in the atmosphere is one with sea levels around 20 metres above present.

"If the present levels of carbon dioxide in the atmosphere are not abated, and humans were to disappear from the planet and return in 2,000 years, they would find a world where the oceans have risen 20 metres," says Professor Naish.

Better solar cell developed



FLINDERS UNIVERSITY   


Imagine a world where the windows of high-rise office buildings are productive energy generators, offering their inhabitants much more than fresh air, light and a view.

For the past four years, a team of researchers from Flinders University has been working to make this dream a reality – and now the notion of solar-powered windows could be coming to a not-too-distant future near you.

As part of his just-completed PhD, Dr Mark Bissett from the School of Chemical and Physical Sciences has developed a revolutionary solar cell using carbon nanotubes.

Carbon nanotubes are a promising alternative to traditional silicon-based solar cells: they are cheaper to produce and more efficient than their energy-hungry silicon counterparts.

“Solar power is the most expensive type of renewable energy. The silicon solar cells we see on peoples’ roofs are costly to produce and use a lot of electricity to purify,” Dr Bissett said.

“The overall efficiency of silicon solar cells is about 10 per cent, and even when they’re operating at optimal efficiency, it could take eight to 15 years to make back the energy that it took to produce them in the first place because they’re produced using fossil fuels,” he said.
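The 8-to-15-year payback Dr Bissett quotes is straightforward arithmetic: the energy embodied in manufacturing a panel divided by the energy it generates per year. The round numbers below are assumptions chosen for illustration, not Flinders data:

```python
# Energy payback time = embodied manufacturing energy / annual energy yield.
# All figures are assumed round numbers for illustration only.
def payback_years(embodied_kwh_per_m2, insolation_kwh_m2_yr, efficiency,
                  performance_ratio=0.75):
    """Years for a panel to generate the energy used to make it."""
    annual_yield = insolation_kwh_m2_yr * efficiency * performance_ratio
    return embodied_kwh_per_m2 / annual_yield

# e.g. 1,200 kWh/m^2 embodied energy, 1,500 kWh/m^2/yr insolation, 10% cells:
print(round(payback_years(1200, 1500, 0.10), 1))  # -> 10.7 years
```

Plugging in plausible spreads for embodied energy and local sunshine readily produces the 8-to-15-year range quoted in the article, and shows why cells that are cheap to manufacture in energy terms are attractive even at modest efficiency.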

Dr Bissett said the new, low-cost carbon nanotubes are transparent, meaning they can be 'sprayed' onto windows without blocking light, and they are also flexible, so they can be woven into a range of materials. One is fabric, a concept already being explored by advertising companies.

While the amount of power generated by solar windows would not be enough to completely offset the energy consumption of a standard office building, Dr Bissett said they still had many financial and environmental advantages.

“In a new building, or one where the windows are being replaced anyway, adding transparent solar cells to the glass would be a relatively small cost since the cost of the glass, frames, and installation would be the same with or without the solar component,” Dr Bissett said.

“It’s basically like tinting the windows except they’re able to produce electricity, and considering office buildings don’t have a lot of roof space for solar panels, it makes sense to utilise the many windows they have instead.”

Dr Bissett said the technology mimics photosynthesis, the process whereby plants obtain energy from the sun.

“A solar cell is created by taking two sheets of electrically conductive glass and sandwiching a layer of functionalised single-walled carbon nanotubes between the glass sheets,” he said.

“When light shines on the cell, electrons are generated within the carbon nanotubes, which can be used to power electrical devices.”

Although small prototypes have been developed in the lab, he said the next step would be to test the carbon cells at an industrial scale.

The material could be on the market within 10 years if all goes to plan.

“When we first started the research, we had no idea if it would work because we were the first in the world to try it, so it’s pretty exciting that we’ve proved the concept, and hopefully, it will be commercially available in a few years,” Dr Bissett said.

Dr Bissett is a winner of Flinders's inaugural Best Student Paper Award, a now annual program which aims to recognise excellence in student research across the University.

Friendly to a fault, yet tense: Personality traits traced in brain



The severity of abnormalities in the insula (red structure near bottom of brain), gray matter volume (left) and brain activity (right) predicted the extent of aberrant personality traits in Williams syndrome patients -- as reflected in their scores (red dots) on personality rating scales (WSPP). Credit: Karen Berman, M.D., NIMH Clinical Brain Disorders Branch
A personality profile marked by overly gregarious yet anxious behaviour is rooted in the abnormal development of a circuit hub buried deep in the front centre of the brain, say scientists at the National Institutes of Health. They used three different types of brain imaging to pinpoint the suspect brain area in people with Williams syndrome, a rare genetic disorder characterised by these behaviours. Matching the scans to scores on a personality rating scale revealed that the more an individual with Williams syndrome showed these personality/temperament traits, the more abnormalities there were in the brain structure, called the insula.
"Scans of the brain's tissue composition, wiring, and activity produced converging evidence of genetically-caused abnormalities in the structure and function of the front part of the insula and in its connectivity to other brain areas in the circuit," explained Karen Berman, M.D., of the NIH's National Institute of Mental Health (NIMH).
Berman, Drs. Mbemba Jabbi, Shane Kippenhan, and colleagues report on their imaging study of Williams syndrome online in the journal Proceedings of the National Academy of Sciences.
"This line of research offers insight into how genes help to shape brain circuitry that regulates complex behaviors – such as the way a person responds to others – and thus holds promise for unraveling brain mechanisms in other disorders of social behavior," said NIMH Director Thomas R. Insel, M.D.

Long-distance white-matter connections between the insula and other brain areas are aberrant in Williams syndrome. Neuronal fibers of normal controls (left) extend further than those of Williams syndrome patients (right). The picture shows diffusion tensor imaging data from each patient superimposed on the anatomical MRI of the median patient. Credit: Karen Berman, M.D., NIMH Clinical Brain Disorders Branch
Williams syndrome is caused by the deletion of some 28 genes, many involved in brain development and behavior, in a particular section of chromosome 7. Among deficits characteristic of the syndrome are a lack of visual-spatial ability – such as is required to assemble a puzzle – and a tendency to be overly-friendly with people, while overly anxious about non-social matters, such as spiders or heights. Many people with the disorder are also mentally challenged and learning disabled, but some have normal IQs.
Previous imaging studies by the NIMH researchers found abnormal tracts of the neuronal fibers that conduct long-distance communications between brain regions -- likely resulting from neurons migrating to the wrong destinations during early development.
Evidence suggests that genes influence our temperament and the development of mental disorders via effects on brain circuits that regulate behavior. Yet direct demonstration of this in humans has proven elusive. Since the genetic basis of Williams syndrome is well known, it offers a unique opportunity to explore such effects with neuroimaging, reasoned the researchers.
Although the insula had not previously been studied in such detail in the disorder, it was known to be related to brain circuitry and certain behaviors, such as empathy, which is also highly prominent in the disorder. Berman and colleagues hypothesized that the insula's anatomy, function and connectivity would predict patients' scores for Williams syndrome-associated traits on personality rating scales. Fourteen intellectually normal Williams syndrome participants and 23 healthy controls participated in the study.
Magnetic resonance imaging (MRI) revealed that patients had decreased gray matter – the brain's working tissue – in the bottom front of the insula, which integrates mood and thinking. By contrast, they had increased gray matter in the top front part of the insula, which has been linked to social/emotional processes.
Diffusion tensor imaging, which by detecting the flow of water in nerve fibers can identify and measure the connections between brain areas, showed reduced white matter – the brain's long-distance wiring – between thinking and emotion hubs.
Tracking radioactively-tagged water in order to measure brain blood flow at rest, via positron emission tomography (PET), exposed activity aberrations consistent with the MRI abnormalities. The PET scans also revealed altered functional coupling between the front of the insula and key structures involved in thinking, mood and fear processing. These structural and functional abnormalities in the front of the insula correlated with the Williams syndrome personality profile.
"Our findings illustrate how brain systems translate genetic vulnerability into behavioral traits," explained Berman.
More information: The Williams syndrome chromosome 7q11.23 hemideletion confers hypersocial, anxious personality coupled with altered insula structure and function. Jabbi M, Kippenhan JS, Kohn P, Marenco S, Mervis CB, Morris CA, Meyer-Lindenberg A, Berman KF. Proc Natl Acad Sci. 2012 Mar 12. [Epub ahead of print] PMID: 22411788
Provided by National Institutes of Health
"Friendly to a fault, yet tense: Personality traits traced in brain." March 20th, 2012. http://medicalxpress.com/news/2012-03-friendly-fault-tense-personality-traits.html
Posted by
Robert Karl Stonjek

Publication bias involving psychiatric medications may provide physicians with an incomplete picture




Physicians who prescribe antipsychotic medications may be basing their decisions on incomplete information, according to new research published by scientists at Oregon Health & Science University. The study is published in PLoS Medicine, a peer-reviewed open-access journal published by the Public Library of Science.
This latest research follows a highly publicized 2008 report in the New England Journal of Medicine demonstrating that antidepressant drug trials were selectively published, exaggerating their apparent effectiveness. This follow-up study suggests that similar concerns exist, though to a somewhat lesser extent, with antipsychotic drugs.
The authors reached these conclusions by reviewing 24 FDA-registered premarketing trials for eight second-generation antipsychotics—aripiprazole (Abilify), iloperidone (Fanapt), olanzapine (Zyprexa), paliperidone (Invega), quetiapine (Seroquel), risperidone (Risperdal), risperidone long-acting injection (Consta), and ziprasidone (Geodon). They then compared the results in the FDA's review documents to the results presented to clinicians and researchers in medical journals.
The authors found that four premarketing trials submitted to the FDA remained unpublished and that all of them yielded unflattering results. Three showed the new antipsychotic had no significant advantage over placebo. In the fourth, the drug was superior to placebo, but it was significantly inferior to a much less expensive competing drug.
In the published trials, there was some evidence that the journal articles over-emphasized efficacy of the new drug. For example, an FDA review revealed that one of the newer drugs, iloperidone (Fanapt), was statistically inferior to three different competing drugs, but this information was not mentioned in the corresponding journal articles.
On the other hand, when the authors used meta-analysis to combine trial data and compare all eight drugs to placebo, they found that publication bias had little impact on the drugs' overall apparent efficacy. This stood in contrast to the researchers' previous study on antidepressants, for which publication bias had a much more substantial impact.
"When you compare between drug classes and use FDA data, it's clear that, overall, antipsychotics are more effective than antidepressants. But when you rely on the data in medical journals, the difference between these two drug classes is obscured," said Erick Turner, M.D., an assistant professor in the Department of Psychiatry and the Department of Pharmacology in the OHSU School of Medicine. Turner also serves as a staff psychiatrist at the Portland Veterans Affairs Medical Center's Mood Disorders Program.
The authors wrote in the paper, "Publication bias can blur distinctions between effective and ineffective drugs."
The authors concluded: "With further studies investigating publication bias in other drug classes, a more accurate evidence base can emerge."
More information: Turner EH, Knoepflmacher D, Shapley L (2012) Publication Bias in Antipsychotic Trials: An Analysis of Efficacy Comparing the Published Literature to the US Food and Drug Administration Database. PLoS Med 9(3): e1001189. doi:10.1371/journal.pmed.1001189. http://www.plosmed … pmed.1001189
Provided by Oregon Health & Science University
"Publication bias involving psychiatric medications may provide physicians with an incomplete picture." March 20th, 2012. http://medicalxpress.com/news/2012-03-bias-involving-psychiatric-medications-physicians.html
Posted by
Robert Karl Stonjek

What was B.F. Skinner really like? A new study parses his traits




March 20th marks the birthday of famed behavioral psychologist B.F. Skinner, who would have turned 108 today. After Sigmund Freud, B.F. Skinner was the most famous and perhaps the most influential psychologist of the 20th century. But his own "radical behaviorism"—the idea that behavior is caused solely by environmental factors, never by thoughts or feelings—made him a magnet for controversy, which grew even more intense with the publication of his best-known book, Beyond Freedom and Dignity.
"He was looked at as beyond the pale by a lot of other psychologists, including me," says Dean Keith Simonton, a psychologist at the University of California Davis, who was a graduate student at Harvard when Skinner taught there. Some even called Skinner a fascist for his radical views of human malleability. But, says Simonton, "people who knew him would also say, 'You really should talk to Skinner, because he's a much broader, more open person than you think.'"
Who was B.F. Skinner? University of Oslo psychologists Geir Overskeid and Cato Grønnerød, along with Simonton, used a variety of source material plus an instrument that scores people on five major personality factors, to describe him and compare him with other eminent scientists. The study, which appears in Perspectives on Psychological Science, a journal published by the Association for Psychological Science, reveals a complex man—but nothing like the monster his detractors called him.
To draw an objective picture of Skinner, the psychologists first combed through published sources both biographical and autobiographical, archival material, and sketches written by people who knew him. From these they culled 118 descriptive words and phrases, from "fanatic" to "afraid of the police." Five raters blind to the subject's identity categorized each descriptor under the Big Five traits that psychologists use to describe personality—Openness, Conscientiousness, Agreeableness, Extroversion, and Neuroticism—and assigned each descriptor a degree from -2 to +2. The authors kept the 81 descriptors on which four of the five raters agreed; there was almost complete agreement as to degree.
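The rating procedure described above amounts to a simple consensus filter: keep a descriptor only when at least four of the five raters assign it to the same Big Five trait. A minimal sketch in Python, with hypothetical descriptor names and votes (not the study's actual data):

```python
from collections import Counter

# Hypothetical ratings: descriptor -> one (trait, degree) vote per rater.
# Descriptors and votes below are illustrative, not taken from the study.
ratings = {
    "fanatic": [("Conscientiousness", 2)] * 4 + [("Neuroticism", 1)],
    "charming": [("Extroversion", 1)] * 5,
    "afraid of the police": [("Neuroticism", 1)] * 3
                            + [("Openness", 0), ("Agreeableness", -1)],
}

def consensus(rater_votes, min_agree=4):
    """Return the agreed-on Big Five trait if at least `min_agree`
    raters assigned the descriptor to the same trait, else None."""
    trait, count = Counter(t for t, _ in rater_votes).most_common(1)[0]
    return trait if count >= min_agree else None

# Keep only descriptors with sufficient inter-rater agreement.
kept = {d: consensus(v) for d, v in ratings.items() if consensus(v)}
print(kept)  # {'fanatic': 'Conscientiousness', 'charming': 'Extroversion'}
```

Under this 4-of-5 rule, "afraid of the police" would be dropped because only three raters agreed on a trait; the study applied the same kind of threshold to winnow 118 descriptors down to 81.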
The results: Skinner was highly conscientious—scoring 1.8—working tirelessly and meticulously toward ambitious goals. Indeed, he wrote that he aimed to remake the "entire field of psychology" and viewed relaxation as dangerous. And those Harvard students were right about Skinner's openness to experience. Besides being a psychologist, he painted, wrote a novel, played saxophone and piano, and enjoyed all kinds of music. He was also somewhat neurotic and extroverted: known as charming, funny—and a womanizer.
In many respects, Skinner's is the profile of an eminent scientist—for his drive and discipline, creative versatility, and also for his neuroticism, a trait shared by as many as 45% of leading scientists, according to one analysis. What the profile does not represent: an evil authoritarian. "This article makes Skinner more human," says Simonton—not just a "consolidation" of traits but also an array of nuanced detail. Though objective, it's not "a polarizing treatment. You don't have to love or hate him."
Provided by Association for Psychological Science
"What was B.F. Skinner really like? A new study parses his traits." March 20th, 2012. http://medicalxpress.com/news/2012-03-bf-skinner-parses-traits.html
Comment:
'Fred' to his friends...
Posted by
Robert Karl Stonjek

'Anti-alcoholism' drug clears key test hurdle




A drug designed to treat nervous spasms has cleared an important early test in a project to see whether it can also cure alcoholism, French doctors said on Tuesday.
Baclofen -- the lab name for a medication branded as Kemstro, Lioresal and Gablofen -- was successful in a preliminary test among a small group of alcoholics, a result that opens the way to formal clinical trials, they said.
The history of the drug goes back 50 years. It was originally designed for epilepsy before becoming licensed to treat spasticity, but researchers are now interested in using it to ease alcoholic craving.
Interest was sparked in 2008 by a book, Le Dernier Verre (The Last Glass), by cardiologist Olivier Ameisen, who self-treated his alcoholism with high doses of baclofen.
The new test entailed enrolling 132 heavy drinkers who were given baclofen at high doses over a year.
Eighty percent either became abstinent or became moderate drinkers. By comparison, two drugs that are commonly used to treat alcoholics, naltrexone and acamprosate, yield a success rate of 20-25 percent.
Side effects included fatigue, drowsiness, insomnia, dizziness and digestive troubles.
Lead researcher Philippe Jaury of the University of Paris-Descartes said the outcome opened the door to one-year clinical trials, expected to start in May, in which 320 alcoholics would be divided into two groups.
One batch will receive baclofen, progressively building in dosage until the craving symptoms subside, while the others will receive an inactive lookalike pill, or placebo.
France's health system is paying 750,000 euros ($469,000) of the 1.2-million-euro ($1.45-million) cost of the trial, and an unidentified donor is paying the rest, Jaury told AFP.
The pre-trial study is published in a specialist journal, Alcohol and Alcoholism.
(c) 2012 AFP
"'Anti-alcoholism' drug clears key test hurdle." March 20th, 2012. http://medicalxpress.com/news/2012-03-anti-alcoholism-drug-key-hurdle.html
Posted by
Robert Karl Stonjek


Tuesday, March 20, 2012

Top 20 Common Job Search Strategies:



Ready for the big game? An athlete prepares for months, even years, with rigorous training, healthy eating, and a focus on the end result – the win. Perhaps you need to put your “play book” together and create a career action plan.

1. Network, network, network, then network some more. Network using all the tools available (personal contacts, LinkedIn, user groups, the Internet, industry events, etc.). Tell everyone you meet that you are looking for a job.

2. Build a solid resume—fill in gaps, call out successes, convey your added value, and customize it for the job you are interviewing for. Resumes are like paintings: no two are alike. Consider taking a class on how to interview, how to sell yourself, or how to write a winning resume.

3. Build a skills inventory: Candidates need to know themselves and then know how to present themselves.

4. Have a clear idea why you are looking and what your short-term goals are. Clearly understand your long-term goals. Do they make sense with your short-term goals?

5. Research the company, the position, and the person you are interviewing with inside and out before the interview.
Cross-reference every lead you get with your network. For example, if you see a great looking job at XYZ Company, check your personal and LinkedIn networks to see if you know of anyone who works at XYZ Company. If so, they often can give you valuable insight into the company and job.
If you know someone who works at a company you are interviewing with, spend some time with them before you interview.

6. Be passionate and persistent. Be yourself and show enthusiasm.

7. Practice interviewing, get feedback, and revise your interview strategy. If you know any recruiters or managers who regularly interview prospective new employees, ask them to give you a mock interview and take their feedback on your résumé and your interview style. This will improve your confidence and performance in real interview situations.

8. Dress professionally and act professionally. Remember, everything counts!

9. Spend time preparing answers to questions that you can reasonably expect to be asked in an interview.

10. Think before answering a question: Take a breath, and then answer. This will help you digest the question and give a stronger answer, or ask a qualifying question of your own before answering.

11. Ask questions. You need to interview the company just as much as they need to interview you. Build a list of questions about the position you are interviewing for. If you don’t ask questions, you seem uninterested. The first question out of your mouth should not be about money. Show interest in the company and the opportunity.

12. Don’t bring up bad experiences or bad-mouth past employers.

13. Don’t be negative. The interview process can be slow and frustrating. Don’t let this shake you up.

14. Don’t be a name dropper.

15. Listen and engage: Don’t just talk in the interview.

16. Smile and have good eye contact.

17. Be positive and honest. Don’t try to answer interview question with the answers you think the interviewer wants to hear. Be yourself and be honest. Don’t exaggerate.

18. Build a strong list of references: Before offering someone’s name, ask whether they can provide a positive reference that will help you secure your next position.

19. At the end of the interview, ask the question, “Do you have any questions or concerns that would stop you from bringing me back for a second interview?”

20. Send follow-up thank-you notes outlining why you feel you are qualified and ask for the job.

That's, like, super cooool




Heather Littlefield, the head adviser for the linguistics program in the College of Science, explains why young women have become known as bellwethers for vocal trends and popular slang. Credit: Mike Mazzanti.
A study published in December in the Journal of Voice found that female college students have popularized a linguistic fad called “vocal fry,” which has been described as a “guttural fluttering of the vocal cords” with a “lazy, drawn-out effect.” The Northeastern University news office asked Heather Littlefield, the head adviser for the linguistics program in the College of Science, to explain why young women have become known as bellwethers for vocal trends and popular slang.
Pop singer Britney Spears, reality TV star Kim Kardashian and New York Times executive editor Jill Abramson are all famous for frying their words. Why have women in general — and young women in particular — become known as linguistic innovators? 
This is an interesting question and the answer is quite complex. In fact, we’re still working on understanding this. But one main reason is undoubtedly related to women’s general social status relative to men. Women need to use the currency that is available to them to obtain social status. So while physical strength, political power and money may not be as accessible to women to shape and affect their world to the same degree as men, language is. So they use this tool to their best advantage. Then, because it is an effective tool, others begin to adopt it.
But we should keep in mind two things. First, that there seem to be contexts in which women are more linguistically conservative. For example, when women marry and begin to have families, there is a trend that they uphold the “standard” forms for the language more than men. (For most of us, it was our mothers who corrected our grammar and made sure we didn’t swear).
Second, that while these tools can be very useful, they are still seen in a negative light by most speakers of the language. Take, for example, the use of “like,” which has several different meanings. These new ways of using “like” are very useful, and in fact these patterns of use have spread to other languages. But everyone makes fun of it and denigrates it, even if we all use it.
The Valley Girls of the 1980s popularized uptalk, a speech pattern in which statements are pronounced like questions. But 20 years later, both grandparents and American presidents alike began adopting the vocal pattern. What can you learn about an individual or a group of people by studying vocal trends?  
We can see that language is really a tool to be manipulated by its speakers. When we study a speaker or group of speakers, we can examine their language patterns to try to see what’s important to them. Because the use of language is largely subconscious, it reflects what speakers really believe and want, which can sometimes be different from what they say they believe and want. Linguistic patterns can be very useful for this type of study, but again, it can be very challenging to fully work out such complex patterns and why they occur.
How have text messaging and social networking sites such as Twitter influenced language trends and styles?
This is a new field of study, and we are just beginning to identify some of the trends and styles.  For instance, it seems like young children have a better sense of phonics because they often type things phonetically rather than with the standard orthography (writing “LOL” instead of “laugh out loud”). And of course, some new lexical items come from these domains, so people can now say “LOL,” and we have new verbs, such as “to friend” and “to text.” But overall, it looks like these technologies use a combination of the features of oral and written discourse; it seems unlikely that they will have a deep impact on changing the language. These technologies are really just tools for using language.
Think about the advent of the printing press or the telephone or TV; these were all new tools for the spread of language, but the core structures of the language didn’t change because of them.  There were, of course, introductions of new words to talk about the technologies, but there wasn’t any deep structural change to the language itself because of these new tools.
Provided by Northeastern University
"That's, like, super cooool." March 16th, 2012. http://www.physorg.com/news/2012-03-super-cooool.html
Posted by
Robert Karl Stonjek