
Wednesday, September 21, 2011

Introduction to Project Preparation


Introduction to Project Preparation - Dr. Gamini Abayasekara
 
1. Introduction
·         The execution of projects is one of the most powerful means of speeding up development in desired ways.
·         Projects are accepted as the cutting edge of modern development, a dominant tool that can enhance the ability of administrators and managers to plan, direct and execute the ways in which an organisation uses its resources.
·         Many in the public service have to undertake project work at some point, and hence it is important that they study project management, regardless of their academic qualifications and experience.
·         Projects differ from the routine day-to-day activities of administrators, and so need different approaches to structuring, planning, controlling, coordinating and closing.
·         Therefore the main reason the public is interested in projects and wants them managed successfully is to get the optimum out of scarce resources.

2. Why Learn Project Management?
·         It is through project management that development takes place, and so it should be given priority.
·         Generally projects have failed to achieve their objectives because, unlike in day-to-day administration where risk minimisation is the rule, projects need skills, attitudes and practices that can cope with risk and uncertainty.
·         Project managers should have the ability to challenge assumptions and check their validity, and to base project selection partly on numerical indications of the value of costs and returns – this is unlike routine administration.
·         The public sector also has to come up with new methods and approaches of financing projects that are cost recoverable (as in the private sector).
·         There is a need to adapt to changing circumstances, acquire the new skills necessary, and meet the changing demands of the new management of the public service.
·         The new trend in project management is not for the public sector to do it all alone, but to seek the best combination that will deliver the service, satisfying the customer with maximum efficiency and effectiveness.
·         Public servants must identify potential projects that address government policy and priorities, plus those which have long-term rather than short-term effects.
·         Unfortunately there are four reasons favouring short-term over long-term results of development.
·         Rapid changes in technology and knowledge also have to be adapted to for successful project management, along with consideration of gender and environmental issues.
·         Partnerships as well as co-opetition will form (where you compete as well as cooperate with other firms to the best advantage of your organisation).
·         Though the present public service has varying knowledge, skills and attitudes, its unique strengths can be used to achieve success in projects.

3. What is a Project?
·         There are many classical definitions of the term project.
·         Essential features of a project are:
·         A project is not a routine continuing activity
·         A project has:
§  well defined outputs or deliverables – e.g. the construction of a dam
§  these deliverables are defined by specifications
§  well defined objectives to be achieved through the outputs – e.g. increase in paddy production
§  some larger purpose to be realized or an expected outcome – e.g. reduction of poverty, achieving food security through self-sufficiency
·         A project has to start and be completed within a specific time frame. It has a start and an end
·         A project consumes resources
·         A few things a public servant must be aware of with regard to projects are
o   Because resources are scarce, priorities for their use have to be set by considering the entire public sector.
o   There is a waiting period before the benefits of a project arrive, which invariably imposes costs. This may be felt as a loss by the project in question, by other agencies involved, or by both.
o   Projects must work under a single financial and management framework.
o   Projects have to be planned and appraised before implementation. A “definition workshop” is helpful for this.
o   Projects have limited budgets, therefore overruns are costly.
o   Projects have to be completed within the specified time period.
o   Benefits of the projects, in the case of the public sector, must go to the public (including the underprivileged).
o   Therefore criteria for project appraisal have to be different to those used in the private sector.
o    Part of project planning is to provide alternative ways of achieving the same objective.
·         Despite their value, use of projects for development (project approach) has its critics.  Reasons being:
o   High rate of project failure due to:
§  Optimistic and overestimated benefits
§  Limitations in forecasting models used (that of physical science and engineering – these models assume that there is a direct cause and effect relationship. However, social systems operate differently.)
§  Inconsistencies and uncertainties
§  Non-involvement of beneficiaries
§  Inappropriate intervention (e.g. political intervention)
§  Lack of data
§  Inadequate understanding of social and cultural conditions
·         Alternative approaches to the project approach that have been suggested are:
o   Adaptive management
o   Process approach
o   Learning process approach
o   Problem solving approach
·         However these approaches have not gained as much influence as the project approach.

4. The project cycle
    • The sequence in which a project is initiated and carried out is called the project cycle.
    • The project cycle can be compared to a human lifecycle.
    • Resources used at different stages of a cycle vary with each stage.
    • There are many versions of the project cycle.
    • The Rondinelli project cycle has 12 steps:
1.      Project identification and definition:
§  In Sri Lanka, there are three main sources of project identification, each having its own reasons for selecting a particular project:
o   Politicians: votes
o   Public: improvement of welfare
o   Professionals: experience in the field
§  The challenge for the manager of a public sector project is to see that all three groups are happy, or in other words, to find the place where all three parties agree.
§  This is done at the definition workshop.
2.      Project formulation, preparation & feasibility analysis
§  Several things are defined and analysed at this stage:
o   Objectives
o   components of the projects
o   project size
o   potential location
o   preconditions for successful implementation
o   potential cost benefits
§  At this stage, a preliminary discussion with the relevant political personalities/potential beneficiaries/involved professionals is useful.
3.      Project design
§  The project design should provide:
·   detailed description of the selected design
·   list of strategies of operation
·   justifications of technology chosen
·   technical specifications of machinery to be used
·   technical procedures to be used
·   organization charts and facility requirements
§  the project design must define:
·   project targets
·   activities to take place
·   timing of each activity
·   expected outputs
·   sequence of implementation
·   control systems
§  the project design must contain annual budgets and a project budget for the life span of the project
§  at this stage, designers must cross check whether the duration, financial resources and the tasks of the project are integrated.
4.      Project appraisal
§  This is done by an independent body
§  Normally, feasibility studies are cross checked with the design.
§  Alternatives are not looked into.
§  If required by the appraisal team, new information may need to be collected and supplied.
5.      Project selection, negotiation & approval
§  Selection is based on government policies on allocation of resources.
§  Government policy may depend on:
§  Urgency
§  Cost
§  Benefits
§  In the public sector, there are 2 types of projects:
§  Projects designed to help families – assessed on financial returns (e.g. Samurdhi project)
§  Projects designed for welfare of society – assessed on shadow prices (e.g. construction of a road)
§  After this stage, it is difficult and extremely time consuming to change components or objectives of a project.
§  Approval is usually given by the relevant funding authority
6.      Project activation & organisation
o   At this stage, several activities take place:
o   project office site is selected
o   resources needed to run the office are obtained
o   incentives are created to attract senior personnel to remote areas
o   different project teams are created to perform different functions
o   transport, housing and support personnel are organised
7.      Project implementation & operation
o   This is the most crucial phase of the cycle. It is the point at which “the rubber hits the road” (i.e. where the theory is tested and the action begins).
o   It is the development/construction phase before a project becomes fully operational.
o   Successful implementation depends on:
o   Preparation of an implementation schedule
o   Good supervision, monitoring and control
o   Capable staff
o   Timely release of funds
o   Many problems, anticipated and unanticipated, will surface at this point
o   Therefore the skill of decision making is crucial.
o   The main problem faced is delay, and thus increases in the budget. Delays may be due to:
§  Poor performance of manager
§  Poor performance of contractors
§  Delays in bidding and procurement
§  Technical problems
§  Poor design
o   However, half these problems can be solved by proper scheduling.
o   Use of Gantt charts, network analysis and computer software can be extremely useful for this purpose.
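As a rough illustration of the network analysis mentioned above, the sketch below computes the critical path of a small, invented task network. The task names, durations and dependencies are hypothetical and are not taken from the text.

```python
# Minimal critical-path sketch for a hypothetical task network.
# Task names and durations are invented for illustration only.

tasks = {  # task: (duration in weeks, list of prerequisite tasks)
    "design":        (4,  []),
    "procurement":   (6,  ["design"]),
    "site_prep":     (3,  ["design"]),
    "construction":  (10, ["procurement", "site_prep"]),
    "commissioning": (2,  ["construction"]),
}

def critical_path(tasks):
    """Return (total duration, ordered list of tasks on the critical path)."""
    earliest_finish = {}  # earliest finish time of each task
    best_pred = {}        # predecessor that determines each task's start

    def finish(name):
        if name not in earliest_finish:
            duration, preds = tasks[name]
            start, pred = 0, None
            for p in preds:
                if finish(p) > start:
                    start, pred = finish(p), p
            earliest_finish[name] = start + duration
            best_pred[name] = pred
        return earliest_finish[name]

    end = max(tasks, key=finish)  # the task that finishes last
    path, node = [], end
    while node is not None:       # walk back along the driving predecessors
        path.append(node)
        node = best_pred[node]
    return earliest_finish[end], list(reversed(path))

duration, path = critical_path(tasks)
print(f"Project duration: {duration} weeks")
print("Critical path:", " -> ".join(path))
```

A Gantt chart is essentially these start and finish times drawn against a calendar, which is what most project-management software automates.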
8.      Project supervision, monitoring & control
o   Strictly speaking this is carried out throughout the cycle.
o   It can be applied from different levels of command (higher/middle/lower)
o   It keeps track of progress
o   It removes obstacles faced during implementation
o   It copes with risks and uncertainties
9.      Project completion or termination
o   Once the objective/s has/have been reached the project has to be handed over to the national or regional administration.
o   This is an aspect often neglected (here and abroad)
o   There has to be a clear understanding about salary scales right from the initiation of the project.
10.  Output diffusion & transition to normal administration
o   Barriers to handing over of completed projects are:
o   Lobbying
o   Vested interests
o   Lack of capacity building in the national or regional administration that is to take the project over.
o   Inability to consolidate salaries of project staff with those of normal staff.
11.  Project evaluation
o   Whoever does the evaluation must:
o   Consider the objectives and see if the plan has been properly conceived
o   See whether the assumptions were credible and predictions were reasonable
o   Consider the response of the management to changing circumstances.
o   In case of impact evaluation, it is necessary to assess:
o   Whether the target beneficiaries were being reached.
o   Whether identified constraints were removed – if not why?
o   Whether output targets were reached, if not why?
o   What lessons can be fed back into implementation and design of new projects.
12.  Follow up analysis & action
o   Ideally the lessons learnt should be diffused among policy makers and project managers and they should be made use of in selecting, implementing and designing of new projects.
o   However, this aspect has been highly neglected with repetition of the same mistakes over and over again.

5. Project Feasibility
Project feasibility generally looks at several aspects:
o   Site and location – reasons for siting a project in a given location must be justified.
o   Technical and technological studies – the study should justify technology chosen and machinery used. Technical factors normally investigated are:
o   Magnitude of the project.
o   The process, material equipment and reliability of the system.
o   Suitability of plans, layout and design.
o   The availability of various factors of production.
o   Availability and sufficiency of necessary infrastructure.
o   Proposed methods to implement, operate and maintain the project.
o   Contracting procedures and arrangements to procure goods, works and services
§  Financial studies –
o   Financial feasibility is concerned with profits
o   Most public sector projects do not fall into this category
o   Financial feasibility of public sector projects can be looked at from different points of view, i.e. that of:
o   project beneficiaries
o   the project as a whole
o   financial intermediaries
o   the government
§  Economic studies (Market studies)
o   This is virtually the lifeblood of every project feasibility report.
o   However, it is very complex in the public sector and often impossible.
o   In the private sector, this is determined on the basis of supply and demand and the price mechanism.
o   In the public sector this is not possible, as many goods/services do not enter the market (e.g. the police service).
o   One therefore has to use social cost-benefit analysis (a minimal numerical sketch follows this list).
§  Cultural / social marketing studies
o   This can be done from 2 view points:
o   The government – social cost-benefit analysis
o   The people/region
o   There are 3 factors that need to be taken into account when performing these studies:
o   Social-cultural demography
o   Social organization
o   Cultural acceptability

§  Institutional – organizational management studies

§  Environmental / gender and other mandated studies
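To make the cost-benefit idea concrete, here is the minimal sketch referred to under the economic studies above: yearly costs and benefits, valued at assumed shadow prices where market prices do not exist, are discounted to a net present value. Every figure and the discount rate are invented for illustration.

```python
# Minimal social cost-benefit sketch: discount a stream of yearly net
# benefits to a net present value (NPV). All numbers are hypothetical.

def npv(net_benefits, discount_rate):
    """NPV of a list of yearly net benefits, year 0 first."""
    return sum(b / (1 + discount_rate) ** t for t, b in enumerate(net_benefits))

# Year-by-year costs and benefits for an imaginary irrigation project
# (benefits valued at shadow prices, e.g. the economic value of extra paddy).
costs    = [100, 40, 10, 10, 10, 10]   # construction, then operation and maintenance
benefits = [0,   30, 60, 60, 60, 60]   # no benefits until the scheme is operating

net = [b - c for b, c in zip(benefits, costs)]
social_discount_rate = 0.08            # assumed social discount rate

print(f"NPV at 8%: {npv(net, social_discount_rate):.1f}")
# A positive NPV suggests the benefits to society outweigh the costs;
# comparing NPVs is one way to rank alternative ways of achieving the objective.
```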

6. Concerns for project success
§  Public sector managers need to keep the following points in mind when initiating, appraising, evaluating and assessing projects within their jurisdiction:
o   Be clear on objectives and the stakeholders.
o   Observe whether analysis is done at the appropriate level, preferably along with other projects conducted in the same sector/region/etc
o   Explore outcomes of the project (cost recovery/citizen satisfaction/etc)
o   Explore critical factors of project success (timing/ competitive advantage/cost base/etc)
o   Evaluate key sources of uncertainty (change of government/etc)
o   Consider intangibles and positive and negative externalities.
o   Examine the conditions on which the value obtained is dependent.
o   See that short-term project goals do not hinder long-term benefits.
o   Make decisions informed by past experience.
o   Be familiar with the pros and cons of various techniques and tools of project management.
o   Where possible have a top-down approach in assessment.

7. Beyond planning for project management

  • Most public projects are open systems
  • However much effort goes into the planning of a public project, guaranteeing success is difficult.
  • This is because future outcomes have to be forecast with only partial knowledge of the present.
  • Effectiveness of project management depends on how well the manager has understood its objectives.
  • The most important quality of a project manager is the ability to make good judgements.
  • The project manager can control the conversion of inputs into outputs; however, he has no control over the change from outputs to outcomes.
  • A good manager needs to be able to interpret data correctly. He needs to have a wider view of the environment.
  • A good public project manager is one who can relate well to different personalities as he has to work between many interested parties.
  • A good manager must be able to manage risks and uncertainties.
  • He should also be able to add value or create new value in old and failing projects.
  • One widely neglected area in public projects is compensation for the victims of development. This needs to be addressed.

Continents Influenced Ancient Human Migration, Spread of Technology


Researchers found that technology spread more slowly in the Americas than in Eurasia. (Credit: © Carolina K Smith MD / Fotolia)

Science Daily  — Researchers at Brown University and Stanford University have pieced together ancient human migration in North and South America. Writing in the American Journal of Physical Anthropology, the authors find that technology spread more slowly in the Americas than in Eurasia. Population groups in the Americas have less frequent exchanges than groups that fanned out over Europe and Asia.

Genetic data carries the signature of ancient migrations. Using advanced genetic analysis techniques, evolutionary biologists at Brown University and Stanford University studied nearly 700 locations on human genomes drawn from more than five dozen populations. They say that technology spread more slowly in the Americas than in Eurasia and that the continents' orientation seems to explain the difference. After humans arrived in the Americas 20,000 to 40,000 years ago, genetic data shows, the migrating populations didn't interact as frequently as groups in Eurasia.

How modern-day humans dispersed on the planet and the pace of the civilization-changing technologies that accompanied their migrations are enduring mysteries. Scholars believe ancient peoples in Europe and Asia moved primarily along east-west routes, taking advantage of the relative sameness in climate, allowing technological advances to spread quickly. But what about North and South America, with their long, north-south orientation and great variability in climate? How did people move and how quickly did societal innovations follow?
"If a lack of gene flow between populations is an indication of little cultural interaction," the authors write in the American Journal of Physical Anthropology, "then a lower latitudinal rate of gene flow suggested for North American populations may partly explain the relatively slower diffusion of crops and technologies through the Americas when compared with the corresponding diffusion in Eurasia."
"Our understanding of the peopling of the Americas will be refined by archaeological data and additional genetic samples," added Sohini Ramachandran, assistant professor of biology in the Department of Ecology and Evolutionary Biology at Brown and the paper's lead author. "But this is the signature of migration we see from genetic data."
To tease out the migration patterns, Ramachandran and fellow researcher Noah Rosenberg from Stanford gathered data on 678 genetic markers for 68 indigenous populations in Eurasia and the Americas. The goal was to study the distribution of genetic variation among populations. The similarity or difference in genetic makeup among populations gave the scientists insights about migrations long ago.
To illustrate, when one population breaks off from its parent group, the individuals in the new population take their genomes and any distinct genetic mutations with them. From there, the new population may remain independent of the parent group because of distance or other factors, and over time its genetic makeup diverges from the parent. However, if the new population reunites regularly with its parent population -- known as "back migration" -- the genetic makeup of the two populations remains relatively close.
"When populations do not share migrants with each other very often," Rosenberg explained, "their patterns of genetic variation diverge."
Armed with the genetic background of cultures spanning the Americas and Eurasia, the researchers could test whether the east-west orientation of Eurasia supported a rapid spread of agriculture and other societal innovations, while the dissemination of those advances was slower in the Americas due to the north-south orientation. They found that to be the case: The populations in North and South America are, for the most part, more different from each other than the populations in Eurasia. The reason has to do with the differing climates that migrating peoples in the Americas found when they moved north to south.
"It's harder to traverse those distances based on climate than it was in Eurasia," Ramachandran said. "We find greater genetic differences (in the Americas' populations) because of the difficulty in migration and the increased challenge of reuniting with neighboring populations."
"Our result that genetic differentiation increases more with latitudinal distance between Native American populations than with longitudinal distance between Eurasian populations supports the hypothesis of a primary influence for continental axes of orientation on the diffusion of technology in Eurasia and the Americas," the authors write.
The National Institutes of Health, the Burroughs Wellcome Fund and the William F. Milton Fund supported the research.

More Than a Sign of Sleepiness, Yawning May Cool the Brain



New research suggests that yawning could serve as a method for regulating brain temperature. (Credit: © Robert Kneschke / Fotolia)

Science Daily — Though considered a mark of boredom or fatigue, yawning might also be a trait of the hot-headed. Literally.

A study led by Andrew Gallup, a postdoctoral research associate in Princeton University's Department of Ecology and Evolutionary Biology, is the first involving humans to show that yawning frequency varies with the season and that people are less likely to yawn when the heat outdoors exceeds body temperature. Gallup and his co-author Omar Eldakar, a postdoctoral fellow in the University of Arizona's Center for Insect Science, report this month in the journal Frontiers in Evolutionary Neuroscience that this seasonal disparity indicates that yawning could serve as a method for regulating brain temperature.
Gallup and Eldakar documented the yawning frequency of 160 people in the winter and summer in Tucson, Arizona, with 80 people for each season. They found that participants were more likely to yawn in the winter, as opposed to the summer when ambient temperatures were equal to or exceeding body temperature. The researchers concluded that warmer temperatures provide no relief for overheated brains, which, according to the thermoregulatory theory of yawning, stay cool via a heat exchange with the air drawn in during a yawn.
Gallup describes the findings as follows:
"This provides additional support for the view that the mechanisms controlling the expression of yawning are involved in thermoregulatory physiology. Despite numerous theories posited in the past few decades, very little experimental research has been done to uncover the biological function of yawning, and there is still no consensus about its purpose among the dozen or so researchers studying the topic today.
"Enter the brain cooling, or thermoregulatory, hypothesis, which proposes that yawning is triggered by increases in brain temperature, and that the physiological consequences of a yawn act to promote brain cooling. I participated in a study [published in Frontiers in Evolutionary Neuroscience in September 2010] that confirmed this dynamic after we observed changes in the brain temperature of rats before and after the animals yawned. The cooling effect of yawning is thought to result from enhanced blood flow to the brain caused by stretching of the jaw, as well as countercurrent heat exchange with the ambient air that accompanies the deep inhalation.
"According to the brain cooling hypothesis, it is the temperature of the ambient air that gives a yawn its utility. Thus yawning should be counterproductive -- and therefore suppressed -- in ambient temperatures at or exceeding body temperature because taking a deep inhalation of air would not promote cooling. In other words, there should be a 'thermal window' or a relatively narrow range of ambient temperatures in which to expect highest rates of yawning.
"To test this theory in humans, I worked with Omar Eldakar to conduct a field-observational experiment that explored the relationship between ambient temperature and yawning frequency. We measured the incidence of yawning among people outdoors during the summer and winter months in Arizona. Summer conditions provided temperatures that matched or slightly exceeded body temperature (an average of 98.6 degrees Fahrenheit) with relatively low humidity, while winter conditions exhibited milder temperatures (71 degrees Fahrenheit on average) and slightly higher humidity. We randomly selected 160 pedestrians (80 for each season) and, because yawning is contagious, had them view images of people yawning.
"Our study accordingly showed a higher incidence of yawning across seasons when ambient temperatures were lower, even after statistically controlling for other features such as humidity, time spent outside and the amount of sleep the night before. Nearly half of the people in the winter session yawned, as opposed to less than a quarter of summer participants.
"Furthermore, when analyzing data for each season separately, we observed that yawning was related to the length of time a person spent outside exposed to the climate conditions. This was particularly true during the summer when the proportion of individuals yawning dropped significantly as the length of time spent outside increased prior to testing. Nearly 40 percent of participants yawned within the first five minutes outside, but the percentage of summertime yawners dropped to less than 10 percent thereafter. An inverse effect was observed in the winter, but the proportion of people who yawned increased only slightly for those who spent more than five minutes outdoors.
"This is the first report to show that yawning frequency varies from season to season. The applications of this research are intriguing, not only in terms of basic physiological knowledge, but also for better understanding diseases and conditions, such as multiple sclerosis or epilepsy, that are accompanied by frequent yawning and thermoregulatory dysfunction. These results provide additional support for the view that excessive yawning may be used as a diagnostic tool for identifying instances of diminished thermoregulation."
This research was supported by a grant from the National Institutes of Health.

New Technique Fills Gaps in Fossil Record


Whale skeleton on volcanic Isla Fernandina, Galapagos, Ecuador. (Credit: iStockphoto/Dawn Nichols)

Science Daily  — University of Pennsylvania evolutionary biologists have resolved a long-standing paleontological problem by reconciling the fossil record of species diversity with modern DNA samples.

The Penn team has developed a new technique for analyzing phylogenies and shown that the results stand up against the known fossil history of whale species, a gold standard in terms of fossil records.

Cataloging the diversity of life on Earth is challenging enough, but when scientists attempt to draw a phylogeny -- the branching family tree of a group of species over their evolutionary history -- the challenge goes from merely difficult to potentially impossible. The fossil record is the only direct evidence scientists have about the history of species diversity, but it can be full of holes or totally nonexistent, depending on the type of organisms. The only hope in such cases is to infer historical diversity from modern DNA sequences, but such techniques have a fatal flaw: the results they provide are demonstrably incorrect.
"We've put contemporary molecular approaches on equal footing with classical paleontological approaches," said Joshua B. Plotkin of the Department of Biology in Penn's School of Arts and Sciences and the Department of Computer and Information Science in the School of Engineering and Applied Science.
Plotkin conducted the research along with postdoctoral fellows Helene Morlon and Todd Parsons, both of Biology.
Their work will appear in the journal Proceedings of the National Academy of Sciences.
The limitations of the fossil record -- and the lack of good alternatives -- represent a longstanding problem in paleontology. Some species, due to the makeup of their bodies or the geology of the areas where they lived, don't leave fossils. If they leave any legacy to the present, it must be inferred from the DNA of their modern descendants, or from the descendants of their relatives.
For a few decades, scientists have compared the DNA of modern species, making mathematical inferences about the history of species diversity in a group going back to their most recent common ancestor. This reconstructive technique held much promise for the field, but a problem with the approach is now evident.
"When scientists use these phylogenetic techniques, they always infer patterns of increasing diversity. In whatever group of species they inspect, they see virtually no extinctions and a steadily increasing number of species over time," Plotkin said. "This molecular inference is problematic because it's known to be false. The fossil record clearly shows extinctions and long periods of diversity loss."
The cetaceans, a group of species that includes whales, dolphins and porpoises, are ideal for testing ideas about evolutionary diversification, as their fossil record is especially clear. Because they are large animals, and the sea floor is well suited to fossilization, paleontologists are confident that the cetaceans came into existence about 35 million years ago and reached a peak of diversity about 10 million years ago. The number of cetaceans then crashed from about 150 species to the 89 species in existence today.
"The problem with phylogenetic inferences is that you get the opposite view when you apply it to the cetaceans. You would see the number of whale species increasing over time, so that the 89 species we have today is the apex. But we know that this is flat-out wrong because it's directly contradicted by the boom-then-bust pattern in the fossil record."
This realization was a major blow for the field; if molecular reconstructions can't be trusted, there would be no way for scientists to ever learn the history of species that don't have good fossil records. The only hope was that phylogenetic methods could be refined.
In their study, Plotkin and his colleagues added new variables to these methods. The flaw in existing techniques was the reliance on a static rate of diversification. Because that variable could never be negative, the number of species inferred necessarily increased over time.
"What we've done is a fairly modest extension of these techniques, but we allow for changing rates of speciation and extinction over time and among lineages," Plotkin said. "Most importantly, we allow for periods of time during which the extinction rate exceeds the speciation rate."
When applied to the DNA of the 89 whale species that survive today, Plotkin's molecular method closely matched the dynamics in the number of whale species during the last 35 million years as determined through traditional paleontological approaches.
"It's almost miraculous that we can inspect the DNA sequences of organisms living today and figure out how many such species were present millions of years ago," Plotkin said. "We're studying some of the largest species to have ever existed, and we are deciphering their evolutionary history based on information encoded in microscopic DNA molecules."
The research was supported by the Centre National de la Recherche Scientifique, Burroughs Wellcome Fund, David and Lucile Packard Foundation, Alfred P. Sloan Foundation and James S. McDonnell Foundation

Researchers Create First Human Heart Cells That Can Be Paced With Light



Oscar Abilez and a multidisciplinary team developed the first human heart cells that can pulse in response to specific types of light. (Credit: Norbert von der Groeben)

Science Daily  — In a compact lab space at Stanford University, Oscar Abilez, MD, trains a microscope on a small collection of cells in a petri dish. A video recorder projects what the microscope sees on a nearby monitor. The cells in the dish pulse rhythmically, about once a second. The cells are cardiomyocytes, which drive the force-producing and pacemaker functions of the human heart. They are programmed to pulse. They will beat this way until they die.

In a paper to be published Sept. 21 in the Biophysical Journal, lead author Abilez, a postdoctoral scholar and PhD candidate in bioengineering, and a multidisciplinary team from Stanford describe how they have for the first time engineered human heart cells that can be paced with light using a technology called optogenetics. Abilez holds up a finger as if to say, "Wait," and reaches for a small lever hidden behind the microscope. With the same finger, he flips the lever up. A pale, blue light floods the petri dish. Abilez flicks the light off and then on; first fast and then slow. Each time his finger goes up, the heart cells contract in concert with the light.
In the near term, say the researchers, the advance will provide new insight into heart function. In the long term, however, the development could lead to an era of novel, light-based pacemakers and genetically matched tissue patches that replace muscle damaged by a heart attack.
To create the light-responsive heart cells, the researchers first inserted DNA encoding a light-sensitive protein called channelrhodopsin-2, or ChR2, into human embryonic stem cells. ChR2 controls the flow of electrically charged ions into the cell. For heart cells, the primary ion is sodium, which initiates an electrochemical cascade that causes the cell to contract. They then transformed the optogenetically engineered stem cells into cardiomyocytes unlike any others -- those that respond to light.
Like the new heart cells, optogenetics is a product of Stanford. Bioengineer and psychiatrist Karl Deisseroth, MD, PhD, a co-author of the new study, has played a key role in the technology's development. It is an increasingly common research technique that allows researchers to fashion all manner of mammalian tissues that are responsive to light.
While Deisseroth has focused his research primarily on neurons in order to study neurological illnesses ranging from depression to schizophrenia, Abilez is the first to create optogenetic human heart cells.
The all-important protein for the experiment is ChR2, which is sensitive to a very specific wavelength of blue light and regulates tiny channels in the cell surface. When ChR2 is illuminated by the right wavelength of blue light, the channels open to allow an influx of electrically charged sodium into the cell, producing a contraction.
After creating the cells in a laboratory dish, Abilez next turned to Ellen Kuhl, PhD, the study's senior author and an associate professor of mechanical engineering, whose specialty is sophisticated computer modeling of the human body.
Using her algorithms, they tested their new cells in a computer simulation of the human heart, injecting the light-sensitive cells in various locations in the heart and shining a virtual blue light on them to observe how the injections affected contraction as it moved across the heart.
"In a real heart, the pacemaking cells are on the top of the heart and the contraction radiates down and around the heart," Kuhl explained. "With these models we can demonstrate not only that pacing cells with light will work, but also where to best inject cells to produce the optimal contraction pattern."
The long-term goal is a new class of pacemakers. Today, surgically implanted electrical pacemakers and defibrillators are commonplace, regulating the pulses of millions of faulty hearts around the globe.
"But neither is without problems," said Abilez. "Pacemakers fail mechanically. The electrodes can cause tissue damage."
"Defibrillators, on the other hand," Kuhl said, "can produce tissue damage due to the large electrical impulses that are sometimes needed to restore the heart's normal rhythm."
The researchers foresee a day when bioengineers will use induced pluripotent stem cells fashioned from the recipient's own body, or similar cell types that can give rise to genetically matched replacement heart cells paced with light, circumventing the drawbacks of electrical pacemakers.
"We might, for instance, create a pacemaker that isn't in physical contact with the heart," said co-author Christopher Zarins, MD, professor emeritus of surgery and director of the lab where Abilez performed the experiments. "Instead of surgically implanting a device that has electrodes poking into the heart, we would inject these engineered light-sensitive cells into the faulty heart and pace them remotely with light, possibly even from outside of the heart."
The leads for such a light-based pacemaker might be placed outside the heart, but inside the pericardium -- the protective sack surrounding the heart. Or, someday, the researchers say, there might be a pacemaker placed inside the heart chambers, as with traditional pacemakers, whose light can travel through the intervening blood to pace light-sensitive heart cells implanted inside.
"And, because the new heart cells are created from the host's own stem cells, they would be a perfect genetic match," Abilez added. "In principle, tissue rejection wouldn't be an issue."
"Much work and many technical hurdles remain before this research might lead to real-world application," said Zarins. "But, it may one day lead to more reliable, less invasive devices."
In the near term, however, the advance is promising on other fronts, said Abilez.
"Optogenetics will make it easier to study the heart. Not only can researchers turn cells on with light, but off as well," Abilez said.
Scientists might use these tools to induce disease-like abnormalities and arrhythmias in sample tissues in order to better understand how to fix them. There are likewise advantages inherent in pacing with light versus electricity.
"Heart researchers are often seeking to measure electric response in the heart," said Abilez, "but it takes quite a lot of electricity to stimulate the heart and the resulting electrical signal is relatively weak. This makes it hard to distinguish stimulus from response. It's like trying to hear a whisper in a crowded room." Pacing with light would eliminate that challenge.
Optogenetics could lead to advances beyond the heart, as well, the authors concluded in their study. It might lead to new insights for various neuronal, musculoskeletal, pancreatic and cardiac disorders, including depression, schizophrenia, cerebral palsy, paralysis, diabetes, pain syndromes and cardiac arrhythmias.
Other Stanford co-authors were Jonathan Wong, a mechanical engineering PhD student in the Kuhl Lab; and Rohit Prakash, a neuroscience MD/PhD student in the Deisseroth lab.
This work was supported by a Stanford ARTS Fellowship, a Stanford Graduate Fellowship, the National Science Foundation, the National Institutes of Health and the California Institute for Regenerative Medicine.

Breakthrough Technology Identifies Prostate Cancer Cells


Cancerous and non-cancerous cells are incubated with silver nanoparticle biotags, and then analyzed by shining the red laser on them. The biotags are shown on the cells' surface. Those glowing red in the middle are the cancer biomarkers, and those glowing green are standard biomarkers that bind to many cell types. A high ratio of red to green is found on the cancer cells. (Credit: Gary Braun and Peter Allen/UCSB)
Science Daily — A team of researchers at UC Santa Barbara has developed a breakthrough technology that can be used to discriminate cancerous prostate cells in bodily fluids from those that are healthy.

While the new technology is years away from use in a clinical setting, the researchers are nonetheless confident that it will be useful in developing a microdevice that will help in understanding when prostate cancer will metastasize, or spread to other parts of the body.
The findings are published this week in the Proceedings of the National Academy of Sciences.
"There have been studies to find the relationship between the number of cancer cells in the blood, and the outcome of the disease," said first author Alessia Pallaoro, postdoctoral fellow in UCSB's Department of Chemistry and Biochemistry. "The higher the number of cancer cells there are in the patient's blood, the worse the prognosis.
"The cancer cells that are found in the blood are thought to be the initiators of metastasis," Pallaoro added. "It would be really important to be able to find them and recognize them within blood or other bodily fluids. This could be helpful for diagnosis and follow-ups during treatment."
The researchers explained that although the primary tumor does not kill prostate cancer patients, metastasis does. "The delay is not well understood," said Gary Braun, second author and postdoctoral fellow in the Department of Molecular, Cellular, and Developmental Biology. "There is a big focus on understanding what causes the tumor to shed cells into the blood. If you could catch them all, then you could stop metastasis. The first thing is to monitor their appearance."
The team developed a novel technique to discriminate between cancerous and non-cancerous cells using a type of laser spectroscopy called surface enhanced Raman spectroscopy (SERS) and silver nanoparticles, which are biotags.
"Silver nanoparticles emit a rich set of colors when they absorb the laser light," said Braun. "This is different than fluorescence. This new technology could be more powerful than fluorescence."
The breakthrough is in being able to include more markers in order to identify and study unique tumor cells that are different from the main tumor cells, explained Pallaoro. "These different cells must be strong enough to start a new tumor, or they must develop changes that allow them to colonize in other areas of the body," she said. "Some changes must be on the surface, which is what we are trying to detect."
The team is working to translate the technology into a diagnostic microdevice for studying cancer cells in the blood. Cells would be mixed with nanoparticles and passed through a laser, then discriminated by the ratio of two signals.
The two types of biotags used in this research have a particular affinity that is dictated by the peptide they carry on their surface. One type attaches to a cell receptor called neuropilin-1, a recently described biomarker found on the surface membrane of certain cancer cells. The other biotag binds many cell types (both cancerous and non-cancerous) and serves as a standard measure as the cells are analyzed.
In this study, the team mixed the two biotags and added them to the healthy and tumor cell cultures. The average SERS signal over a given cell image yielded a ratio of the two signals consistent with the cells' known identity.
Pallaoro said she believes the most important part of the new technique is the fact that it could be expanded by adding more colors -- different particles of different colors -- as more biomarkers are found. The team used a new biomarker discovered by scientists at UCSB and the Sanford Burnham Medical Research Institute.
The senior author of the paper is Martin Moskovits, professor in UCSB's Department of Chemistry and Biochemistry.

Warren Buffett - 2nd richest man in the world


 So true .......
   
"I always knew I was going to be rich.
I don't think I ever doubted it for a minute"
- Warren Buffett

Warren Buffett is an American business magnate, investor, and philanthropist, widely regarded as one of the most successful investors in the world. Born on August 30, 1930, in Omaha, Nebraska, he is the chairman and CEO of Berkshire Hathaway, a multinational conglomerate holding company. Often referred to as the "Oracle of Omaha," Buffett is renowned for his value investing philosophy, emphasizing long-term investments in undervalued companies with strong fundamentals.

Buffett's investment strategy focuses on businesses he understands, with durable competitive advantages, competent management, and predictable earnings. Some of his most notable investments include Coca-Cola, American Express, and Apple. Despite his immense wealth, he is known for his frugal lifestyle and humility.

A significant portion of his wealth has been pledged to philanthropy, primarily through the Giving Pledge, which he co-founded with Bill and Melinda Gates. This initiative encourages billionaires to donate the majority of their wealth to charitable causes. Buffett has donated billions of dollars to initiatives in healthcare, education, and poverty alleviation, primarily through the Gates Foundation.

Buffett’s annual letters to Berkshire Hathaway shareholders are widely read and valued for their insights into investing, business, and life lessons. As of 2023, he remains a global icon in finance, admired for his acumen, ethical approach, and commitment to giving back. 

THREE MOST SUCCESSFUL WAYS TO BECOME RICH

Would you like to retire wealthy? Here are some incredibly successful ways to join the rich elite! Get the best tips here!
Lifehack.org suggests…
1. Start your own business and eventually sell it. This is the most effective and proven way to become rich. If you can find a new approach to a customer need and build a profitable business that addresses that need, then you have created real value. It could be a cleaning business, a hairdresser’s, a consultancy or an investment bank. It will probably take years of very hard work to build up the enterprise. Most new businesses fail, so the risks are high. You need all the skills, dynamism, perseverance and diligence of an entrepreneur. But if you can pull it off the potential rewards are huge. This is how many of the seriously wealthy people did it.
2. Join a start-up and get stock. If you can accumulate equity positions in one or more start-up companies then there is an opportunity for a serious capital gain if the company thrives and either floats or is sold to a larger enterprise. Only a small minority of start-ups succeed in realising large capital gains so the odds are not good. However, you can use your judgement to see which business idea and which management team are likely to succeed. Early employees in Apple, Google and Microsoft became millionaires on this basis.
3. Develop property. Buying, developing and selling property is a well-established way to build a significant capital position. One of the key elements is that by borrowing money you can gain leverage on your investment. Say you borrow $200,000 and put in $50,000 of your own to buy a property for $250,000. Then you develop the property and sell it for $400,000. The property has increased in value by 60% but your $50,000 has now grown fourfold to $200,000. You have to select the right properties in the right areas and develop them wisely. You are at risk from booms and busts in the property market. However, in the long term this remains a proven way to accumulate wealth.
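The leverage arithmetic in the property example can be checked in a few lines. The sketch below uses the same figures as the paragraph above and, like the example, ignores development costs and loan interest.

```python
# Reproduce the leverage arithmetic from the property example above.
# Development costs and loan interest are ignored, as in the text.

loan, own_equity = 200_000, 50_000
purchase_price = loan + own_equity                   # 250,000
sale_price = 400_000

property_gain = sale_price - purchase_price          # 150,000
property_return = property_gain / purchase_price     # 0.60 -> 60%
equity_after_sale = sale_price - loan                # 200,000
equity_multiple = equity_after_sale / own_equity     # 4.0 -> fourfold

print(f"Property appreciation: {property_return:.0%}")
print(f"Your equity grew {equity_multiple:.0f}x, from {own_equity:,} to {equity_after_sale:,}")
```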
Get more tips from Lifehack.org!

TOP 5 MONEY MISTAKES

Beware! These top 5 money mistakes may trip you up. If you want to accumulate the maximum amount of wealth possible then make sure to avoid these common errors!
About.com reveals…
1. Having a Thirty-year Mortgage
Where do you find the money to build wealth? Try looking at your mortgage. Millions of Americans think nothing of paying for their home over 30 years, even though the average homeowner ends up paying two-and-a-half times the purchase price of the home by stretching the payments out this long. Having a 15-year mortgage instead of a 30-year mortgage can save you large sums of money and help you build wealth. See page two of this article for examples of how you may be able to save over $100,000 over the life of your mortgage.
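As a rough sketch of why the shorter term saves so much, the snippet below compares total payments on 15-year and 30-year loans using the standard amortisation formula. The loan amount and interest rates are assumptions, not figures from the article.

```python
# Compare total payments on 15-year vs 30-year mortgages using the
# standard amortisation formula. Loan amount and rates are hypothetical.

def monthly_payment(principal, annual_rate, years):
    """Fixed monthly payment for a fully amortising loan."""
    r = annual_rate / 12   # monthly interest rate
    n = years * 12         # number of payments
    return principal * r * (1 + r) ** n / ((1 + r) ** n - 1)

principal = 250_000
for years, rate in [(30, 0.065), (15, 0.060)]:  # 15-year loans usually carry lower rates
    payment = monthly_payment(principal, rate, years)
    total = payment * years * 12
    print(f"{years}-year at {rate:.1%}: {payment:,.0f}/month, {total:,.0f} paid in total")
```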
2. Giving Control of Your Money to Someone Else
If you’re not involved in your day-to-day family finances, you’re putting yourself at risk. If you’re married and you let your spouse handle all the financial matters, you’re at risk if your spouse dies or becomes seriously ill or if you divorce. Know the details of your family’s finances, investments, debts, retirement savings, etc. Don’t turn your investments and financial affairs over to a broker or financial consultant without keeping abreast of what is being done with your money and being involved in investment decisions. Never give total control of your money to someone else.
3. Not Controlling Spending Leaks
The reason so many people in America are in so much debt is because they dribble their money away in small, barely noticeable amounts. Like drops of water dribbling through the hole in the dike, the loss is barely noticeable, but over time the hole in the dike gets bigger and bigger. By the time the water is gushing through, the damage is done. The same is true with spending leaks. It’s a lot easier to plug a small hole than to ignore the drips and look over your shoulder later and see a huge tidal wave of water coming your way in the form of unmanageable debt. If you’re ever going to accumulate wealth, you must control spending leaks.
4. Incurring Too Much Debt
If you’re spending all your money paying interest on credit cards and installment debt, you won’t have enough left for savings. When you buy on credit and don’t pay the balance off at the end of the month, you end up paying much more for your purchases. A $1200 big-screen TV can end up costing you $2500, but you’ll never know it because the true cost is hidden in your credit card payments. Pay cash and stay away from credit card debt if you want to accumulate wealth.
5. Not Saving Enough for Retirement or Starting Too Late
When you’re in your 20s and 30s, it’s easy to think you have all the time in the world to accumulate wealth and save for retirement. The truth is, you’ll have to save a lot less if you start now and give your earnings time to compound. If you’re over 40 and you’re behind on your retirement savings, you’ll have to save much larger sums to ever catch up to where you should be. Start saving early, and save at least 10 to 15% of your income, and you’ll be well on your way to accumulating wealth.
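To see why starting early matters, here is a small sketch comparing two savers who put away the same yearly amount. The contribution, return rate and ages are assumptions for illustration only.

```python
# Compare two hypothetical savers to show the effect of starting early.
# The contribution, return rate and ages are assumptions for illustration.

def future_value(annual_saving, annual_return, years):
    """Value at retirement of equal yearly contributions, compounded annually."""
    value = 0.0
    for _ in range(years):
        value = (value + annual_saving) * (1 + annual_return)
    return value

annual_return = 0.07
early = future_value(annual_saving=5_000, annual_return=annual_return, years=40)  # starts at 25
late  = future_value(annual_saving=5_000, annual_return=annual_return, years=25)  # starts at 40

print(f"Start at 25, retire at 65: {early:,.0f}")
print(f"Start at 40, retire at 65: {late:,.0f}")
# The early starter contributes only 60% more in total but ends up with
# roughly three times as much, because of the extra years of compounding.
```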
Get more information at About.com!
