
Tuesday, December 20, 2011

Researchers Create Living ‘Neon Signs’ Composed of Millions of Glowing Bacteria



 

Thousands of fluorescent E. coli bacteria make up a biopixel. Photos/Hasty Lab, UC San Diego

In an example of life imitating art, biologists and bioengineers at UC San Diego have created a living neon sign composed of millions of bacterial cells that periodically fluoresce in unison like blinking light bulbs.
Their achievement, detailed in this week’s advance online issue of the journal Nature, involved attaching a fluorescent protein to the biological clocks of the bacteria, synchronizing the clocks of the thousands of bacteria within a colony, then synchronizing thousands of the blinking bacterial colonies to glow on and off in unison.
A little bit of art with a lot more bioengineering, the flashing bacterial signs are not only a visual display of how researchers in the new field of synthetic biology can engineer living cells like machines, but are also likely to lead to real-life applications.



Using the same method they used to create the flashing signs, the researchers engineered a simple bacterial sensor capable of detecting low levels of arsenic. In this biological sensor, decreases in the frequency of the cells’ blinking pattern indicate the presence and concentration of arsenic.
Because bacteria are sensitive to many kinds of environmental pollutants and organisms, the scientists believe this approach could also be used to design low-cost bacterial biosensors capable of detecting an array of heavy metal pollutants and disease-causing organisms. And because the sensor is composed of living organisms, it can respond to changes in the presence or amount of the toxins over time, unlike many chemical sensors.
“These kinds of living sensors are intriguing as they can serve to continuously monitor a given sample over long periods of time, whereas most detection kits are used for a one-time measurement,” said Jeff Hasty, a professor of biology and bioengineering at UC San Diego who headed the research team in the university’s Division of Biological Sciences and BioCircuits Institute. “Because the bacteria respond in different ways to different concentrations by varying the frequency of their blinking pattern, they can provide a continual update on how dangerous a toxin or pathogen is at any one time.”
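To make that frequency-based readout concrete, here is a small, purely hypothetical Python sketch of how a blink record could be turned into a number. This is not the Hasty lab's software; the calibration values, the helper functions, and the numpy dependency below are all invented for illustration.

    import numpy as np

    # Hypothetical illustration of a frequency-based readout: given the
    # timestamps of detected fluorescence peaks, estimate the blink period
    # and look it up on an invented calibration curve. Real curves would
    # come from lab measurements; none of these numbers are from the paper.
    def blink_period(peak_times):
        return float(np.mean(np.diff(peak_times)))  # mean gap between blinks (minutes)

    # Invented calibration: the period lengthens (frequency drops) as arsenic rises.
    calibration_periods_min = np.array([60.0, 75.0, 90.0, 110.0])  # hypothetical
    calibration_arsenic_ppb = np.array([0.0, 5.0, 10.0, 20.0])     # hypothetical

    def arsenic_estimate(peak_times):
        period = blink_period(peak_times)
        return float(np.interp(period, calibration_periods_min, calibration_arsenic_ppb))

    peaks = [0, 74, 149, 226, 300]  # minutes; a rhythm slowed to roughly 75 min
    print(f"estimated arsenic: {arsenic_estimate(peaks):.1f} ppb")

Because the colony keeps oscillating, the same calculation can be rerun on every new blink, which is what gives a living sensor the continuous-monitoring advantage over a one-shot chemical test that Hasty describes.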
Tiny microfluidic chips allow the researchers to synchronize the bacteria to fluoresce or blink in unison.
“This development illustrates how basic, quantitative knowledge of cellular circuitry can be applied to the new discipline of synthetic biology,” said James Anderson, who oversees synthetic biology grants at the National Institutes of Health’s National Institute of General Medical Sciences, which partially funded the research. “By laying the foundation for the development of new devices for detecting harmful substances or pathogens, Dr. Hasty’s new sensor points the way toward translation of synthetic biology research into technology for improving human health.”
The development of the techniques to make the sensor and the flashing display built on the work of scientists in the Division of Biological Sciences and School of Engineering, which they published in two previous Nature papers over the past four years. In the first paper, the scientists demonstrated how they had developed a way to construct a robust and tunable biological clock to produce flashing, glowing bacteria. In the second paper, published in 2010, the researchers showed how they designed and constructed a network, based on a communication mechanism employed by bacteria, that enabled them to synchronize all of the biological clocks within a bacterial colony so that thousands of bacteria would blink on and off in unison.
“Many bacteria species are known to communicate by a mechanism known as quorum sensing, that is, relaying between them small molecules to trigger and coordinate various behaviors,” said Hasty, explaining how the synchronization works within a bacterial colony. “Other bacteria are known to disrupt this communication mechanism by degrading these relay molecules.”
But the researchers found the same method couldn’t be used to instantaneously synchronize millions of bacteria from thousands of colonies.
“If you have a bunch of cells oscillating, the signal propagation time is too long to instantaneously synchronize 60 million other cells via quorum sensing,” said Hasty. But the scientists discovered that each colony emits gases that, when shared among the thousands of other colonies within a specially designed microfluidic chip, can synchronize all of the millions of bacteria in the chip. “The colonies are synchronized via the gas signal, but the cells are synchronized via quorum sensing. The coupling is synergistic in the sense that the large, yet local, quorum communication is necessary to generate a large enough signal to drive the coupling via gas exchange,” added Hasty.
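This two-tier coupling can be illustrated with a deliberately simplified toy model. The Python sketch below uses generic Kuramoto-style phase oscillators, not the group's actual delayed gene-circuit equations, and every parameter in it (coupling strengths, cell counts, frequencies) is invented for illustration.

    import numpy as np

    # Toy model of two-tier synchronization: strong coupling inside each
    # colony stands in for quorum sensing, and weak coupling to the global
    # mean stands in for gas exchange between colonies.
    rng = np.random.default_rng(0)
    n_colonies, cells_per_colony = 20, 100
    K_local, K_global, dt, steps = 2.0, 0.5, 0.01, 5000

    theta = rng.uniform(0, 2 * np.pi, (n_colonies, cells_per_colony))  # phases
    omega = rng.normal(1.0, 0.05, theta.shape)  # natural blink frequencies

    for _ in range(steps):
        colony_mean = np.angle(np.exp(1j * theta).mean(axis=1, keepdims=True))
        global_mean = np.angle(np.exp(1j * theta).mean())
        theta += dt * (omega
                       + K_local * np.sin(colony_mean - theta)     # "quorum sensing"
                       + K_global * np.sin(global_mean - theta))   # "gas coupling"

    # An order parameter near 1 means all 2,000 oscillators blink in unison.
    print("global synchrony:", abs(np.exp(1j * theta).mean()))

Run as written, the order parameter climbs toward 1; set K_global to zero and each colony stays internally synchronized while slowly drifting out of step with the others, which mirrors the qualitative point Hasty makes above.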


Graduate students Arthur Prindle, Phillip Samayoa and Ivan Razinkov designed the microfluidic chips, the largest of which contain 50 to 60 million bacterial cells and are about the size of a paper clip or a microscope cover slip. The smaller microfluidic chips, which contain approximately 2.5 million cells, are about a tenth the size of the larger chips.
Each blinking bacterial colony constitutes what the researchers call a “biopixel,” an individual point of light much like a pixel on a computer monitor or television screen. The larger microfluidic chips contain about 13,000 biopixels, while the smaller chips contain about 500 pixels.
Hasty said he believes that within five years, a small hand-held sensor could be developed that would take readings of the oscillations from the bacteria on disposable microfluidic chips to determine the presence and concentrations of various toxic substances and disease-causing organisms in the field.
____________
Other UC San Diego scientists involved in the discovery were Tal Danino and Lev Tsimring.
The UC San Diego Technology Transfer Office has filed a patent application on the Hasty group’s invention. Anyone with commercial interest in the research or its applications should contact Eric Gosink, senior licensing officer, at egosink@ucsd.edu.

New study finds soybean compounds enhance effects of cancer radiotherapy



 

A Wayne State University researcher has shown that compounds found in soybeans can make radiation treatment of lung cancer tumors more effective while helping to preserve normal tissue.
A team led by Gilda Hillman, Ph.D., professor of radiation oncology at Wayne State University’s School of Medicine and the Barbara Ann Karmanos Cancer Institute, had shown previously that soy isoflavones, a natural, nontoxic component of soybeans, increase the ability of radiation to kill cancer cells in prostate tumors by blocking DNA repair mechanisms and molecular survival pathways, which are turned on by the cancer cells to survive the damage radiation causes.
At the same time, isoflavones act to reduce damage caused by radiation to surrounding cells of normal, noncancerous tissue. This was shown in a clinical trial conducted at WSU and Karmanos for prostate cancer patients treated with radiotherapy and soy tablets.

In results published in the journal Nutrition and Cancer in 2010, those patients experienced reduced radiation toxicity to surrounding organs; fewer problems with incontinence and diarrhea; and better sexual organ function. Hillman’s preclinical studies in the prostate tumor model led to the design of that clinical trial.
Soy isoflavones can make cancer cells more vulnerable to ionizing radiation by inhibiting survival pathways that are activated by radiation in cancer cells but not in normal cells. In normal tissues, soy isoflavones also can act as antioxidants, protecting those tissues from radiation-induced toxicity.
During the past year, Hillman’s team achieved similar results in non-small cell lung cancer cells in vitro. She recently received a two-year, $347,000 grant from the National Cancer Institute, part of the National Institutes of Health, to investigate whether those results also proved true for non-small cell lung tumors in mice, and has found that they do. Her findings, which she called “substantial” and “very promising,” appear in the November 2011 edition of the journal Radiotherapy and Oncology.
Hillman emphasized that soy supplements alone are not a substitute for conventional cancer treatment, and that doses of soy isoflavones must be medically administered in combination with conventional cancer treatments to have the desired effects.
“Preliminary studies indicate that soy could cause radioprotection,” she said. “It is important to show what is happening in the lung tissue.”
The next step, she said, is to evaluate the effects of soy isoflavones in mouse lung tumor models to determine the conditions that will maximize the tumor-killing and normal tissue-protecting effects during radiation therapy.
“If we succeed in addressing preclinical issues in the mouse lung cancer model showing the benefits of this combined treatment, we could design clinical protocols for non-small cell lung cancer to improve the radiotherapy of lung cancer,” Hillman said. “We also could improve the secondary effects of radiation, for example, improving the level of breathing in the lungs.”
Once protocols are developed, she said, clinicians can begin using soy isoflavones combined with radiation therapy in humans, a process they believe will yield both therapeutic and economic benefits.
“In contrast to drugs, soy is very, very safe,” Hillman said. “It’s also readily available, and it’s cheap.
“The excitement here is that if we can protect the normal tissue from radiation effects and improve the quality of life for patients who receive radiation therapy, we will have achieved an important goal.”
___________
Courtesy Wayne State University.

Climate Change May Bring Big Ecosystem Shifts, NASA Says


Predicted percentage of ecological landscape being driven toward changes in plant species as a result of projected human-induced climate change by 2100. (Credit: NASA/JPL-Caltech)
Science Daily — By 2100, global climate change will modify plant communities covering almost half of Earth's land surface and will drive the conversion of nearly 40 percent of land-based ecosystems from one major ecological community type -- such as forest, grassland or tundra -- toward another, according to a new NASA and university computer modeling study.

The model projections paint a portrait of increasing ecological change and stress in Earth's biosphere, with many plant and animal species facing increasing competition for survival, as well as significant species turnover, as some species invade areas occupied by other species. Most of Earth's land that is not covered by ice or desert is projected to undergo at least a 30 percent change in plant cover -- changes that will require humans and animals to adapt and often relocate.

Researchers from NASA's Jet Propulsion Laboratory and the California Institute of Technology in Pasadena, Calif., investigated how Earth's plant life is likely to react over the next three centuries as Earth's climate changes in response to rising levels of human-produced greenhouse gases. Study results are published in the journal Climatic Change.
In addition to altering plant communities, the study predicts climate change will disrupt the ecological balance between interdependent and often endangered plant and animal species, reduce biodiversity and adversely affect Earth's water, energy, carbon and other element cycles.
"For more than 25 years, scientists have warned of the dangers of human-induced climate change," said Jon Bergengren, a scientist who led the study while a postdoctoral scholar at Caltech. "Our study introduces a new view of climate change, exploring the ecological implications of a few degrees of global warming. While warnings of melting glaciers, rising sea levels and other environmental changes are illustrative and important, ultimately, it's the ecological consequences that matter most."
When faced with climate change, plant species often must "migrate" over multiple generations, as they can only survive, compete and reproduce within the range of climates to which they are evolutionarily and physiologically adapted. While Earth's plants and animals have evolved to migrate in response to seasonal environmental changes and to even larger transitions, such as the end of the last ice age, they often are not equipped to keep up with the rapidity of modern climate changes that are currently taking place. Human activities, such as agriculture and urbanization, are increasingly destroying Earth's natural habitats, and frequently block plants and animals from successfully migrating.
To study the sensitivity of Earth's ecological systems to climate change, the scientists used a computer model that predicts the type of plant community that is uniquely adapted to any climate on Earth. This model was used to simulate the future state of Earth's natural vegetation in harmony with climate projections from 10 different global climate simulations. These simulations are based on the intermediate greenhouse gas scenario in the United Nations' Intergovernmental Panel on Climate Change Fourth Assessment Report. That scenario assumes greenhouse gas levels will double by 2100 and then level off. The U.N. report's climate simulations predict a warmer and wetter Earth, with global temperature increases of 3.6 to 7.2 degrees Fahrenheit (2 to 4 degrees Celsius) by 2100, about the same warming that occurred following the Last Glacial Maximum almost 20,000 years ago, except about 100 times faster. Under the scenario, some regions become wetter because of enhanced evaporation, while others become drier due to changes in atmospheric circulation.
The researchers found a shift of biomes, or major ecological community types, toward Earth's poles -- most dramatically in temperate grasslands and boreal forests -- and toward higher elevations. Ecologically sensitive "hotspots" -- areas projected to undergo the greatest degree of species turnover -- that were identified by the study include regions in the Himalayas and the Tibetan Plateau, eastern equatorial Africa, Madagascar, the Mediterranean region, southern South America, and North America's Great Lakes and Great Plains areas. The largest areas of ecological sensitivity and biome changes predicted for this century are, not surprisingly, found in areas with the most dramatic climate change: in the Northern Hemisphere high latitudes, particularly along the northern and southern boundaries of boreal forests.
"Our study developed a simple, consistent and quantitative way to characterize the impacts of climate change on ecosystems, while assessing and comparing the implications of climate model projections," said JPL co-author Duane Waliser. "This new tool enables scientists to explore and understand interrelationships between Earth's ecosystems and climate and to identify regions projected to have the greatest degree of ecological sensitivity."
"In this study, we have developed and applied two new ecological sensitivity metrics -- analogs of climate sensitivity -- to investigate the potential degree of plant community changes over the next three centuries," said Bergengren. "The surprising degree of ecological sensitivity of Earth's ecosystems predicted by our research highlights the global imperative to accelerate progress toward preserving biodiversity by stabilizing Earth's climate."
JPL is managed for NASA by the California Institute of Technology in Pasadena.

Timing is key in the proper wiring of the brain




(Medical Xpress) -- After birth, the developing brain is largely shaped by experiences in the environment. However, neurobiologists at Yale and elsewhere have also shown that for many functions the successful wiring of neural circuits depends upon spontaneous activity in the brain that arises before birth, independent of external influences.
Now Yale researchers have shown in research published online Dec. 18 in the journal Nature Neuroscience that the timing of this activity is crucial to the development of vision — and perhaps to other key neural processes that have been implicated in autism and other neurodevelopmental disorders.
“This spontaneous activity is not dependent upon external sensory stimuli,” said Michael Crair, the William Ziegler III Associate Professor of Neurobiology and associate professor of ophthalmology and visual science and senior author of the paper. “We want to know where this activity comes from and how does it work.”
Yale researchers tried to interfere with this spontaneous activity in neonatal mice through a technique called optogenetics – or the manipulation of brain cells genetically engineered to be activated by light. The Yale team showed that proper wiring of connections between the eye and brain depended upon exactly when this spontaneous activity occurs. When the researchers simultaneously induced retinal activity in both eyes of a neonatal mouse, they found the visual connections did not develop properly. However, when they induced activity first in one eye and then the other, neural connections were unaffected or even enhanced.
Crair said that rhythmic spontaneous activity has been implicated in proper development of many brain areas, including the cortex, cerebellum, and spinal cord. He said it is possible that a disruption in the timing of this spontaneous activity could play a role in a host of developmental disorders.
“The genes thought to be involved in autism involve the formation and function of brain synapses and neural circuits, and that is exactly what is getting messed up when we interfere with brain activity early in development,” Crair said.
Provided by Yale University
"Timing is key in the proper wiring of the brain: study." December 19th, 2011. http://medicalxpress.com/news/2011-12-key-proper-wiring-brain.html
 

Posted by Robert Karl Stonjek

Sreeman Narayana Chanting - Shirdi Sai Saranagati Prayer Service

There is an interesting story of how I got this picture of the holy Feet of Baba. In 2004 I went to the Chennai Mylapore temple of Shirdi Sri Sai Baba. I had visited that temple a couple of times earlier.
After worshipping the main deity, I was just strolling around the temple. Then I began casually reading the inscription on a pillar that listed the popular eleven promises of Baba. I knew it already, but I was just passing the time reading it.
Suddenly I heard a voice behind me: "You want this?" I turned around to face a six-and-a-half-foot-tall, well-built man standing near me. I was too startled by the encounter with this rather strange visitor to answer him. But without expecting a reply he thrust a 4"x3" card into my right palm. Lowering my eyes, I saw that the card in my hand had those eleven promises printed on its face. I felt I didn't need it. But when I raised my eyes, he had already disappeared into the crowd of devotees.
I wondered why Baba gave it to me through somebody, or whether He Himself had come to do the job. After all, I believed in Him and I didn't require any promise of that sort. In fact, throughout the previous year I had been doing extensive research into the doctrine of Saranagati (Surrender). I was spending much of my spare time exploring the works of Sri Ramanuja and His disciples, who lived in Srirangam for about five centuries. I was yet to find a satisfactory explanation, and so my quest continued through 2004.
Therefore, the gift I expected from Baba at that time was an exposition of the theory and practice of the doctrine of Saranagati. And then I flipped over the card that had been thrust into my palm a few minutes before. I was thrilled to see the picture of the pair of holy Feet of Baba.
When I came back to Trichy from Chennai, I felt I had been accepted by Baba. And perhaps the reason was my quest to learn the doctrine of Saranagati. But later I understood that it is more important for one to accept Baba as one's Guru than for Baba to accept one as His disciple.
In any case, my spiritual progress has proceeded at breakneck speed since then. In 2008 Baba gave me a permanent darsan of His all-pervading Form of Peace.
All interested devotees of Baba are warmly invited to participate in SHIRDI SAI SARANAGATI PRAYER SERVICE conducted by me every Thursday morning. You can pray from your home comfortably. For details you may visit my blog:
http://AwarenessChanting.blogspot.com

Monday, December 19, 2011

A Solar Trade War Could Put Us All in the Dark


Solar technology is the result of decades of global competition and collaboration—a trade war would undermine its future.

  • By Martin Green
The brewing solar trade war between the United States and China sullies what should be a triumphant moment in the global photovoltaic (PV) industry: the arrival of affordable solar electricity.

After decades of global competition and collaboration, many solar markets around the world have reached grid parity—the point at which generating solar electricity, without subsidies, costs less than the electricity purchased from the grid. In other words, solar technology is ready to be a major contributor to solving our planet's energy and environmental crisis.
However, trade protectionism threatens to inhibit the solar industry at the very time when it is breaking through to a new level of global interdependence, collaboration, and maturity.
On October 18, the U.S. government was asked to impose tariffs on imports of Chinese solar cells and modules, based on the argument that China-based producers have been heavily subsidized and are selling solar products at unfairly low prices. Perhaps not surprisingly, some Chinese companies have now asked the Chinese government to impose tariffs on imports of American solar products, arguing that U.S.-based producers have been heavily subsidized, too. And just like that, the production of affordable and competitive solar products has become a political liability in the world's two largest producers and consumers of energy.

The success of the entire solar industry hinges on the success of not one country or one company, but global competition and collaboration, which drives efficiency improvements and cost reductions worldwide. If trade barriers are imposed in the U.S., China, or Germany, it could cause a significant increase in the price of solar products and therefore solar electricity, globally. That could cause a further erosion of political support for the solar industry at a critical juncture.
Altogether, a solar trade war could undermine decades of international innovation and stall the global adoption of advanced solar technology.
Gordon Brinser, president of the U.S. branch of SolarWorld, one of the companies seeking the tariffs, argued that "Solar technology was invented here and we intend to keep it here." I strongly disagree. Just as the sun is a shared global resource, the history of solar technology, as well as solar industry development, has been equally global. And that's how it should remain.
The photovoltaic effect was first observed by a Frenchman, Alexandre-Edmond Becquerel, in 1839. Many others built on Becquerel's research, such as Willoughby Smith in the United Kingdom, who discovered the photoconductivity of selenium in 1873. Ten years later, American Charles Fritts created the first working solar cell. Then, in 1887, German scientist Heinrich Hertz discovered the photoelectric effect. That work was further improved upon by Albert Einstein, whose paper on the photoelectric effect ultimately earned him the Nobel Prize in Physics and provided the theoretical basis for the understanding of photovoltaics.
The space race of the mid-20th century financed extraordinary PV research, spurring a dramatic jump in the laboratory efficiencies of crystalline silicon solar cells. Russia's Sputnik 3 and America's Vanguard 1—satellites launched in 1958—were powered by solar cells. Soon after, solar research institutions around the world began to invest in and develop proprietary cell designs to explore the theoretical limits of photovoltaics. For example, in 1985, our team at the University of New South Wales' (UNSW) School of Photovoltaic and Renewable Energy Engineering (SPREE), in Sydney, Australia, created the first silicon solar cell design to break the 20 percent efficiency threshold. In 1988, Stanford University's rear point contact cell demonstrated 22 percent efficiency. The next major improvement, demonstrated by the PERL cell from UNSW, produced the first 24 percent efficient silicon cell in 1994, and holds the current world record of 25 percent efficiency.
All of these laboratory accomplishments are critical to our understanding of photovoltaics. But in the laboratory alone, they aren't much use to humanity. The most exciting part of our work has been seeing the laboratory technology effectively commercialized and making its way into the mainstream market.
This commercialization of solar technology has been equally global.
In 1954, Bell Labs in the United States made PV technology marketable for the first time, with up to 6 percent efficient solar cells that cost roughly $250 per watt. Ten years later, Sharp Corporation in Japan produced one of the first viable solar modules for terrestrial applications. Since then, global competition spurred decades of efficiency improvements and cost reductions across a variety of PV technologies and industry segments. SunPower, based in the U.S., was the first company to effectively commercialize Stanford's rear point contact cell technology, which set several world records for commercial monocrystalline silicon module efficiency. Suntech, based in China, was the first company to effectively commercialize UNSW's PERL technology, which immediately set a world record for multicrystalline silicon module efficiency.
As a result of global competition, the cost of a solar module is now about $1 per watt. It's almost futile to generalize as to which regions contributed most to a certain industry, as there's so much overlap, and any delineation invites dozens of important exceptions. Many Germany-based manufacturing companies have created great high-precision equipment for mass-producing solar wafers, cells, and modules. Many U.S. companies have played a leading role in driving down the price of silicon—the key ingredient for PV—to now less than $40 per kilogram. China-based companies, several started by my former UNSW students, have done an incredible job developing innovative, low-cost methods for fabricating high-quality solar cells and modules.
Most importantly, all of these companies and countries rely heavily on each other to succeed. Together, they represent a formidable force that has relentlessly driven down the cost of solar electricity with remarkable consistency for more than 30 years. In isolation, they're relatively powerless.
Just like the sun's power, the solar industry belongs to us all. Now, the solar industry needs to rise above narrow-mindedness, and in one voice, oppose protectionism in the solar industry. We must remain unified in our commitment to making solar electricity affordable for everyone, everywhere.
Martin Green is the executive research director at the Photovoltaics Centre of Excellence at the University of New South Wales in Australia. Over the past few decades, his lab has made the world's most efficient silicon solar cells, and his students have gone on to found, or hold key positions at, China's top solar panel manufacturers, including the world's largest, Suntech Power.

The Camera That Changed Hollywood


Movie magic: A scene from The Hobbit being filmed in 3D using digital cameras made by Red. (Credit: thehobbitblog.com)


How a sunglasses entrepreneur helped end the golden age of the 35-millimeter film camera.

  • By Lee Gomes
In Hollywood history, 2011 will go down as the year during which the last three companies still making traditional 35-millimeter film cameras—the gently whirring behemoths that directors sit next to on movie sets—all said, in effect, that they were getting out of the business. Film cameras would remain in inventory, but Panavision, ARRI, and Aaton announced that from here on out, all their new models will be digital.
The analog-to-digital transition that is occurring in industries around the world is largely responsible. But special mention should go to a small Southern California company whose technology has stirred the imagination of a roster of legendary directors. The innovation: a line of digital movie cameras that, almost miraculously, are smaller, lighter, and cheaper than film cameras, yet have comparable image quality.
Red Digital Cinema Camera Company, located in Irvine, California, was founded in 1999 by Jim Jannard, who had no experience in the movie business. He was, instead, an entrepreneur who had made a fortune with his line of Oakley sunglasses—must-haves among the California fun-and-sun crowd.
While Jannard is an active participant on Red's user forums, he rarely gives interviews to reporters. Ted Schilowitz, who is something like the CEO of the 400-person company (it eschews formal titles), says Jannard originally became intrigued by the idea of a digital camera that would be a no-compromise alternative for feature-movie makers.
That interest in cameras, says Schilowitz, was a logical extension of Jannard's Oakley business, which also sold prescription glasses and protective goggles for athletes. "Jim is obsessed with the way the world sees things," Schilowitz says.
In the "standard model" of technological disruption, a relatively inexpensive, low-end product, which at first might appeal only to entry-level users, slowly improves in performance until it meets the demands of even the most discriminating power customers. The PC is the prototypical example; current models have the horsepower that until recently was the exclusive province of supercomputers.
The path Red took was slightly different. Digital movie cameras were already on the market when the Red team began their work. But the image quality of early digital cameras was nowhere near what was required for a feature movie. Quality was improving—but Jannard wanted his first model to leapfrog past all current digital cameras and exceed the strictest performance specs, even for film.
That required several years of engineering, mostly related to the semiconductor chip that is the heart of any digital camera and converts photons into electrons. The Red team came up with a chip that was the same physical size as a frame of 35-mm film, the Hollywood standard, and produced an image that was virtually indistinguishable, albeit digital.
"When we looked around, we saw digital cameras slowly moving up the food chain," recalls Schilowitz. "But none of them were even close to living up to what we saw as the magic of film. We didn't really know what we were doing, so we started from zero, but that turned out to be a huge advantage."
The first Red model was introduced in 2007, and immediately attracted the attention of filmmakers like Peter Jackson and Steven Soderbergh. Since then, directors have used Red cameras to shoot some of Hollywood's biggest movies, including The Social Network, The Girl with the Dragon Tattoo, and installments of such blockbuster Hollywood franchises as The Lord of the Rings, Pirates of the Caribbean, and Spider-Man.
The camera also has ardent fans outside the Hollywood mainstream. The last two winners of the Oscar for Best Foreign Film—The Secret In Their Eyes, from Argentina, and last year's In A Better World, from Denmark—were both shot on Reds.
Price comparisons between Red and traditional film cameras aren't especially informative, since most film cameras are rented rather than purchased. Schilowitz says that a fully-loaded version of the latest Red model costs between $45,000 and $60,000, perhaps a quarter as much as a new film camera—if anyone were still making them.

Lights, bytes, action!: The Epic digital camera starts at around $31,000. The camera has become popular among Hollywood directors, but now faces competition from electronics firms like Canon. (Credit: Red Digital Cinema Camera Company)
The body of the Red camera isn't much bigger than a professional-sized still camera. All the same, it isn't as though the cinematographer walks around the movie set with the camera strapped around his neck, snapping pictures like a tourist. A fully configured Red system, with lenses, dollies, and the rest, can be as imposing as a traditional film camera.
But filmmakers say they like to take advantage of Red's greater portability when they need it. The lower price also means that some crews use multiple cameras. The crew filming The Hobbit in New Zealand is using 48 Red cameras, including models configured for 3-D effects.
Digital cameras can also capture more images per second than standard film, enhancing the image quality. Jackson, who is directing The Hobbit, has said the effect is "like the back of the cinema has had a hole cut out of it where the screen is, and you are actually looking into the real world."
Digital movie cameras are one of the last steps towards a "film" industry in which actual celluloid film plays no role. Currently, even movies shot on film are usually digitized afterwards, so that editing and effects can be done on computer. The movies are then printed back onto film, and shipped to theaters, most of which still use traditional threaded film projectors.
But theaters are also in the midst of an epic transition to digital projectors, which could allow studios to simply transmit copies of movies to theaters using high-speed Internet connections. Not an ounce of celluloid will be required once big-screen movies are both filmed and projected digitally.
Exact figures on the film vs. digital split in Hollywood moviemaking are hard to come by, but there is little doubt that film's market is shrinking. Both Kodak and Fuji still sell movie-film stock, but many of Los Angeles's developing and transfer facilities for film are closing down or consolidating. Executives from film camera companies have been quoted in the trade press as saying they expect 85 percent of moviemaking to be digital a few years from now, but they aren't making predictions much beyond that.
As a private company, Red won't reveal information about sales or profits. Clearly, it will need more than an innovator's head start to remain a leader in what is becoming a very crowded market. Incumbents like Panavision, with deep roots in Hollywood, are busily promoting their digital models, and Sony is active in the market as well. Canon just checked in with a feature-caliber digital system of its own, recruiting no less a figure than Martin Scorsese to sing its praises.
Red will press on, of course. Schilowitz wants to make clear his company is not on any anti-film vendetta, even though its camera had been called the "Panavision killer." Schilowitz says, "It was never our goal to kill film. Instead, we wanted to evolve it."

History of Electrical Engineering:


The discoveries of Michael Faraday formed the foundation of electric motor technology.

However, it was not until the 19th century that research into the subject started to intensify. Notable developments in this century include the work of Georg Ohm, who in 1827 quantified the relationship between the electric current and potential difference in a conductor; Michael Faraday, the discoverer of electromagnetic induction in 1831; and James Clerk Maxwell, who in 1873 published a unified theory of electricity and magnetism in his Treatise on Electricity and Magnetism.

Electricity has been a subject of scientific interest since at least the early 17th century. The first electrical engineer was probably William Gilbert, who designed the versorium: a device that detected the presence of statically charged objects. He was also the first to draw a clear distinction between magnetism and static electricity and is credited with establishing the term electricity. In 1775 Alessandro Volta devised the electrophorus, a device that produced a static electric charge, and by 1800 he had developed the voltaic pile, a forerunner of the electric battery.

Thomas Edison built the world's first large-scale electrical supply network.


Nikola Tesla made long-distance electrical transmission networks possible.

During this period, work concerning electrical engineering increased dramatically. In 1882, Edison switched on the world's first large-scale electrical supply network that provided 110 volts direct current to fifty-nine customers in lower Manhattan. In 1884 Sir Charles Parsons invented the steam turbine, which today generates about 80 percent of the electric power in the world using a variety of heat sources. In 1887, Nikola Tesla filed a number of patents related to a competing form of power distribution known as alternating current. In the following years a bitter rivalry between Tesla and Edison, known as the "War of Currents", took place over the preferred method of distribution. AC eventually replaced DC for generation and power distribution, enormously extending the range and improving the safety and efficiency of power distribution.

The efforts of the two did much to further electrical engineering—Tesla's work on induction motors and polyphase systems influenced the field for years to come, while Edison's work on telegraphy and his development of the stock ticker proved lucrative for his company, which ultimately became General Electric. However, by the end of the 19th century, other key figures in the progress of electrical engineering were beginning to emerge.
The Darmstadt University of Technology founded the first chair and the first faculty of electrical engineering worldwide in 1882. In the same year, under Professor Charles Cross, the Massachusetts Institute of Technology began offering the first option of electrical engineering within a physics department. In 1883 Darmstadt University of Technology and Cornell University introduced the world's first courses of study in electrical engineering, and in 1885 University College London founded the first chair of electrical engineering in the United Kingdom. The University of Missouri subsequently established the first department of electrical engineering in the United States in 1886.
 

~Top 10 Web Design Forums For Web Designers and Developers~


Here is a list of the 10 best forums for Web Designers and Web Developers that may prove to be most helpful:

1. Digitalpoint – A place to discuss website design and get help with Photoshop, Flash, jQuery, HTML, CSS, DHTML, etc.
2. Sitepoint – A great source of information on all platforms related to design and development.
3. Designerstalk – Discuss anything about graphics, illustration, programming and more.
4. Smashingmagazine – Discussions about current trends and recent developments: design reviews, showcases and interesting or unusual designs are all possible here.
5. Webmaster-talk – A cool place to share your knowledge on various platforms and get help from the experts.
6. Webdesignforums – This one is more tutorial-like, offering posts with great information.
7. Webdesignerforum – Get help and advice on planning your web design project. This is also a great source for news from the world of design & development and other exciting events.
8. Webdevforums – Web designers and web development professionals can get fundamental webmaster tools and information here. This is your source for basic website-builder knowledge.
9. Webdesignchat – Discuss your web design problems, techniques, software, websites, the latest tricks, etc.
10. Webdesignforum – Discuss tips on web design, including the use of popular web design software, and get help on issues related to various platforms.

Want to Unravel the Mysteries of How a Computer Works? The Dynamic Link Library (DLL)


When learning about how a computer or its operating system works, we often come across the term "DLL file." It stands for Dynamic Link Library.

These files are fundamental to the operation of the Windows operating system. Although they appear as files distinct from all the others, personal computer users rarely worry about what they are or what would happen without them.

Understanding what they are for and how they work dispels much of the mystery around how a computer operates. Programmers, of course, must know the structure and behavior of these files in detail.

Still, because they are such an important class of files, the rest of us should know at least something about them. So here are some basic facts that a non-technical user ought to know.

A DLL file can be identified by its file extension, .DLL. Many definitions of it have been offered, but the one given on Microsoft's website is concise and captures its essential nature:

A dynamic link library file contains program code that implements functions which other DLLs or application software packages can invoke.

Programmers place certain lines of code in a DLL file. This code is for tasks that must be performed over and over; it is the code needed to carry out particular operations on the computer.

DLL files cannot be run directly the way an executable (.EXE) file can. Only the code of an executable or DLL that is already running can invoke the code in another DLL file.

You can also look at it another way: DLL files are modules that each perform a single task.

They can be linked in and invoked whenever a particular program needs that task performed. This simplifies the computer's operation.

We run many kinds of application programs on a computer: word processor, Internet browser, spreadsheet, picture manager, graphics designer, page maker. Their jobs differ completely, yet certain operations are common to all of them.

For example: opening a file, saving updates, moving up and down within a file, undoing a deletion, deleting, and permanently removing. Many such tasks are common to all.

Most application software packages need to perform these tasks while running. If the code for them had to be written into every single application program, it would certainly waste the programmer's effort and time.

If these common tasks are instead packaged into small program files that the main application program invokes when needed, the work becomes easier, and the needless repetition of the same job in many places is reduced.

The files created for these common tasks are the DLL files. They are stored in the operating system like the books in a library.

Other application programs take them and use them. A single DLL file can be used by many application programs at the same time.
Here are some important DLL files and what each one does (a sketch of calling one of them from code follows the list).

COMDLG32.DLL: controls the common dialog boxes.

GDI32.DLL: performs a variety of tasks: it draws graphics, displays text and manages fonts.

KERNEL32.DLL: contains hundreds of functions; managing memory is among the most important of them. (The related USER32.DLL handles much of the user interface, helping to create program windows and thereby mediating between programs and the user.)
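To make this concrete, here is a minimal sketch of dynamic linking in action, assuming a Windows machine with Python installed: the standard ctypes module loads KERNEL32.DLL at run time and calls one of its documented exported functions, GetTickCount, which returns the number of milliseconds since the system started.

    import ctypes  # standard-library foreign-function interface

    # Load KERNEL32.DLL dynamically (LoadLibrary under the hood) and call
    # one of its exported functions. Many programs can load this same DLL
    # at the same time, which is exactly the sharing described above.
    kernel32 = ctypes.WinDLL("kernel32")
    kernel32.GetTickCount.restype = ctypes.c_uint32

    uptime_ms = kernel32.GetTickCount()
    print(f"System uptime: {uptime_ms / 1000:.1f} seconds")

The same pattern works for any exported DLL function, which is how applications borrow the common code in the system's library instead of rewriting it themselves.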
It is because such common DLL files exist for common operations that every application program run on Windows has a similar look and behavior. These DLL files play a major role in standardizing application behavior of every kind.

This is why, in desktop computing, Windows could take its place as an operating system that won everyone's praise and support. Before Windows there was an operating system called DOS.

Anyone who used it will recall how every application program presented its own different interface. That changed as soon as Windows arrived, and the reason was these DLL files.