
Saturday, August 27, 2011

Baffling Plans




“We are all trying to achieve peace and freedom from these miseries, at least unconsciously, and in the higher intellectual circles there are attempts to get rid of these miseries by ingenious plans and designs. But the power that baffles all the plans and designs of even the most intelligent person is the power of Maya devi, or the illusory energy. The law of karma, or the result of all actions and reactions in the material world, is controlled by this all-powerful illusory energy.” (Shrila Prabhupada, Elevation to Krishna Consciousness, Ch 2)
When the adult reaches a mature stage in life, when life’s necessities are accounted for and there is a seemingly secure existence, the point of focus often turns towards redressing social ills, the miseries and pains endured by others. Indeed, it is seen that during times of economic downturn polling firms who ask citizens what they think of the economy get answers like, “I’m doing fine, but I’m really worried about my neighbor. I’m managing okay, but I know there are so many people suffering out there.” The compassion resting within the heart can be so strong that groups and commissions are formed to try to solve problems causing distress in local society and around the world. Yet, since no planning commission takes into account the all-powerful energy governing this world, who behaves without bias and prejudice, every remedy put forth will fail. On the other hand, for one who directly approaches the controller of the all-pervasive energy, the one person to whom material and spiritual distinctions don’t exist, all of life’s remedies can be quickly found, dissolving every type of misfortune, personal struggle, and unwarranted pity.
“End poverty now; save the poor; help the downtrodden”. These are the rallying cries for the popular activist groups. These movements are rooted in genuine compassion and heartfelt emotions, but we know that good intentions are not enough to earn success. Young children hope for success and happy things to come in the future, but if not for the actions of grownups, nothing would ever manifest. In a similar manner, simply wishing that everyone lived happily and in a peaceful condition cannot make the utopian view a reality.
Let’s look at a simple example to see where the planning commissions go wrong. As poverty is the primary focus of the largest number of activist groups, let’s review some of the more common solutions applied towards ending it. Poverty is defined as a condition where opulences are lacking. Either income is very low, or material possessions are in short supply. Since there is little to no money, the person in poverty essentially lives in squalor, in conditions that the person on the planning commission couldn’t ever imagine enduring. The poverty-stricken man must eat meats of poor quality, shop in stores that the plan maker wouldn’t be caught dead in, and live in a house located in a poor neighborhood.
Since poverty is a condition where essential items are missing, the most obvious solution is to distribute money. Transfer wealth from those who have too much to those who don’t have any. Seems like a logical enough solution, no? There are some members of society flying around in corporate jets and sailing on their many yachts over the weekend. Surely if they just gave a little bit more of their money to the right people, the entire world could be fed and poverty could be eliminated for good.
Continuing with the example, let’s say that we have five millionaires step up to the plate. We’ll use small denominations of money just because they are easier to work with, but the principles will carry over even to the largest scale. Each of these millionaires has generously agreed to donate one million dollars to help a single person victimized by poverty. Thus we have five million dollars we are giving out to five different people. The expected result is that every one of the aided will have all of their problems solved. They will no longer be in poverty, and they will have no reason to feel bad about themselves. No more worrying over how to put food on the table and whether or not they will have a place to live.
The keen observer, however, will accurately predict that in the majority of circumstances the five million dollars will do absolutely nothing to solve any problem. Just because we give someone money doesn’t mean that they will know what to do with it. This is in fact a common issue encountered by lottery winners, who are so well known for quickly squandering their money that think tanks like the Sudden Money Institute had to be created to help people cope with coming into large sums of money. Think of the irony in that. Coming into a large sum of money is supposed to be a boon, the receiving of grand opulence, but there is nevertheless a support group established to help such people. Just as there are groups to help drug addicts, habitual gamblers, and those with anger problems, there are organizations to help those who get too much money too fast.
Of the five newly crowned millionaires, one may blow all of their money on cars and houses, another may waste it away on drugs, another on gambling at the racetrack, and another on opening a business that eventually folds. Even if one person actually uses the money to ensure that they never have to work again, there is still the issue of activity. If we place someone into a room and tell them they have nothing to worry about all day, that their food and drink will be provided for them, would they be happy? Actually, this is how prisoners are treated, and we know that the prison house is meant to be a punishment, a sort of rehabilitation center. Similarly, just having enough food to eat and a roof over the head is not enough to provide any sort of lasting satisfaction. If it were, the people running the planning commissions and activist groups would have been satisfied with their own material success.
Those on a higher level of thinking understand that every one of us starts off with everything. As God is the creator of this land, He is the original proprietor. Just because someone finds a piece of land and plants their flag on it doesn’t mean that they own anything. This earth and its bountiful fruits belong to every single person to utilize in their progressive march towards a purified consciousness. The planning commissions and the bleeding hearts concerned over poverty and social ills fail to realize the influence of maya, which governs the laws of karma. With every action, there is a reaction. This is quite easy to understand. If there is drug dependence and alcoholism, there will be negative consequences. The homeless often suffer from these problems. Just imagine, someone can become so fallen that they live on the streets, even when there is a significant portion of the world willing to help them. This shows that there are other factors involved in poverty that go unnoticed.
Studies in America have shown that if one graduates high school, waits until marriage to have children, and at least tries to find a job, they will have virtually no chance of falling into poverty. These conclusions are not presented from a moral perspective either; they are just common sense. If you have children out of wedlock, you have to spend your time supporting them. If you haven’t graduated high school, you shut yourself out from the majority of high-paying jobs. Similarly, if you have children to take care of by yourself, there is no time to invest in advancing your career, such as by going to college or attending specialized training schools.
Simply giving money to someone will not solve their problems, as there is no control over what they do with the money. This concept also applies to peace, as just asking that war be stopped will not make it so. One side may agree to stop their violence for a while, but if their desires are not altered for the better, they will inevitably stir up hostilities again. These factors are lost on the planning commissions because there is no concern given to maya, who manages karma.
The Vedas, the ancient scriptures of India, contain complete information, all the knowledge one could ever need. Every condition, favorable and unfavorable, is due to karma, or past deeds. Deeds are driven by desire, so as long as desire is not pointed in the proper direction, the dualities of poverty and wealth, distress and happiness, and cold and heat will continue. What’s more, no one person or collection of individuals can control how karma works. Parents have firsthand experience of this on a smaller scale. A parent may try their best to get their child to grow up to be successful in life, to be a good person inclined to follow a certain direction, but since the child has their own nature and desires, there is no control over the outcome. Sometimes children simply don’t grow up to be what you wanted them to be.
How do the Vedas tackle the problem of poverty? How do the Vedas deal with war? If we know the nature of the playing field we are dealing with, it becomes much easier to find the answers to life’s common problems. As karma is driven by desire, once desire is shifted in the proper direction, the resultant actions become purified. With pure activity come pure results. Human life is meant for awakening God consciousness and nothing else. Poverty and wealth are two extremes that are actually not different in the grand scheme. One person may sleep on the bare ground while another has a plush mattress, but the activity of sleeping is the same. One person may go through life worrying about money while another has too much money to know what to do with, but inevitable death will arrive all the same.
The real problem facing the human being is figuring out how to stop birth, old age, disease and death. None of these events are welcome, but they take place regardless. Maya, the governing agent of the material world, works under the jurisdiction of the Supreme Personality of Godhead, Lord Krishna. Maya manages karma, which ensures that results are distributed fairly and at just the appropriate times. As long as one operates under maya, they will be forced to live by karma’s rules. Shri Krishna, on the other hand, is above maya. One who takes directly to His service by regularly chanting, “Hare Krishna Hare Krishna, Krishna Krishna, Hare Hare, Hare Rama Hare Rama, Rama Rama, Hare Hare”, gradually purifies their behavior.
What is the difference between activity in karma and activity in Krishna consciousness? On the surface they appear to be the same, but the initial desire is what is different. When given a million dollars, the person operating under karma will think about their own enjoyment or the pleasure of some other entity that is not God. The person operating under Krishna’s direction will use whatever fortune they acquire for the Lord’s satisfaction. Krishna is described as atmarama, which means “self-satisfied”. Therefore He doesn’t need money or donations from His countless expansions residing in the innumerable universes. Yet, since the constitutional position of the living entity is that of servant of God, the sacrifices made for Krishna’s satisfaction actually provide pleasure to the performer first. Even chanting is a sacrifice, as it takes time to sit down, concentrate on the Lord’s beautiful form, and recite His name repeatedly each day. But this investment is the most worthwhile, as it brings the greatest benefits. Investments in ending poverty, stopping war, curing diseases and the like don’t carry much of a return. Without God consciousness, the aided living entities will remain fully under the grip of maya, thereby never finding a permanent peaceful condition.
Even a little faith invested in bhakti-yoga, or devotional service, can ensure that the devoted person purifies their thoughts and activities. In addition, whoever they come into contact with and explain Krishna to will also benefit immensely. There is no shortage of wealth in this world, for the animals are supplied their necessities by nature, which also operates under maya. With the animals, however, there is no such thing as sin, as they do not know any better. The human beings have the added bonus of being able to make incredibly poor decisions and suffer the negative consequences for them. Therefore all of life’s ills that we see in front of us are attributed directly to negative karma.
As the only way to redress the karma issue is to bypass its leader maya, the only remedy worth adopting is Krishna consciousness, which can be fostered by any person at any stage of life. Maya baffles every single plan made by the material enjoyer, but Krishna Himself can directly command maya to do whatever He wants. For the sincere bhakta, He transforms the material energy from an illusory one into a purely spiritual force that blows a fierce wind that elevates the spiritually conscious person back to the eternal land after death. One who reaches that majestic realm inhabited by Krishna and His nitya-siddhas will never have to take birth again, thus ensuring that karma will no longer leave them bewildered.

Plugging the leaks



As physical limits bite, electronic engineers must build ever cleverer transistors


MOORE’S LAW—the prediction made in 1965 by Gordon Moore, that the number of transistors on a chip of given size would double every two years—has had a good innings. The first integrated circuit (invented by Jack Kilby of Texas Instruments, see above) was a clunky affair. Now the size of transistors is measured in billionths of a metre. Moore’s law has yielded fast, smart computers, with pretty graphics and worldwide connections. It has thereby ushered in an age of information technology unimaginable when Dr Moore made his prediction. Not bad going for what was originally just an off-the-cuff observation.
That observation, however, is not truly a law. It is, rather, the description of a journey of many steps, each a specific technological change (see chart below). That new steps will happen is as much an article of faith as a prediction. Every time transistors shrink, they get closer to the point where they can shrink no further—for if the law continues on its merry way, transistors will be the size of individual silicon atoms within two decades.
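The two-decade figure can be checked on the back of an envelope. A minimal sketch, assuming the crudest reading of the law (smallest feature size halving every two years), a 32-nanometre starting point, and a silicon atom roughly 0.2 nanometres across — all round numbers, not figures from the article:

```python
import math

# Rough estimate: how long until transistor features shrink to the
# size of a single silicon atom, if feature size halves every 2 years?
feature_nm = 32.0   # approximate state-of-the-art feature size
atom_nm = 0.2       # approximate diameter of a silicon atom

halvings = math.log2(feature_nm / atom_nm)   # factors of 2 needed
years = 2 * halvings                         # two years per halving
print(round(years))                          # ~15 years
```

Under these assumptions the limit arrives in roughly fifteen years — comfortably "within two decades", though the exact horizon shifts with how aggressively one reads the law.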

More to the point, they have already shrunk to a size where every atom counts. Too few atoms can cause their insulation to break down, or allow current to leak to places it is not supposed to be because of a phenomenon called quantum tunnelling, in which electrons vanish spontaneously and reappear elsewhere. Too many atoms of the wrong sort, though, can be equally bad, interfering with a transistor’s conductivity. Engineers are therefore endeavouring to redesign transistors yet again, so that Dr Moore’s prediction can remain true a little longer.
Atom heart motherboard
A transistor is an electrically operated switch composed of four pieces: a source (where current enters), a drain (where it leaves), a channel (which links the two) and a gate (which opens and shuts the channel by varying in voltage). In a conventional transistor, these components lie in about the same plane. One idea for dealing with leaks is to change that by moving transistor design into three dimensions.
Building a transistor that sticks out of its parental chip lets many of its component atoms be deployed more usefully—particularly those that constitute the channel and the gate. By sticking the channel into the air and surrounding it on three sides with the atoms of the gate, you increase the surface area of the gate. That gives better control of the channel and reduces leaks. Having a better-functioning gate also lets more current flow when the transistor is on.
In May Intel, an American chip giant (co-founded, as it happens, by Dr Moore), announced plans to commercialise a technological fix of this sort under the marketing name “Tri-Gate”. The company reckons the new transistors, which should be available later this year, will consume half as much power as its existing offerings, making them particularly suitable for mobile computing, where battery life is an important selling point.
A universal change to three dimensions, though, will be difficult to sell to an industry that has grown up thinking in two. As an alternative the Silicon On Insulator (SOI) consortium, which includes Globalfoundries, an American firm, and ARM, a British one, is trying to improve flat transistors. The consortium’s technology builds its transistors inside a sliver of pure silicon, laid on top of an insulator, which in turn sits on top of a standard wafer, the substrate on which transistors are constructed. The idea is to make the channel as thin as possible, allowing the electric field generated by the gate to penetrate the entire thing, thus improving the control that the gate is able to exert. But this approach also forces the consortium to tackle the second problem raised by the continual shrinkage of transistors: too many or too few atoms in the wrong places.
The silicon of which transistors are made is frequently doped with other elements, to affect its electrical properties. The latest devices, though, are so small that doping their channels involves placing just a handful of dopant atoms among the silicon. Get the number wrong, and things will not work properly. But fluctuations in the manufacturing process make the required consistency hard to achieve. Correctly doping the ultra-thin channels that the consortium hopes to use is simply too difficult—hence the decision to do without dopants altogether and build channels out of pure silicon. But the design requires that this silicon layer be no more than five nanometres (billionths of a metre) deep. That figure, moreover, must be almost constant across the entire wafer—an exacting standard which Intel (admittedly, not a dispassionate observer) believes will add to manufacturing costs.
SuVolta, a small company in Silicon Valley, has therefore come up with a third approach. It, too, plans to build flat transistors with undoped channels. But it will do so on conventional, cheap silicon wafers without the need for the modified wafers or ultra-thin channels required by the SOI consortium, a trick it accomplishes by adding a second gate beneath the channel. In concert, the two gates are able to control the undoped channel without its having to be ridiculously thin. Once again, the result is better-behaved transistors and reduced power consumption—as little as half that demanded by old-style transistors, says the firm, with no loss of performance. SuVolta has already piqued the interest of Fujitsu, a Japanese electronics giant, which has licensed the technology.
Room at the bottom
All these approaches mean that Moore’s law should be able to chunter along for a few more years, at least. The International Technology Roadmap for Semiconductors, which is updated every year by a team of several hundred experts, predicts that standard transistors will be 16 nanometres across by 2013 (at the moment, 32 nanometres is the standard) and 11 nanometres by 2015. To go smaller than this, though, will require yet another conceptual leap. Fortunately, there are several on offer.
One promising approach was outlined last year by a team at the Tyndall National Institute in Ireland, led by Jean-Pierre Colinge. They published a paper announcing the creation of a junctionless transistor—an idea patented in 1925 by a physicist called Julius Lilienfeld, but which was, until recently, too difficult to manufacture.
The junctions in a transistor are between bits of silicon doped to conduct electrons (known as n-type material, because electrons are negatively charged), and p-type areas doped to conduct positively charged holes in the crystal lattice, which are places where electrons should be, but aren’t. In some transistors, source and drain are p-type, and channel n-type. In others the reverse is true. The junctions between n- and p-type silicon act like valves, stopping current flowing in the wrong direction.
As transistors get smaller, however, laying down n-type and p-type materials in proximity gets harder, thanks once again to fluctuations in the concentrations of dopants. Dr Colinge’s design—which, like Intel’s Tri-Gate, clamps a 3D gate around a single, ultra-thin silicon wire—avoids this by building the entire device from a single type of semiconductor, with much higher dopant concentrations than a conventional flat transistor. The design incorporates a channel thin enough to become entirely devoid of carriers (ie, free electrons or holes) when switched off, thus acting as a valve, yet full of them when switched on. It should be shrinkable, too. The Tyndall Institute’s researchers reported last year that atom-by-atom computer simulations of junctionless transistors with a gate length of just 3.1 nanometres show that they ought to work perfectly.
Such a gate length would keep Moore’s law rolling for several years. To carry on beyond that, however, requires even more exotic thinking. A number of groups of academics and engineers, for example, are pondering how to make transistors in which quantum tunnelling is a feature rather than a bug. Quantum theory dictates that electrons are available only at certain energy levels, which means that a transistor which harnessed the tunnelling effect could switch directly from a low current (off) to a high current (on), with no ramp-up time.
That would be a neat trick. Whether it would be the last one up the engineers’ sleeves, as the single-atom limit looms, remains to be seen. When he first promulgated it, Dr Moore thought his law might endure for ten years. The irresistible force of human ingenuity has ensured it has done far better than that. But that force is now up against the immovable object of atomic physics. It is a fascinating contest.

Fabricating fabric



How to generate more realistic images of clothes



Velvet dreams

FILMS like “Captain America”, “Tron Legacy” and “The Curious Case of Benjamin Button” have shown that it is possible to use computer-generated imagery (CGI) to make actors look younger, older or wimpier than they actually are, in a surprisingly realistic manner. At least, it is possible if those altered actors are kept at a suitable distance from the viewer. The difficulty of recreating the textures of both skin and fabric means the effect is less convincing when seen close up.
The reason is that, whereas it is possible to simulate realistically the forces which make virtual skin and fabric hang, bend, flap and stretch, recreating the subtle ways they reflect light has so far proved extremely tricky. The shimmer and sheen of both fabric and skin depend on the geometry of their internal structures—the exact arrangement of threads or protein fibres. This is hard to model accurately. Steve Marschner and his colleagues at Cornell University have, though, come up with a way to get round that problem. Instead of modelling, they are copying. They are using computerised tomography (CT) to analyse the structures of fabrics at high resolution and then plugging the results into CGI. That, allied to the laws of optics and some heavy-duty computer power, seems to do the trick.

Computerised tomography is most familiar as a medical technique for examining people’s insides. Like classical radiology it uses X-rays. But because the image is constructed inside a computer using shots taken from many different directions, rather than being a single exposure recorded on photographic film, CT can capture fine detail and record soft tissues that are invisible to classical radiology.
Dr Marschner and his colleagues used a benchtop version of CT, developed for looking at the structure of materials rather than at human bodies, for their experiment. Employing doses of X-rays many times stronger than those used to study people, they obtained high-resolution information about small pieces of fabric. Computerised tomography allows the three-dimensional structure of the fibres in such scraps to be recorded, with all their kinks and imperfections. A number of small pieces can then be patched together into an entire garment inside a computer, in the same way that a handful of actors are turned into a CGI crowd. But because the internal structure of each bit of the garment matches that of a real piece of cloth, the way light will play on it can be calculated far more realistically than if it were just a computer model of what the interior of cloth is thought to look like.
Demonstrating the results of their technique at the SIGGRAPH computer-graphics conference in Vancouver this week, Dr Marschner and his colleagues showed realistic renderings of felt, gaberdine, silk and velvet. Moreover, their renderings remain realistic even when viewed close up. Sadly, skin is still beyond them. The high intensity of the X-rays involved would be too damaging for use on a living human being, and a corpse would probably not produce the right results. But once the rendering technique has been speeded up (at the moment it is still a bit slow and clunky), the swish of a virtual cloak or the doffing of a computerised hat should look far more realistic than it does now.
In the meantime, according to Dr Marschner’s colleague Kavita Bala, the technology might have an application in online retailing. At the moment, people buying clothes over the internet have only standard photographs to help them choose their purchases. Using CT-based computer graphics might, paradoxically, give a better idea of what the material an item of clothing is made from is really like than can be garnered from a boring, old photograph of the original.

A multilayered solution



E-READERS, such as Amazon’s Kindle, have been a commercial success. They have not, however, revolutionised the publishing industry in quite the way that many predicted they would. In part, that is because their displays are black and white, and they seem to many readers to be slow, grainy and—if truth be told—a little archaic. Better screens might make the difference between e-readers being intriguing gadgets and killer apps, and Shin-Hyun Kim and David Weitz, who work at the Experimental Soft Condensed Matter group at Harvard University, think they may have found a way to build those better screens.
Unlike conventional display screens, which are lit from behind, e-readers use reflected light in a way similar to paper. Letters and other characters on the screen are formed out of ink that has a high optical contrast with the background, making them easy to read. The difference is that, rather than being printed into permanent shapes like normal ink, electronic ink is held in small capsules that can reveal it or hide it as required. 
The result is legible even in bright sunlight. But it often takes more than half a second to “turn” the page of an e-book (so displaying the 25 images a second needed for video is out of the question). And, although the size (roughly 100 microns across) of the elements, known as pixels, that make up the display is fine for monochrome reading, they would need to be a third of that or less to create sub-pixels of the three primaries (red, green and blue) that colour displays require. The answer proposed by Dr Kim and Dr Weitz, in a paper in Angewandte Chemie, is to change the way e-ink is manufactured.
At the moment, such ink is composed of small, transparent spheres containing black and white particles suspended in a clear fluid. Half the particles are white and positively charged. The other half are black and negatively charged. When an electric field is applied, one lot is drawn towards it while the other is repelled. A negative charge attracts the positive particles, making the pixel appear black. A positive charge does the reverse.
The problem, according to Dr Kim and Dr Weitz, is that the densities of the black and white particles are different, and therefore cannot both be made to match that of the fluid in which they are immersed. This slows down their movement, and thus the speed at which a screen can refresh its image. A better solution would be to immerse the black particles in one fluid and the white particles in another (so that in both cases their densities match that of the suspending liquid)—yet, at the same time, to continue to package both types of particle in a single sphere. 
To do so, the pair turned to a technology called microfluidics, which borrows from the techniques used to make computer chips to produce devices that mix small amounts of liquid in precise ways. Their own, particular device uses tiny channels to force two different liquids (one of which contains ink particles) in one direction down a channel, through a nozzle, thus bringing them into contact with two other streams travelling in the opposite direction. As the four streams collide they are forced into a third channel, forming layered droplets as they go. 
Normally, this sort of single-step mixing would not work, because of the difficulty of getting two liquids to flow stably through one channel. However, by using an oily liquid and an aqueous one, and by covering one side of the channel with a substance that repels water and the other side with one that attracts water, this can be avoided. The result is a “Russian-doll” droplet that, if the correct oily and aqueous liquids are chosen, can be made permanent by curing some of its layers into transparent polymers using ultraviolet light.
To demonstrate, Dr Kim and Dr Weitz created what they call magnetic ink. This consists of an oily core containing magnetic particles mixed with carbon black, which is suspended within a watery layer that contains white polystyrene particles. That, in turn, is suspended in a transparent oily fluid. 
Like those in e-reader displays, the black and white particles can be drawn towards or away from the viewing surface (in this case using a magnetic field, rather than an electric one). They move much faster than those in traditional displays, though, because their densities are closer to those of the suspending media. If the new droplets can be incorporated into real screens, that will deal with the slow refresh rate.
The next stage is to include all three primary colours in a single droplet. That is some way off. But if it proves possible, it will deal with the black-and-whiteness problem, too, by providing full-colour pixels that have the same number of droplets as monochrome ones.
Turning this invention into a screen will take time. Indeed, it may never come to pass, for many other groups are approaching the e-reader problem from different directions. But whatever happens to this specific idea, Dr Kim’s and Dr Weitz’s invention is likely to have larger ramifications. It might, for example, be used to package together drugs in slow-release capsules of greater sophistication than is now possible.

Difference Engine: Devil in the details




IF YOU have not gone shopping for a new television set for quite a while, enough has changed to require some serious thought. Your correspondent has finally given in to family pressure to create a dedicated media lounge. Given the limited resources, this is unlikely to be some 24-seat viewing room with a silver screen, curtains and digital projectors to rival the home theatres created for the likes of Steven Spielberg or Larry Ellison. The good news is that, with modern television sets, it does not have to be. A spare room, with a couch and a couple of easy chairs, plus a large enough flat-panel television and a reasonable audio system, can more than meet most families’ viewing needs.

Before splurging on a fancy new high-definition television (HDTV) set, though, it is worth considering what features make sense and what do not. Start with the viewing angle. THX, a technical standards-setter for the video and audio industries, requires the back row of seats in a home theatre to have at least a 26° viewing angle from one edge of the screen to the other. Seats nearest the screen should have a viewing angle of no more than 36°. These subtended angles correspond to a viewing distance of roughly 2.2 times the screen width at the back row of the seating down to 1.5 times the screen width at the front. Within these limits, viewers should be able to enjoy the most immersive experience.
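The THX figures follow from simple trigonometry: a screen of width W, viewed head-on, subtends an angle θ at a distance d = (W/2)/tan(θ/2). A quick sketch of that conversion:

```python
import math

def distance_ratio(angle_deg):
    """Viewing distance, as a multiple of screen width, at which the
    screen subtends the given horizontal angle."""
    return 0.5 / math.tan(math.radians(angle_deg) / 2)

back_row = distance_ratio(26)    # widest permitted angle at the back
front_row = distance_ratio(36)   # narrowest permitted angle at the front
print(round(back_row, 1), round(front_row, 1))   # 2.2 1.5
```

Plugging in 26° and 36° recovers the 2.2x and 1.5x screen-width figures quoted above.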

The question then is how to relate viewing distance to a person’s visual acuity. In other words, what is the maximum distance beyond which some picture detail is lost because of the eye’s limitations? Visual acuity indicates the angular size of the smallest detail a person’s visual system can resolve. This depends on the sharpness of the retinal focus within the eye, and the sensitivity of that part of the cortex that interprets visual stimuli.

Someone with 20/20 vision (6/6 in metric terms) can resolve a spatial pattern (of, say, a letter in the alphabet) where each element within it subtends an angle of one minute of arc when viewed from a distance of 20 feet (six metres). In other words, a person with 20/20 sight should, in normal lighting conditions, be able to identify two points that are 0.07 of an inch (1.77mm) apart from a distance of 20 feet. Twenty feet is taken because, as far as the eye is concerned, it is effectively infinity.
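The 0.07-inch figure follows directly from that geometry: the smallest resolvable detail is the viewing distance multiplied by the tangent of one minute of arc. A quick Python check (the variable names are ours):

```python
import math

ARC_MINUTE = math.radians(1.0 / 60.0)  # one minute of arc, in radians
distance_in = 20 * 12                  # 20 feet, in inches

# Size of the smallest detail a 20/20 eye resolves at 20 feet
detail_in = distance_in * math.tan(ARC_MINUTE)
print(round(detail_in, 2), round(detail_in * 25.4, 2))  # 0.07 inches, 1.77 mm
```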

A person who can detect individual elements that make up, say, the letter “E” on the eighth line of an optometrist’s Snellen chart—and thereby recognise that the letter is an “E” and not a “D”—is said to have normal 20/20 eyesight. Someone with 20/40 sight can see objects at 20 feet that those with normal sight can see from 40 feet. In many countries, 20/200 is the legal definition of blindness. Meanwhile, 20/20 vision is not perfect vision; it is merely the lower limit of normal sight. The maximum acuity of the human eye is around 20/8. Some birds of prey are thought to have eyesight as sharp as 20/2.

As far as watching television is concerned, visual acuity represents the point beyond which some of the detail in the picture can no longer be resolved by the cone cells in the retina of the eye. It will simply blend into the background instead of being seen as a distinct feature. Thus, it is a waste to make individual pixels—the tiniest elements in a display—appear smaller than 0.07 of an inch when viewed from 20 feet.

The problem with viewing images on a television screen—especially a high-definition one like the 1080p HDTV sets in use today—is that most people sit too far back. A survey made some years ago by Bernard Lechner, a television engineer at the former RCA Laboratories, near Princeton, New Jersey, showed that the median eye-to-screen distance in American homes was nine feet. At that distance, a 1080p HDTV set (with a screen 1,920 pixels wide and 1,080 pixels high) needs a diagonal of at least 69 inches if viewers are to see all the detail it offers.
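The 69-inch figure can be verified from the same arc-minute rule, assuming a standard 16:9 screen: at nine feet, each pixel may be as large as the eye can just resolve, and 1,920 such pixels fix the screen width and hence the diagonal. A Python sketch (the names are ours):

```python
import math

ARC_MINUTE = math.radians(1.0 / 60.0)  # one minute of arc, in radians
viewing_in = 9 * 12                    # nine feet, in inches

pixel_in = viewing_in * math.tan(ARC_MINUTE)  # largest useful pixel at 9 feet
width_in = 1920 * pixel_in                    # 1080p screen width
diag_in = width_in * math.hypot(16, 9) / 16   # 16:9 diagonal from the width
print(round(diag_in))  # 69
```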

In practice, the most popular television size in America today is 32 inches. To see all the detail on a 1080p set of that size means dragging the chair forward from nine feet to a little over four feet from the screen. If it were an older 720p television set (1,280 pixels wide and 720 pixels high), sitting six feet from the screen would suffice to see the full quality of the image.

Put another way, viewers cannot enjoy the full benefits of the higher pixel count of 1080p television if they sit any further back than 1.8 times the screen width. At 2.7 times the screen width, they might as well use a cheaper 720p set instead, as the eye cannot resolve the finer detail offered by a 1080p screen at that distance. Unfortunately, while 720p sets offer good value, they are becoming difficult to find. Manufacturers focus all their marketing efforts these days on higher-margin 1080p sets.
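The 1.8 and 2.7 multiples fall out of the same geometry: for a screen N pixels wide, the farthest distance at which a single pixel still subtends a full arc-minute, measured in screen widths, is 1/(N·tan(1′)). In Python (the function name is ours):

```python
import math

ARC_MINUTE = math.radians(1.0 / 60.0)  # one minute of arc, in radians

def max_distance_in_widths(horizontal_pixels):
    """Farthest viewing distance, in screen widths, at which one pixel
    still subtends a full arc-minute for a 20/20 viewer."""
    return 1.0 / (horizontal_pixels * math.tan(ARC_MINUTE))

print(round(max_distance_in_widths(1920), 1))  # 1080p: 1.8
print(round(max_distance_in_widths(1280), 1))  # 720p: 2.7
```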

As far as screen sizes and viewing distances are concerned, a room measuring ten feet by 12 feet is therefore more than adequate for watching a 50-inch television set, with viewers no further than six-and-a-half feet from the screen. The question, then, is what kind of 1080p set to use—plasma display, liquid-crystal display (LCD) or the latest light-emitting diode (LED) variety?

Plasmas, with their rapid switching and deep blacks, have long been the favourite for sports fans and movie buffs. Apart from their lack of blur and judder when tracking fast-moving objects and their freedom from wishy-washy greys, they can be viewed from wider angles than LCDs without the picture changing colour. They also produce better three-dimensional images, primarily because they generate less ghosting (double images) when using 3D glasses. But plasmas have lately fallen out of favour because of their bulk and thirst for power. More to the point, manufacturers have begun to fix many of the LCD’s faults.

To lick the motion problem, LCD set-makers have developed special circuitry for estimating and compensating for any rapid movement within a scene. This increases the screen’s frame rate from the 60 hertz of traditional television to 120 hertz and even 240 hertz. A few manufacturers have begun offering sets with refresh rates of up to 480 hertz, with 960 hertz on the horizon.

Unfortunately, the motion-compensating circuitry can make filmed content look like cheap video—a glitch known in the trade as the “soap-opera effect”. The source of the problem is the way film shot at 24 frames a second has to be adapted to the television’s refresh rate of 60, 120 or even 240 frames a second. One way of doing this is to analyse first one frame of film and then the next, and calculate an average of the two. This interpolated frame is inserted between the first and second frames, and the process repeated for each successive frame of the film. The interpolation process is good at removing blur and judder, but it can make motion appear unnaturally smooth and disconcerting. Be warned: 240-hertz sets are the worst offenders.
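The averaging scheme described above can be sketched in a few lines of Python. This is a deliberately naive illustration—real motion-compensating circuitry estimates motion vectors rather than blindly averaging pixels—and the function below, whose names are ours, simply interleaves a per-pixel average between each pair of successive frames:

```python
def interpolate_frames(frames):
    """Insert between each pair of frames their per-pixel average,
    a crude stand-in for motion-compensated frame interpolation."""
    out = []
    for a, b in zip(frames, frames[1:]):
        out.append(a)
        out.append([(x + y) / 2 for x, y in zip(a, b)])  # averaged frame
    out.append(frames[-1])
    return out

# Two tiny three-"pixel" frames; interpolation fills in the midpoint frame
frames = [[0, 0, 0], [10, 20, 30]]
print(interpolate_frames(frames))  # [[0, 0, 0], [5.0, 10.0, 15.0], [10, 20, 30]]
```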

Lastly, there are the LED sets. Manufacturers would have you believe these are a new form of display. They are not. They are simply LCD televisions that use LEDs for backlighting instead of the usual fluorescent tubes. The LEDs can be either along the edges of the screen or spread as an array behind the whole of the display. Edge-lit displays have problems with uniformity of brightness as well as a limited viewing angle.

Screens that use a full array of LED backlights are much better. Apart from giving more uniform brightness, they allow the screen to be dimmed selectively in places where a scene needs to be dark. The effect is to make the LCD’s blacks appear almost as dense as a plasma’s. Only top-of-the-range LCD sets from Sharp and Sony currently have this feature. Expect to pay dearly for it.

So, what to choose? That depends on budget and personal preferences. All things being equal, plasma televisions are about two-thirds the price of their LCD equivalents, which are themselves up to a third cheaper than LED sets. Meanwhile, the premium that 3D sets once commanded has all but vanished. They are now worth buying, not so much for their ability to show 3D content, but because they display 2D even better than conventional plasma or LCD sets (see “Beyond HDTV”, July 28th 2011).

As a sports-loving old-movie addict, your correspondent’s choice is easy. With the help of a brother-in-law in the business, he has ordered a Panasonic Viera TC-R50VT20, a 50-inch plasma set with all the bells and whistles (arigato, Hiroshi-san). He recommends others read the annual ratings for television sets published in Consumer Reports (March 2011), then go to the nearest big-box store and see for themselves. One rule of thumb: manufacturers’ recommended prices average around $36/inch for plasma televisions and $48/inch for LCDs. Discounts in-store and online should lower such prices by at least 20%. Do not settle for less.

How dead is dead?



Sometimes, those who have died seem more alive than those who have not



IN GENERAL, people are pretty good at differentiating between the quick and the dead. Modern medicine, however, has created a third option, the persistent vegetative state. People in such a state have serious brain damage as a result of an accident or stroke. This often means they have no hope of regaining consciousness. Yet because parts of their brains that run activities such as breathing are intact, their vital functions can be sustained indefinitely.
When, if ever, to withdraw medical support from such people, and thus let them die, is always a traumatic decision. It depends in part, though, on how the fully alive view the mental capacities of the vegetative—an area that has not been investigated much.
To fill that gap Kurt Gray of the University of Maryland, and Annie Knickman and Dan Wegner of Harvard University, conducted an experiment designed to ascertain just how people perceive those in a persistent vegetative state. What they found astonished them.

They first asked 201 people stopped in public in New York and New England to answer questions after reading one of three short stories. In all three, a man called David was involved in a car accident and suffered serious injuries. In one, he recovered fully. In another, he died. In the third, his entire brain was destroyed except for one part that kept him breathing. Although he was technically alive, he would never again wake up.

After reading one of these stories, chosen at random, each participant was asked to rate David’s mental capacities, including whether he could influence the outcome of events, know right from wrong, remember incidents from his life, be aware of his environment, possess a personality and have emotions. Participants used a seven-point scale to make these ratings, where 3 indicated that they strongly agreed that he could do such things, 0 indicated that they neither agreed nor disagreed, and -3 indicated that they strongly disagreed.
The results, reported in Cognition, were that the fully recovered David rated an average of 1.77 and the dead David -0.29. That score for the dead David was surprising enough, suggesting as it did a considerable amount of mental acuity in the dead. What was extraordinary, though, was the result for the vegetative David: -1.73. In the view of the average New Yorker or New Englander, the vegetative David was more dead than the version who was dead.
The researchers’ first hypothesis to explain this weird observation was that participants were seeing less mind in the vegetative than in the dead because they were focusing on the inert body of the individual hooked up to a life-support system. To investigate that, they ran a follow-up experiment which had two different descriptions of the dead David. One said he had simply passed away. The other directed the participant’s attention to the corpse. It read, “After being embalmed at the morgue, he was buried in the local cemetery. David now lies in a coffin underground.” No ambiguity there. In this follow-up study participants were also asked to rate how religious they were.
Once again, the vegetative David was seen to have less mind than the David who had “passed away”. This was equally true, regardless of how religious a participant said he was. However, ratings of the dead David’s mind in the story in which his corpse was embalmed and buried varied with the participant’s religiosity.
Irreligious participants gave the buried corpse about the same mental ratings as the vegetative patient (-1.51 and -1.64 respectively). Religious participants, however, continued to ascribe less mind to the irretrievably unconscious David than they did to his buried corpse (-1.57 and 0.59).
That those who believe in an afterlife ascribe mental acuity to the dead is hardly surprising. That those who do not are inclined to do so unless heavily prompted not to is curious indeed.