
Tuesday, January 24, 2017

Dark energy emerges when energy conservation is violated

Cecile G. Tamura
The conservation of energy is one of physicists' most cherished principles, but its violation could resolve a major scientific mystery: why is the expansion of the universe accelerating? That is the eye-catching claim of a group of theorists in France and Mexico, who have worked out that dark energy can take the form of Albert Einstein's cosmological constant by effectively sucking energy out of the cosmos as it expands.
The cosmological constant is a mathematical term describing an anti-gravitational force that Einstein inserted into his equations of general relativity to counteract the mutual attraction of matter within a static universe. Einstein later described it as his "biggest blunder" after it was discovered that the universe is in fact expanding, but the constant returned to favour in the late 1990s following the discovery that the universe's expansion is accelerating.

For many physicists, the cosmological constant is a natural candidate to explain dark energy. Since it is a property of space–time itself, the constant could represent the energy generated by the virtual particles that quantum mechanics dictates continually flit into and out of existence. Unfortunately the theoretical value of this "vacuum energy" is up to a staggering 120 orders of magnitude larger than observations of the universe's expansion imply.
Running total
The latest work, carried out by Alejandro Perez and Thibaut Josset of Aix Marseille University together with Daniel Sudarsky of the National Autonomous University of Mexico, proposes that the cosmological constant is instead the running total of all the non-conserved energy in the history of the universe. The "constant" in fact would vary – increasing when energy flows out of the universe and decreasing when it returns. However, the constant would appear unchanging in our current (low-density) epoch because its rate of change would be proportional to the universe's mass density. In this scheme, vacuum energy does not contribute to the cosmological constant.
The researchers had to look beyond general relativity because, like Newtonian mechanics, it requires energy to be conserved. Strictly speaking, relativity requires the conservation of a multi-component "energy-momentum tensor". That conservation is manifest in the fact that, on very small scales, space–time is flat, even though Einstein's theory tells us that mass distorts the geometry of space–time.
In contrast, most attempts to devise a theory of quantum gravity require space–time to come in discrete grains at the smallest (Planck-length) scales. That graininess opens the door to energy non-conservation. Unfortunately, no fully formed quantum-gravity theory exists yet, and so the trio instead turned to a variant of general relativity known as unimodular gravity, which allows some violation of energy conservation. They found that when they constrained the amount of energy that can be lost from (or gained by) the universe to be consistent with the cosmological principle – on very large scales the process must be both homogeneous and isotropic – the unimodular equations generated a cosmological-constant-like entity.
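Schematically, the bookkeeping behind this idea can be written down in a few lines (a hedged sketch of the usual unimodular-gravity argument, not the authors' detailed calculation; J_b denotes a small energy-momentum non-conservation current):

∇^a G_ab = 0 (Bianchi identity),  ∇^a T_ab = J_b ≠ 0
⇒ ∇_b Λ = (8πG/c^4) J_b
⇒ Λ(x) = Λ_0 + (8πG/c^4) ∫ J_b dx^b

In ordinary general relativity J_b = 0 and Λ really is constant; if tiny violations accumulate, Λ becomes the running total described above, with the exact sign and prefactor depending on conventions.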
Modified quantum mechanics
In the absence of a proper understanding of Planck-scale space–time graininess, the researchers were unable to calculate the exact size of the cosmological constant. Instead, they incorporated the unimodular equations into a couple of phenomenological models that exhibit energy non-conservation. One of these describes how matter might propagate in granular space–time, while the other modifies quantum mechanics to account for the disappearance of superposition states at macroscopic scales.
These models both contain two free parameters, which were adjusted to make the models consistent with null results from experiments that have looked for energy non-conservation in our local universe. Despite this severe constraint, the researchers found that the models generated a cosmological constant of the same order of magnitude as that observed. "We are saying that even though each individual violation of energy conservation is tiny, the accumulated effect of these violations over the very long history of the universe can lead to dark energy and accelerated expansion," Perez says.
In future, he says, it might be possible to subject the new idea to more direct tests, such as observing supernovae very precisely to work out whether the universe's accelerating expansion is driven by a constant or a varying force. The model could also be improved so that it captures dark energy's evolution from just after the Big Bang, with the results then compared against observations of the cosmic microwave background.
If the trio are ultimately proved right, it would not mean physicists having to throw their long-established conservation principles completely out of the window. A variation in the cosmological constant, Perez says, could point to a deeper, more abstract kind of conservation law. "Just as heat is energy stored in the chaotic motion of molecules, the cosmological constant would be 'energy' stored in the dynamics of atoms of space–time," he explains. "This energy would only appear to be lost if space–time is assumed to be smooth."
Fanciful yet viable
Other physicists are cautiously supportive of the new work. George Ellis of the University of Cape Town in South Africa describes the research as "no more fanciful than many other ideas being explored in theoretical physics at present". The fact that the models predict energy to be "effectively conserved on solar-system scales" – a crucial check, he says – makes the proposal "viable" in his view.
Lee Smolin of the Perimeter Institute for Theoretical Physics in Canada, meanwhile, praises the researchers for their "fresh new idea", which he describes as "speculative, but in the best way". He says that the proposal is "probably wrong", but that if it's right "it is revolutionary".
http://www.phy.olemiss.edu/~luca/Topics/u/unimodular.html
http://journals.aps.org/…/ab…/10.1103/PhysRevLett.118.021102
http://physicsworld.com/…/dark-energy-emerges-when-energy-c…
https://en.wikipedia.org/wiki/Dark_energy
https://en.wikipedia.org/wiki/Cosmological_constant
http://www.sciencemag.org/…/simple-explanation-mysterious-s…
https://arxiv.org/pdf/1604.04183v3.pdf
https://www.nasa.gov/chand…/news/mysterious-xray-signal.html

Thursday, December 29, 2016

NASA's Kepler Mission Rewrites Drake's Equation --"Humans Not the First Technological Civilization in the Universe"



Cecile G. Tamura
"The question of whether advanced civilizations exist elsewhere in the universe has always been vexed with three large uncertainties in the Drake equation," said Adam Frank, professor of physics and astronomy at the University of Rochester. "We've known for a long time approximately how many stars exist. We didn't know how many of those stars had planets that could potentially harbor life, how often life might evolve and lead to intelligent beings, and how long any civilizations might last before becoming extinct."
As Frank puts it, "We don't even know if it's possible to have a high-tech civilization that lasts more than a few centuries." With Frank and Sullivan's new result, scientists can begin using everything they know about planets and climate to model the interactions of an energy-intensive species with its home world, knowing that a large sample of such cases has already existed in the cosmos.
"Our results imply that our biological, and cultural evolution has not been unique and has probably happened many times before. The other cases are likely to include many energy intensive civilizations dealing with crises on their planets as their civilizations grow. That means we can begin exploring the problem using simulations to get a sense of what leads to long lived civilizations and what doesn't."
A new study shows that the recent discoveries of exoplanets, combined with a broader approach to the question, make it possible to assign a new empirically valid probability to whether any other advanced technological civilizations have ever existed. And it shows that unless the odds of advanced life evolving on a habitable planet are astonishingly low, humankind is not the universe's first technological, or advanced, civilization.
The paper, published in Astrobiology, also shows for the first time just what "pessimism" or "optimism" mean when it comes to estimating the likelihood of advanced extraterrestrial life.

"Thanks to NASA's Kepler satellite and other searches, we now know that roughly one-fifth of stars have planets in 'habitable zones,' where temperatures could support life as we know it. So one of the three big uncertainties has now been constrained."
Frank said that Drake's third big question--how long civilizations might survive--is still completely unknown. "The fact that humans have had rudimentary technology for roughly ten thousand years doesn't really tell us if other societies would last that long or perhaps much longer," he explained.
The Drake equation and the new Frank–Sullivan equation are shown below. In 1961, astrophysicist Frank Drake developed an equation to estimate the number of advanced civilizations likely to exist in the Milky Way galaxy.
The Drake equation has proven to be a durable framework for research, and space technology has advanced scientists' knowledge of several of its variables. But it is impossible to do anything more than guess at variables such as L, the probable longevity of other advanced civilizations.
In their new research, Frank and Woodruff Sullivan offer a new equation to address a slightly different question: what is the number of advanced civilizations likely to have developed over the history of the observable universe? Frank and Sullivan's equation draws on Drake's, but eliminates the need for L.
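For reference, since the original illustration of the two equations is not reproduced here, they can be written roughly as follows (standard notation; f_bt is the combined "biotechnical" probability used by Frank and Sullivan):

Drake (civilizations in the galaxy now):
N = R* · f_p · n_e · f_l · f_i · f_c · L

Frank–Sullivan (technological species ever, in a chosen volume):
A = N* · f_p · n_p · f_bt,  with  f_bt = f_l · f_i · f_t

Here R* is the star-formation rate, N* the total number of stars in the volume, f_p the fraction of stars with planets, n_e (or n_p) the number of potentially habitable planets per star, and the remaining f-factors the probabilities that life, intelligence and technology arise; note that the civilization lifetime L has dropped out of the second form.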
"Rather than asking how many civilizations may exist now, we ask 'Are we the only technological species that has ever arisen?': said Sullivan. "This shifted focus eliminates the uncertainty of the civilization lifetime question and allows us to address what we call the 'cosmic archaeological question' -- how often in the history of the universe has life evolved to an advanced state?"
That still leaves huge uncertainties in calculating the probability for advanced life to evolve on habitable planets. It's here that Frank and Sullivan flip the question around. Rather than guessing at the odds of advanced life developing, they calculate the odds against it occurring in order for humanity to be the only advanced civilization in the entire history of the observable universe.

With that, Frank and Sullivan then calculated the line between a Universe where humanity has been the sole experiment in civilization and one where others have come before us.
"Of course, we have no idea how likely it is that an intelligent technological species will evolve on a given habitable planet," says Frank. But using our method we can tell exactly how low that probability would have to be for us to be the ONLY civilization the Universe has produced. We call that the pessimism line. If the actual probability is greater than the pessimism line, then a technological species and civilization has likely happened before."
Using this approach, Frank and Sullivan calculate how unlikely advanced life must be if there has never been another example among the universe's twenty billion trillion stars, or even among our own Milky Way galaxy's hundred billion.
The result? By applying the new exoplanet data to the universe as a whole, Frank and Sullivan find that human civilization is likely to be unique in the cosmos only if the odds of a civilization developing on a habitable planet are less than about one in 10 billion trillion, or one part in 10 to the 22nd power.
"One in 10 billion trillion is incredibly small," says Frank "To me, this implies that other intelligent, technology producing species very likely have evolved before us. Think of it this way. Before our result you'd be considered a pessimist if you imagined the probability of evolving a civilization on a habitable planet were, say, one in a trillion. But even that guess, one chance in a trillion, implies that what has happened here on Earth with humanity has in fact happened about a 10 billion other times over cosmic history!"
For smaller volumes the numbers are less extreme. For example, another technological species likely has evolved on a habitable planet in our own Milky Way galaxy if the odds against it evolving on any one habitable planet are better than one chance in 60 billion.
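The arithmetic behind these thresholds is straightforward to reproduce. Here is a minimal sketch (the star counts and the one-fifth habitable-zone fraction are the round numbers quoted above; the paper's own inputs differ slightly, which is why the Milky Way figure below comes out at roughly 20 billion rather than the quoted 60 billion):

# Round numbers quoted in the article (order-of-magnitude only).
STARS_UNIVERSE = 2e22      # "twenty billion trillion" stars in the observable universe
STARS_MILKY_WAY = 1e11     # "hundred billion" stars in our galaxy
HABITABLE_FRACTION = 0.2   # roughly one-fifth of stars with a habitable-zone planet

def civilizations_ever(n_stars, prob_per_habitable_planet):
    """Expected number of technological species ever, for a given per-planet probability."""
    return n_stars * HABITABLE_FRACTION * prob_per_habitable_planet

# The "pessimism line": the per-planet probability at which we would expect just one
# technological species (us) in the whole observable universe.
pessimism_line = 1.0 / (STARS_UNIVERSE * HABITABLE_FRACTION)
print(f"universe pessimism line: 1 in {1 / pessimism_line:.0e}")                          # ~1 in 4e21, of order 10^22

# Even a "pessimistic" guess of one chance in a trillion implies billions of prior cases.
print(f"cases at 1-in-a-trillion odds: {civilizations_ever(STARS_UNIVERSE, 1e-12):.0e}")  # ~4e9, i.e. roughly 10 billion

# The same threshold calculation for the Milky Way alone.
print(f"Milky Way pessimism line: 1 in {STARS_MILKY_WAY * HABITABLE_FRACTION:.0e}")       # ~2e10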
But if those numbers seem to give ammunition to the "optimists" about the existence of alien civilizations, Sullivan points out that the full Drake equation -- which calculates the odds that other civilizations are around today -- may give solace to the pessimists.
"The universe is more than 13 billion years old," said Sullivan. "That means that even if there have been a thousand civilizations in our own galaxy, if they live only as long as we have been around -- roughly ten thousand years -- then all of them are likely already extinct. And others won't evolve until we are long gone. For us to have much chance of success in finding another "contemporary" active technological civilization, on average they must last much longer than our present lifetime."
"Given the vast distances between stars and the fixed speed of light we might never really be able to have a conversation with another civilization anyway," said Frank. "If they were 50,000 light years away then every exchange would take 100,000 years to go back and forth."
But, as Frank and Sullivan point out, even if there aren't other civilizations in our galaxy to communicate with now, the new result still has a profound scientific and philosophical importance. "From a fundamental perspective the question is 'has it ever happened anywhere before?'" said Frank. "And it is astonishingly likely that we are not the only time and place that an advanced civilization has evolved."
According to Frank and Sullivan their result has a practical application as well. As humanity faces its crisis in sustainability and climate change we can wonder if other civilization-building species on other planets have gone through a similar bottleneck and made it to the other side.
https://www.rochester.edu/news/are-we-alone-in-the-universe/
http://www.seti.org/node/993
http://www.dailygalaxy.com/…/nasas-kepler-mission-discoveri

Thursday, December 15, 2016

Max Born, German Physicist

Cecile G. Tamura

Max Born, (born Dec. 11, 1882, Breslau, Ger. [now Wrocław, Pol.]—died Jan. 5, 1970, Göttingen, W.Ger.) German physicist who shared the Nobel Prize for Physics in 1954 with Walther Bothe for his probabilistic interpretation of quantum mechanics.


Born came from an upper-middle-class, assimilated, Jewish family. At first he was considered too frail to attend public school, so he was tutored at home before being allowed to attend the König Wilhelm Gymnasium in Breslau. Thereafter he continued his studies in physics and mathematics at universities in Breslau, Heidelberg, Zürich, and Göttingen. At the University of Göttingen he wrote his dissertation (1906), on the stability of elastic wires and tapes, under the direction of the mathematician Felix Klein, for which he was awarded a doctorate in 1907.

After brief service in the army and a stay at the University of Cambridge, where he worked with physicists Joseph Larmor and J.J. Thomson, Born returned to Breslau for the academic year 1908–09 and began an extensive study of Albert Einstein’s theory of special relativity. On the strength of his papers in this field, Born was invited back to Göttingen as an assistant to the mathematical physicist Hermann Minkowski. In 1912 Born met Hedwig Ehrenberg, whom he married a year later. Three children, two girls and a boy, were born from the union. It was a troubled relationship, and Born and his wife often lived apart.

In 1915 Born accepted a professorship to assist physicist Max Planck at the University of Berlin, but World War I intervened and he was drafted into the German army. Nonetheless, while an officer in the army, he found time to publish his first book, Dynamik der Kristallgitter (1915; Dynamics of Crystal Lattices).
In 1919 Born was appointed to a full professorship at the University of Frankfurt am Main, and in 1921 he accepted the position of professor of theoretical physics at the University of Göttingen. James Franck had been appointed professor of experimental physics at Göttingen the previous year. The two of them made the University of Göttingen one of the most important centres for the study of atomic and molecular phenomena. A measure of Born’s influence can be gauged by the students and assistants who came to work with him—among them, Wolfgang Pauli, Werner Heisenberg, Pascual Jordan, Enrico Fermi, Fritz London, P.A.M. Dirac, Victor Weisskopf, J. Robert Oppenheimer, Walter Heitler, and Maria Goeppert-Mayer.

The Göttingen years were Born’s most creative and seminal. In 1912 Born and Hungarian engineer Theodore von Karman formulated the dynamics of a crystal lattice, which incorporated the symmetry properties of the lattice, allowed the imposition of quantum rules, and permitted thermal properties of the crystal to be calculated. This work was elaborated when Born was in Göttingen, and it formed the basis of the modern theory of lattice dynamics.
 

In 1925 Heisenberg gave Born a copy of the manuscript of his first paper on quantum mechanics, and Born immediately recognized that the mathematical entities with which Heisenberg had represented the observable physical quantities of a particle—such as its position, momentum, and energy—were matrices. Joined by Heisenberg and Jordan, Born formulated all the essential aspects of quantum mechanics in its matrix version. A short time later, Erwin Schrödinger formulated a version of quantum mechanics based on his wave equation. It was soon proved that the two formulations were mathematically equivalent. What remained unclear was the meaning of the wave function that appeared in Schrödinger’s equation. In 1926 Born submitted two papers in which he formulated the quantum mechanical description of collision processes and found that in the case of the scattering of a particle by a potential, the wave function at a particular spatiotemporal location should be interpreted as the probability amplitude of finding the particle at that specific space-time point. In 1954 he was awarded the Nobel Prize for this work.
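In modern notation this is the Born rule: for a particle described by a wave function ψ(x, t), the probability of finding it in a small volume dV around the point x at time t is

P(x, t) dV = |ψ(x, t)|² dV

so it is the squared magnitude of the wave function, rather than the wave function itself, that carries the physically observable probability.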

Born remained at Göttingen until April 1933, when all Jews were dismissed from their academic posts in Germany. Born and his family went to England, where he accepted a temporary lectureship at Cambridge. In 1936 he was appointed Tait Professor of Natural Philosophy at the University of Edinburgh. He became a British citizen in 1939 and remained at Edinburgh until his retirement in 1953. The next year, he and his wife moved to Bad Pyrmont, a small spa town near Göttingen.
https://www.britannica.com/biography/Max-Born
https://hammeringshield.wordpress.com/…/max-born-albert-ei…/
https://www.researchgate.net/…/232697459_Max_Born_Albert_Ei…
http://www.informationphilosopher.com/solutio…/scientists/…/
https://arxiv.org/abs/1210.6929
https://archive.org/…/…/Born-TheBornEinsteinLetters_djvu.txt
http://www.preposterousuniverse.com/…/why-probability-in-q…/
https://arxiv.org/abs/0806.4935





Friday, December 2, 2016

Vacuum tubes (old technology that could make electronics a whole lot faster, without semiconductors)



Researchers are repurposing decades-old technology to build faster gadgets for the future, creating nanoscale vacuum tubes that could dramatically improve the speed and efficiency of personal electronics and solar panels.
Vacuum tubes were originally used in the earliest digital electronic computers back in the 1930s and 1940s, before being replaced by transistors made from semiconductors, which can be manufactured much smaller, making today's computers, smartphones, and tablets possible.

But transistors have their limits in size and speed too, and we're getting closer than ever to reaching them. Now scientists from UC San Diego have gone back to the vacuum tube idea - and this time they've made them at tiny sizes and with far more efficient technology.
"This certainly won’t replace all semiconductor devices, but it may be the best approach for certain specialty applications, such as very high frequencies or high power devices," says lead researcher and electrical engineer, Dan Sievenpiper.
While transistors remain one of the most important inventions of the 20th century - and much smaller and more energy-efficient than the original vacuum tubes - scientists are now struggling to make them any tinier or more powerful than they already are.
What's more, electron flow through transistor semiconductor materials like silicon is slowed as electrons collide with atoms, and semiconductors also have what's called a band gap - where a boost of external energy is needed to get electrons moving.
The main advantage the new nanoscale vacuum tubes have over semiconductor-based transistors is that they carry currents through air, rather than through a solid material, and could be much faster as a result.
The vacuum tube design (left), electric field enhancement (middle), and electric field distribution (right) of the new nanoscale structure. Credit: UC San Diego   
Freeing up electrons to carry currents through the air normally takes a large voltage or a powerful laser, both of which are difficult to do at the nanoscale, and which hampered the progress of early vacuum tubes.
To solve this problem, the team created a layer of special mushroom-style structures made of gold - known as an electromagnetic metasurface - and placed it on top of a layer of silicon dioxide and a silicon wafer.
When a low-powered voltage (less than 10 volts) and a low-powered laser are applied to this metasurface, it creates 'hot spots' with high-intensity electric fields, giving the structure enough energy to free the electrons from the metal.
In testing, this enabled the researchers to achieve a 1,000 percent (or 10-fold) increase in conductivity compared with nanoscale vacuum tubes without the metasurface. 
Right now, it's just a proof-of-concept demonstration, and there's a lot more work to be done to make the system practical for use in actual devices. But in the future, different metasurfaces could be designed to meet specific needs, such as new kinds of solar panels, the researchers suggest.
"Next we need to understand how far these devices can be scaled and the limits of their performance," says Sievenpiper.
Thanks: fossbytes.com and sciencealert.com
 


Wednesday, November 23, 2016

A new type of atomic bond has been discovered


An electron corrals a nearby atom closer, creating a supersized molecule – for a fraction of a second
Flitting chemical bond makes giant butterfly molecules
It forms a whole new class of molecules.

Physicists built a new, supersized molecule made of atoms held together by a far-roaming electron – like a flock of sheep being herded by a sheepdog.
Reporting in Nature Communications, the team from Germany and America created fleeting “butterfly” Rydberg molecules they predicted on paper 14 years ago – and which could find a place in quantum computers.
The new kind of molecule is bound by a lone electron ranging extremely far from its nucleus and whizzing around another atom, herding it close like a sheepdog does a stray sheep.
“It's a whole new way an atom can be bound by another atom," says Chris Greene a physicist at Purdue University, who co-authored the research.
Back in 1888, when most scientists didn’t believe in atoms, Swedish physicist Johannes Rydberg found a formula that reproduced colours of light emitted by different chemical elements.
Some 25 years later, Danish physicist Niels Bohr built on Rydberg’s ideas when he described the ‘solar system’ model of the atom, with the nucleus at the centre orbited by electrons.
One of Bohr’s central ideas was that if you give an electron a kick of energy, you can promote it to a higher energy level, meaning it orbits further, on average, from the nucleus.
Rydberg atoms are extreme examples of this. The outermost electron, promoted to an extremely high energy, can roam up to 1,000 times further from the nucleus than normal.
Rydberg atoms are also atomic monstrosities. They can be up to a millionth of a metre in diameter. That might seem small, but it’s about the size of an Escherichia coli bacterium, which is built from about 90 billion regular atoms.
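That size follows from the standard scaling of hydrogen-like orbits, in which the outer electron's orbital radius grows as the square of the principal quantum number n. A quick numerical sketch (the value n = 100 is an illustrative assumption, not a number taken from the paper):

# Rough size of a Rydberg atom: orbital radius ~ n^2 * Bohr radius.
BOHR_RADIUS_M = 5.29e-11   # a0 in metres

def rydberg_radius(n):
    """Approximate orbital radius of a hydrogen-like state with principal quantum number n."""
    return n**2 * BOHR_RADIUS_M

print(rydberg_radius(10))    # ~5e-9 m: already tens of times bigger than an ordinary atom
print(rydberg_radius(100))   # ~5e-7 m: about half a micrometre, comparable to an E. coli cell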

In 2002, Greene and his team predicted that the free-ranging electron of a Rydberg atom might be used to form a new kind of chemical bond.
They worked out the shape of the atomic orbitals (describing the probability of finding an electron at a particular position around the nucleus) and found it looked like a butterfly – hence the name.
Now they’ve made one.

Since the molecule would be bound by only the "tiniest conceivable" force, Greene knew their only hope lay in using ultracold, almost motionless atoms. His team used rubidium, an element chosen for cold-atom experiments because it's easy to manipulate with lasers.
Greene’s team cooled rubidium gas to just 10 millionths of a degree above absolute zero. Using a laser, they gave an electron a kick of energy, knocking it from its usual orbit out into a super-excited state and creating a Rydberg atom.

They then used the laser again to corral another rubidium atom to just the right distance nearby. That's when the excited electron took over.
“This electron is like a sheepdog,” says Greene. This herding creates a tiny force of attraction holding the two atoms together in the very fragile butterfly state.
Though the molecule lasted only about five millionths of a second, it was long enough to study.
The butterfly state caused changes in the frequency of light that the Rydberg molecule absorbed. By detecting these changes, the team could measure the energy of binding between the two atoms.
This is not the first kind of Rydberg molecule created. Back in 2007, scientists managed to coax two Rydberg atoms together, each with a herding electron, to form a molecule that looked a little like an extinct marine animal called a trilobite.
The butterfly Rydberg is different because only one atom needs to be in a super-excited state. The other is passively herded.
From a practical point of view, Rydberg molecules have a very high electric dipole moment (in essence, the separation of charge within the molecule) coming from the large distance between the negative electron and positive nucleus.
This means they can be moved around with electric fields 100 times weaker than those needed for regular atoms – useful for setting up the long-range interactions between atoms needed for quantum computing.
For now, Greene plans to see if the ranging electron can herd more than one atom.
https://en.wikipedia.org/wiki/Rydberg_molecule
http://iopscience.iop.org/article/10.1088/0953-4075/…/10/102
http://www.purdue.edu/…/weak-atomic-bond,-theorized-14-year…
http://journals.aps.org/…/ab…/10.1103/PhysRevLett.104.010502
https://en.wikipedia.org/wiki/Rubidium

https://cosmosmagazine.com/…/new-kind-of-chemical-bond-make…
http://www.nature.com/…/jo…/v458/n7241/full/nature07945.html
http://www.nature.com/articles/ncomms12820
http://www.telegraph.co.uk/…/A-giant-molecule-stuns-the-sci…

This computer-generated image is of a strange molecule that has shocked chemists. It is as big as a bacterium and should exist in the real world according to research.


Around 200,000,000,000,000,000 conventional atoms would fit on the full stop at the end of this sentence. They are mostly empty space - the positively charged nucleus, where most mass resides, is 100,000 times smaller than the overall atom, which is a mist of negative charge, consisting of one or more electrons.

But the molecule shown here, consisting of only two atoms, is enormous - about one millionth of a metre across, about the same size as an E. coli bacterium.

The predictions that these fragile giants should exist have been published in the Journal of Physics by Edward Hamilton and Prof Chris Greene of the University of Colorado, with Dr Hossein Sadeghpour of the Harvard-Smithsonian Centre for Astrophysics in Cambridge, near Boston.


These are called "butterfly Rydberg states", where butterfly refers to the shape and state refers to the way electrons are distributed around an atom or molecule. Rydberg acknowledges pioneering work in the late 1800s by Johannes Rydberg that helped in the development of quantum mechanics.


This image shows the likelihood of finding an electron in orbit around the molecule (the peaks correspond to where it is most likely to be), calculated by the most successful theory in science, quantum mechanics.

Two years ago, Prof Greene and colleagues, including Prof Alan Dickinson of the University of Newcastle, found a novel and bizarre class of molecular states involving electron motion far more complicated than previously thought. "They showed an uncanny resemblance to a trilobite, and for this reason they were dubbed trilobite states," he said.

Now the team has found a related but different butterfly Rydberg state, which once again is vast compared with conventional atoms and molecules.

Although the practical importance of this work is unclear, the finding has caused a buzz among scientists.


"The main excitement about this work in the atomic and molecular physics community has related to the fact that these huge molecules should exist and be observable, and that their electron density should exhibit amazingly rich, quantum mechanical peaks and valleys," said Prof Greene.


At least one well-known chemist has told Prof Greene that he was shocked by the work because he had thought that everything was known about the simplest molecules that consist of two atoms.


The giant molecules, which are extremely tenuous, have not yet been seen in a laboratory, but a team at the University of Connecticut is now looking for them.


Credit: Vienna University of Technology

Monday, November 14, 2016

Electron Spins Talk to Each Other Via a 'Quantum Mediator'

 Cecile G. Tamura

In the esoteric world of quantum computing research, it is relatively easy to get two bits of quantum information to communicate with one another—as long as they are neighbors. Separate them, however, and they can no longer exchange information.
Thanks to a clever workaround from Lieven Vandersypen, Ph.D. student Tim Baart, and postdoc Takafumi Fujita, we now have a way to overcome this problem. They hope to use it to make quantum computers more flexible by improving their ability to exchange information over longer distances.
One way quantum computers store information is through electron spin of quantum dots. An “up” spin would be zero; a “down” spin would be one. They communicate spin information when the electrons are next to one another.
The researchers then added an empty quantum dot between the two occupied quantum dots. Lowering the energy barrier of the empty dot enables the occupied dot to send its spin information into the empty dot. The empty dot can then transmit it to the second occupied dot.
The researchers can turn the interaction on and off at will. This could make it possible to transmit information over longer distances in computers by using strings of empty dots.
The unparalleled possibilities of quantum computers are currently still limited because information exchange between the bits in such computers is difficult, especially over larger distances. Lieven Vandersypen, Professor at QuTech and workgroup leader at the Dutch Organization for Fundamental Research on Matter (FOM), and his colleagues have now succeeded for the first time in enabling two non-neighbouring quantum bits, in the form of electron spins in semiconductors, to communicate with each other.
Information exchange is something that we scarcely think about these days. People constantly communicate via e-mails, mobile messaging applications and phone calls. Technically, it is the bits in those various devices that talk to each other. “For a normal computer, this poses absolutely no problem,” says professor Lieven Vandersypen, Co-Director of the Kavli Institute of Nanotechnology at TU Delft. “However, for the quantum computer – which is potentially much faster than the current computers – that information exchange between quantum bits is very complex, especially over long distances.”
Electrons talk with each other
Within Vandersypen's research group, PhD student Tim Baart and postdoc Takafumi Fujita worked on the communication between quantum bits. Each bit consists of a single electron with a spin direction (spin up = ‘0’ and spin down = ‘1’). “From previous research, we knew that two neighbouring electron spins can interact with each other, but that this interaction sharply decreases with increasing distance between them,” says Baart. “ We have now managed to make two non-neighbouring electrons communicate with each other for the first time. To achieve this, we used a quantum mediator: an object that can exchange the information between the two spins over a larger distance.”
Mediator
Image: the chip with the electrical contacts used to create the quantum dots. (Source: Tim Baart)
Baart and Fujita positioned the electrons in so-called quantum dots, where they were held in position by an electrical field. Between the two occupied quantum dots, they positioned an empty quantum dot that could form an energy barrier between the two spins. “By adjusting the electrical field around the empty quantum dot, we could enable the electrons to exchange their spin information via the superexchange mechanism: when the energy barrier is lowered, the spin information is exchanged,” says Baart. “This makes the empty quantum dot act as a type of mediator to make the interaction between the quantum bits possible. Furthermore, we can switch this interaction on and off at will.”
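As a rough illustration of the switchable interaction (a toy two-spin model, not the actual three-dot superexchange Hamiltonian used in the Delft experiment): if a Heisenberg exchange coupling J is switched on between two spins prepared as "up, down" and left on for a time t = π/J (with ħ = 1), the two spin states swap, which is the kind of controlled spin-information transfer described above.

import numpy as np
from scipy.linalg import expm

# Pauli matrices
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

def exchange_hamiltonian(J):
    """Heisenberg exchange J * S1.S2 for two spin-1/2 particles (hbar = 1)."""
    return (J / 4.0) * (np.kron(sx, sx) + np.kron(sy, sy) + np.kron(sz, sz))

up = np.array([1, 0], dtype=complex)
down = np.array([0, 1], dtype=complex)
up_down = np.kron(up, down)   # spin 1 'up' (logical 0), spin 2 'down' (logical 1)
down_up = np.kron(down, up)   # the swapped configuration

J = 1.0                       # exchange strength while the barrier is lowered (arbitrary units)
t_swap = np.pi / J            # keeping the coupling on this long performs a full spin swap

U = expm(-1j * exchange_hamiltonian(J) * t_swap)
final = U @ up_down

print(abs(np.vdot(down_up, final))**2)   # ~1.0: the spin information has moved to the other spin
print(abs(np.vdot(up_down, final))**2)   # ~0.0: and has left the first one

Switching the coupling off again (J = 0) simply freezes whatever state the spins are in, which is the on/off control mentioned in the text.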
Fast quantum computer
The research of Vandersypen and Baart forms an important step in the construction of a larger quantum computer in which the communication between quantum bits over large distances is essential. Now that the concept of this quantum mediator has been demonstrated in practice, the researchers want to increase the distance between electron spins and place other types of ‘mediators’ between the quantum bits as well.
https://en.wikipedia.org/wiki/Quantum_dot
http://www.tudelft.nl/…/deta…/onderhandelen-met-quantumdots/
http://www.trustedreviews.com/…/quantum-dots-explained-what…

Seven innovations that could shape the future of computing

In-memory computing
Graphene-based microchips
Quantum computing
Molecular electronics
DNA data storage
Neuromorphic computing
Passive Wi-Fi

Saturday, November 5, 2016

Can you accelerate to the speed of light?


This one is easy to answer; we pretty much agree that you can't. But when I think about the reasons often given for this barrier, I find a flaw in the reasoning routinely offered. The same kind of error was once made by Einstein himself, who later corrected it; the error is called 'frame switching', that is, attributing to one frame a measurement made by another.
I’m not going to dwell on the incorrect solution but move right along to the correct one.
Let us employ the popular twins of The Twin’s Paradox fame. The twins initially share the same inertial frame and then one accelerates away.
We know from Special Relativity that the mass of the accelerated twin increases with his speed relative to the stay-at-home twin. If the unaccelerated twin is providing the power to push the accelerated twin, then the energy required will not increase linearly with increasing target speed but will rise much more quickly as the inertia of the moving twin increases, eventually rising to infinity.
So we know that we can not accelerate an object to the speed of light that way.
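In standard special-relativity notation (a textbook expression rather than anything specific to this argument), the kinetic energy the unaccelerated twin must supply is

KE = (γ − 1) m c²,  with  γ = 1 / √(1 − v²/c²)

which grows without bound as v approaches c; that is the "rising to infinity" referred to above.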
Now we consider the other case, rockets on the accelerated twin's vehicle. We note that each twin calculates the same speed difference between them. Assume that the accelerated twin accelerates in bursts and then coasts. Each time the accelerated twin coasts, he can count himself stationary and the other twin to be the one that is moving. It takes no more energy to accelerate by 1,000 km/h from any speed the twin attains, because at any speed he can count himself stationary, and there is nothing in Special Relativity or physics that can dispute this. Thus adding another 1,000 km/h never takes any more energy at any speed; that is, it takes no more energy to accelerate from stationary (at rest with the other twin) than from 280,000 km/s.
What stops the twin from reaching the speed of light is not some barrier of any kind, but the problem that he would run out of universe to accelerate in. At the speed of light the distance between any points in the direction of travel falls to zero and the interval to travel that distance falls to zero, so there simply isn’t any more universe left, even an infinite universe would not solve the problem.
We may try to argue the addition of velocities but that only applies if an object leaves our accelerated twin’s rocket at some speed in the direction of travel and only by the measure of the unaccelerated twin. As far as the accelerated twin is concerned, he is at rest before each acceleration step…
What the accelerated twin would notice is that the universe seems to be getting ever shorter in the direction of his travel, so there is some indication of motion by that measure. Eventually, the whole universe would appear to be compacted to a point, so no further acceleration is possible for that reason.
I have seen it written that as the accelerated twin gains mass it becomes increasingly difficult for that twin to accelerate. This is not the case. It becomes more difficult for the unaccelerated twin to push the accelerated twin (because he measures the increase of mass; the accelerating twin does not), but not for the accelerating twin, who can accelerate forever, or until he runs out of universe, which will occur before he reaches the speed of light.
The key points are:
1) The accelerated twin’s inertial frame has the same mass, length, and temporal frequency (clock rate) regardless of his speed (Einstein said that all the physics remains the same);
2) At the speed of light the accelerated twin would literally leave the universe for any measurable interval on his clock.
Note also that the energy of the rocket for each acceleration will be measured differently by the two twins. The unaccelerated twin will measure each ‘burn’ as taking ever longer as the speed increases but temperature and pressure decrease (power decreases, thrust decreases), thus he measures the same energy (power * interval) for each burn.
Note also that if the amount of energy required to accelerate changed with your speed then, using a fine enough measurement, you could determine when you were absolutely stationary (as this would require the minimum energy for acceleration), thus establishing a preferred or absolutely stationary inertial frame, in violation of the most basic principle of relativity theory: there is no preferred or absolutely stationary inertial frame (from which all other frames could be compared or measured).

To many, the speed of light being the ultimate speed limit is a fundamental law of physics. Albert Einstein believed that particles could never travel faster than the speed of light, and that doing so would constitute time travel. For those who regret something in their past this is potentially an interesting question. For those who have led a completely perfect life, it is an interesting piece of knowledge to have nonetheless. There are also other reasons to research this question. The closest star, other than our Sun, is Proxima Centauri, about twenty-five trillion miles away, which would take over ten thousand years to reach with the fastest spaceship we have today. Therefore, if we ever wish to truly explore our universe, we must start to explore the limitations of our transport.
To answer this question I must first define a few parameters. As the question stands it is rather ambiguous. I could say yes, but then slow light down before conducting my 'race', and then show that the particle did indeed travel faster than the light did. There are many ways in which this can be achieved; for example, the Danish physicist Lene Vestergaard Hau was able to slow light down to 17 m/s, roughly thirty-eight miles per hour, enabling my car to travel faster than light. Her team achieved this by cooling a Bose-Einstein condensate of atoms to a fraction of a degree above absolute zero before passing the beam of light through it. Obviously this is cheating: using the wave-particle duality of light to think of light as a particle, we know that the light is still travelling at the speed of light in a vacuum; it just has further to travel, as it interacts with all the particles in the material. That is, the light has had the distance it needs to travel extended, in much the same way as the resistance of a metal increases when it is heated, as the electrons collide more often with the ions in the metal and therefore take longer to travel through it. Therefore it is only fair to define the speed of light as the speed of light in a vacuum, known as c, which is 3.00×10^8 m/s to three significant figures. We commonly slow light down in everyday life, when it bends through glass for example, but does this mean we can speed it up? Can we make light travel faster than light?
This is known as superluminal propagation and is the first faster-than-light example I would like to talk about. It seems possible to send pulses of light faster than c over small distances, however interpreting these results has been difficult because the light pulses always get distorted in the process. In 2000, Mugnai reported the propagation of microwaves over quite large distances, tens of centimetres, at speeds 7% faster than c. Impressive as this is, research by Wang has shown a much larger superluminal effect for pulses of visible light, in which the light is travelling so fast the pulse exits the medium before it enters it. The observed group velocity has been calculated as −c/310. The easiest way to understand this negative velocity is to interpret it as meaning the energy of the wave moved a positive distance over a negative time. In other words, the pulse emerged from the medium before entering it. Although light can be considered as a particle, the photon has no mass, so this is not a true example of a particle travelling faster than c.
The second thing I will talk about is special relativity and its consequence, time dilation. The theory of special relativity states that the speed of light is constant in all reference frames. As a result the speed of light cannot be altered: if I were driving a car at half the speed of light and I turned on the headlights, the speed of light would still be measured as c, rather than 1.5c, as might seem logical. This is perhaps counterintuitive, as with much smaller speeds we would add velocities; for example, driving a car at 50 mph and throwing a stone at 20 mph in the same direction would result in the stone having a velocity of 70 mph in that direction. To illustrate time dilation I will show the effect of moving at high speeds on two clocks.
[Figure: a light clock at rest (left) and an identical clock moving at close to light speed (right).]
In this situation the clock works by bouncing light between the two mirrors A and B; this represents one tick, so for the sake of argument I will say that it takes one second for the light to travel the full distance between the two mirrors, or 2L. Both images depict the same clock, however the one on the right is moving close to light speed in relation to the other. The speed of light must be c in both frames, however as you can see the light has further to travel in the moving clock. As 2D is longer than 2L, the moving clock appears to be running more slowly from the frame of the stationary clock. Using these two examples it is possible to derive the change in time and thus the time dilation. The clock at rest gives us the simple equation Δt₀ = 2L/c, and viewed from the stationary frame the moving clock's tick takes Δt = 2D/c. Using Pythagoras' theorem we can show that D = √(L² + (vΔt/2)²); this can then be substituted back into the equation and rearranged for Δt, finally giving
Δt = Δt₀ / √(1 − v²/c²).
This expresses the fact that time slows down the faster you go. The main point of this is that it is possible to travel a distance of one light year in less than a year of experienced time. This statement needs explaining, however, because for a stationary onlooker, say a person on Earth, the journey will appear to take much longer than a year; it is the traveller who will have experienced less than a year. This is common at CERN, where short-lived particles such as muons, with a mean lifetime of 2.2 microseconds, are accelerated to speeds at which their observed lifetime is hugely extended. As we have no stationary point from which to measure the universe, due to the Earth's constant movement, we will never have a standard time. A clock on Earth synchronized with one on a distant planet light-years away will not stay in sync for very long; the mass of each planet and its velocity determine the experience of time. From here I can crudely argue that if there will never be a standard time, the one that matters most is our own. Having said that, I have not proved that it would be faster than light; this is a slightly unscientific argument, but it is interesting to say that if there were no other people in the world to offer an alternative view, I would say that I can travel faster than light.
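A quick numerical check of the formula derived above (the muon speed of 0.995c used here is an illustrative assumption; the 2.2-microsecond mean lifetime is the figure quoted in the text):

import math

C = 299_792_458.0   # speed of light in m/s

def gamma(v):
    """Lorentz factor for speed v (in m/s)."""
    return 1.0 / math.sqrt(1.0 - (v / C)**2)

# Everyday speeds: time dilation is utterly negligible.
print(gamma(300.0))                    # ~1.0000000000005 for an airliner-like 300 m/s

# A muon moving at 99.5% of the speed of light (illustrative value):
muon_lifetime_s = 2.2e-6
v = 0.995 * C
print(gamma(v))                        # ~10
print(gamma(v) * muon_lifetime_s)      # ~2.2e-5 s: the lab sees the muon live about ten times longer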
My next example involves space-time distortion. After the Big Bang the universe expanded at a rate much faster than 3.00×10^8 m/s. Special relativity does not place a limit on how fast space-time itself can be distorted. Miguel Alcubierre hypothesized that a spacecraft could be enclosed in a 'bubble' and exotic matter could be used to rapidly expand space-time at the back of the bubble, making you move further away from objects behind you, and contract it at the front, bringing objects ahead of you closer. This would be a new way of travel in which it is space-time that moves rather than the spaceship. In this way the ship would reach its destination much faster than a beam of light travelling outside the bubble, but without anything travelling faster than c inside the bubble.[6] This method has one important drawback: it violates the weak, dominant and strong energy conditions. Both the weak and the dominant energy conditions require the energy density to be positive for all observers, so negative energy is needed, which may or may not exist.[7]
A particle which always travels faster than the speed of light is known as a tachyon. Tachyons, if they exist, would both answer my question immediately and have very interesting properties. The equation E = mc² / √(1 − v²/c²) has often been used to show that particles with mass can never reach the speed of light, because doing so would require infinite energy. However, if the same equation is applied to tachyons, it shows two things. Firstly, a tachyon would never be able to decelerate below the speed of light, as crossing this limit from either side would require infinite energy. Secondly, it would have an imaginary mass: when v is larger than c the denominator in the above equation becomes imaginary, and as the total energy must be real the numerator must also be imaginary. Therefore the rest mass must be imaginary, as an imaginary number divided by another imaginary number is real. The existence of tachyons would also cause causality paradoxes. If they could be used to send signals faster than c, then with one frame moving at 0.6c and another at −0.6c there would always be one frame in which the signal was received before it was sent; effectively the signal would have moved back in time. Special relativity claims the laws of physics work the same in every frame, so if it is possible for signals to move back in time in one frame it must be possible in all of them. Therefore if A sends a signal to B which moves faster than light in A's frame, and therefore backwards in time in B's frame, B could then reply with a signal faster than light in B's frame but backwards in time in A's frame; thus A could receive the reply before sending the original message, challenging causality and causing paradoxes.
The mathematical case against faster-than-light travel uses the equation E = mc², which shows that energy and mass are equivalent. This equation implies that the more energy you inject into a rocket, the more mass it gains, and the more mass it gains, the harder it is to accelerate. Boosting it to the speed of light is impossible because in the process the rocket would become infinitely massive and would require an infinite amount of energy.
A wormhole is effectively a shortcut through space-time. A wormhole connects two places in space-time and allows a particle to travel a distance faster than a beam of light would on the outside of the wormhole. The particles inside the wormhole are not going faster than the speed of light; they are only able to beat light because they have a smaller distance to travel. Scientists imagine that the opening to a wormhole would look something like a bubble. It is theorized that a wormhole allowing travel in both directions, known as a Lorentzian traversable wormhole, would require exotic matter. As wormholes connect two points in both space and time, they theoretically allow travel through time as well as space. This fascinated many scientists, and Morris, Thorne and Yurtsever worked out how to convert a wormhole traversing space into one traversing time. The process involves accelerating one opening of the wormhole relative to the other before bringing it back to the original location, and uses a process I have mentioned earlier: time dilation. Time dilation would cause the end of the wormhole that was accelerated to have aged less. Say there were two clocks, one at each opening of the wormhole; after this tampering, the clock at the accelerated end of the wormhole might read 2000 whereas the clock at the stationary end showed 2013, so that a traveller entering one end would find himself or herself in the same region but 13 years in the past. As fantastic as this may sound, many believe it would be impossible; there are predictions that a loop of virtual particles would circulate through the wormhole with ever-increasing intensity, destroying it before it could be of any use.
Over the course of this essay I have shown a few ways in which the speed of light can be 'beaten', but ultimately I have failed to produce any proven faster-than-light particles, and for this reason I must conclude that particles cannot travel faster than the speed of light. However, the topics I have raised are still developing; there are many diverse areas of physics which are both being explored and in need of exploring. Many of these areas are still heavily theoretical and under further research, but our technology is increasing at an exponential rate and the human race is steadily getting smarter, so with time and perseverance we will know the answer to questions such as these and many more. To infinity, and beyond.

Thanks Robert Karl Stonjek