Showing posts with label Physics Optics Photonics.

Wednesday, May 3, 2023

The steps to use paddy husk gasification for Rural Electrification

Energy costs now depend strongly on fossil fuel prices because energy production worldwide remains heavily dependent on fossil fuels. This is causing pain in most of the world's nations, and Sri Lanka is no exception. From this perspective, promoting biomass as a source of renewable energy is significant for the country. Given that rice is the nation's staple food and the crop with the largest area under cultivation, the rice husk (RH) produced during paddy processing has been identified as having significant potential for generating electricity.

Paddy husk gasification is a process that can be used to generate electricity from agricultural waste, specifically the husks of rice. The process involves heating the husks in a gasifier, which breaks down the biomass into a gas that can power an engine or a turbine to generate electricity.

Husk Power Systems (HPS) and Decentralized Energy Systems India (DESI), two businesses that have successfully provided electricity access using this resource, have popularized rice husk-based electricity generation and supply throughout South Asia. Their experience helps answer two questions: what makes a small-scale rural power supply company profitable, and can a cluster of villages be electrified from a single larger facility? A financial analysis of alternative supply options, considering residential and commercial electricity demand under various scenarios, shows that serving only low-consumption customers leaves the generating plant running at part capacity, which raises the cost of supply. Higher electricity use improves financial viability, and high-consumption customers help considerably. Integrating rice mill demand, which falls largely in the off-peak period, with a predominantly residential peak-demand profile improves both the feasibility and the levelized cost of supply. Finally, larger plants reduce costs enough to offer a competitive supply; however, the larger investment requirement, the risks of relying on a rice mill's monopoly supply of husk, the organizational challenges of managing a wider distribution area, and the possibility of plant failure could all dampen investor interest.

 

Here are the steps to use paddy husk gasification for rural electrification:

 

Assess the availability of paddy husk: The first step is to determine the amount of paddy husk available in the rural area. This determines the size of the gasification system that will be needed (a rough sizing sketch follows these steps).

Choose the gasification system: Different types of gasification systems are available, including fixed-bed, fluidized-bed, and entrained-flow gasifiers. The choice depends on the amount of paddy husk available and the amount of electricity that needs to be generated.

Install the gasification system: Once chosen, the system must be installed in the rural area. It should be located close to the source of the paddy husk to minimize transportation costs.

Operate the gasification system: It must be operated properly to ensure electricity is generated efficiently. This involves feeding the paddy husk into the gasifier and maintaining the appropriate temperature and pressure.

 

Distribute the electricity: The generated electricity can be distributed to the surrounding rural area using a grid or a microgrid. The distribution system should be designed to meet the needs of the rural community. 

Monitor and maintain the system: It is essential to monitor the gasification system to ensure that it operates efficiently and to perform regular maintenance to prevent breakdowns and ensure a long lifespan.
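To make the sizing step concrete, here is a minimal back-of-the-envelope sketch. The husk heating value, conversion efficiency, and capacity factor are illustrative assumptions, not figures from this post; real plant sizing would use the measured husk supply and manufacturer data.

```python
# Rough sizing sketch for a husk-fed gasifier-generator set.
# All conversion figures below are illustrative assumptions.

HUSK_LHV_MJ_PER_KG = 13.0      # assumed lower heating value of rice husk
ELECTRICAL_EFFICIENCY = 0.15   # assumed gasifier + engine-generator efficiency
CAPACITY_FACTOR = 0.60         # assumed fraction of the year the plant runs at rated load

def annual_energy_kwh(husk_tonnes_per_year: float) -> float:
    """Electrical energy (kWh/year) obtainable from a given husk supply."""
    energy_mj = husk_tonnes_per_year * 1000 * HUSK_LHV_MJ_PER_KG * ELECTRICAL_EFFICIENCY
    return energy_mj / 3.6  # 1 kWh = 3.6 MJ

def required_plant_kw(husk_tonnes_per_year: float) -> float:
    """Rated plant size (kW) that would absorb the husk supply at the assumed capacity factor."""
    hours_at_rated_load = 8760 * CAPACITY_FACTOR
    return annual_energy_kwh(husk_tonnes_per_year) / hours_at_rated_load

if __name__ == "__main__":
    supply = 500.0  # tonnes of husk per year, hypothetical village rice-mill output
    print(f"{annual_energy_kwh(supply):,.0f} kWh/year")
    print(f"~{required_plant_kw(supply):.0f} kW rated plant")
```

With these assumed figures, roughly 500 tonnes of husk per year would support a plant on the order of 50 kW; changing any of the assumptions shifts the answer proportionally.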

 

In summary, paddy husk gasification can be a sustainable solution for rural electrification.

Thursday, March 30, 2023

What is Plant-e?

 


Plant-e is a technology that generates electricity from living plants using devices known as microbial fuel cells (MFCs). MFCs use the natural metabolic processes of certain bacteria to break down organic matter, such as the sugars and other compounds produced by plants during photosynthesis, and generate electricity in the process.
Microbial Fuel Cells (MFCs) have been aptly described by Du et al. (2007) as “bioreactors that convert the energy in the chemical bonds of organic compounds into electrical energy through the catalytic activity of microorganisms under anaerobic conditions”.

In Plant-e's technology, electrodes are placed in the soil near the roots of the plants, and the bacteria living in the soil around the roots consume the organic matter and produce electrons, which can then be captured and used to generate electricity. The technology has potential applications in renewable energy, agriculture, and environmental monitoring.
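For a rough sense of scale, the sketch below converts an assumed areal power density into deliverable power. Reported plant-MFC power densities vary widely, so the figure used here is purely an illustrative assumption, not a Plant-e specification.

```python
# Back-of-the-envelope estimate of plant-MFC output.
# The power density is an assumed, illustrative figure.

ASSUMED_POWER_DENSITY_MW_PER_M2 = 100.0  # milliwatts per square metre of planted electrode area

def deliverable_power_watts(area_m2: float) -> float:
    """Continuous electrical power (W) from a planted electrode area of area_m2 square metres."""
    return area_m2 * ASSUMED_POWER_DENSITY_MW_PER_M2 / 1000.0

if __name__ == "__main__":
    roof_garden = 50.0  # m2, hypothetical planted area
    print(f"{deliverable_power_watts(roof_garden):.1f} W continuous")  # 5.0 W at the assumed density
```

At these assumed densities the output is modest, which is consistent with the low-power monitoring and sensing applications mentioned below.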

While the technology is still in its early stages of development, it has shown promise as a sustainable and environmentally-friendly alternative to traditional forms of energy generation.

Wednesday, November 18, 2020

Wave-particle duality

The quest to understand nature’s fundamental building blocks began with the ancient Greek philosopher Democritus’s assertion that such things exist. Two millennia later, Isaac Newton and Christiaan Huygens debated whether light is made of particles or waves. The discovery of quantum mechanics some 250 years after that proved both luminaries right: Light comes in individual packets of energy known as photons, which behave as both particles and waves.
Wave-particle duality turned out to be a symptom of a deep strangeness. Quantum mechanics revealed to its discoverers in the 1920s that photons and other quantum objects are best described not as particles or waves but by abstract “wave functions” — evolving mathematical functions that indicate a particle’s probability of having various properties. The wave function representing an electron, say, is spatially spread out, so that the electron has possible locations rather than a definite one. But somehow, strangely, when you stick a detector in the scene and measure the electron’s location, its wave function suddenly “collapses” to a point, and the particle clicks at that position in the detector.
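In standard notation (not part of the original post), this probabilistic reading is the Born rule: for a normalized one-dimensional wave function ψ(x, t), the probability density for finding the particle at position x is

```latex
p(x,t) = |\psi(x,t)|^{2},
\qquad \int_{-\infty}^{\infty} |\psi(x,t)|^{2}\,dx = 1 .
```

A position measurement then leaves the particle localized near the detected value, which is the "collapse" described above.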
A particle is thus a collapsed wave function. But what in the world does that mean? Why does observation cause a distended mathematical function to collapse and a concrete particle to appear? And what decides the measurement’s outcome? Nearly a century later, physicists have no idea... Cecile G. Tamura

Friday, May 1, 2020

Claude Shannon Father of Information Theory

Information Theory is one of the few scientific fields fortunate enough to have an identifiable beginning - Claude Shannon's 1948 paper.  The story of the evolution of how it progressed from a single theoretical paper to a broad field that has redefined our world is a fascinating one.  It provides the opportunity to study the social, political, and technological interactions that have helped guide its development and define its trajectory, and gives us insight into how a new field evolves.

We often hear Claude Shannon called the father of the Digital Age. At the beginning of his paper Shannon acknowledges the work done before him by such pioneers as Harry Nyquist and R.V.L. Hartley at Bell Labs in the 1920s. Though their influence was profound, the work of those early pioneers was limited and focused on their own particular applications. It was Shannon’s unifying vision that revolutionized communication, and spawned a multitude of communication research that we now define as the field of Information Theory.
One of those key concepts was his definition of the limit for channel capacity.  Similar to Moore’s Law, the Shannon limit can be considered a self-fulfilling prophecy.  It is a benchmark that tells people what can be done, and what remains to be done – compelling them to achieve it.


"What made possible, what induced the development of coding as a theory, and the development of very complicated codes, was Shannon's Theorem: he told you that it could be done, so people tried to do it. [Interview with Fano, R. 2001]

Quantum information science is a young field, its underpinnings still being laid by a large number of researchers [see "Rules for a Complex Quantum World," by Michael A. Nielsen; Scientific American, November 2002]. Classical information science, by contrast, sprang forth about 50 years ago, from the work of one remarkable man: Claude E. Shannon. In a landmark paper written at Bell Labs in 1948, Shannon defined in mathematical terms what information is and how it can be transmitted in the face of noise. What had been viewed as quite distinct modes of communication--the telegraph, telephone, radio and television--were unified in a single framework.
Shannon was born in 1916 in Petoskey, Michigan, the son of a judge and a teacher. Among other inventive endeavors, as a youth he built a telegraph from his house to a friend's out of fencing wire. He graduated from the University of Michigan with degrees in electrical engineering and mathematics in 1936 and went to M.I.T., where he worked under computer pioneer Vannevar Bush on an analog computer called the differential analyzer.
Shannon's M.I.T. master's thesis in electrical engineering has been called the most important of the 20th century: in it the 22-year-old Shannon showed how the logical algebra of 19th-century mathematician George Boole could be implemented using electronic circuits of relays and switches. This most fundamental feature of digital computers' design--the representation of "true" and "false" and "0" and "1" as open or closed switches, and the use of electronic logic gates to make decisions and to carry out arithmetic--can be traced back to the insights in Shannon's thesis.


In 1941, with a Ph.D. in mathematics under his belt, Shannon went to Bell Labs, where he worked on war-related matters, including cryptography. Unknown to those around him, he was also working on the theory behind information and communications. In 1948 this work emerged in a celebrated paper published in two parts in Bell Labs's research journal.
Quantifying Information
Shannon defined the quantity of information produced by a source--for example, the quantity in a message--by a formula similar to the equation that defines thermodynamic entropy in physics. In its most basic terms, Shannon's informational entropy is the number of binary digits required to encode a message. Today that sounds like a simple, even obvious way to define how much information is in a message. In 1948, at the very dawn of the information age, this digitizing of information of any sort was a revolutionary step. His paper may have been the first to use the word "bit," short for binary digit.
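The formula itself is not quoted in the article; for a source emitting symbols with probabilities p_i, Shannon's entropy is H = −Σ p_i log₂ p_i bits per symbol. A minimal sketch of that calculation, applied to the character frequencies of a message:

```python
from collections import Counter
from math import log2

def entropy_bits_per_symbol(message: str) -> float:
    """Shannon entropy H = -sum(p * log2(p)) of the character distribution of `message`."""
    counts = Counter(message)
    total = len(message)
    return -sum((n / total) * log2(n / total) for n in counts.values())

if __name__ == "__main__":
    msg = "information theory began with a single paper"
    h = entropy_bits_per_symbol(msg)
    print(f"{h:.2f} bits/char, so at least {h * len(msg):.0f} bits to encode the message")
```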
As well as defining information, Shannon analyzed the ability to send information through a communications channel. He found that a channel had a certain maximum transmission rate that could not be exceeded. Today we call that the bandwidth of the channel. Shannon demonstrated mathematically that even in a noisy channel with a low bandwidth, essentially perfect, error-free communication could be achieved by keeping the transmission rate within the channel's bandwidth and by using error-correcting schemes: the transmission of additional bits that would enable the data to be extracted from the noise-ridden signal.
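The limit described here is what is now called the channel capacity; for the common case of a channel of bandwidth B hertz with signal-to-noise power ratio S/N, it takes the Shannon–Hartley form

```latex
C = B \log_2\!\left(1 + \frac{S}{N}\right) \ \text{bits per second},
```

with essentially error-free communication possible at any rate below C and impossible above it.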
Today everything from modems to music CDs relies on error correction to function. A major accomplishment of quantum-information scientists has been the development of techniques to correct errors introduced in quantum information and to determine just how much can be done with a noisy quantum communications channel or with entangled quantum bits (qubits) whose entanglement has been partially degraded by noise.


The Unbreakable Code
A year after he founded and launched information theory, Shannon published a paper that proved that unbreakable cryptography was possible. (He did this work in 1945, but at that time it was classified.) The scheme is called the one-time pad or the Vernam cypher, after Gilbert Vernam, who had invented it near the end of World War I. The idea is to encode the message with a random series of digits--the key--so that the encoded message is itself completely random. The catch is that one needs a random key that is as long as the message to be encoded and one must never use any of the keys twice. Shannon's contribution was to prove rigorously that this code was unbreakable. To this day, no other encryption scheme is known to be unbreakable.
The problem with the one-time pad (so-called because an agent would carry around his copy of a key on a pad and destroy each page of digits after they were used) is that the two parties to the communication must each have a copy of the key, and the key must be kept secret from spies or eavesdroppers. Quantum cryptography solves that problem. More properly called quantum key distribution, the technique uses quantum mechanics and entanglement to generate a random key that is identical at each end of the quantum communications channel. The quantum physics ensures that no one can eavesdrop and learn anything about the key: any surreptitious measurements would disturb subtle correlations that can be checked, similar to error-correction checks of data transmitted on a noisy communications line.
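A minimal sketch of the scheme in its modern XOR-on-bytes form (Vernam's original equipment worked on teleprinter code, but the principle is the same): as long as the key is truly random, exactly as long as the message, and never reused, the ciphertext carries no information about the plaintext.

```python
import secrets

def one_time_pad_encrypt(plaintext: bytes, key: bytes) -> bytes:
    """XOR each plaintext byte with the corresponding key byte (key must be same length, used once)."""
    if len(key) != len(plaintext):
        raise ValueError("key must be exactly as long as the message")
    return bytes(p ^ k for p, k in zip(plaintext, key))

# Decryption is the same XOR operation applied with the same key.
one_time_pad_decrypt = one_time_pad_encrypt

if __name__ == "__main__":
    message = b"ATTACK AT DAWN"
    key = secrets.token_bytes(len(message))  # truly random, single-use key
    ciphertext = one_time_pad_encrypt(message, key)
    assert one_time_pad_decrypt(ciphertext, key) == message
```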


Encryption based on the Vernam cypher and quantum key distribution is perfectly secure: quantum physics guarantees security of the key and Shannon's theorem proves that the encryption method is unbreakable. [For Scientific American articles on quantum cryptography and other developments of quantum information science during the past decades, please click here.]
A Unique, Unicycling Genius


Shannon fit the stereotype of the eccentric genius to a T. At Bell Labs (and later M.I.T., where he returned in 1958 until his retirement in 1978) he was known for riding in the halls on a unicycle, sometimes juggling as well [see "Profile: Claude E. Shannon," by John Horgan; Scientific American, January 1990]. At other times he hopped along the hallways on a pogo stick. He was always a lover of gadgets and among other things built a robotic mouse that solved mazes and a computer called the Throbac ("THrifty ROman-numeral BAckward-looking Computer") that computed in roman numerals. In 1950 he wrote an article for Scientific American on the principles of programming computers to play chess [see "A Chess-Playing Machine," by Claude E. Shannon; Scientific American, February 1950].
In the 1990s, in one of life's tragic ironies, Shannon came down with Alzheimer's disease, which could be described as the insidious loss of information in the brain. The communications channel to one's memories--one's past and one's very personality--is progressively degraded until every effort at error correction is overwhelmed and no meaningful signal can pass through. The bandwidth falls to zero. The extraordinary pattern of information processing that was Claude Shannon finally succumbed to the depredations of thermodynamic entropy in February 2001. But some of the signal generated by Shannon lives on, expressed in the information technology in which our own lives are now immersed.
https://www.scientificamerican.com

Saturday, April 18, 2020

What is Mechanical Ventilation and Why it is being used for COVID-19

Mechanical ventilation, or assisted ventilation, is the medical term for artificial ventilation where mechanical means are used to assist or replace spontaneous breathing. This may involve a machine called a ventilator, or the breathing may be assisted manually by a suitably qualified professional, such as an anesthesiologist, Registered Nurse, respiratory therapist, or paramedic, by compressing a bag valve mask device.



Mechanical ventilation can be

Noninvasive, involving various types of face masks

Invasive, involving endotracheal intubation

Selection and use of appropriate techniques require an understanding of respiratory mechanics.



Indications

There are numerous indications for endotracheal intubation and mechanical ventilation (see table Situations Requiring Airway Control), but, in general, mechanical ventilation should be considered when there are clinical or laboratory signs that the patient cannot maintain an airway or adequate oxygenation or ventilation.

Concerning findings include

Respiratory rate > 30/minute

Inability to maintain arterial oxygen saturation > 90% with fractional inspired oxygen (FIO2) > 0.60

pH < 7.25

PaCO2 > 50 mm Hg (unless chronic and stable)

The decision to initiate mechanical ventilation should be based on clinical judgment that considers the entire clinical situation and not simple numeric criteria. However, mechanical ventilation should not be delayed until the patient is in extremis.



Respiratory Mechanics

Normal inspiration generates negative intrapleural pressure, which creates a pressure gradient between the atmosphere and the alveoli, resulting in air inflow. In mechanical ventilation, the pressure gradient results from increased (positive) pressure of the air source.

Peak airway pressure is measured at the airway opening (Pao) and is routinely displayed by mechanical ventilators. It represents the total pressure needed to push a volume of gas into the lung and is composed of pressures resulting from inspiratory flow resistance (resistive pressure), the elastic recoil of the lung and chest wall (elastic pressure), and the alveolar pressure present at the beginning of the breath (positive end-expiratory pressure [PEEP]).
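In symbols, that decomposition is the usual single-compartment relation (where V̇ is inspiratory flow, R resistance, VT tidal volume, and CRS respiratory-system compliance):

```latex
P_{peak} \;=\; \underbrace{R\,\dot V}_{\text{resistive}} \;+\; \underbrace{\frac{V_T}{C_{RS}}}_{\text{elastic}} \;+\; \text{PEEP}
```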



Resistive pressure is the product of circuit resistance and airflow. In the mechanically ventilated patient, resistance to airflow occurs in the ventilator circuit, the endotracheal tube, and, most importantly, the patient’s airways. (NOTE: Even when these factors are constant, an increase in airflow increases resistive pressure.)

[Figure: Components of airway pressure during mechanical ventilation, illustrated by an inspiratory-hold maneuver. PEEP = positive end-expiratory pressure.]

Elastic pressure is the product of the elastic recoil of the lungs and chest wall (elastance) and the volume of gas delivered. For a given volume, elastic pressure is increased by increased lung stiffness (as in pulmonary fibrosis) or restricted excursion of the chest wall or diaphragm (eg, in tense ascites or massive obesity). Because elastance is the inverse of compliance, high elastance is the same as low compliance.

End-expiratory pressure in the alveoli is normally the same as atmospheric pressure. However, when the alveoli fail to empty completely because of airway obstruction, airflow limitation, or shortened expiratory time, end-expiratory pressure may be positive relative to the atmosphere. This pressure is called intrinsic PEEP or auto PEEP to differentiate it from externally applied (therapeutic) PEEP, which is created by adjusting the mechanical ventilator or by placing a tight-fitting mask that applies positive pressure throughout the respiratory cycle.


Any elevation in peak airway pressure (eg, > 25 cm H2O) should prompt measurement of the end-inspiratory pressure (plateau pressure) by an end-inspiratory hold maneuver to determine the relative contributions of resistive and elastic pressures. The maneuver keeps the exhalation valve closed for an additional 0.3 to 0.5 second after inspiration, delaying exhalation. During this time, airway pressure falls from its peak value as airflow ceases. The resulting end-inspiratory pressure represents the elastic pressure once PEEP is subtracted (assuming the patient is not making active inspiratory or expiratory muscle contractions at the time of measurement). The difference between peak and plateau pressure is the resistive pressure.
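A worked version of that subtraction with hypothetical numbers (illustrative values only, not patient data):

```python
# Partitioning airway pressures from a ventilator's displayed values.
# Values below are hypothetical and for illustration only.

peak_pressure_cmh2o = 32.0     # displayed peak airway pressure
plateau_pressure_cmh2o = 24.0  # from an end-inspiratory hold
total_peep_cmh2o = 5.0         # applied PEEP plus any measured intrinsic PEEP
tidal_volume_ml = 450.0

resistive_pressure = peak_pressure_cmh2o - plateau_pressure_cmh2o     # 8 cm H2O
elastic_pressure = plateau_pressure_cmh2o - total_peep_cmh2o          # 19 cm H2O (driving pressure)
static_compliance_ml_per_cmh2o = tidal_volume_ml / elastic_pressure   # ~23.7 mL/cm H2O

print(f"resistive: {resistive_pressure:.0f} cm H2O")
print(f"elastic:   {elastic_pressure:.0f} cm H2O")
print(f"compliance: {static_compliance_ml_per_cmh2o:.1f} mL/cm H2O")
```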

Elevated resistive pressure (eg, > 10 cm H2O) suggests that the endotracheal tube has been kinked or plugged with secretions or that an intraluminal mass or bronchospasm is present.

Increased elastic pressure (eg, > 10 cm H2O) suggests decreased lung compliance due to

Edema, fibrosis, or lobar atelectasis

Large pleural effusions, pneumothorax, or fibrothorax

Extrapulmonary restriction as may result from circumferential burns or other chest wall deformity, ascites, pregnancy, or massive obesity

A tidal volume too large for the amount of lung being ventilated (eg, a normal tidal volume being delivered to a single lung because the endotracheal tube is malpositioned)

Intrinsic PEEP (auto PEEP) can be measured in the passive patient through an end-expiratory hold maneuver. Immediately before a breath, the expiratory port is closed for 2 seconds. Flow ceases, eliminating resistive pressure; the resulting pressure reflects alveolar pressure at the end of expiration (intrinsic PEEP). Although accurate measurement depends on the patient being completely passive on the ventilator, it is unwarranted to use neuromuscular blockade solely for the purpose of measuring intrinsic PEEP. A nonquantitative method of identifying intrinsic PEEP is to inspect the expiratory flow tracing. If expiratory flow continues until the next breath or the patient’s chest fails to come to rest before the next breath, intrinsic PEEP is present. The consequences of elevated intrinsic PEEP include increased inspiratory work of breathing and decreased venous return, which may result in decreased cardiac output and hypotension.

The demonstration of intrinsic PEEP should prompt a search for causes of airflow obstruction (eg, airway secretions, decreased elastic recoil, bronchospasm); however, a high minute ventilation (> 20 L/minute) alone can result in intrinsic PEEP in a patient with no airflow obstruction. If the cause is airflow limitation, intrinsic PEEP can be reduced by shortening inspiratory time (ie, increasing inspiratory flow) or reducing the respiratory rate, thereby allowing a greater fraction of the respiratory cycle to be spent in exhalation.
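The arithmetic behind that advice is straightforward (illustrative settings only; real adjustments depend on the whole clinical picture): expiratory time per breath is whatever remains of the cycle after inspiration, so lowering the rate or shortening inspiratory time lengthens it.

```python
def expiratory_time_s(respiratory_rate_per_min: float, inspiratory_time_s: float) -> float:
    """Seconds available for exhalation in each breath cycle."""
    cycle_time = 60.0 / respiratory_rate_per_min
    return cycle_time - inspiratory_time_s

# Hypothetical settings: dropping the rate from 24 to 16 breaths/min at the same
# 1.0-second inspiratory time lengthens expiration from 1.5 s to 2.75 s per breath.
print(expiratory_time_s(24, 1.0))  # 1.5
print(expiratory_time_s(16, 1.0))  # 2.75
```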



Means and Modes of Mechanical Ventilation

Mechanical ventilators are

Volume cycled: Delivering a constant volume with each breath (pressures may vary)

Pressure cycled: Delivering constant pressure during each breath (volume delivered may vary)

A combination of volume and pressure cycled

Assist-control (A/C) modes of ventilation are modes that maintain a minimum respiratory rate regardless of whether or not the patient initiates a spontaneous breath. Because pressures and volumes are directly linked by the pressure-volume curve, any given volume will correspond to a specific pressure, and vice versa, regardless of whether the ventilator is pressure cycled or volume cycled.

Adjustable ventilator settings differ with mode but include

Respiratory rate

Tidal volume

Trigger sensitivity

Flow rate

Waveform

Inspiratory/expiratory (I/E) ratio

Volume-cycled ventilation

Volume-cycled ventilation delivers a set tidal volume. This mode includes

Volume-control (V/C)

Synchronized intermittent mandatory ventilation (SIMV)

The resultant airway pressure is not fixed but varies with the resistance and elastance of the respiratory system and with the flow rate selected.

V/C ventilation is the simplest and most effective means of providing full mechanical ventilation. In this mode, each inspiratory effort beyond the set sensitivity threshold triggers delivery of the fixed tidal volume. If the patient does not trigger the ventilator frequently enough, the ventilator initiates a breath, ensuring the desired minimum respiratory rate.

SIMV also delivers breaths at a set rate and volume that is synchronized to the patient’s efforts. In contrast to V/C, patient efforts above the set respiratory rate are unassisted, although the intake valve opens to allow the breath. This mode remains popular, despite not providing full ventilator support as does V/C, not facilitating liberation of the patient from mechanical ventilation, and not improving patient comfort.

Pressure-cycled ventilation

Pressure-cycled ventilation delivers a set inspiratory pressure. This mode includes

Pressure control ventilation (PCV)

Pressure support ventilation (PSV)

Noninvasive modalities applied via a tight-fitting face mask (several types available)

Because the inspiratory pressure rather than the volume is fixed, tidal volume varies with the resistance and elastance of the respiratory system. In this mode, changes in respiratory system mechanics can result in unrecognized changes in minute ventilation. Because it limits the distending pressure of the lungs, this mode can theoretically benefit patients with acute respiratory distress syndrome (ARDS); however, no clear clinical advantage over V/C has been shown, and, if the volume delivered by PCV is the same as that delivered by V/C, the distending pressures will be the same.

Pressure control ventilation is a pressure-cycled form of A/C. Each inspiratory effort beyond the set sensitivity threshold delivers full pressure support maintained for a fixed inspiratory time. A minimum respiratory rate is maintained.

In pressure support ventilation, a minimum rate is not set; all breaths are triggered by the patient. The ventilator assists the patient by delivering a pressure that continues at a constant level until the patient's inspiratory flow falls below a preset level determined by an algorithm. Thus, a longer or deeper inspiratory effort by the patient results in a larger tidal volume. This mode is commonly used to liberate patients from mechanical ventilation by letting them assume more of the work of breathing. However, no studies indicate that this approach is more successful than others in discontinuing mechanical ventilation.

Noninvasive positive pressure ventilation (NIPPV)

NIPPV is the delivery of positive pressure ventilation via a tight-fitting mask that covers the nose or both the nose and mouth. Helmets that deliver NIPPV are being studied as an alternative for patients who cannot tolerate the standard tight-fitting face masks. Because of its use in spontaneously breathing patients, it is primarily applied as a form of PSV or to deliver end-expiratory pressure, although volume control can be used.
Thanks https://www.msdmanuals.com

Tuesday, March 17, 2020

Albert Einstein and the Dutch astronomer Willem de Sitter in 1932, discussing cosmology:

Cecile G. Tamura
The Einstein–de Sitter universe


In 1917, both Einstein and de Sitter proposed a new interpretation of the universe as a whole: the structure of the universe could be described in terms of relativistic field equations.
Their contributions marked the beginning of the modern scientific comprehension of the origin and evolution of the universe.


The Einstein–de Sitter model is a matter-dominated Friedmann model with zero spatial curvature (k = 0).
It describes a spatially flat universe that will continue to expand forever, with just enough energy to escape to infinity.
It is analogous to launching a rocket. If the rocket is given insufficient energy, it will be pulled back by the Earth.
However, if its velocity exceeds a certain critical value (the escape velocity), it will continue into space and never fall back.
If it has exactly the escape velocity, it will escape the Earth with a velocity going to zero as the rocket approaches spatial infinity.
The Einstein–de Sitter model corresponds to a universe given exactly this escape velocity by the Big Bang, just enough to overcome the gravitational pull of the matter it contains.
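For reference (standard results for this model, not part of the original caption), the "exactly critical" condition and the resulting expansion history are

```latex
\rho = \rho_c = \frac{3H^{2}}{8\pi G}, \qquad
a(t) \propto t^{2/3}, \qquad
H(t) \equiv \frac{\dot a}{a} = \frac{2}{3t},
```

so the expansion never stops, but the expansion rate falls toward zero, the analogue of the rocket coasting outward ever more slowly.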
Image © Wide World Photos, New York.

Saturday, November 2, 2019

Cosmic Triangles Open a Window to the Origin of Time

Cecile G. Tamura
A close look at fundamental symmetries has exposed hidden patterns in the universe. Physicists think that those same symmetries may also reveal time’s original secret.
The Cosmological Bootstrap: Inflationary Correlators from Symmetries and Singularities
(https://www.quantamagazine.org/the-origin-of-time-bootstrapped-from-fundamental-symmetries-20191029/)

“We look at patterns in space today, and we infer a cosmological history in order to explain them.”
The approach has the potential to help explain why time began, and why it might end.
As Arkani-Hamed put it, “The thing that we’re bootstrapping is time itself.”
"One curious pattern cosmologists have known about for decades is that space is filled with correlated pairs of objects: pairs of hot spots seen in telescopes’ maps of the early universe; pairs of galaxies or of galaxy clusters or superclusters in the universe today; pairs found at all distances apart.
You can see these “two-point correlations” by moving a ruler all over a map of the sky. When there’s an object at one end, cosmologists find that this ups the chance that an object also lies at the other end."
"The simplest explanation for the correlations traces them to pairs of quantum particles that fluctuated into existence as space exponentially expanded at the start of the Big Bang. Pairs of particles that arose early on subsequently moved the farthest apart, yielding pairs of objects far away from each other in the sky today. Particle pairs that arose later separated less and now form closer-together pairs of objects. Like fossils, the pairwise correlations seen throughout the sky encode the passage of time — in this case, the very beginning of time."
A Map of the Start of Time
In 1980, the cosmologist Alan Guth, pondering a number of cosmological features, posited that the Big Bang began with a sudden burst of exponential expansion, known as “cosmic inflation.” Two years later, many of the world’s leading cosmologists gathered in Cambridge, England, to iron out the details of the new theory. Over the course of the three-week Nuffield workshop, a group that included Guth, Stephen Hawking, and Martin Rees, the future Astronomer Royal, pieced together the effects of a brief inflationary period at the start of time. By the end of the workshop, several attendees had separately calculated that quantum jitter during cosmic inflation could indeed have happened at the right rate and evolved in the right way to yield the universe’s observed density variations.
To understand how, picture the hypothetical energy field that drove cosmic inflation, known as the “inflaton field.” As this field of energy powered the exponential expansion of space, pairs of particles would have spontaneously arisen in the field. (These quantum particles can also be thought of as ripples in the quantum field.) Such pairs pop up in quantum fields all the time, momentarily borrowing energy from the field as allowed by Heisenberg’s uncertainty principle. Normally, the ripples quickly annihilate and disappear, returning the energy. But this couldn’t happen during inflation. As space inflated, the ripples stretched like taffy and were yanked apart, and so they became “frozen” into the field as twin peaks in its density. As the process continued, the peaks formed a nested pattern on all scales.
After inflation ended (a split second after it began), the spatial density variations remained. Studies of the ancient light called the cosmic microwave background have found that the infant universe was dappled with density differences of about one part in 10,000 — not much, but enough. Over the nearly 13.8 billion years since then, gravity has heightened the contrast by pulling matter toward the dense spots: Now, galaxies like the Milky Way and Andromeda are 1 million times denser than the cosmic average. As Guth wrote in his memoir (referring to a giant swath of galaxies rather than the wall in China), “The same Heisenberg uncertainty principle that governs the behavior of electrons and quarks may also be responsible for Andromeda and The Great Wall!”

Thursday, October 10, 2019

Niels Bohr Physicist

Niels Bohr, the Danish physicist who applied the nascent quantum theory to the structure of the hydrogen atom, was born on October 7, 1885. In 1911, scattering experiments by Ernest Rutherford and collaborators had disproven the "plum pudding" model of the atom. Rather than a blob-like structure with positive and negative charge spread throughout, their experiments suggested that atoms have a tiny positive nucleus.
The negative electrons would have to orbit this positive nucleus. But classically this kind of configuration shouldn’t be stable! The accelerating electrons would radiate energy and spiral into the nucleus. Rutherford’s atom was at odds with classical physics.

But classical physics was already having problems with atomic phenomena. Atoms were known to emit and absorb light preferentially at certain wavelengths. The "spectral lines" fit simple patterns, but had no known explanation.
Bohr addressed both issues at once by positing the existence of "stationary orbits" around the nucleus. Electrons could exist in these stationary orbits without (as predicted by classical electrodynamics) radiating energy.
Transitions between these orbits emitted energy in discrete amounts (related to Planck's quanta) dictated by the initial and final level. This reproduced the empirical formulas obtained by Balmer and Rydberg for the wavelengths of the spectral lines.
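Those discrete amounts correspond to the Bohr energy levels, and the transitions reproduce the Rydberg formula mentioned above (standard results, quoted here for reference):

```latex
E_n = -\frac{13.6\ \text{eV}}{n^{2}}, \qquad
\frac{1}{\lambda} = R_H\left(\frac{1}{n_f^{2}} - \frac{1}{n_i^{2}}\right), \quad n_i > n_f,
```

with R_H ≈ 1.097 × 10⁷ m⁻¹; setting n_f = 2 gives the visible Balmer series.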
Bohr's model was phenomenological; a hodgepodge of early quantum ideas that fit the data without offering deeper explanation. It preceded de Broglie's hypothesis, which would've allowed Bohr to explain orbits as integer multiples of an electron's wavelength.
But it seemed to explain (or at least relate) two outstanding puzzles that had baffled scientists. So, while many physicists were rightly concerned by the radical implications of Bohr's model, he was clearly onto something.
Bohr’s model is a central result of the "old quantum theory," the transitional body of work straddling classical physics and modern quantum mechanics. These early results, which flew in the face of accepted theories, set physicists on the road to our present understanding of Nature.
https://bit.ly/2us1BW9
https://www.atomicheritage.org/profile/niels-bohr

Tuesday, August 13, 2019

The Dark Energy Spectroscopic Instrument (DESI)

The Dark Energy Spectroscopic Instrument (DESI) will measure the effect of dark energy on the expansion of the universe.  It will obtain optical spectra for tens of millions of galaxies and quasars, constructing a 3-dimensional map spanning the nearby universe to 10 billion light years.

DESI will be conducted on the Mayall 4-meter telescope at Kitt Peak National Observatory starting in 2018.  DESI is supported by the Department of Energy Office of Science to perform this Stage IV dark energy measurement using baryon acoustic oscillations and other techniques that rely on spectroscopic measurements.
 The Dark Energy Spectroscopic Instrument (DESI) is scheduled to see ‘first light’ in September.



Crews at the Mayall Telescope near Tucson, Arizona, lift and install the top-end components for the Dark Energy Spectroscopic Instrument, or DESI. The components, which include a stack of six lenses and other structures for positioning and support, weigh about 12 tons. DESI, scheduled to begin its sky survey next year, is designed to produce the largest 3-D map of the universe and produce new clues about the nature of dark energy. Credit: David Sprayberry, NOAO/AURA
"The survey will reconstruct 11 billion years of cosmic history. It could answer the first and most basic question about dark energy: is it a uniform force across space and time, or has its strength evolved over eons?
It will track cosmic expansion by measuring features of the early Universe, known as baryon acoustic oscillations (BAOs).
These oscillations are ripples in the density of matter that left a spherical imprint in space around which galaxies clustered. The distribution of galaxies is highest in the centre of the imprint, a region called a supercluster, and around its edges — with giant voids between these areas."
Cecile G. Tamura

Friday, June 7, 2019

What is the No-Boundary Proposal?

Cecile G. Tamura

Stephen Hawking had a vision that the universe expanded out of a dimensionless point, rather like a shuttlecock. Recently, his stunning proposal has come under attack, but a vigorous defense has been mounted.
“If you know the wave function of the universe, why aren’t you rich?” — Murray Gell-Mann

 " The “no-boundary proposal,” which Hawking and his frequent collaborator, James Hartle, fully formulated in a 1983 paper, envisions the cosmos having the shape of a shuttlecock. Just as a shuttlecock has a diameter of zero at its bottommost point and gradually widens on the way up, the universe, according to the no-boundary proposal, smoothly expanded from a point of zero size. Hartle and Hawking derived a formula describing the whole shuttlecock — the so-called “wave function of the universe” that encompasses the entire past, present and future at once — making moot all contemplation of seeds of creation, a creator, or any transition from a time before.
Hartle and Hawking’s proposal radically reconceptualized time. Each moment in the universe becomes a cross-section of the shuttlecock; while we perceive the universe as expanding and evolving from one moment to the next, time really consists of correlations between the universe’s size in each cross-section and other properties — particularly its entropy, or disorder. Entropy increases from the cork to the feathers, aiming an emergent arrow of time. Near the shuttlecock’s rounded-off bottom, though, the correlations are less reliable; time ceases to exist and is replaced by pure space. As Hartle, now 79 and a professor at the University of California, Santa Barbara, explained it by phone recently, “We didn’t have birds in the very early universe; we have birds later on. … We didn’t have time in the early universe, but we have time later on.”