
User:Lester1960: Difference between revisions – Wikipedia

From Wikipedia, the free encyclopedia

Mario Louis Lachat (* 1959 in Lausanne) is a Swiss cardiovascular surgeon.


Mario Lachat studied medicine at the University of Lausanne from 1981 to 1986. During his studies, Lachat assisted the vascular surgeons Ninoslav Radovanovic and Slobodan Vecerina at the Clinique Cecil in Lausanne, who inspired him to become a vascular surgeon[1]. In 1987 he began working as an assistant doctor at the University Hospital Zurich, where he started his surgical training under the direction of Marko Turina, who, together with his deputy Ludwig Karl von Segesser, particularly encouraged and influenced him. Lachat obtained the title of FMH specialist in general surgery in 1994. This was followed in 1998 by recognition as a specialist in cardiothoracic and vascular surgery (FMH), in 1999 by recognition by the European Board of Thoracic and Cardiovascular Surgery (FETCS), in 2001 by recognition by the European Board of Vascular Surgeons (FEBVS) and finally, in 2003 (following the establishment of the specialty of vascular surgery in Switzerland), as a vascular surgeon (FMH).
In 2002, Lachat was awarded a venia legendi in cardiovascular surgery (aortic aneurysm: open or endoluminal therapy?). From 2002 to 2019, he was Professor of Vascular Surgery at the University Hospital of Zurich.[2] He then set up as an independent surgeon at the Hirslanden Clinic in Zurich. He is currently (as of 2024) Co-Chief Physician and Director of the Research Department at the Aortic and Vascular Center Zurich[1] and Medical Co-Director at Endospan, Herzliya, Israel[3].
Lachat has also given lectures at national and international congresses. He is the author or co-author of over 400 publications and has been cited over 10,000 times[4].
Mario Lachat is married and has two grown-up daughters.

Although Lachat was particularly interested in cardiac surgery, the aorta increasingly became the focus of his research. Lachat became convinced that the aorta should be considered as a single unit and treated holistically by specialized, interdisciplinary teams of surgeons. In 2002, he became the first surgeon in Switzerland to specialize in aortic surgery, performing conventional, hybrid (partly open, partly endovascular) or completely endovascular aortic procedures. In-depth assessment of the individual pathology and of the patient's biological condition and life expectancy, combined with a careful choice among treatment options, significantly reduced the complication rates of the more complex interventions. Lachat gained international recognition for this approach. Particularly in the treatment of acute patients and more complex aortic pathologies, his results influenced the international community and helped bring about a breakthrough for newer and better treatment methods, establishing him internationally as a pioneer in aortic surgery. During his years as head of vascular surgery at Zurich University Hospital, he established this specialization in Switzerland. As early as 2003, his unit was divided into different areas (aortic surgery, venous surgery, tumor surgery, cerebrovascular surgery, peripheral vascular surgery), each with its own head. In 2018, Lachat initiated Switzerland's first aortic center at the Hirslanden Clinic in Zurich.
In collaboration with interventional radiologist Thomas Pfammatter, he carried out pioneering work in the field of endovascular aortic surgery in Switzerland:

  • First endovascular treatment of a ruptured abdominal aortic aneurysm (1998) in Switzerland[5][6]
  • First endovascular treatment of a ruptured thoracoabdominal aortic aneurysm (2008) in Switzerland[7]
  • First fenestrated aortic prosthesis implantation in 2004[8]
  • First branched aortic prosthesis implantation (3 December 2008)[9]
  • First complete endovascular treatment of the thoracoabdominal aorta (2002) and the aortic arch (2010)[10]
  • Consultant and co-developer of various endovascular products[11][12]
  • Off-pump wrap isolated ascending aneurysm[13]
  • Inventor of the STAT (sutureless telescoping anastomosis technique) / VORTEC[14] (Viabahn Open Revascularization Technique) anastomosis technique, the first sutureless connection method to become clinically established in cardiovascular surgery, both for the aorta and peripheral vessels.[15] The method shortens the ischemia time, prevents clamping lesions on the vessels and reduces the tissue trauma required to prepare the vessels for a suture anastomosis.
  1. ^ Lachat, Mario. “Hirslanden”. Hirslanden. Hirslanden Spital. Retrieved 17 October 2024.


Chinese Quantum Computing Advance Shows Progress, Innovation, But Not an Imminent Threat to Encryption


Insider Brief

  • Chinese researchers have made incremental progress in quantum computing by using quantum annealing to factor a 50-bit integer, but this does not pose an immediate threat to modern cryptographic systems like RSA.
  • The research highlights the potential of quantum annealing for cryptographic applications, though it remains far from capable of breaking the 2048-bit keys used in military-grade encryption.
  • Experts emphasize that while the advance is important, ongoing efforts in post-quantum cryptography will help safeguard data against future quantum threats.

Recent reports that Chinese scientists are on the verge of breaking advanced encryption using quantum computers ignited concerns about cybersecurity but do not actually represent an imminent threat, according to an article in Forbes citing several experts in the fields of cybersecurity and quantum cybersecurity.

A closer examination of the underlying research — which originated from a South China Morning Post article on a Chinese academic paper — reveals a more nuanced picture: while the work marks progress in quantum computing, it does not pose an immediate threat to current cryptographic systems.

The paper, titled Quantum Annealing Public Key Cryptographic Attack Algorithm Based on D-Wave Advantage, describes the use of quantum annealing techniques to factor a 50-bit integer using a hybrid quantum-classical approach. While this represents a technical milestone, it is far removed from cracking the highly secure encryption algorithms commonly used in military and financial systems today, the experts point out.

That type of advance would likely never make it to the mainstream media anyway, they add.


“We should apply common sense,” Duncan Jones, Head of Cybersecurity at Quantinuum, a leading quantum computing company, told Forbes. “If you were the Chinese military and you had just broken AES, media coverage wouldn’t be on your priority list. Instead, you would maximize your informational advantage, without revealing you’ve broken a critical global cipher.”

Understanding RSA and Quantum Computing’s Potential Impact

The focus of the Chinese paper is on attacking RSA encryption, a widely used public-key cryptosystem that underpins secure communications, including online banking and confidential messaging. RSA relies on the difficulty of factoring a large number into its two large prime factors — a problem that becomes dramatically harder as the numbers grow. The concern with quantum computing is that algorithms like Shor's algorithm, designed for universal quantum computers, could theoretically factor these large numbers much faster than classical computers.
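To make the mechanism concrete, the following minimal Python sketch implements textbook RSA with deliberately tiny, insecure parameters; the numbers are illustrative and are not drawn from the article.

# Toy RSA with deliberately tiny primes, for illustration only.
# Real deployments use 2048-bit or larger moduli; everything below
# is hypothetical and insecure by design. Requires Python 3.8+.

p, q = 61, 53                 # the two secret primes (tiny here)
n = p * q                     # public modulus: 3233
phi = (p - 1) * (q - 1)       # Euler's totient of n
e = 17                        # public exponent, coprime to phi
d = pow(e, -1, phi)           # private exponent (modular inverse)

message = 42
ciphertext = pow(message, e, n)    # encrypt: m^e mod n
recovered = pow(ciphertext, d, n)  # decrypt: c^d mod n
assert recovered == message

# Anyone who can factor n back into p and q can recompute d, which
# is why RSA's security rests entirely on the hardness of factoring.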

However, the technology required to apply Shor’s algorithm on the scale necessary to break RSA encryption is likely years away. Achieving this would require a universal quantum computer with thousands, if not millions, of stable qubits, a milestone that remains well beyond the reach of current technology.

In fact, the researchers acknowledged those limitations in the paper, noting that environmental interference, underdeveloped hardware and the lack of a single attack method that works against multiple encryption systems still hinder quantum computing's full cryptographic potential.

“Quantum-resistant algorithms that we are currently working with from the recent NIST standardization are also resistant to Quantum Annealing,” said Jason Soroko, senior fellow at Sectigo and co-host of the Root Causes podcast, as reported by Forbes. Soroko is referring to the U.S. National Institute of Standards and Technology‘s efforts to develop cryptographic algorithms secure against quantum attacks.

Quantum Annealing Innovations

What is interesting about the paper is its use of a modality that most people do not currently associate with cryptography.

The Chinese researchers employed quantum annealing, a form of quantum computation optimized for solving combinatorial optimization problems. Unlike universal quantum computers, quantum annealers, such as those developed by D-Wave Systems, use quantum tunneling to search for low-energy states, which can sometimes allow for faster problem-solving in certain scenarios.

Quantum annealers are not typically suited for executing algorithms like Shor’s, but researchers have been experimenting with ways to adapt problems like integer factorization to the quantum annealing model. The Chinese team’s work involved factoring larger numbers than previously achieved with quantum annealers, reaching up to 50 bits using a hybrid quantum-classical algorithm. While this is at least a noteworthy advance in the field, it still falls far short of posing a real threat to RSA encryption, which typically involves key sizes of 2048 bits or more.
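One common way to cast factoring as an annealing problem — a sketch of the general idea, not the specific algorithm in the Chinese paper — is to encode candidate factors in binary and minimize a cost function such as (N − p·q)², which is zero exactly at a valid factorization. The toy Python below simply enumerates that energy landscape classically for a small semiprime:

N = 143                      # toy semiprime: 11 * 13
bits = 4                     # bit-width of each candidate factor

# Enumerate the "energy landscape" over odd candidates and find its
# ground state; an annealer would search this space physically.
best = min(
    ((N - p * q) ** 2, p, q)
    for p in range(3, 2 ** bits, 2)
    for q in range(3, 2 ** bits, 2)
)
energy, p, q = best
print(f"minimum energy {energy} at p = {p}, q = {q}")  # 0 at 11 * 13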

The gulf between 50-bit integers and the 2048-bit numbers commonly used in modern cryptographic systems is enormous, with the complexity of factoring increasing exponentially as the key size grows, according to Forbes. In this case, the difference in scale between the Chinese achievement and the size of practical RSA keys means there is no immediate danger to secure communications.
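For perspective, factoring a 50-bit semiprime is trivial for a classical computer. The hypothetical example below uses the sympy library to build and factor one; on ordinary hardware this completes in milliseconds:

# Factoring a 50-bit number classically. The modulus is a randomly
# generated, hypothetical example; requires the sympy package.
import time
from sympy import factorint, randprime

p = randprime(2**24, 2**25)   # ~25-bit prime
q = randprime(2**24, 2**25)   # ~25-bit prime
n = p * q                     # ~50-bit semiprime

start = time.perf_counter()
factors = factorint(n)        # classical factorization
elapsed = time.perf_counter() - start
print(f"{n} = {factors}, factored in {elapsed:.3f} s")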

Quantum-Classical Innovation, Incremental Advances, But Not a Paradigm Shift

Letting the dust settle a bit, the Chinese paper represents an incremental step forward in the fields of quantum computing and cryptography, rather than a leap that could compromise modern encryption systems.

“The advances are valuable contributions to the ongoing research in quantum annealing and cryptography,” Dr. Erik Garcell, Head of Technical Marketing at Classiq, a quantum algorithm design company, told Forbes. “But they do not represent a fundamental breakthrough that threatens contemporary encryption.”

What the research conducted at Shanghai University’s School of Communication and Information Engineering does demonstrate are innovative techniques for optimizing quantum annealing’s approach to factoring integers. The team’s improvements in algorithmic efficiency and qubit usage show the potential of quantum annealing to contribute to cryptographic research. However, this does not translate to a method capable of breaking RSA encryption as it is used in real-world applications today.

The work also highlights that the methods described rely on a hybrid approach, where classical computation still plays a significant role. This hybrid approach is yet more evidence to add to the general understanding that quantum-classical hybrid methods will yield advantages in cryptographic and — most likely — commercial activities.

While not an immediate threat, the advance also underscores the importance of the work, already under way, to secure data for a post-quantum world, the Forbes article concludes, adding: “In the meantime, researchers continue to explore quantum-resistant cryptographic algorithms, known as post-quantum cryptography. These algorithms are designed to be secure against both classical and quantum attacks, ensuring the longevity of digital security in a future where powerful quantum computers may exist.”

34.4 Dark Matter and Closure – College Physics


Summary

  • Discuss the existence of dark matter.
  • Explain neutrino oscillations and their consequences.

One of the most exciting problems in physics today is the fact that there is far more matter in the universe than we can see. The motion of stars in galaxies and the motion of galaxies in clusters imply that there is about 10 times as much mass as in the luminous objects we can see. The indirectly observed non-luminous matter is called dark matter. Why is dark matter a problem? For one thing, we do not know what it is. It may well be 90% of all matter in the universe, yet there is a possibility that it is of a completely unknown form—a stunning discovery if verified. Dark matter has implications for particle physics. It may be possible that neutrinos actually have small masses or that there are completely unknown types of particles. Dark matter also has implications for cosmology, since there may be enough dark matter to stop the expansion of the universe. That is another problem related to dark matter—we do not know how much there is. We keep finding evidence for more matter in the universe, and we have an idea of how much it would take to eventually stop the expansion of the universe, but whether there is enough is still unknown.

The first clues that there is more matter than meets the eye came from the Swiss-born American astronomer Fritz Zwicky in the 1930s; some initial work was also done by the American astronomer Vera Rubin. Zwicky measured the velocities of stars orbiting the galaxy, using the relativistic Doppler shift of their spectra (see Figure 1(a)). He found that velocity varied with distance from the center of the galaxy, as graphed in Figure 1(b). If the mass of the galaxy was concentrated in its center, as are its luminous stars, the velocities should decrease as the square root of the distance from the center. Instead, the velocity curve is almost flat, implying that there is a tremendous amount of matter in the galactic halo. Although not immediately recognized for its significance, such measurements have now been made for many galaxies, with similar results. Further, studies of galactic clusters have also indicated that galaxies have a mass distribution greater than that obtained from their brightness (proportional to the number of stars), which also extends into large halos surrounding the luminous parts of galaxies. Observations of other EM wavelengths, such as radio waves and X rays, have similarly confirmed the existence of dark matter. Take, for example, X rays in the relatively dark space between galaxies, which indicates the presence of previously unobserved hot, ionized gas (see Figure 1(c)).
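This expectation follows directly from Newton's laws. For a star of mass [latex]{m}[/latex] in a circular orbit of radius [latex]{r}[/latex] about a central mass [latex]{M}[/latex], gravity supplies the centripetal force:

[latex]{\frac{GMm}{r^2} = \frac{mv^2}{r}, \;\; \text{so that} \;\; v = \sqrt{\frac{GM}{r}}.}[/latex]

A rotation curve that stays flat rather than falling as [latex]{1/ \sqrt{r}}[/latex] therefore requires the enclosed mass [latex]{M(r)}[/latex] to grow in proportion to [latex]{r}[/latex] — the signature of an extended dark matter halo.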

Is the universe open or closed? That is, will the universe expand forever or will it stop, perhaps to contract? This, until recently, was a question of whether there is enough gravitation to stop the expansion of the universe. In the past few years, it has become a question of the combination of gravitation and what is called the cosmological constant. The cosmological constant was invented by Einstein to prohibit the expansion or contraction of the universe. At the time he developed general relativity, Einstein considered that an illogical possibility. The cosmological constant was discarded after Hubble discovered the expansion, but has been re-invoked in recent years.

Gravitational attraction between galaxies is slowing the expansion of the universe, but the amount of slowing down is not known directly. In fact, the cosmological constant can counteract gravity’s effect. As recent measurements indicate, the universe is expanding faster now than in the past—perhaps a “modern inflationary era” in which the dark energy is thought to be causing the expansion of the present-day universe to accelerate. If the expansion rate were affected by gravity alone, we should be able to see that the expansion rate between distant galaxies was once greater than it is now. However, measurements show it was less than now. We can, however, calculate the amount of slowing based on the average density of matter we observe directly. Here we have a definite answer—there is far less visible matter than needed to stop expansion. The critical density [latex]{\rho _c}[/latex] is defined to be the density needed to just halt universal expansion in a universe with no cosmological constant. It is estimated to be about

[latex]{\rho _c \approx 10^{-26} \;\text{kg/m}^3}.[/latex]

However, this estimate of [latex]{\rho _c}[/latex] is only good to about a factor of two, due to uncertainties in the expansion rate of the universe. The critical density is equivalent to an average of only a few nucleons per cubic meter, remarkably small and indicative of how truly empty intergalactic space is. Luminous matter seems to account for roughly [latex]{0.5 \% }[/latex] to [latex]{2 \% }[/latex] of the critical density, far less than that needed for closure. Taking into account the amount of dark matter we detect indirectly and all other types of indirectly observed normal matter, there is only [latex]{10 \% }[/latex] to [latex]{40 \% }[/latex] of what is needed for closure. If we are able to refine the measurements of expansion rates now and in the past, we will have our answer regarding the curvature of space and we will determine a value for the cosmological constant to justify this observation. Finally, the most recent measurements of the CMBR have implications for the cosmological constant, so it is not simply a device concocted for a single purpose.
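The figure of a few nucleons per cubic meter follows directly by dividing the critical density by the nucleon (proton) mass:

[latex]{n = \frac{\rho _c}{m_p} \approx \frac{10^{-26} \;\text{kg/m}^3}{1.67 \times 10^{-27} \;\text{kg}} \approx 6 \;\text{m}^{-3}.}[/latex]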

After the recent experimental discovery of the cosmological constant, most researchers feel that the universe should be just barely open. Since matter can be thought to curve the space around it, we call an open universe negatively curved. This means that you can in principle travel an unlimited distance in any direction. A universe that is closed is called positively curved. This means that if you travel far enough in any direction, you will return to your starting point, analogous to circumnavigating the Earth. In between these two is a flat (zero curvature) universe. The recent discovery of the cosmological constant has shown the universe is very close to flat, and will expand forever. Why do theorists feel the universe is flat? Flatness is a part of the inflationary scenario that helps explain the flatness of the microwave background. In fact, since general relativity implies that matter creates the space in which it exists, there is a special symmetry to a flat universe.

Figure 1. Evidence for dark matter: (a) We can measure the velocities of stars relative to their galaxies by observing the Doppler shift in emitted light, usually using the hydrogen spectrum. These measurements indicate the rotation of a spiral galaxy. (b) A graph of velocity versus distance from the galactic center shows that the velocity does not decrease as it would if the matter were concentrated in luminous stars. The flatness of the curve implies a massive galactic halo of dark matter extending beyond the visible stars. (c) This is a computer-generated image of X rays from a galactic cluster. The X rays indicate the presence of otherwise unseen hot clouds of ionized gas in the regions of space previously considered more empty. (credit: NASA, ESA, CXC, M. Bradac (University of California, Santa Barbara), and S. Allen (Stanford University))

There is no doubt that dark matter exists, but its form and the amount in existence are two facts that are still being studied vigorously. As always, we seek to explain new observations in terms of known principles. However, as more discoveries are made, it is becoming more and more difficult to explain dark matter as a known type of matter.

One of the possibilities for normal matter is being explored using the Hubble Space Telescope and employing the lensing effect of gravity on light (see Figure 2). Stars glow because of nuclear fusion in them, but planets are visible primarily by reflected light. Jupiter, for example, is too small to ignite fusion in its core and become a star, but we can see sunlight reflected from it, since we are relatively close. If Jupiter orbited another star, we would not be able to see it directly. The question of how many planets or other bodies smaller than about 1/1000 the mass of the Sun exist is open. If such bodies pass between us and a star, they will not block the star’s light, being too small, but they will form a gravitational lens, as discussed in Chapter 34.2 General Relativity and Quantum Gravity.

In a process called microlensing, light from the star is focused and the star appears to brighten in a characteristic manner. Searches for dark matter in this form are particularly interested in galactic halos because of the huge amount of mass that seems to be there. Such microlensing objects are thus called massive compact halo objects, or MACHOs. To date, a few MACHOs have been observed, but not predominantly in galactic halos, nor in the numbers needed to explain dark matter.

MACHOs are among the most conventional of unseen objects proposed to explain dark matter. Others being actively pursued are red dwarfs, which are small dim stars, but too few have been seen so far, even with the Hubble Telescope, to be of significance. Old remnants of stars called white dwarfs are also under consideration, since they contain about a solar mass, but are as small as the Earth and may dim to the point that we ordinarily do not observe them. While white dwarfs are known, old dim ones are not. Yet another possibility is the existence of large numbers of smaller than stellar mass black holes left from the Big Bang—here evidence is entirely absent.

There is a very real possibility that dark matter is composed of the known neutrinos, which may have small, but finite, masses. As discussed earlier, neutrinos are thought to be massless, but we only have upper limits on their masses, rather than knowing they are exactly zero. So far, these upper limits come from difficult measurements of total energy emitted in the decays and reactions in which neutrinos are involved. There is an amusing possibility of proving that neutrinos have mass in a completely different way.

We have noted in Chapter 33.4 Particles, Patterns, and Conservation Laws that there are three flavors of neutrinos ([latex]{\nu _e}[/latex], [latex]{\nu _{\mu}}[/latex], and [latex]{\nu _{\tau}}[/latex]) and that the weak interaction could change quark flavor. It should also change neutrino flavor—that is, any type of neutrino could change spontaneously into any other, a process called neutrino oscillations. However, this can occur only if neutrinos have a mass. Why? Crudely, because if neutrinos are massless, they must travel at the speed of light and time will not pass for them, so that they cannot change without an interaction. In 1999, results began to be published containing convincing evidence that neutrino oscillations do occur. Using the Super-Kamiokande detector in Japan, the oscillations have been observed and are being verified and further explored at present at the same facility and others.
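For two neutrino flavors, the oscillation probability takes the standard form

[latex]{P(\nu _{\mu} \rightarrow \nu _e) = \sin ^2 (2 \theta ) \; \sin ^2 \left( \frac{1.27 \, \Delta m^2 L}{E} \right),}[/latex]

where [latex]{\theta}[/latex] is the mixing angle, [latex]{\Delta m^2}[/latex] is the difference of the squared masses in [latex]{\text{eV}^2}[/latex], [latex]{L}[/latex] is the distance traveled in km, and [latex]{E}[/latex] is the neutrino energy in GeV. The probability vanishes when [latex]{\Delta m^2 = 0}[/latex], so observing oscillations directly implies that at least one neutrino mass is nonzero.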

Neutrino oscillations may also explain the low number of observed solar neutrinos. Detectors for observing solar neutrinos are specifically designed to detect electron neutrinos [latex]{\nu _e}[/latex] produced in huge numbers by fusion in the Sun. A large fraction of electron neutrinos [latex]{\nu _e}[/latex] may be changing flavor to muon neutrinos [latex]{\nu _{\mu}}[/latex] on their way out of the Sun, possibly enhanced by specific interactions, reducing the flux of electron neutrinos to observed levels. There is also a discrepancy in observations of neutrinos produced in cosmic ray showers. While these showers of radiation produced by extremely energetic cosmic rays should contain twice as many [latex]{\nu _{\mu}}[/latex] s as [latex]{\nu _e}[/latex] s, their numbers are nearly equal. This may be explained by neutrino oscillations from muon flavor to electron flavor. Massive neutrinos are a particularly appealing possibility for explaining dark matter, since their existence is consistent with a large body of known information and explains more than dark matter. The question is not settled at this writing.

The most radical proposal to explain dark matter is that it consists of previously unknown leptons (sometimes obtusely referred to as non-baryonic matter). These are called weakly interacting massive particles, or WIMPs, and would also be chargeless, thus interacting negligibly with normal matter, except through gravitation. One proposed group of WIMPs would have masses several orders of magnitude greater than nucleons and are sometimes called neutralinos. Others are called axions and would have masses about [latex]{10^{-10}}[/latex] that of an electron mass. Both neutralinos and axions would be gravitationally attached to galaxies, but because they are chargeless and only feel the weak force, they would be in a halo rather than interact and coalesce into spirals, and so on, like normal matter (see Figure 3).

Figure 2. The Hubble Space Telescope is producing exciting data with its corrected optics and with the absence of atmospheric distortion. It has observed some MACHOs, disks of material around stars thought to precede planet formation, black hole candidates, and collisions of comets with Jupiter. (credit: NASA (crew of STS-125))
Figure 3. Dark matter may shepherd normal matter gravitationally in space, as this stream moves the leaves. Dark matter may be invisible and even move through the normal matter, as neutrinos penetrate us without small-scale effect. (credit: Shinichi Sugiyama)

Some particle theorists have built WIMPs into their unified force theories and into the inflationary scenario of the evolution of the universe so popular today. These particles would have been produced in just the correct numbers to make the universe flat, shortly after the Big Bang. The proposal is radical in the sense that it invokes entirely new forms of matter, in fact two entirely new forms, in order to explain dark matter and other phenomena. WIMPs have the extra burden of automatically being very difficult to observe directly. This is somewhat analogous to quark confinement, which guarantees that quarks are there, but they can never be seen directly. One of the primary goals of the LHC at CERN, however, is to produce and detect WIMPs. At any rate, before WIMPs are accepted as the best explanation, all other possibilities utilizing known phenomena will have to be shown inferior. Should that occur, we will be in the unanticipated position of admitting that, to date, all we know is only 10% of what exists. A far cry from the days when people firmly believed themselves to be not only the center of the universe, but also the reason for its existence.

  • Dark matter is non-luminous matter detected in and around galaxies and galactic clusters.
  • It may be 10 times the mass of the luminous matter in the universe, and its amount may determine whether the universe is open or closed (expands forever or eventually stops).
  • The determining factor is the critical density of the universe and the cosmological constant, a theoretical construct intimately related to the expansion and closure of the universe.
  • The critical density ρc is the density needed to just halt universal expansion. It is estimated to be approximately 10–26 kg/m3.
  • An open universe is negatively curved, a closed universe is positively curved, whereas a universe with exactly the critical density is flat.
  • Dark matter’s composition is a major mystery, but it may be due to the suspected mass of neutrinos or a completely unknown type of leptonic matter.
  • If neutrinos have mass, they will change families, a process known as neutrino oscillations, for which there is growing evidence.

Conceptual Questions

1: Discuss the possibility that star velocities at the edges of galaxies being greater than expected is due to unknown properties of gravity rather than to the existence of dark matter. Would this mean, for example, that gravity is greater or smaller than expected at large distances? Are there other tests that could be made of gravity at large distances, such as observing the motions of neighboring galaxies?

2: How does relativistic time dilation prohibit neutrino oscillations if they are massless?

3: If neutrino oscillations do occur, will they violate conservation of the various lepton family numbers ([latex]{L_e}[/latex] , [latex]{L_{\mu}}[/latex] , and [latex]{L_{\tau}}[/latex])? Will neutrino oscillations violate conservation of the total number of leptons?

4: Lacking direct evidence of WIMPs as dark matter, why must we eliminate all other possible explanations based on the known forms of matter before we invoke their existence?

Problems & Exercises

1: If the dark matter in the Milky Way were composed entirely of MACHOs (evidence shows it is not), approximately how many would there have to be? Assume the average mass of a MACHO is 1/1000 that of the Sun, and that dark matter has a mass 10 times that of the luminous Milky Way galaxy with its [latex]{10^{11}}[/latex] stars of average mass 1.5 times the Sun’s mass.

2: The critical mass density needed to just halt the expansion of the universe is approximately [latex]{10^{-26} \;\text{kg/m}^3}[/latex].

(a) Convert this to [latex]{\text{eV}/c^2 \cdot \text{m}^3}[/latex].

(b) Find the number of neutrinos per cubic meter needed to close the universe if their average mass is [latex]{7 \;\text{eV}/c^2}[/latex] and they have negligible kinetic energies.

3: Assume the average density of the universe is 0.1 of the critical density needed for closure. What is the average number of protons per cubic meter, assuming the universe is composed mostly of hydrogen?

4: To get an idea of how empty deep space is on the average, perform the following calculations:

(a) Find the volume our Sun would occupy if it had an average density equal to the critical density of [latex]{10^{-26} \;\text{kg/m}^3}[/latex] thought necessary to halt the expansion of the universe.

(b) Find the radius of a sphere of this volume in light years.

(c) What would this radius be if the density were that of luminous matter, which is approximately 5% that of the critical density?

(d) Compare the radius found in part (c) with the 4-ly average separation of stars in the arms of the Milky Way.

Glossary

axions
a type of WIMP with masses about 10−10 of an electron mass
cosmological constant
a theoretical construct intimately related to the expansion and closure of the universe
critical density
the density of matter needed to just halt universal expansion
dark matter
indirectly observed non-luminous matter
flat (zero curvature) universe
a universe that is infinite but not curved
microlensing
a process in which light from a distant star is focused and the star appears to brighten in a characteristic manner, when a small body (smaller than about 1/1000 the mass of the Sun) passes between us and the star
MACHOs
massive compact halo objects; microlensing objects of huge mass
neutrino oscillations
a process in which any type of neutrino could change spontaneously into any other
neutralinos
a type of WIMP with masses several orders of magnitude greater than nucleon masses
negatively curved
an open universe that expands forever
positively curved
a universe that is closed and eventually contracts
WIMPs
weakly interacting massive particles; chargeless leptons (non-baryonic matter) interacting negligibly with normal matter

Solutions

Problems & Exercises

1: [latex]{1.5 \times 10^{15}}[/latex]

3: [latex]{0.6 \;\text{m}^{-3}}[/latex]
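As a quick cross-check, the short Python sketch below reproduces both printed answers from the problem statements' assumptions; the solar-mass and proton-mass values are standard constants rather than numbers from this chapter:

# Cross-check of the printed answers. The solar-mass and proton-mass
# values are standard constants, not numbers from this chapter.
M_sun = 1.989e30              # kg
m_p = 1.67e-27                # kg, proton (hydrogen) mass
rho_c = 1e-26                 # kg/m^3, critical density

# Problem 1: dark matter = 10 x luminous mass; luminous mass =
# 1e11 stars of 1.5 solar masses; each MACHO = 1/1000 solar mass.
dark_mass = 10 * 1e11 * 1.5 * M_sun
n_machos = dark_mass / (1e-3 * M_sun)
print(f"MACHOs needed: {n_machos:.1e}")          # ~1.5e15

# Problem 3: average density = 0.1 rho_c, mostly hydrogen.
protons_per_m3 = 0.1 * rho_c / m_p
print(f"protons per cubic meter: {protons_per_m3:.1f}")  # ~0.6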

 

Silk-on-graphene films line up for next-generation bioelectronics – Physics World


September/October 2024 – test post


Press Release: Wasatch Photonics Celebrates 20 Years of Innovation


LOGAN, UT – December 15, 2022 – Wasatch Photonics is proud to celebrate its 20th anniversary of helping customers develop new applications of light. Founded in 2002, the company designs and manufactures high efficiency transmission gratings for spectroscopy, optical coherence tomography (OCT), astronomy, and pulse compression, as well as spectrometers for OCT, Raman, and other spectroscopic techniques.

In 20 years, Wasatch Photonics has grown from a small shop started by a group of entrepreneur-innovators in Logan, UT to commercialize disruptive volume phase holographic (VPH) grating technology, establishing three sites across the US and a global distributor network. Along the way, we’ve patented several unique VPH grating technologies and developed a line of highly configurable modular spectrometers for OCT and Raman with industry-leading sensitivity and speed. Both the Logan, UT and Morrisville, NC manufacturing facilities are ISO 9001:2015 certified and are under upgrade or expansion.

More than anything, we are proud of enabling our customers to achieve their goals in research and product development, from large-format gratings used in telescopes across the globe to Raman-based instruments that make our world a safer, more secure place. Wasatch Photonics’ customers have taken our gratings to Mars, to the space station, and into homes for health monitoring, and they’ve positioned our spectrometers at the forefront of medicine through new devices based on Raman or made more powerful by OCT imaging.

Reflecting on the 20-year anniversary, Board Chairman Lindsay Burton said, “Wasatch Photonics has a genuine positive impact on the world – I get such a thrill to hear our team talk about the applications. It’s amazing to be involved in cutting edge technology that couldn’t have been done 10-20 years ago. We are doing some truly meaningful work, and it’s wonderful that so many of our team share that same motivation and vision. That makes our efforts collaborative, and it shows.”

Since its early beginnings in Utah, Wasatch Photonics has worked with customers to push the limits of grating-based photonics, through relationships based on shared expertise and trust. “The company still has a lot of its original innovative spirit, in all aspects – from engineering to marketing,” says Doug Miller, one of the founders, “but now we’re a company that can design to a high spec, build it, and deploy it reliably.” No longer a start-up but a flourishing company, we’ve become like so many of our OEM customers – big enough to have an impact in the industry, but still small enough to care about each customer.

“As we look forward, we are focused on keeping our founding principle of innovation alive and using it to drive the most promising applications of Raman, OCT, and VPH gratings,” says CEO Dr. David Creasey. “We continue to refine, optimize, and miniaturize our product lines to enable those next-generation devices in a way that both accelerates their development and is scalable to volume for OEMs. There’s never been a more exciting time for the company.”

Want to learn more about Wasatch Photonics’ origins and growth? Watch this space for our story, or contact us.

About Wasatch Photonics

Founded in 2002, Wasatch Photonics designs, manufactures, and markets high efficiency volume phase holographic (VPH) transmission gratings for pulse compression, astronomy, spectral imaging, and optical coherence tomography (OCT), as well as spectrometers for OCT, Raman, and other spectral techniques. Our high-performance gratings, spectrometers, and systems are used in research labs around the world. We also work extensively with OEM partners spanning a diverse range of industries, including defense and security, industrial lasers and materials processing, sensing, analytical instrumentation, medical, and energy markets.      



Exploring Polyacrylamide Gel Electrophoresis (PAGE)


Last Updated on July 2, 2024 by Zinab Hasssan

Polyacrylamide Gel Electrophoresis (PAGE) is a fundamental technique in the realm of molecular biology, enabling the separation and analysis of macromolecules such as DNA, RNA, and proteins based on their size and charge. As a cornerstone of scientific research and clinical diagnostics, understanding the intricacies of this method is essential.

 In this blog, we’ll delve into the world of Polyacrylamide Gel Electrophoresis, exploring common questions and providing insightful answers to demystify this powerful tool.

Polyacrylamide Gel Electrophoresis (PAGE) is an electrophoretic technique that is used mainly for protein analysis by separating proteins, nucleic acid fragments (smaller than 100 bp), and polypeptide chains depending on their electrophoretic mobility. It is widely used in molecular biology, biochemistry, genetics, forensic chemistry, and biotechnology.


SDS-PAGE (SDS-polyacrylamide gel electrophoresis) is an electrophoretic technique that separates polypeptide chains according to their molecular weights (Mr). This method uses polyacrylamide gel containing sodium dodecyl sulfate (SDS) which is added to the system in order to provide denaturing conditions.

You can tell from its name that the technique utilizes polyacrylamide gel containing sodium dodecyl sulfate (SDS). SDS in the sample and the gel cancels the effect of the intrinsic electrical charge of the sample proteins. All proteins acquire a negatively charged, rod-like structure, so separation becomes largely dependent on the molecular weight of the sample proteins.

 The gel used in SDS PAGE can be divided into stacking gel and separating gel. Stacking gel (acrylamide 5%) is poured on top of the separating gel (after solidification) and a gel comb is inserted in the stacking gel. The acrylamide percentage is chosen in accordance with the size of target proteins in the sample.

How does polyacrylamide gel electrophoresis (PAGE) work?

The main principle of Polyacrylamide Gel Electrophoresis (PAGE) is to separate molecules (proteins, polypeptides, or nucleic acids) by passing them through the pores of a polyacrylamide gel using an electric current. To achieve this, an acrylamide–bisacrylamide mix is polymerized into polyacrylamide by the addition of ammonium persulfate (APS).

The reaction, which is catalyzed by tetramethylethylenediamine (TEMED), forms a net-like structure with pores through which analytes can move. The higher the percentage of total acrylamide included in the gel, the smaller the pore size, hence the smaller the proteins that will be able to pass through. The ratio of acrylamide to bisacrylamide will also impact pore size but this is often kept constant. Smaller pore sizes also reduce the speed at which small proteins are able to move through the gel, improving their resolution and preventing them from running off into the buffer rapidly when current is applied.

Source: Polyacrylamide gel electrophoresis, how it works, technique variants and its applications, Analysis & Separations from Technology Networks. Available at: https://www.technologynetworks.com/analysis/articles/polyacrylamide-gel-electrophoresis-how-it-works-technique-variants-and-its-applications-359100#:~:text=1)%20Samples%20are%20prepared%20for,gels%20are%20stained%20and%20visualized (Accessed: 01 July 2024).
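A practical upshot of the pore-size discussion above is choosing the acrylamide percentage to match the target protein size. The Python helper below encodes approximate pairings quoted in common protocols; the exact ranges vary between sources and should be treated as assumptions rather than values from this post:

# Approximate pairings of separating-gel acrylamide percentage and
# the protein size range it resolves well. These ranges are typical
# protocol guidance and vary between sources; treat as assumptions.
GEL_GUIDE = [
    # (% acrylamide, min kDa, max kDa)
    (8, 25, 200),
    (10, 15, 100),
    (12, 10, 70),
    (15, 3, 45),
]

def suggest_percentage(target_kda: float) -> int:
    """Return the highest listed gel percentage whose range covers the
    target protein size; smaller pores resolve smaller proteins."""
    for pct, lo, hi in sorted(GEL_GUIDE, reverse=True):
        if lo <= target_kda <= hi:
            return pct
    raise ValueError("target size outside the tabulated ranges")

print(suggest_percentage(55))   # -> 12
print(suggest_percentage(150))  # -> 8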

SDS-PAGE provides information on the mass, charge, purity, or presence of a protein by separating polypeptide chains according to their molecular weights.

In PraxiLabs, we provide several virtual labs simulations for protein electrophoresis such as:

Learn the polyacrylamide gel electrophoresis protocol and practice proper sample preparation for SDS-PAGE (SDS-polyacrylamide gel electrophoresis).

Learn how to extract the cellular proteome from a sample, followed by its isoelectric focusing and SDS-polyacrylamide gel electrophoresis.

Understand how to detect a specific protein in a sample using electroblotting of proteins into nitrocellulose membrane.

Learn how to separate and identify DNA or RNA molecules by size, using an electric current.

 Subscribe and Try them Now!

We use PAGE for protein electrophoresis to obtain high resolution separation of complex mixtures of proteins by separating polypeptide chains according to their molecular weights.

Applications of Protein Gel Electrophoresis:

Once proteins have been separated by gel electrophoresis, they can be utilized for a number of downstream applications including:

  • Determining the size and pI (isoelectric point) of proteins.
  • Enzymatic assays.
  • Further purification.
  • Transfer to a membrane for immunological detection (immunoblotting or western blotting).
  • Elution and digestion for mass spectrometric analysis.
  • Serum protein electrophoresis is conducted when a patient has an abnormal result on a total protein or albumin blood test or has symptoms of diseases that are associated with abnormal protein production, such as multiple myeloma or multiple sclerosis.

Ready to revolutionize your biology education? Dive into PraxiLabs’ virtual labs and gain unlimited access to cutting-edge experiments – sign up for free today!

 

Discover Polyacrylamide Gel Electrophoresis Virtual Labs from PraxiLabs

 By the end of this simulation students should be able to:

  •  Prepare polyacrylamide gel properly according to the size of target proteins.
  •  Interpret the results of a successful protein electrophoresis run.
  •  Learn polyacrylamide gel electrophoresis protocol.
  • Practice proper sample preparation for SDS-PAGE (SDS-polyacrylamide gel electrophoresis).

 Steps of Polyacrylamide Gel Electrophoresis Simulation


By applying the following steps, you will perform the Polyacrylamide Gel Electrophoresis simulation from PraxiLabs successfully:

First: Gel and buffer preparation

Second: Steps of gel casting and sample application

1. Prepare the Casting Gel Cassettes:

Clean the glass plates with distilled water and dry them.

Clamp two glass plates in the casting frames on the casting stands.

Pipette distilled water into the assembled cassette and set it aside until the gel is prepared, to check for leakage.

2. Prepare the Separating Gel:

To prepare 10 ml of a 12% polyacrylamide separating gel:

Remove the distilled water by turning the cassette upside down on filter paper.

Pipette the appropriate amount of separating gel solution into the gap between the glass plates (slowly and thoroughly), leaving about 1 cm of space at the top.

To level the top of the separating gel, fill the gap with distilled water until it overflows.

Wait 20-30 min for the gel to polymerize.

Discard the distilled water by turning the cassette upside down on filter paper.
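The component volumes for the separating gel are not listed in the steps above (in the original post they likely appeared in an image). For reference, the sketch below encodes a typical Laemmli-style recipe for 10 ml of 12% separating gel; treat the volumes as assumptions and defer to your lab's validated protocol:

# Typical Laemmli-style recipe for 10 ml of a 12% separating gel,
# included as an illustrative assumption (the post's own recipe is
# not shown). Always defer to your lab's validated protocol.
SEPARATING_GEL_12PCT_10ML = {
    "distilled water": 3.3,           # ml
    "30% acrylamide/bis mix": 4.0,    # ml
    "1.5 M Tris-HCl, pH 8.8": 2.5,    # ml
    "10% SDS": 0.1,                   # ml
    "10% ammonium persulfate": 0.1,   # ml, add just before casting
    "TEMED": 0.004,                   # ml, initiates polymerization
}

def scale(recipe: dict, volume_ml: float, base_ml: float = 10.0) -> dict:
    """Linearly scale a gel recipe to a different total volume."""
    factor = volume_ml / base_ml
    return {name: round(vol * factor, 4) for name, vol in recipe.items()}

print(scale(SEPARATING_GEL_12PCT_10ML, 15.0))  # e.g. a thicker gel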

3. Prepare the Stacking Gel:

For the 12% gel system above, use 5 ml of stacking gel:

Pipette the stacking gel solution on top of the separating gel until it overflows.

Insert the well-forming comb without trapping air under the teeth. Do not move it after insertion.

Wait 20-30 min for the gel to polymerize, and make sure the stacking gel has set completely.

Take out the comb.

Take the glass plates out of the casting frame and set them in the cell buffer unit.

Pour the running buffer (electrophoresis buffer) into the inner chamber and keep pouring after overflow until the buffer surface reaches the mark on the outer chamber.

4. Prepare samples:

Add 5 µl of sample to a tube.

Add 3 µl of loading buffer.

Vortex for 10 seconds.

Heat at 95 ˚C for 3 min.

Using the pipette, load the prepared samples into the wells, making sure not to overflow them.

  1. Load 4 µl of protein marker into the first lane.
  2. Cover the top and connect the electrodes.
  3. Set to 120 volts for one hour and run the electrophoresis.
  4. Stop the SDS-PAGE run when the lowermost band of the protein marker almost reaches the bottom edge of the glass plate.

General Guidelines for SDS-PAGE (SDS- Polyacrylamide Gel Electrophoresis) Experiment

 


Request a Demo Now and Increase your Students’ Learning Retention and Engagement!

 

Draft:Lamidi: Difference between revisions – Wikipedia

From Wikipedia, the free encyclopedia

Nigerian given name

Lamidi
Gender Male
Language(s) Yoruba
Word/name Nigeria
Meaning The Yorùbá version of El-Hameed.
Region of origin South West, Nigeria

Lamidi is a Nigerian male given name and surname of Yoruba origin. It means “The Yorùbá version of El-Hameed.”[1]



Study finds preschool children form ‘social droplets’ when moving around the classroom – Physics World


Uniting Macroscopic and Microscopic Worlds


Question

The nature of dark matter (DM) remains elusive, despite extensive efforts to detect it. Recent gravitational wave observations have been linked to primordial black holes (PBHs) from the early Universe, which may contribute to DM. There’s a need for general methods to distinguish different DM types, which can include a mixture of macroscopic black holes and microscopic particles.

Findings

This study demonstrates that “chirping” gravitational waves generated by mergers of astronomical compact objects are affected by diffractive gravitational lensing from the dark matter objects they encounter before reaching the Solar System. These lensing effects allow the origins of the encountered objects to be investigated. The innovative approach enables definitive, event-by-event tests of whether black holes are surrounded by particle DM halos.

Meaning

This study is an important advance because it establishes a general and definitive search method for DM in circumstances where it is composed of multiple macroscopic and microscopic components, including PBHs and particles. The novel approach demonstrated in this research enables exploration of DM, a fundamental component of the Universe, in a way that has not been possible before. Importantly, this approach also has the potential to reveal new insights about the origin of black holes themselves.

Overview

Dark matter (DM), an elusive and invisible substance, constitutes ~85% of the Universe’s matter content. DM plays a key role in the Universe, including holding galaxies such as our own Milky Way together. Despite its profound significance and decades of efforts to detect its non-gravitational manifestations, the underlying nature of DM remains one of the greatest mysteries in science. Previous DM search methods have typically focused on DM consisting of a specific type of new particle and its interactions, calling for more general and extensive searches especially for scenarios where DM is composed of macroscopic and microscopic objects.

Since the groundbreaking discovery of gravitational waves from a merger of two stellar-mass black holes in 2015 by the international LIGO experiment, which led to the 2017 Nobel Prize, dozens of such events have been detected. Intriguingly, the gravitational wave observations from merging black holes indicate that there are many black holes in the Universe and have sparked suggestions of an intimate link to DM if such black holes have a non-astrophysical origin. In particular, primordial black holes (PBHs), theorized for decades by cosmologists, including the late Stephen Hawking, to have formed in large numbers a few seconds after the Big Bang, are intriguing DM candidates and are dramatically distinct from the new particles that have often been the focus of traditional DM searches. However, finding a way to detect and identify these PBHs and distinguish them from textbook astrophysical black holes has been a major challenge. On the other hand, it is difficult to explain DM by PBHs with celestial masses alone, and DM may well be composed of multiple components, including new particles. In such a scenario, the PBHs are expected to become “dressed” after their creation, gradually enveloping themselves in halos of DM particles. However, there has been no method to test this scenario definitively.

In this new study, an international team of researchers from Japan and Korea established an innovative approach to search for DM halos around PBHs. The method looks for unusual gravitational wave signal patterns produced by diffractive gravitational lensing: as gravitational waves emitted by a distant merging compact object system, such as two astrophysical stellar-mass black holes, traverse the Universe, DM objects located along the line of sight imprint lensing effects on them (see left panel in Figure 1).

The proposed method exploits unique frequency-dependent amplitude variations of gravitational waves generated during celestial mergers from gravitational lensing of DM objects. The effect is highly sensitive to the presence and distribution of the extended DM halos surrounding the encountered black holes. This study not only provides a way to detect PBHs, but also allows us to clearly distinguish between PBHs with and without surrounding DM halos (referred to as “dressed” and “bare” PBHs, respectively) (see right panel in Figure 1). This distinction is crucial since various detection approaches put forth thus far have failed to identify observables that make a non-ambiguous difference between black holes with and without halos. This allows not only directly probing the theory that PBHs coexist with new particles as DM components, but also can establish definitive support for it.
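To get a feel for the frequencies involved, the gravitational-lensing literature characterizes wave-optics effects with the dimensionless parameter w = 8πGMf/c³, where M is the redshifted lens mass; diffraction dominates for w ≲ 1, while geometric optics holds for w ≫ 1. The Python sketch below is a back-of-the-envelope estimate in that standard framework, not a calculation from the paper:

# Rough estimate of where diffraction matters for gravitational-wave
# lensing by a point mass, using the standard lensing parameter
# w = 8*pi*G*M*f/c^3 (M = redshifted lens mass): diffraction is
# strong for w <~ 1, geometric optics holds for w >> 1.
import math

G = 6.674e-11                 # m^3 kg^-1 s^-2
c = 2.998e8                   # m/s
M_SUN = 1.989e30              # kg

def transition_frequency_hz(lens_mass_solar: float) -> float:
    """Frequency at which w ~ 1 for a point lens of the given mass."""
    M = lens_mass_solar * M_SUN
    return c**3 / (8 * math.pi * G * M)

for m in (10, 100, 1000):
    print(f"{m:5d} solar-mass lens: f ~ {transition_frequency_hz(m):.0f} Hz")
# A ~100 solar-mass lens puts the transition near 80 Hz, inside the
# LIGO band, so a chirping signal can sweep across the diffraction
# and geometric-optics regimes of the same lens.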


Figure: [Left] Illustration of the diffractive lensing of chirping gravitational waves originating from a binary merger by “dressed” PBHs in particle DM halos at cosmological distances. Diffractive lensing distorts gravitational wave wavefronts along a broad region of space, which sums up to produce frequency-dependent amplification of the wave, in contrast to the discrete images resulting from geometrical optics lensing. [Right] Frequency dependence of the lensing magnification and phase shift for typical assumed parameters. The characteristic amplification by a “dressed” PBH (solid) exhibits non-trivial frequency dependence in the low-frequency diffraction regime, allowing it to be distinguished from a bare PBH without a halo (dashed) as well as from un-lensed observations (straight dot-dashed). Regimes of geometric optics (GO), the transition regime beyond geometric optics (bGO) and weak diffraction (WD) are shown. Figures reproduced from the publication.

“Our novel approach paves the way for definitive discoveries of dark matter in upcoming observations should it be composed of macroscopic black holes and microscopic particles, as suggested by various theories. This is quite exciting. However, much remains to be done to comprehensively explore the landscape of dark matter.” says Volodymyr Takhistov, a principal investigator and associate professor at the International Center for Quantum-field Measurement Systems of the Universe and Particles (WPI-QUP) at KEK.

The findings offer a novel perspective on the composition of DM and establish new tools for exploring it when it is composed of a mixture of macroscopic and microscopic objects, including particles and ancient black holes. If confirmed experimentally, this could provide new insights about the key components of the cosmos, the conditions of the early Universe, and the nature of gravitational wave events detected by observatories such as LIGO, opening new pathways for fundamental discoveries.

This study was published in Physical Review Letters on September 5, 2024.

Research group

This project was conducted by Han-Gil Choi, a research fellow at the Institute for Basic Science (IBS) in Korea, Sunghoon Jung, an associate professor at the Seoul National University in Korea, Philip Lu, a postdoctoral researcher at the Seoul National University in Korea and Volodymyr Takhistov, a principal investigator and associate professor at the International Center for Quantum-field Measurement Systems for Studies of the Universe and Particles (QUP-WPI) at the High Energy Accelerator Research Organization (KEK) as well as a Visiting Scientist at Kavli IPMU, University of Tokyo.

Acknowledgement

This project was supported by the Institute for Basic Science (IBS) in Korea under the project code IBS-R018-D3 (H.G.C.), National Research Foundation of Korea Grant NRF2019R1C1C1010050 (S.J., P.L.), Japan Society for the Promotion of Science KAKENHI Grant No. 23K13109 (V.T.) as well as the World Premier International Research Center Initiative (WPI), MEXT, Japan (V.T.).

Published paper

Title: Coexistence Test of Primordial Black Holes and Particle Dark Matter from Diffractive Lensing
Authors: Han-Gil Choi, Sunghoon Jung, Philip Lu, Volodymyr Takhistov
Journal: Physical Review Letters
https://link.aps.org/doi/10.1103/PhysRevLett.133.101002
DOI: 10.1103/PhysRevLett.133.101002

Contact

International Center for Quantum-field Measurement Systems for Studies of the Universe and Particle (WPI-QUP)
e-mail: qup_pr@ml.post.kek.jp

High Energy Accelerator Research Organization (KEK)
e-mail: press@kek.jp

Related page

International Center for Quantum-field Measurement Systems for Studies of the Universe and Particles
