Wednesday, December 16, 2015

Scientists watch a planet being born for the first time in history

For the first time ever, scientists have been able to see planets as they are born. In photographs obtained with the Large Binocular Telescope and the Magellan Adaptive Optics System, astronomers watched as a ring of material coalesced into planets around a young star. The finding could lead to the discovery of other forming exoplanets and help answer how planets form and then evolve into solar systems such as ours.

Together, the telescope and optics system were able to photograph LkCa 15b, a gas-giant exoplanet forming around a young star about 450 light-years from Earth. The LkCa 15 system, according to Space.com, features a “disk of dust and gas” around a “sun-like” star that is just two million years old. The scientific team, led by Stephanie Sallum, a graduate student at the University of Arizona, used the Large Binocular Telescope, an observatory in southeastern Arizona that has two 27-foot-wide mirrors.
The scientists confirmed that one giant protoplanet (not quite a planet yet) existed and called it LkCa 15b. They were able to see it in hydrogen-alpha photons, a type of light emitted when “superheated material accretes onto a newly forming world.” Essentially, the new planets are surrounded by “feeder” material.
Another newborn planet, LkCa 15c, also sits inside the gap between the star and the dust ring, and it is possible that a third, LkCa 15d, is there as well. “We’re seeing sources in the clearing,” Sallum said. “This is the first time that we’ve been able to connect a forming planet to a gap in a protoplanetary disk.”

Google, NASA: Our Quantum Computer Is 100 Million Times Faster Than A Normal PC

But only for very specific optimisation problems.
“Two years ago Google and NASA went halfsies on a D-Wave quantum computer, mostly to find out whether there are actually any performance gains to be had when using quantum annealing instead of a conventional computer. Recently, Google and NASA received the latest D-Wave 2X quantum computer, which the company says has “over 1000 qubits.”
At an event yesterday at the NASA Ames Research Center, where the D-Wave computer is kept, Google and NASA announced their latest findings—and for highly specialised workloads, quantum annealing does appear to offer a truly sensational performance boost. For an optimisation problem involving 945 binary variables, the D-Wave 2X is up to 100 million times (10⁸) faster than a single-core classical (conventional) computer running the same problem.
Google and NASA also compared the D-Wave 2X’s quantum annealing against Quantum Monte Carlo, an algorithm that emulates quantum tunnelling on a conventional computer. Again, a speed-up of up to 10⁸ was seen in some cases.”

Hartmut Neven, the head of Google’s Quantum Artificial Intelligence lab, said these results are “intriguing and very encouraging” but that there’s still “more work ahead to turn quantum enhanced optimization into a practical technology.”
As always, it’s important to note that D-Wave’s computers are not capable of universal computing: they are only useful for a small number of very specific tasks—and Google, NASA, and others are currently trying to work out what those tasks might be. D-Wave’s claim of “over 1,000 qubits” is also unclear. In the past, several physical qubits were clustered to create a single computational qubit, and D-Wave doesn’t make that distinction clear.
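To make the benchmark concrete, the sketch below shows what “an optimisation problem involving binary variables” looks like and how a classical annealer attacks it. This is a minimal illustration in Python, not D-Wave’s hardware, Google’s benchmark code, or the actual 945-variable instance; the random couplings, problem size, and cooling schedule are all invented for the example.

```python
import math
import random

# Toy Ising-style cost function over n binary "spin" variables s[i] in {-1, +1},
# with random fields h[i] and sparse random couplings J[(i, j)]. The Google/NASA
# benchmark used a carefully chosen 945-variable instance; we use a small random
# one purely for illustration.
n = 50
random.seed(0)
h = [random.uniform(-1, 1) for _ in range(n)]
J = {(i, j): random.uniform(-1, 1)
     for i in range(n) for j in range(i + 1, n) if random.random() < 0.1}

def energy(s):
    """Cost (energy) of a spin configuration: lower is better."""
    return (sum(h[i] * s[i] for i in range(n))
            + sum(c * s[i] * s[j] for (i, j), c in J.items()))

# Classical simulated annealing: propose single-spin flips, always accept
# improvements, and accept uphill moves with a temperature-dependent
# probability. Quantum annealing replaces this thermal escape mechanism
# with quantum tunnelling through energy barriers.
s = [random.choice([-1, 1]) for _ in range(n)]
e = energy(s)
steps = 20000
for step in range(steps):
    temperature = max(0.01, 1.0 - step / steps)  # linear cooling schedule
    i = random.randrange(n)
    s[i] = -s[i]                    # propose flipping one spin
    e_new = energy(s)
    if e_new <= e or random.random() < math.exp((e - e_new) / temperature):
        e = e_new                   # accept the move
    else:
        s[i] = -s[i]                # reject: undo the flip
print("final energy:", e)
```

On rugged cost landscapes like this one, thermal annealing can get stuck in local minima; the reported 10⁸ speed-up concerns instances engineered with tall, narrow energy barriers, the regime where tunnelling is expected to beat thermal hopping.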

Scientists explain origin of heavy elements in the Universe (new.huji.ac.il)

In a letter published in the prestigious journal Nature Physics, a team of scientists from The Hebrew University of Jerusalem suggests a solution to the Galactic radioactive plutonium puzzle.
All of the plutonium used on Earth is artificially produced in nuclear reactors. Still, it turns out that it is also produced in nature.
“The origin of heavy elements produced in nature through rapid neutron capture (‘r-process’) by seed nuclei is one of the current nucleosynthesis mysteries,” Dr. Kenta Hotokezaka, Prof. Tsvi Piran and Prof. Michael Paul from the Racah Institute of Physics at the Hebrew University of Jerusalem said in their letter.
Plutonium is a radioactive element. Its longest-lived isotope is plutonium-244 with a lifetime of 120 million years.
Detection of plutonium-244 in nature would imply that the element was synthesised in astrophysical phenomena not so long ago (at least on Galactic time scales) and hence that its origin cannot be too far from us.
Several years ago it was discovered that the early Solar System contained a significant amount of plutonium-244. Given its relatively short lifetime, the plutonium-244 that existed over four billion years ago, when Earth formed, has long since decayed, but its daughter elements have been detected.
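A one-line calculation shows why, using only the lifetime quoted above and the exponential decay law (the age figure, roughly 4.6 billion years, is an assumed round number):

```python
import math

tau = 120e6   # plutonium-244 lifetime from the article, in years
age = 4.6e9   # approximate age of the Solar System, in years (assumed)

# Exponential decay law: N(t) = N0 * exp(-t / tau)
surviving_fraction = math.exp(-age / tau)
print(f"fraction of primordial Pu-244 remaining: {surviving_fraction:.1e}")
# prints ~2.2e-17: effectively zero, so only the daughter elements remain
```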
But recent measurements of the deposition of plutonium-244, including analysis of Galactic debris that fell to Earth and settled in the deep sea, suggest that only a very small amount of plutonium has reached Earth from outer space over the past 100 million years. This is in striking contradiction to its abundance when the Solar System was formed, and that is why Galactic radioactive plutonium remained a puzzle.
The Hebrew University team has shown that these contradictory observations can be reconciled if the source of the radioactive plutonium (as well as of other rare elements, such as gold and uranium) is mergers of binary neutron stars. These mergers are extremely rare events but are expected to produce large amounts of heavy elements.
The model implies that such a merger took place, by chance, in the vicinity of our Solar System less than a hundred million years before it was born, which accounts for the relatively large amount of plutonium-244 observed in the early Solar System. On the other hand, the relatively small amount of plutonium-244 reaching Earth from interstellar space today is simply explained by the rarity of these events: no such merger has occurred in the vicinity of our Solar System in the last 100 million years.

Computing with time travel

Why send a message back in time, but lock it so that no one can ever read the contents? Because it may be the key to solving currently intractable problems. That’s the claim of an international collaboration who have just published a paper in npj Quantum Information.
It turns out that an unopened message can be exceedingly useful. This is true if the experimenter entangles the message with some other system in the laboratory before sending it. Entanglement, a strange effect only possible in the realm of quantum physics, creates correlations between the time-travelling message and the laboratory system. These correlations can fuel a quantum computation.
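As a minimal illustration of the kind of correlations entanglement provides (this is a generic two-qubit example, not the paper’s open-timelike-curve protocol):

```python
import numpy as np

# The Bell state |Phi+> = (|00> + |11>) / sqrt(2), the simplest entangled pair.
phi_plus = np.array([1.0, 0.0, 0.0, 1.0]) / np.sqrt(2)

# Probabilities of the four measurement outcomes 00, 01, 10, 11:
probs = np.abs(phi_plus) ** 2
print(dict(zip(["00", "01", "10", "11"], probs.round(2))))
# {'00': 0.5, '01': 0.0, '10': 0.0, '11': 0.5}
# Each qubit alone is a fair coin flip, yet the two always agree: the
# information lives in the correlation, not in either particle by itself.
```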
Around ten years ago, researcher Dave Bacon, now at Google, showed that a time-travelling quantum computer could quickly solve a group of problems, known as NP-complete, which mathematicians have lumped together as being hard.
The problem was, Bacon’s quantum computer was travelling around ‘closed timelike curves’. These are paths through the fabric of spacetime that loop back on themselves. General relativity allows such paths to exist through contortions in spacetime known as wormholes.
Physicists argue something must stop such opportunities from arising, because they would threaten ‘causality’ – in the classic example, someone could travel back in time and kill their grandfather, negating their own existence.
And it’s not only family ties that are threatened. Breaking the causal flow of time has consequences for quantum physics too. Over the past two decades, researchers have shown that foundational principles of quantum physics break in the presence of closed timelike curves: you can beat the uncertainty principle, an inherent fuzziness of quantum properties, and the no-cloning theorem, which says quantum states can’t be copied.
However, the new work shows that a quantum computer can solve otherwise intractable problems even if it is travelling along “open timelike curves”, which don’t create causality problems. That’s because they don’t allow direct interaction with anything in the object’s own past: the time-travelling particles (or the data they contain) never interact with themselves. Nevertheless, the strange quantum properties that permit “impossible” computations are left intact. “We avoid ‘classical’ paradoxes, like the grandfather paradox, but you still get all these weird results,” says Mile Gu, who led the work.
Gu is at the Centre for Quantum Technologies (CQT) at the National University of Singapore and Tsinghua University in Beijing. His eight other coauthors come from these institutions, the University of Oxford, UK, Australian National University in Canberra, the University of Queensland in St Lucia, Australia, and QKD Corp in Toronto, Canada.
“Whenever we present the idea, people say no way can this have an effect,” says Jayne Thompson, a co-author at CQT. But it does: quantum particles sent on a timeloop could gain super computational power, even though the particles never interact with anything in the past. “The reason there is an effect is because some information is stored in the entangling correlations: this is what we’re harnessing,” Thompson says.
There is a caveat – not all physicists think that these open timelike curves are any more likely to be realisable in the physical universe than the closed ones. One argument against closed timelike curves is that no-one from the future has ever visited us. That argument, at least, doesn’t apply to the open kind, because any messages from the future would be locked.

Provided by: National University of Singapore

A fundamental quantum physics problem has been proved unsolvable

For the first time a major physics problem has been proved unsolvable, meaning that no matter how accurately a material is mathematically described on a microscopic level, there will not be enough information to predict its macroscopic behaviour.
The research, by an international team of scientists from UCL, the Technical University of Munich and the Universidad Complutense de Madrid – ICMAT, concerns the spectral gap, a term for the energy required for an electron to transition from a low-energy state to an excited state.
Spectral gaps are a key property in semiconductors, among a multitude of other materials, in particular those with superconducting properties. It was thought possible to determine whether a material is superconductive by extrapolating from a sufficiently complete microscopic description of it; however, this study has shown that determining whether a material has a spectral gap is what is known as “an undecidable question”.
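For reference, the precise quantity at stake can be written down compactly. The following is the standard many-body definition, consistent with the paper’s informal description above; λ₀ and λ₁ denote the two lowest eigenvalues of the Hamiltonian H_L of a system of size L:

```latex
% Spectral gap of a finite system of size L: the energy difference
% between the ground state and the first excited state,
\Delta(H_L) = \lambda_1(H_L) - \lambda_0(H_L).

% "Gapped": the gap stays bounded away from zero in the thermodynamic limit,
%   \exists\, \gamma > 0 \;:\; \Delta(H_L) \ge \gamma \quad \text{for all } L.
% "Gapless": the spectrum becomes continuous above the ground state
%   as L \to \infty, so \Delta(H_L) \to 0.
```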
“Alan Turing is famous for his role in cracking the Enigma, but amongst mathematicians and computer scientists, he is even more famous for proving that certain mathematical questions are ‘undecidable’ – they are neither true nor false, but are beyond the reach of mathematics,” said co-author Dr Toby Cubitt, from UCL Computer Science.
“What we’ve shown is that the spectral gap is one of these undecidable problems. This means a general method to determine whether matter described by quantum mechanics has a spectral gap, or not, cannot exist. Which limits the extent to which we can predict the behaviour of quantum materials, and potentially even fundamental particle physics.”

The research, which was published today in the journal Nature, used complex mathematics to determine the undecidable nature of the spectral gap, which they say they have demonstrated in two ways:
“The spectral gap problem is algorithmically undecidable: there cannot exist any algorithm which, given a description of the local interactions, determines whether the resulting model is gapped or gapless,” wrote the researchers in the journal paper.
“The spectral gap problem is axiomatically independent: given any consistent recursive axiomatisation of mathematics, there exist particular quantum many-body Hamiltonians for which the presence or absence of the spectral gap is not determined by the axioms of mathematics.”
In other words, no algorithm can decide whether an arbitrary quantum model is gapped, and no matter which consistent axioms mathematics is built on, there are particular systems for which those axioms can never settle the question.
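The shape of the first (algorithmic) result is a classic reduction to the halting problem, sketched below in Python. The function names encode_as_hamiltonian and decides_gap are hypothetical stand-ins invented for this illustration, not code from the paper:

```python
def encode_as_hamiltonian(turing_machine, tape_input):
    """Stand-in for the paper's construction: a local Hamiltonian whose
    spectral gap encodes whether the given machine halts on the input."""
    raise NotImplementedError("schematic placeholder only")

def decides_gap(hamiltonian):
    """A hypothetical always-correct gapped/gapless decider."""
    raise NotImplementedError("the paper shows this cannot exist in general")

def halts(turing_machine, tape_input):
    # If decides_gap existed, composing it with the encoding would decide
    # the halting problem. Turing proved in 1936 that no algorithm can do
    # that, so no general gap-deciding algorithm can exist either.
    return decides_gap(encode_as_hamiltonian(turing_machine, tape_input))
```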

The research has profound implications for the field, not least for the Clay Mathematics Institute’s famous $1m prize to prove whether the standard model of particle physics, which underpins the behaviour of the most basic particles of matter, has a spectral gap using standard model equations.
“It’s possible for particular cases of a problem to be solvable even when the general problem is undecidable, so someone may yet win the coveted $1m prize. But our results do raise the prospect that some of these big open problems in theoretical physics could be provably unsolvable,” said Cubitt.
“We knew about the possibility of problems that are undecidable in principle since the works of Turing and Gödel in the 1930s,” agreed co-author Professor Michael Wolf, from the Technical University of Munich.
“So far, however, this only concerned the very abstract corners of theoretical computer science and mathematical logic. No one had seriously contemplated this as a possibility right in the heart of theoretical physics before. But our results change this picture. From a more philosophical perspective, they also challenge the reductionists’ point of view, as the insurmountable difficulty lies precisely in the derivation of macroscopic properties from a microscopic description.”
“It’s not all bad news, though,” added Professor David Pérez-García, from the Universidad Complutense de Madrid and ICMAT. “The reason this problem is impossible to solve in general is because models at this level exhibit extremely bizarre behaviour that essentially defeats any attempt to analyse them.
“But this bizarre behaviour also predicts some new and very weird physics that hasn’t been seen before. For example, our results show that adding even a single particle to a lump of matter, however large, could in principle dramatically change its properties. New physics like this is often later exploited in technology.”