
Scientists Made a ‘T-Ray’ Laser That Runs on Laughing Gas

‘T-Ray’ laser finally arrives in practical, tunable form. Duke physicist Henry Everitt worked on it for over two decades. Courtesy of Chad Scales, US Army Futures Command

It was a Frankenstein moment for Duke alumnus and adjunct physics professor Henry Everitt.

After years of working out the basic principles behind his new laser, last Halloween he was finally ready to put it to the test. He turned some knobs and toggled some switches, and presto, the first bright beam came shooting out.

“It was like, ‘It’s alive!’” Everitt said.

This was no laser for presenting PowerPoint slides or entertaining cats. Everitt and colleagues have invented a new type of laser that emits beams of light in the ‘terahertz gap,’ the no-man’s-land of the electromagnetic spectrum between microwaves and infrared light.

Terahertz radiation, or ‘T-rays,’ can see through clothing and packaging without the health hazards of ionizing radiation, so they could be used in security scanners to spot concealed weapons without subjecting people to the dangers of X-rays.

It’s also possible to identify substances by the characteristic frequencies they absorb when T-rays hit them, which makes terahertz waves ideal for detecting toxins in the air or gases between the stars. And because such frequencies are higher than those of radio waves and microwaves, they can carry more bandwidth, so terahertz signals could transmit data many times faster than today’s cellular or Wi-Fi networks.

“Imagine a wireless hotspot where you could download a movie to your phone in a fraction of a second,” Everitt said.

Yet despite the potential payoffs, T-rays aren’t widely used because there isn’t a portable, cheap or easy way to make them.

Now Everitt and colleagues at Harvard University and MIT have invented a small, tunable T-ray laser that might help scientists tap into the terahertz band’s potential.

While most terahertz molecular lasers take up an area the size of a ping pong table, the new device could fit in a shoebox. And while previous sources emit light at just one or a few select frequencies, their laser could be tuned to emit over the entire terahertz spectrum, from 0.1 to 10 THz.

The laser’s tunability gives it another practical advantage, researchers say: the ability to adjust how far the T-ray beam travels. Terahertz signals don’t go very far because water vapor in the air absorbs them. But because some terahertz frequencies are more strongly absorbed by the atmosphere than others, the tuning capability of the new laser makes it possible to control how far the waves travel simply by changing the frequency. This might be ideal for applications like keeping car radar sensors from interfering with each other, or restricting wireless signals to short distances so potential eavesdroppers can’t intercept them and listen in.
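
A simple way to picture this range control (not from the article, just the standard description of an absorbing medium) is exponential attenuation of the beam's power with distance:

I(d) = I_0 \, e^{-\alpha(\nu)\, d}

Here I_0 is the transmitted power, d is the distance traveled, and \alpha(\nu) is the atmospheric absorption coefficient at frequency \nu, dominated by water vapor in the terahertz band. Tuning the laser to a frequency where \alpha is large keeps the signal short-range; tuning to an atmospheric window where \alpha is small lets it travel farther.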

Everitt and a team co-led by Federico Capasso of Harvard and Steven Johnson of MIT describe their approach this week in the journal Science. The device works by harnessing discrete shifts in the energy levels of spinning gas molecules when they’re hit by another laser emitting infrared light.

Their T-ray laser consists of a pencil-sized copper tube filled with gas, and a 1-millimeter pinhole at one end. A zap from the infrared laser excites the gas molecules within, and when the molecules in this higher energy state outnumber the ones in a lower one, they emit T-rays.

The team dubbed their gizmo the “laughing gas laser” because it uses nitrous oxide, though almost any gas could work, they say.

Duke professor Henry Everitt and MIT graduate student Fan Wang and colleagues have invented a new laser that emits beams of light in the ‘terahertz gap,’ the no-man’s-land of the electromagnetic spectrum.

Everitt started working on terahertz laser designs 35 years ago as a Duke undergraduate in the mid-1980s, when a physics professor named Frank De Lucia offered him a summer job.

De Lucia was interested in improving special lasers called “OPFIR lasers,” which were the most powerful sources of T-rays at the time. They were too bulky for widespread use, and they relied on an equally unwieldy infrared laser called a CO2 laser to excite the gas inside.

Everitt was tasked with trying to generate T-rays with smaller gas laser designs. A summer gig soon grew into an undergraduate honors thesis, and eventually a Ph.D. from Duke, during which he and De Lucia managed to shrink the footprint of their OPFIR lasers from the size of an axe handle to the size of a toothpick.

But the CO2 lasers they were partnered with were still quite cumbersome and dangerous, and each time researchers wanted to produce a different frequency they needed to use a different gas. When more compact and tunable sources of T-rays came along, OPFIR lasers were largely abandoned.

Everitt would shelve the idea for another decade before a better alternative to the CO2 laser came along, a compact infrared laser invented by Harvard’s Capasso that could be tuned to any frequency over a swath of the infrared spectrum.

By replacing the CO2 laser with Capasso’s laser, Everitt realized they wouldn’t need to change the laser gas anymore to change the frequency. He thought the OPFIR laser approach could make a comeback. So he partnered with Johnson’s team at MIT to work out the theory, then with Capasso’s group to give it a shot.

The team has moved to patent their design, but there is still a long way to go before it finds its way onto store shelves or into consumers’ hands. Nonetheless, the researchers — who couldn’t resist a laser joke — say the outlook for the technique is “very bright.”

This research was supported by the U.S. Army Research Office (W911NF-19-2-0168, W911NF-13-D-0001) and by the National Science Foundation (ECCS-1614631) and its Materials Research Science and Engineering Center Program (DMR-1419807).

CITATION: “Widely Tunable Compact Terahertz Gas Lasers,” Paul Chevalier, Arman Amirzhan, Fan Wang, Marco Piccardo, Steven G. Johnson, Federico Capasso, Henry Everitt. Science, Nov. 15, 2019. DOI: 10.1126/science.aay8683.

How Small is a Proton? Smaller Than Anyone Thought

The proton, that little positively-charged nugget inside an atom, is fractions of a quadrillionth of a meter smaller than anyone thought, according to new research appearing Nov. 7 in Nature.

Haiyan Gao of Duke Physics

In work they hope solves the contentious “proton radius puzzle” that has been roiling some corners of physics for the last decade, a team of scientists including Duke physicist Haiyan Gao has addressed the question of the proton’s radius in a new way and found that it is 0.831 femtometers, which is about 4 percent smaller than the best previous measurement using electrons from accelerators. (Read the paper!)

A single femtometer is 0.000000000000039370 inches, if that helps, or think of it as a millionth part of a billionth part of a meter. And the new radius is about 83 percent of that.
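
To spell out the arithmetic (a routine unit conversion, not anything specific to the paper):

1\ \text{fm} = 10^{-15}\ \text{m} = 10^{-6} \times 10^{-9}\ \text{m}, \qquad \frac{0.831\ \text{fm}}{1\ \text{fm}} \approx 0.83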

But this is a big — and very small — deal for physicists, because any precise calculation of energy levels in an atom will be affected by this measure of the proton’s size, said Gao, who is the Henry Newson Professor of Physics in Trinity College of Arts & Sciences.

Bohr model of Hydrogen. One proton, one electron, as simple as they come.

What the physicists actually measured is the radius of the proton’s charge distribution, which is never a smooth, hard-edged sphere, Gao explained. The proton is made of still smaller bits, called quarks, that have their own charges, and those aren’t evenly distributed. Nor does anything sit still. So it’s kind of a moving target.

One way to measure a proton’s charge radius is to scatter an electron beam off the nucleus of a hydrogen atom, which is made of just one proton and one electron. But the electron must perturb the proton only very gently for researchers to infer the size of its charge distribution from the interaction. Another approach measures the difference between two atomic hydrogen energy levels. Past results from these two methods have generally agreed.
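
For readers who want the formal definition behind “perturbing the proton very gently”: in scattering experiments the charge radius is conventionally extracted from the slope of the proton’s electric form factor G_E at vanishing momentum transfer Q^2, which is why data at very small scattering angles matter so much. The standard relation (a textbook definition, not something unique to this experiment) is

\langle r_p^2 \rangle = -6 \left. \frac{\mathrm{d} G_E(Q^2)}{\mathrm{d} Q^2} \right|_{Q^2 = 0}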

Artist’s conception of a very happy muon by Particle Zoo

But in 2010, an experiment at the Paul Scherrer Institute replaced the electron in a hydrogen atom with a muon, a much heavier and shorter-lived member of the electron’s particle family. The muon is still negatively charged like an electron, but it’s about 200 times heavier, so it can orbit much closer to the proton. Measuring the difference between muonic hydrogen energy levels, these physicists obtained a proton charge radius that is highly precise, but much smaller than the previously accepted value. And this started the dispute they’ve dubbed the “proton charge radius puzzle.”

To resolve the puzzle, Gao and her collaborators set out to do a completely new type of electron scattering experiment with a number of innovations. And they looked at electron scattering from both the proton and the electron of the hydrogen atom at the same time. They also managed to get the beam of electrons scattered at near zero degrees, meaning it came almost straight forward, which enabled the electron beam to “feel” the proton’s charge response more precisely.

Voila, a 4-percent-smaller proton. “But actually, it’s much more complicated,” Gao said, in a major understatement.

The work was done at the Department of Energy’s Thomas Jefferson National Accelerator Facility in Newport News, Virginia, using new equipment supported by both the National Science Foundation and the Department of Energy, and some parts that were purpose-built for this experiment. “To solve the argument, we needed a new approach,” Gao said.

Gao said she has been interested in this question for nearly 20 years, ever since she became aware of two different values for the proton’s charge radius, both from electron scattering experiments.  “Each one claimed about 1 percent uncertainty, but they disagreed by several percent,” she said.

And as always in modern physics, had the answer not worked out so neatly, it might have called into question parts of the Standard Model of particle physics. But alas, not this time.

“This is particularly important for a number of reasons,” Gao said. The proton is a fundamental building block of visible matter, and the energy level of hydrogen is a basic unit of measure that all physicists rely on.

The new measure may also help advance new insights into quantum chromodynamics (QCD), the theory of the strong interaction between quarks and gluons, Gao said. “We really don’t understand how QCD works.”

“This is a very, very big deal,” she said. “The field is very excited about it. And I should add that this experiment would not have been so successful without the heroic contributions from our highly talented and hardworking graduate students and postdocs from Duke.”

This work was funded in part by the U. S. National Science Foundation (NSF MRI PHY-1229153) and by the U.S. Department of Energy (Contract No. DE-FG02-03ER41231), including contract No. DE-AC05-06OR23177 under which Jefferson Science Associates, LLC operates Thomas Jefferson National Accelerator Facility.

CITATION: “A Small Proton Charge Radius from An Electron-Proton Scattering Experiment,”  W. Xiong, A. Gasparian, H. Gao, et al. Nature, Nov. 7, 2019. DOI: 10.1038/s41586-019-1721-2 (ONLINE)

Nature Shows a U-Turn Path to Better Solar Cells

The technical-sounding category of “light-driven charge-transfer reactions” becomes more familiar to non-physicists when you just call it photosynthesis or solar electricity.

When a molecule (in a leaf or solar cell) is hit by an energetic photon of light, it first absorbs the little meteor’s energy, generating what chemists call an excited state. This excited state then almost immediately (within trillionths of a second) shuttles an electron away to a charge acceptor to lower its energy. That transfer of charge is what drives plant life and photovoltaic current.

A 20-megawatt solar farm (Aerial Innovations via Wikimedia Commons)

The energy of the excited state plays an important role in determining solar energy conversion efficiency. That is, the more of that photon’s energy that can be retained in the charge-separated state, the better. For most solar-electric devices, the excited state rapidly loses energy, resulting in less efficient devices.

But what if there were a way to create even more energetic excited states from that incoming photon?

Using a very efficient photosynthesizing bacterium as their inspiration, a team of Duke chemists that included graduate students Nick Polizzi and Ting Jiang, and faculty members David Beratan and Michael Therien, synthesized a “supermolecule” to help address this question.

“Nick and Ting discovered a really cool trick about electron transfer that we might be able to adapt to improving solar cells,” said Michael Therien, the William R. Kenan, Jr. Professor of Chemistry. “Biology figured this out eons ago,” he said.

“When molecules absorb light, they have more energy,” Therien said. “One of the things that these molecular excited states do is that they move charge. Generally speaking, most solar energy conversion structures that chemists design feature molecules that push electron density in the direction they want charge to move when a photon is absorbed. The solar-fueled microbe, Rhodobacter sphaeroides, however, does the opposite. What Nick and Ting demonstrated is that this could also be a winning strategy for solar cells.”

Ting Jiang
Nick Polizzi

The chemists devised a clever synthetic molecule that shows the advantages of an excited state that pushes electron density in the direction opposite to where charge flows. In effect, this allows more of the energy harvested from a photon to be used in a solar cell. 

“Nick and Ting’s work shows that there are huge advantages to pushing electron density in the exact opposite direction where you want charge to flow,” Therien said in his top-floor office of the French Family Science Center. “The biggest advantage of an excited state that pushes charge the wrong way is it stops a really critical pathway for excited state relaxation.”

“So, in many ways it’s a Rube Goldberg-like conception,” Therien said. “It is a design strategy that’s been maybe staring us in the face for several years, but no one’s connected the dots like Nick and Ting have here.”

In a July 2 commentary for the Proceedings of the National Academy of Sciences, Bowling Green State University chemist and photoscientist Malcolm D.E. Forbes calls this work “a great leap forward,” and says it “should be regarded as one of the most beautiful experiments in physical chemistry in the 21st century.”

Here’s a schematic from the paper.
(Image by Nick Polizzi)

CITATION: “Engineering Opposite Electronic Polarization of Singlet and Triplet States Increases the Yield of High-Energy Photoproducts,” Nicholas Polizzi, Ting Jiang, David Beratan, Michael Therien. Proceedings of the National Academy of Sciences, June 10, 2019. DOI: 10.1073/pnas.1901752116 Online: https://www.pnas.org/content/early/2019/07/01/1908872116

Understanding the Universe, Large and Small

From the minuscule particles underlying matter, to vast amounts of data from the far reaches of outer space, Chris Walter, a professor of physics at Duke, pursues research into the great mysteries of the universe, from the infinitesimal to the infinite.

Chris Walter is a professor of physics

As an undergraduate at the University of California at Santa Cruz, he thought he would become a theoretical physicist, but while continuing his education at the California Institute of Technology (Caltech), he found himself increasingly drawn to experimental physics, deriving knowledge of the universe by observing its phenomena.

Neutrinos — minuscule particles emitted during radioactive decay — captured his attention, and he began work with the KamiokaNDE (Kamioka Nucleon Decay Experiment, now typically written as Kamiokande) at the Kamioka Observatory in Hida, Japan. Buried deep underground in an abandoned mine to shield the detectors from cosmic rays, and submerged in water, Kamiokande offered Walter an opportunity to test a long-held but still unproven assumption: that neutrinos were massless.

Recalling one of his most striking memories from his time in the lab, he described observing and finding answers in Cherenkov light – a ‘sonic boom’ of light. Sonic booms are created by breaking the sound barrier in air. Light, however, travels more slowly in water than in a vacuum, so a sufficiently energetic charged particle can move through water faster than light does there, producing Cherenkov radiation. Walter described it as a ring of light bursting out of the darkness.
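
The standard textbook condition (general physics, not specific to Walter's account) is that a charged particle radiates Cherenkov light when its speed v exceeds the speed of light in the medium, and the light forms a cone whose angle depends on how fast the particle moves:

v > \frac{c}{n}, \qquad \cos\theta_c = \frac{1}{n\beta}, \quad \beta = \frac{v}{c}

In water the refractive index n is about 1.33, so the threshold is roughly three-quarters of the vacuum speed of light; the ring Walter describes is that cone of light intersecting the detector's wall of light sensors.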

In his time at the Kamioka Observatory, he was part of groundbreaking research on the mass of neutrinos. Long thought to be massless, neutrinos were shown by Kamiokande to oscillate – to change from flavor to flavor – indicating that, contrary to popular belief, they have mass. Seventeen years later, in 2015, the leader of his team, Takaaki Kajita, would be co-awarded the Nobel Prize in Physics, citing research from their collaboration.

Chris Walter (left) and his Duke physics collaborator and partner, Kate Scholberg (right), on a lift inside the Super-Kamiokande neutrino detector.

The neutrinos Walter studied originated from cosmic rays arriving from outer space, but soon another mystery from the cosmos captured his attention.

“If you died and were given the chance to know the answer to just one question,” he said, “for me, it would be, ‘What is dark energy?’”

Observations made in the 1990s, as Walter was concluding his time at the Kamioka Observatory, found that the expansion of the universe was accelerating. The nature of the dark energy causing this accelerating expansion remained unknown to scientists, and it offered a new course of study in the field of astrophysics.

Walter recently joined the Large Synoptic Survey Telescope (LSST) project, a 10-year, 3D survey of the entire sky that will gather over 20 terabytes of data nightly and detect thousands of changes in the night sky, observing asteroids, galaxies, supernovae, and other astronomical phenomena. With new machine learning techniques and supercomputing methods to process the vast quantities of data, the LSST offers incredible new opportunities for understanding the universe.

To Walter, this is the next big step for research into the nature of dark energy and the great questions of science.

A rendering of the Large Synoptic Survey Telescope. (Note the naked humans for scale)

Guest Post by Thomas Yang, NCSSM 2019

Teaching a Machine to Spot a Crystal


Not all protein crystals exhibit the colorful iridescence of these crystals grown in space. But no matter their looks, all are important to scientists. Credit: NASA Marshall Space Flight Center (NASA-MSFC).

Protein crystals don’t usually display the glitz and glam of gemstones. But no matter their looks, each and every one is precious to scientists.

Patrick Charbonneau, a professor of chemistry and physics at Duke, along with a worldwide group of scientists, teamed up with researchers at Google Brain to use state-of-the-art machine learning algorithms to spot these rare and valuable crystals. Their work could accelerate drug discovery by making it easier for researchers to map the structures of proteins.

“Every time you miss a protein crystal, because they are so rare, you risk missing out on an important biomedical discovery,” Charbonneau said.

Knowing the structure of proteins is key to understanding their function and possibly designing drugs that work with their specific shapes. But the traditional approach to determining these structures, called X-ray crystallography, requires that proteins be crystallized.

Crystallizing proteins is hard — really hard. Unlike the simple atoms and molecules that make up common crystals like salt and sugar, these big, bulky molecules, which can contain tens of thousands of atoms each, struggle to arrange themselves into the ordered arrays that form the basis of crystals.

“What allows an object like a protein to self-assemble into something like a crystal is a bit like magic,” Charbonneau said.

Even after decades of practice, scientists have to rely in part on trial and error to obtain protein crystals. After isolating a protein, they mix it with hundreds of different types of liquid solutions, hoping to find the right recipe that coaxes them to crystallize. They then look at droplets of each mixture under a microscope, hoping to spot the smallest speck of a growing crystal.

“You have to manually say, there is a crystal there, there is none there, there is one there, and usually it is none, none, none,” Charbonneau said. “Not only is it expensive to pay people to do this, but also people fail. They get tired and they get sloppy, and it detracts from their other work.”


The machine learning software searches for points and edges (left) to identify crystals in images of droplets of solution. It can also identify when non-crystalline solids have formed (middle) and when no solids have formed (right).

Charbonneau thought perhaps deep learning software, which is now capable of recognizing individual faces in photographs even when they are blurry or caught from the side, should also be able to identify the points and edges that make up a crystal in solution.

Scientists from both academia and industry came together to collect half a million images of protein crystallization experiments into a database called MARCO. The data specify which of these protein cocktails led to crystallization, based on human evaluation.

The team then worked with a group led by Vincent Vanhoucke from Google Brain to apply the latest in artificial intelligence to help identify crystals in the images.

After “training” the deep learning software on a subset of the data, they unleashed it on the full database. The A.I. was able to accurately identify crystals about 95 percent of the time. Estimates show that humans spot crystals correctly only 85 percent of the time.
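
The article doesn't include the team's code, but a minimal sketch of the kind of image classifier involved, written here in TensorFlow/Keras with a hypothetical folder of labeled droplet images, could look like the following (the real MARCO model and training pipeline were considerably more elaborate):

```python
import tensorflow as tf

# Hypothetical layout: marco_images/<split>/<label>/*.png, with labels such as
# "crystal", "precipitate" and "clear" (the actual MARCO labels differ in detail).
train_ds = tf.keras.utils.image_dataset_from_directory(
    "marco_images/train", image_size=(128, 128), batch_size=32)
val_ds = tf.keras.utils.image_dataset_from_directory(
    "marco_images/val", image_size=(128, 128), batch_size=32)
num_classes = len(train_ds.class_names)

# A small convolutional network: the conv/pool layers learn local features such
# as the points and edges of a crystal; the dense head maps them to an outcome label.
model = tf.keras.Sequential([
    tf.keras.layers.Rescaling(1.0 / 255, input_shape=(128, 128, 3)),
    tf.keras.layers.Conv2D(32, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Conv2D(64, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(128, activation="relu"),
    tf.keras.layers.Dense(num_classes, activation="softmax"),
])

model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

# Train on the labeled images, then check accuracy on held-out droplets.
model.fit(train_ds, validation_data=val_ds, epochs=5)
```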

“And it does remarkably better than humans,” Charbonneau said. “We were a little surprised because most A.I. algorithms are made to recognize cats or dogs, not necessarily geometrical features like the edge of a crystal.”

Other teams of researchers have already asked to use the A.I. model and the MARCO dataset to train their own machine learning algorithms to recognize crystals in protein crystallization experiments, Charbonneau said. These advances should allow researchers to focus more time on biomedical discoveries instead of squinting at samples.

Charbonneau plans to use the data to understand how exactly proteins self-assemble into crystals, so that researchers rely less on chance to get this “magic” to happen.

“We are trying to use this data to see if we can get more insight into the physical chemistry of self-assembly of proteins,” Charbonneau said.

CITATION: “Classification of crystallization outcomes using deep convolutional neural networks,” Andrew E. Bruno, et al. PLOS ONE, June 20, 2018. DOI: 10.1371/journal.pone.0198883

 

Post by Kara Manke

Stretchable, Twistable Wires for Wearable Electronics

A new conductive “felt” carries electricity even when twisted, bent and stretched. Credit: Matthew Catenacci

The exercise-tracking power of a Fitbit may soon jump from your wrist and into your clothing.

Researchers are seeking to embed electronics such as fitness trackers and health monitors into our shirts, hats, and shoes. But no one wants stiff copper wires or silicon transistors deforming their clothing or poking into their skin.

Scientists in Benjamin Wiley’s lab at Duke have created new conductive “felt” that can be easily patterned onto fabrics to create flexible wires. The felt, composed of silver-coated copper nanowires and silicone rubber, carries electricity even when bent, stretched and twisted, over and over again.

“We wanted to create wiring that is stretchable on the body,” said Matthew Catenacci, a graduate student in Wiley’s group.

The conductive felt is made of stacks of interwoven silver-coated copper nanowires filled with a stretchable silicone rubber (left). When stretched, felt made from more pliable rubber is more resilient to small tears and holes than felt made of stiffer rubber (middle). These tears can be seen in small cavities in the felt (right). Credit: Matthew Catenacci

To create a flexible wire, the team first sucks a solution of copper nanowires and water through a stencil, creating a stack of interwoven nanowires in the desired shape. The material is similar to the interwoven fibers that comprise fabric felt, but on a much smaller scale, said Wiley, an associate professor of chemistry at Duke.

“The way I think about the wires are like tiny sticks of uncooked spaghetti,” Wiley said. “The water passes through, and then you end up with this pile of sticks with a high porosity.”

The interwoven nanowires are heated to 300 F to melt the contacts together, and then silicone rubber is added to fill in the gaps between the wires.

To show the pliability of their new material, Catenacci patterned the nanowire felt into a variety of squiggly, snaking patterns. Stretching and twisting the wires up to 300 times did not degrade the conductivity.

The material maintains its conductivity when twisted and stretched. Credit: Matthew Catenacci

“On a larger scale you could take a whole shirt, put it over a vacuum filter, and with a stencil you could create whatever wire pattern you want,” Catenacci said. “After you add the silicone, so you will just have a patch of fabric that is able to stretch.”

Their felt is not the first conductive material to display the agility of a gymnast. Flexible wires made of silver microflakes also exhibit this unique set of properties. But the new material has the best performance of any material so far, and at a much lower cost.

“This material retains its conductivity after stretching better than any other material with this high of an initial conductivity. That is what separates it,” Wiley said.

CITATION: “Stretchable Conductive Composites from Cu-Ag Nanowire Felt,” Matthew J. Catenacci, Christopher Reyes, Mutya A. Cruz and Benjamin J. Wiley. ACS Nano, March 14, 2018. DOI: 10.1021/acsnano.8b00887

Post by Kara Manke

How Earth’s Earliest Lifeforms Protected Their Genes


Heat-loving thermophile bacteria may have been some of the earliest lifeforms on Earth. Researchers are studying their great great great grandchildren, like those living in Yellowstone’s Grand Prismatic Spring, to understand how these early bacteria repaired their DNA.

Think your life is hard? Imagine being a tiny bacterium trying to get a foothold on a young and desolate Earth. The earliest lifeforms on our planet endured searing heat, ultraviolet radiation and an atmosphere devoid of oxygen.

Benjamin Rousseau, a research technician in David Beratan’s lab at Duke, studies one of the molecular machines that helped these bacteria survive their harsh environment. This molecule, called photolyase, fixes DNA damaged by ultraviolet (UV) radiation — the same wavelengths of sunlight that give us sunburn and put us at greater risk of skin cancer.

“Anything under the sun — in both meanings of the phrase — has to have ways to repair itself, and photolyase proteins are one of them,” Rousseau said. “They are one of the most ancient repair proteins.”

Though these proteins have been around for billions of years, scientists are still not quite sure exactly how they work. In a new study, Rousseau and coworkers, working with Professor David Beratan and Assistant Research Professor Agostino Migliore, used computer simulations to study photolyase in thermophiles, the great great great great grandchildren of Earth’s original bacterial pioneers.

The study appeared in the Feb. 28 issue of the Journal of the American Chemical Society.

DNA is built of chains of bases — A, C, G and T — whose order encodes our genetic information. UV light can trigger two adjacent bases to react and latch onto one another, rendering these genetic instructions unreadable.

Photolyase uses a molecular antenna to capture light from the sun and convert its energy into an excited electron. It then hands the electron over to the DNA strand, sparking a reaction that splits the two bases apart and restores the genetic information.


Photolyase proteins use a molecular antenna (green, blue and red structure on the right) to harvest light and convert it into an electron. The adenine-containing structure in the middle hands the electron to the DNA strand, splitting apart DNA bases. Credit: Benjamin Rousseau, courtesy of the Journal of the American Chemical Society.

Rousseau studied the role of a molecule called adenine in shuttling the electron from the molecular antenna to the DNA strand. He looked at photolyase in both the heat-loving descendants of ancient bacteria, called thermophiles, and more modern bacteria like E. coli that thrive at moderate temperatures, called mesophiles.

He found that in thermophiles, adenine played a role in transferring the electron to the DNA. But in E. coli, the adenine was in a different position, providing mainly structural support.

The results “strongly suggest that mesophiles and thermophiles fundamentally differ in their use of adenine for this electron transfer repair mechanism,” Rousseau said.

He also found that when he cooled E. coli down to 20 degrees Celsius — about 68 degrees Fahrenheit — the adenine shifted back in place, resuming its transport function.

“It’s like a temperature-controlled switch,” Rousseau said.

Though humans no longer use photolyase for DNA repair, the protein persists in life as diverse as bacteria, fungi and plants — and is even being studied as an ingredient in sunscreens to help repair UV-damaged skin.

Understanding exactly how photolyase works may also help researchers design proteins with a variety of new functions, Rousseau said.

“Photolyase does all of the work on its own — it harvests the light, it transfers the electron over a huge distance to the other site, and then it cleaves the DNA bases,” Rousseau said. “Proteins with that kind of plethora of functions tend to be an attractive target for protein engineering.”

Post by Kara Manke

Can Science Explain Everything? An Exploration of Faith

The Veritas Forum, Feb. 1 in Penn Pavilion

I found out about this year’s Veritas Forum an hour before it started — a friend, who two years ago helped me explore Christianity (I grew up non-religious and was curious), mentioned it when we ran into each other at the Brodhead Center.

So, to avoid my academic responsibilities, I instead listened to Duke physics professor Ronen Plesser, a non-practicing Jew, Troy Van Voorhis, a Christian who teaches chemistry at MIT, and moderator Ehsan Samei, a professor of radiology and biomedical engineering at Duke. They discussed the God Hypothesis and how it fit in with their views as hard scientists.

Ehsan Samei

As someone who has relied on the scientific method instead of an omniscient, higher power to understand the natural world, I found it amazing how the speakers used relatable examples to demonstrate their belief that humans cannot explain everything. They started with the classic question “Why is the sky blue?”, using more and more complex chemistry and physics as each answer only led to more questions.

At some point, science-based explanations about how and why molecules move the way they do and where they come from didn’t suffice — at some point, it just seems like something, or someone, is responsible for the unexplainable.

Troy Van Voorhis of MIT

Something that Van Voorhis said particularly stuck in my mind. Reproducibility and objectivity form the “bedrock of science,” but are also its “grand limitations.” They are essential to corroborating the results of a scientific study or experiment, but can they really confirm something as scientific truth? When does reproducibility adequately overcome variation in data, and can something be defined as truly objective?

So, I sat there in the audience, thinking about alternatives to explaining morals, ethics, and the feeling of being human since, to paraphrase Plesser, science just doesn’t cut it in these cases. He elaborated on faith after branching off Van Voorhis’ point of view. Plesser’s explanation made the overlap of science and religion more and more apparent. As someone who also does not practice a religion, I found his comparison of faith in science and faith in religion comforting.

Ronen Plesser

Even though I still struggle to fully accept Christ, I was aware of the similarities between the paths to scientific and spiritual enlightenment. In science, incessant questioning of our surroundings is necessary to understand the Truths of our world (“otherwise we wouldn’t be publishing papers and we would be out of our jobs!”), as are the calls to God to come down and help people improve themselves. It is impossible, then, to avoid faith entirely since being human inherently involves belief in some sort of system.

I was wowed by the connections that the three men were making between these seemingly divergent areas. I was even more astonished, though, by their emphasis on humility. They exemplified the need for understanding and patience when describing scientific theories and religious ideologies. To be humble is to accept that people have differences, and acknowledging those differences is the only way to reduce conflict between religion and science.

Post by Stella Wang

Farewell, Electrons: Future Electronics May Ride on New Three-in-One Particle

“Trion” may sound like the name of one of the theoretical particles blamed for mucking up operations aboard the Starship Enterprise.

But believe it or not, trions are real — and they may soon play a key role in electronic devices. Duke researchers have for the first time pinned down some of the behaviors of these one-of-a-kind particles, a first step towards putting them to work in electronics.


Three-in-one particles called trions — carrying charge, energy and spin — zoom through special polymer-wrapped carbon nanotubes at room temperature. Credit: Yusong Bai.

Trions are what scientists call “quasiparticles,” bundles of energy, electric charge and spin that zoom around inside semiconductors.

“Trions display unique properties that you won’t be able to find in conventional particles like electrons, holes (positive charges) and excitons (electron-hole pairs that are formed when light interacts with certain materials),” said Yusong Bai, a postdoctoral scholar in the chemistry department at Duke. “Because of their unique properties, trions could be used in new electronics such as photovoltaics, photodetectors, or in spintronics.”

Usually these properties – energy, charge and spin – are carried by separate particles. For example, excitons carry the light energy that powers solar cells, and electrons or holes carry the electric charge that drives electronic devices. But trions are essentially three-in-one particles, combining these elements together into a single entity – hence the “tri” in trion.


A trion is born when a particle called a polaron (top) marries an exciton (middle). Credit: Yusong Bai.

“A trion is this hybrid that involves a charge marrying an exciton to become a uniquely distinct particle,” said Michael Therien, the William R. Kenan, Jr. Professor of Chemistry at Duke. “And the reason why people are excited about trions is because they are a new way to manipulate spin, charge, and the energy of absorbed light, all simultaneously.”

Until recently, scientists hadn’t given trions much attention because they could only be found in semiconductors at extremely low temperatures – around 2 kelvin, or -271 degrees Celsius. A few years ago, researchers observed trions in carbon nanotubes at room temperature, opening up the potential to use them in real electronic devices.

Bai used a laser probing technique to study how trions behave in carefully engineered and highly uniform carbon nanotubes. He examined basic properties including how they are formed, how fast they move and how long they live.

He was surprised to find that under certain conditions, these unusual particles were actually quite easy to create and control.

“We found these particles are very stable in materials like carbon nanotubes, which can be used in a new generation of electronics,” Bai said. “This study is the first step in understanding how we might take advantage of their unique properties.”

The team published their results Jan. 8 in the Proceedings of the National Academy of Sciences.

CITATION: “Dynamics of charged excitons in electronically and morphologically homogeneous single-walled carbon nanotubes,” Yusong Bai, Jean-Hubert Olivier, George Bullard, Chaoren Liu and Michael J. Therien. Proceedings of the National Academy of Sciences, Jan. 8, 2018 (online). DOI: 10.1073/pnas.1712971115

Post by Kara Manke

Glitter and Jell-O Reveal the Science of Oobleck


Mixing black glitter with oobleck allowed researchers to track the movement of individual cornstarch particles after a sudden impact. A computer program locked onto pieces of glitter and illustrated their motion. Credit: Melody Lim.

What do gelatin and glitter have to do with serious science? For some experiments, a lot! Duke alumna Melody Lim used jiggly Jell-O and just a pinch of glitter to solve a scientific mystery about the curious goo many like to call oobleck.

To the uninitiated, oobleck is almost magic. The simple mixture of cornstarch and water feels solid if you squeeze it, but moments later runs through your fingers like water. You can dance across a bathtub full of oobleck, but stand still for too long and you will be sucked into a goopy mess. Not surprisingly, the stuff is a YouTube favorite.

Oobleck is an example of what scientists call a non-Newtonian fluid, a liquid whose viscosity – how easily it changes shape and flows – depends upon the force that is applied. But exactly how it is that this material switches from solid to liquid and back again has remained a mystery to scientists.
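
One common way to write this behavior down (a generic power-law model for non-Newtonian fluids, not anything specific to the Duke study) is

\eta(\dot{\gamma}) = K \, \dot{\gamma}^{\,n-1}

where \eta is the viscosity, \dot{\gamma} is the shear rate (how fast the fluid is being deformed), and K and n are constants of the material. For a Newtonian fluid like water, n = 1 and the viscosity never changes; for a shear-thickening fluid like oobleck, n > 1, so the harder and faster you push on it, the more it resists flowing.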


This blogger mixed up a batch of jello to see the photoelastic effect for herself. When viewed with polarized light – from an iPhone screen and a circular polarizer – the jello changes color when squeezed.

“Water is simple to understand, and so is cornstarch,” said Lim, ’16, who is currently a graduate student at the University of Chicago. “However, a combination of the two produces this ‘liquid’ that ripples and flows, solidifies beneath your feet if you run on it, then turns back into a liquid if you stop running and stand still. I wanted to know why.”

The question beguiling scientists was whether sudden impact causes the cornstarch particles to “jam” into a solid like cement, or whether the suspension remains liquid but simply moves too slowly for its liquid-like properties to be apparent — similar to what happens if you skip a rock off the surface of a lake.

“There are these two opposing pictures,” said Robert Behringer, James B. Duke Professor of Physics at Duke. “Either you squish the material and turn it into cement temporarily, or you simply transmit the stress from the impactor straight to the boundary.”

Lim did two sets of experiments to find out which way oobleck works. In one experiment, she mixed black glitter into a transparent channel filled with oobleck, and then used a high-speed camera to watch how the material responded to the impact. The glitter let her track the motion of individual particles after the disc hit.
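
The article doesn't show the tracking software itself, but a minimal sketch of this kind of particle tracking, assuming frames from the high-speed video and using OpenCV's corner detection with Lucas-Kanade optical flow (not the authors' actual code), might look like this:

```python
import cv2

# Hypothetical file name; the published analysis used its own tracking pipeline.
cap = cv2.VideoCapture("oobleck_impact.avi")

ok, first = cap.read()
prev_gray = cv2.cvtColor(first, cv2.COLOR_BGR2GRAY)

# Dark glitter flakes stand out as strong corners against the pale cornstarch suspension.
pts = cv2.goodFeaturesToTrack(prev_gray, 500, 0.01, 5)
tracks = [[p.ravel()] for p in pts]

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    # Lucas-Kanade optical flow follows each flake from one frame to the next.
    new_pts, status, err = cv2.calcOpticalFlowPyrLK(prev_gray, gray, pts, None)
    for track, p, found in zip(tracks, new_pts, status.ravel()):
        if found:
            track.append(p.ravel())
    prev_gray, pts = gray, new_pts

cap.release()
# Each entry of `tracks` is now the frame-by-frame (x, y) path of one glitter flake.
```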


The photoelastic effect in gelatin.

Her video shows that the particles near the impact site jam and become solid, forming what the researchers call a “mass shock” wave that travels slowly through the suspension.

In a second set of experiments, Lim placed the oobleck in a container lined with gelatin, the main ingredient in Jell-O – besides sugar and food dye, of course. Gelatin is what is called a photoelastic material, which means that squeezing it changes how light passes through it, so stresses inside the material show up as visible patterns under polarized light.

“Next time you eat Jell-O, get out your sunglasses and get somebody else’s sunglasses and look between them,” Behringer said. “Because if you give it a shake you should see all these stress patterns bouncing around.”

After the metal disc hit the oobleck, the gelatin let Lim see how fast the resulting pressure wave traveled through the material and reached the boundary.


The researchers poured oobleck into a clear container lined with gelatin, a material that bends light when a pressure is applied to it. They saw that the force of a sudden impact is rapidly transmitted through the oobleck and to the boundary with the gelatin. Credit: Melody Lim.

They found that when the impact is sudden, the pressure wave traveled to the gelatin boundary faster than the “mass shock” wave. This means that the reason oobleck appears solid after a sudden impact is because the force of the collision is quickly transmitted to a solid boundary.

“If you are running across the water, that actually puts you into an impact velocity range where the pressure wave is significantly faster than the mass shock,” Behringer said. “Whereas if you try to walk across it, the impact speeds are slow, and the system actually doesn’t have the ability to transport the momentum quickly through the material and so you just sink in.”

“If you’d told me when I started that I would line a narrow container with Jell-o, add cornstarch, water, and black glitter, drop a piece of metal on it, then publish a paper on the results, I would have laughed at you,” Lim said.

CITATION: “Force and Mass Dynamics in Non-Newtonian Suspensions,” Melody X. Lim, Jonathan Barés, Hu Zheng and Robert P. Behringer. Physical Review Letters, Nov. 3, 2017. DOI: 10.1103/PhysRevLett.119.184501

Post by Kara Manke

