Duke Research Blog

Following the people and events that make up the research community at Duke.

Category: Art (Page 1 of 3)

Students Share Research Journeys at Bass Connections Showcase

From the highlands of north central Peru to high schools in North Carolina, student researchers in Duke’s Bass Connections program are gathering data in all sorts of unique places.

As the school year wound down, they packed into Duke’s Scharf Hall last week to hear one another’s stories.

Students and faculty gathered in Scharf Hall to learn about each other’s research at this year’s Bass Connections showcase. Photo by Jared Lazarus/Duke Photography.

The Bass Connections program brings together interdisciplinary teams of undergraduates, graduate students and professors to tackle big questions in research. This year’s showcase, which featured poster presentations and five “lightning talks,” was the first to include teams spanning all five of the program’s diverse themes: Brain and Society; Information, Society and Culture; Global Health; Education and Human Development; and Energy.

“The students wanted an opportunity to learn from one another about what they had been working on across all the different themes over the course of the year,” said Lori Bennear, associate professor of environmental economics and policy at the Nicholas School, during the opening remarks.

Students seized the chance, eagerly perusing peers’ posters and gathering for standing-room-only viewings of other teams’ talks.

The different investigations took students from rural areas of Peru, where teams interviewed local residents to better understand the transmission of deadly diseases like malaria and leishmaniasis, to the North Carolina Museum of Art, where mathematicians and engineers worked side-by-side with artists to restore paintings.

Machine learning algorithms created by the Energy Data Analytics Lab can pick out buildings from a satellite image and estimate their energy consumption. Image courtesy Hoël Wiesner.

Students in the Energy Data Analytics Lab didn’t have to look much farther than their smartphones for the data they needed to better understand energy use.

“Here you can see a satellite image, very similar to one you can find on Google maps,” said Eric Peshkin, a junior mathematics major, as he showed an aerial photo of an urban area featuring buildings and a highway. “The question is how can this be useful to us as researchers?”

With the help of new machine-learning algorithms, images like these could soon give researchers oodles of valuable information about energy consumption, Peshkin said.

“For example, what if we could pick out buildings and estimate their energy usage on a per-building level?” said Hoël Wiesner, a second year master’s student at the Nicholas School. “There is not really a good data set for this out there because utilities that do have this information tend to keep it private for commercial reasons.”

The lab has had success developing algorithms that can estimate the size and location of solar panels from aerial photos. Peshkin and Wiesner described how they are now creating new algorithms that can first identify the size and locations of buildings in satellite imagery, and then estimate their energy usage. These tools could provide a quick and easy way to evaluate the total energy needs in any neighborhood, town or city in the U.S. or around the world.
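The lab’s actual models are machine-learning algorithms trained on labeled aerial imagery, but the overall pipeline — find building footprints, then map each footprint to an energy estimate — can be sketched in a few lines. Everything below (the brightness threshold, the `kwh_per_pixel` factor, the function name) is an illustrative stand-in, not the lab’s method:

```python
import numpy as np
from scipy import ndimage

def estimate_building_energy(image, threshold=0.5, kwh_per_pixel=120.0):
    """Toy per-building energy estimate from a single-band 'satellite' image.

    Pixels brighter than `threshold` are treated as rooftops; connected
    bright regions are counted as individual buildings, and each building's
    energy use is assumed proportional to its footprint area
    (`kwh_per_pixel` is a made-up conversion factor).
    """
    mask = image > threshold                   # crude rooftop segmentation
    labels, n_buildings = ndimage.label(mask)  # connected-component labeling
    areas = ndimage.sum(mask, labels, index=range(1, n_buildings + 1))
    return n_buildings, [float(a) * kwh_per_pixel for a in areas]

# Synthetic 12x12 "image" with two bright rectangular rooftops.
img = np.zeros((12, 12))
img[1:4, 1:5] = 1.0    # building A: 3x4 = 12 pixels
img[7:10, 6:11] = 1.0  # building B: 3x5 = 15 pixels
n, energy = estimate_building_energy(img)
print(n, energy)  # 2 [1440.0, 1800.0]
```

In the real system the segmentation step is a trained neural network rather than a threshold, and the energy model is fit to utility data, but the structure — detect, then estimate per detection — is the same.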

“It’s not just that we can take one city, say Norfolk, Virginia, and estimate the buildings there. If you give us Reno, Tuscaloosa, Las Vegas, Phoenix — my hometown — you can absolutely get the per-building energy estimations,” Peshkin said. “And what that means is that policy makers will be more informed, NGOs will have the ability to best service their community, and more efficient, more accurate energy policy can be implemented.”

Some students’ research took them to the sidelines of local sports fields. Joost Op’t Eynde, a master’s student in biomedical engineering, described how he and his colleagues on a Brain and Society team are working with high school and youth football leagues to sort out what exactly happens to the brain during a high-impact sports game.

While a particularly nasty hit to the head might cause clear symptoms that can be diagnosed as a concussion, the accumulation of lesser impacts over the course of a game or season may also affect the brain. Eynde and his team are developing a set of tools to monitor both these impacts and their effects.

A standing-room only crowd listened to a team present on their work “Tackling Concussions.” Photo by Jared Lazarus/Duke Photography.

“We talk about inputs and outputs — what happens, and what are the results,” Eynde said. “For the inputs, we want to actually see when somebody gets hit, how they get hit, what kinds of things they experience, and what is going on in the head. And the output is we want to look at a way to assess objectively.”

The tools include surveys to estimate how often a player is impacted, an in-ear accelerometer called the DASHR that measures the intensity of jostles to the head, and tests of players’ performance on eye-tracking tasks.

“Right now we are looking on the scale of a season, maybe two seasons,” Eynde said. “What we would like to do in the future is actually follow some of these students throughout their career and get the full data for four years or however long they are involved in the program, and find out more of the long-term effects of what they experience.”

Kara J. Manke, PhD

Post by Kara Manke

Visualizing the Fourth Dimension

Living in a 3-dimensional world, we can easily visualize objects in 2 and 3 dimensions. But as a mathematician, playing with only 3 dimensions is limiting, Dr. Henry Segerman laments. An Assistant Professor in Mathematics at Oklahoma State University, Segerman spoke to Duke students and faculty on visualizing 4-dimensional space as part of the PLUM lecture series on April 18.

What exactly is the 4th dimension?

Let’s break down spatial dimensions into what we know. We can describe a point in 2-dimensional space with two numbers x and y, visualizing an object in the xy plane, and a point in 3D space with 3 numbers in the xyz coordinate system.

Plotting three dimensions in the xyz coordinate system.

While the green right-angle markers are not actually 90 degrees, we are able to infer the 3-dimensional geometry as shown on a 2-dimensional screen.

Likewise, we can describe a point in 4-dimensional space with four numbers – x, y, z, and w – where the purple w-axis is at a right angle to the other three axes; in other words, we can visualize 4 dimensions by squishing them down to three.

Plotting four dimensions in the xyzw coordinate system.

One commonly explored 4D object we can attempt to visualize is known as a hypercube. A hypercube is analogous to a cube in 3 dimensions, just as a cube is to a square.

How do we make a hypercube?

To create a 1D line, we take a point, make a copy, move the copy some distance away, and then connect the two points with a line.

Similarly, a square can be formed by making a copy of a line and connecting them to add the second dimension.

So, to create a hypercube, we move identical 3D cubes parallel to each other, and then connect them with four lines, as depicted in the image below.

To create an n–dimensional cube, we take 2 copies of the (n−1)–dimensional cube and connect corresponding corners.
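That recursive recipe — copy the (n−1)-cube and connect corresponding corners — translates directly into code. This is a minimal sketch, with vertices written as coordinate tuples of 0s and 1s:

```python
def hypercube(n):
    """Vertices and edges of the n-dimensional unit cube, built recursively:
    take two copies of the (n-1)-cube and connect corresponding corners."""
    if n == 0:
        return [()], []                      # a single point, no edges
    verts, edges = hypercube(n - 1)
    k = len(verts)
    new_verts = [v + (0,) for v in verts] + [v + (1,) for v in verts]
    new_edges = list(edges)                               # first copy
    new_edges += [(a + k, b + k) for a, b in edges]       # shifted copy
    new_edges += [(i, i + k) for i in range(k)]           # connect corners
    return new_verts, new_edges

v3, e3 = hypercube(3)
v4, e4 = hypercube(4)
print(len(v3), len(e3))  # 8 12  (an ordinary cube)
print(len(v4), len(e4))  # 16 32 (the hypercube)
```

The counts follow the recursion: an n-cube has twice the vertices of an (n−1)-cube, and twice its edges plus one new edge per connected corner.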

Even with a 3D-printed model, trying to visualize the hypercube can get confusing. 

How can we make a better picture of a hypercube? “You sort of cheat,” Dr. Segerman explained. One way to cheat is by casting shadows.

Parallel projection shadows, depicted in the figure below, are cast by rays of light falling at a right angle to the plane of the table. We can see that some of the edges of the shadow are parallel, which is also true of the physical object. However, some edges that cross in the 2D shadow don’t actually meet in the 3D object, which makes the projection harder to map back onto the 3D object.

Parallel projection of a cube on a transparent sheet of plastic above the table.

One way to cast shadows with no collisions is through stereographic projection as depicted below.

The stereographic projection is a mapping (function) that projects a sphere onto a plane. The projection is defined everywhere on the sphere except at the projection point itself, the point at the top of the sphere.
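Under the usual convention — unit sphere, projecting from the north pole (0, 0, 1) onto the z = 0 plane — the mapping has a simple closed form: (x, y, z) goes to (x/(1−z), y/(1−z)). A small sketch:

```python
import math

def stereographic(x, y, z):
    """Project a point on the unit sphere from the north pole (0, 0, 1)
    onto the z = 0 plane. The north pole itself has no image."""
    if math.isclose(z, 1.0):
        raise ValueError("the projection point has no image")
    return x / (1 - z), y / (1 - z)

# The south pole maps to the origin; points on the equator map to the
# unit circle. Points near the north pole fly off toward infinity.
print(stereographic(0, 0, -1))  # (0.0, 0.0)
print(stereographic(1, 0, 0))   # (1.0, 0.0)
```

Because nearby points on the sphere map to nearby points on the plane (away from the pole), curves drawn on the sphere cast faithful, collision-free shadows.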

For the object below, the curves on the sphere cast shadows, mapping them to a straight line grid on the plane. With stereographic projection, each side of the 3D object maps to a different point on the plane so that we can view all sides of the original object.

Stereographic projection of a grid pattern onto the plane. 3D print the model at Duke’s Co-Lab!

Just as shadows of 3D objects are images formed on a 2D surface, our retina has only a 2D surface area to detect light entering the eye, so we actually see a 2D projection of our 3D world. Our minds are computationally able to reconstruct the 3D world around us by using previous experience and information from the 2D images such as light, shade, and parallax.

Projection of a 3D object on a 2D surface.

Projection of a 4D object on a 3D world

How can we visualize the 4-dimensional hypercube?

To use stereographic projection, we radially project the edges of a 3D cube (left of the image below) to the surface of a sphere to form a “beach ball cube” (right).

The faces of the cube radially projected onto the sphere.

Placing a point light source at the north pole of the bloated cube, we can obtain the projection onto a 2D plane as shown below.

Stereographic projection of the “beach ball cube” pattern to the plane. View the 3D model here.

Applied one dimension higher, we can theoretically inflate a 4-dimensional shape into a ball, place a light at the top of the object, and project the image down into 3 dimensions.

Left: 3D print of the stereographic projection of a “beach ball hypercube” to 3-dimensional space. Right: computer render of the same, including the 2-dimensional square faces.

Forming n–dimensional cubes from (n−1)–dimensional renderings.

Thus, the constructed 3D model of the “beach ball cube” shadow is the projection of the hypercube into 3-dimensional space. Here the cubical cells that bound the hypercube appear as distorted cubes instead of flat strips.

Just as the edges of the top object in the figure can be connected together by folding the squares through the 3rd dimension to form a cube, the edges of the bottom object can be connected through the 4th dimension.

Why are we trying to understand things in 4 dimensions?

As far as we know, the space around us consists of only 3 dimensions. Mathematically, however, there is no reason to limit our understanding of higher-dimensional geometry and space to only 3, since there is nothing special about the number 3 that makes it the only possible number of dimensions space can have.

From a physics perspective, Einstein’s theory of Special Relativity suggests a deep connection between space and time, so the space-time continuum consists of 3 spatial dimensions and 1 temporal dimension. For example, consider a blooming flower. The flower’s position is not changing: it is not moving up or sideways. Yet we can observe its transformation, change unfolding along an additional dimension: time. Equating time with the 4th dimension is one example, but the 4th dimension can also be spatial like the first 3. While it is possible to visualize space-time by examining snapshots of the flower with time held constant, it is also useful to understand how space and time interrelate geometrically.

Explore more in the 4th dimension with Hypernom or Dr. Segerman’s book “Visualizing Mathematics with 3D Printing“!

Post by Anika Radiya-Dixit.


Seeing Nano

Take pictures at more than 300,000 times magnification with electron microscopes at Duke

Sewer gnat head

An image of a sewer gnat’s head taken through a scanning electron microscope. Courtesy of Fred Nijhout.

The sewer gnat is a common nuisance around kitchen and bathroom drains that’s no bigger than a pea. But magnified thousands of times, its compound eyes and bushy antennae resemble a first place winner in a Movember mustache contest.

Sewer gnats’ larger cousins, horseflies, are known for their painful bite. Zoom in and it’s easy to see how they hold onto their furry livestock prey: the tiny hooked hairs on their feet look like Velcro.

Students in professor Fred Nijhout’s entomology class photograph these and other specimens at more than 300,000 times magnification at Duke’s Shared Material & Instrumentation Facility (SMIF).

There the insects are dried, coated in gold and palladium, and then bombarded with a beam of electrons from a scanning electron microscope, which can resolve structures tens of thousands of times smaller than the width of a human hair.

From a ladybug’s leg to a weevil’s suit of armor, the bristly, bumpy, pitted surfaces of insects are surprisingly beautiful when viewed up close.

“The students have come to treat travels across the surface of an insect as the exploration of a different planet,” Nijhout said.

Horsefly foot

The foot of a horsefly is equipped with menacing claws and Velcro-like hairs that help them hang onto fur. Photo by Valerie Tornini.

Weevil

The hard outer skeleton of a weevil looks smooth and shiny from afar, but up close it’s covered with scales and bristles. Courtesy of Fred Nijhout.

fruit fly wing

Magnified 500 times, the rippled edges of this fruit fly wing are the result of changes in the insect’s genetic code. Courtesy of Eric Spana.

You, too, can gaze at alien worlds too small to see with the naked eye. Students and instructors across campus can use the SMIF’s high-powered microscopes and other state-of-the-art research equipment at no charge with support from the Class-Based Explorations Program.

Biologist Eric Spana’s experimental genetics class uses the microscopes to study fruit flies that carry genetic mutations that alter the shape of their wings.

Students in professor Hadley Cocks’ mechanical engineering 415L class take lessons from objects that break. A scanning electron micrograph of a cracked cymbal once used by the Duke pep band reveals grooves and ridges consistent with the wear and tear from repeated banging.

cracked cymbal

Magnified 3000 times, the surface of this broken cymbal once used by the Duke Pep Band reveals signs of fatigue cracking. Courtesy of Hadley Cocks.

These students are among more than 200 undergraduates in eight classes who benefitted from the program last year, thanks to a grant from the Donald Alstadt Foundation.

You don’t have to be a scientist, either. Historians and art conservators have used scanning electron microscopes to study the surfaces of Bronze Age pottery, the composition of ancient paints and even dust from Egyptian mummies and the Shroud of Turin.

Instructors and undergraduates are invited to find out how they could use the microscopes and other nanotech equipment in the SMIF in their teaching and research. Queries should be directed to Dr. Mark Walters, Director of SMIF, via email at mark.walters@duke.edu.

Located on Duke’s West Campus in the Fitzpatrick Building, the SMIF is a shared use facility available to Duke researchers and educators as well as external users from other universities, government laboratories or industry through a partnership called the Research Triangle Nanotechnology Network. For more info visit http://smif.pratt.duke.edu/.

Scanning electron microscope

This scanning electron microscope could easily be mistaken for equipment from a dentist’s office.


Post by Robin Smith

When Art Tackles the Invisibly Small

Huddled in a small cinderblock room in the basement of Hudson Hall, visual artist Raewyn Turner and mechatronics engineer Brian Harris watch as Duke postdoc Nick Geitner positions a glass slide under the bulky eyepiece of an optical microscope.

To the naked eye, the slide is completely clean. But after some careful adjustments of the microscope, a field of technicolor spots splashes across the viewfinder. Each point shows light scattering off one of the thousands of silver nanoparticles spread in a thin sheet across the glass.

“It’s beautiful!” Turner said. “They look like a starry sky.”


A field of 10-nanometer diameter silver nanoparticles (blue points) and clusters of 2-4 nanoparticles (other colored points) viewed under a dark-field hyperspectral microscope. The clear orbs are cells of the live alga Chlorella vulgaris. Image courtesy Nick Geitner.

Turner and Harris, New Zealand natives, have traveled halfway across the globe to meet with researchers at the Center for the Environmental Implications of Nanotechnology (CEINT). Here, they are learning all they can about nanoparticles: how scientists go about detecting these unimaginably small objects, and how these tiny bits of matter interact with humans, with the environment and with each other.


The mesocosms, tucked deep in the Duke Forest, currently lie dormant.

The team hopes the insights they gather will inform the next phases of Steep, an ongoing project with science communicator Maryse de la Giroday which uses visual imagery to explore how humans interact with and “sense” the nanoparticles that are increasingly being used in our electronics, food, medicines, and even clothing.

“The general public, including ourselves, we don’t know anything about nanoparticles. We don’t understand them, we don’t know how to sense them, we don’t know where they are,” Turner said. “What we are trying to do is see how scientists sense nanoparticles, how they take data about them and translate it into sensory data.”

Duke Professor and CEINT member Mark Wiesner, who is Geitner’s postdoctoral advisor, serves as a scientific advisor on the project.

“Imagery is a challenge when talking about something that is too small to see,” Wiesner said. “Our mesocosm work provides an opportunity to visualize how we are investigating the interactions of nanomaterials with living systems, and our microscopy work provides some useful, if not beautiful, images. But Raewyn has been brilliant in finding metaphors, cultural references, and accompanying images to get points across.”


Graduate student Amalia Turner describes how she uses the dark-field microscope to characterize gold nanoparticles in soil. From left: Amalia Turner, Nick Geitner, Raewyn Turner, and Brian Harris.

On Tuesday, Geitner led the pair on a soggy tour of the mesocosms, 30 miniature coastal ecosystems tucked into the Duke Forest where researchers are finding out where nanoparticles go when released into the environment. After that, the group retreated to the relative warmth of the laboratory to peek at the particles under a microscope.

Even at 400 times magnification, the silver nanoparticles on the slide can’t really be “seen” in any detail, Geitner explained.

“It is sort of like looking at the stars,” Geitner said. “You can’t tell what is a big star and what is a small star because they are so far away, you just get that point of light.”

But the image still contains loads of information, Geitner added, because each particle scatters a different color of light depending on its size and shape: particles on their own shine a cool blue, while particles that have joined together in clusters appear green, orange or red.

During the week, Harris and Turner saw a number of other techniques for studying nanoparticles, including scanning electron microscopes and molecular dynamics simulations.


An image from the Steep collection, which uses visual imagery to explore how humans interact with the increasingly abundant gold nanoparticles in our environment. Credit: Raewyn Turner and Brian Harris.

“What we have found really, really interesting is that the nanoparticles have different properties,” Turner said. “Each type of nanoparticle is different from every other one, and it also depends on which environment you put them into, just like how a human will behave in different environments in different ways.”

Geitner says the experience has been illuminating for him, too. “I have never in my life thought of nanoparticles from this perspective before,” Geitner said. “A lot of their questions are about really, what is the difference when you get down to atoms, molecules, nanoparticles? They are all really, really small, but what does small mean?”

Kara J. Manke, PhD

Post by Kara Manke

Meet the New Blogger: Shanen Ganapathee

Hi y’all! My name is Shanen and I am from the deep, deep South… of the globe. I was born and raised in Mauritius, a small island off the coast of Madagascar, once home to the now-extinct Dodo bird.

Shanen Ganapathee

Shanen Ganapathee is a senior who wishes to be ‘a historian of the brain’

The reason I’m at Duke has to do with a desire to do what I love most — exploring art, science and their intersection. You will often find me writing prose, inspired by lessons in neuroanatomy, or casting a DNA strand as the main character in a short story.

I’m excited about Africa, and the future of higher education and research on the continent. I believe in ideas, especially when they are big and bold. I’m a dreamer and an idealist, though some might call me naive. I am deeply passionate about research, but above all about how it is made accessible to a wide audience.

I am currently a senior pursuing a Program II in Human Cognitive Evolution, a major I designed in my sophomore year with the help of my advisor, Dr. Leonard White, whom I had the luck to meet through the Neurohumanities Program in Paris.

This semester, I am working on a thesis project under the guidance of Dr. Greg Wray, inspired by an independent study I did under Dr. Steven Churchill, in which we examined differences between early human and Neandertal cognition and behavior. I am interested in using ancient DNA genomics to answer the age-old question: what makes us human? My claim is that the advent of artistic ventures truly shaped the beginning of behavioral modernity. In a sense, I want to be a historian of the brain.

My first exposure to the world of genomics was through the FOCUS program — Genome in our Lives — my freshman fall. Ever since, I have been fascinated by what the human genome can teach us. It is a window into our collective pasts as much as it informs us about our present and future. I am particularly intrigued by how the forces of evolution have shaped us to become the species we are.

I am excited about joining the Duke Research blog and sharing some great science with you all.

Cracking a Hit-and-Run Whodunit — With Lasers

The scratch was deep, two feet long, and spattered with paint flecks. Another vehicle had clearly grazed the side of Duke graduate student Jin Yu’s silver Honda Accord.

But the culprit had left no note, no phone number, and no insurance information.


Duke graduate student Jin Yu used laser-based imaging to confirm the source of a large scratch on the side of her car. Paint samples from an undamaged area on her Honda Accord (top left) and a suspected vehicle (top right) gave her the unique pump-probe microscopy signatures of the pigments on each car. The damaged areas of the Honda (bottom left) and the suspected vehicle on right (bottom right) show pigment signatures from both vehicles.

The timing of the accident, the location of the scratch, and the color of the foreign paint all pointed to a likely suspect: another vehicle in her apartment complex parking lot, also sporting a fresh gash.

She had a solid lead, but Yu wasn’t quite satisfied. The chemistry student wanted to make sure her case was rock-solid.

“I wanted to show them some scientific evidence,” Yu said.

And lucky for her, she had just the tools to do that.

As a researcher in the Warren Warren lab, Yu spends her days as a scientific sleuth, investigating how a laser-based tool called pump-probe microscopy can be used to differentiate between individual pigments of paint, even if they appear identical to the human eye.

The team is developing the technique as a way for art historians and conservators to peer under the surface of priceless paintings without damaging the artwork. But Yu thought there was no reason the technique couldn’t be used for forensics, too.

“The idea popped into my mind — car paint is made up of pigments, just like paintings,” Yu said. “So, if I can compare the pigments remaining on my car with the suspected car, and they match up, that would be a pretty nice clue for finding the suspected car.”

Using a clean set of eyebrow tweezers, Yu carefully gathered small flecks of paint from her car and from the suspected vehicle and sealed them up inside individual Ziploc bags. She collected samples both from the scratched up areas, where the paint was mixed, and from undamaged areas on both cars.

She left a note on the car, citing the preliminary evidence and stating her plan to test the paint samples. Then, back at the lab, she examined all four samples with the pump-probe microscope. Unlike a standard optical microscope, this device illuminates each sample with a precisely timed series of laser pulses; each pigment absorbs and then re-emits this laser light in a slightly different pattern depending on its chemical structure, creating a unique signature.
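The matching step can be caricatured as comparing signal vectors: treat each pigment’s response as a vector of measurements and score the overlap with cosine similarity. The traces below are made-up numbers purely for illustration — the real pump-probe analysis compares full time-resolved laser scans, not a single score:

```python
import numpy as np

def signature_similarity(sig_a, sig_b):
    """Cosine similarity between two pigment response traces (1.0 = identical
    direction). A simplified stand-in for comparing pump-probe signatures."""
    a = np.asarray(sig_a, dtype=float)
    b = np.asarray(sig_b, dtype=float)
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

honda   = [0.9, 0.4, -0.2, 0.1]    # hypothetical reference trace, Honda paint
suspect = [0.1, 0.8, 0.5, -0.3]    # hypothetical reference trace, suspect paint
scratch = [0.5, 0.6, 0.15, -0.1]   # trace from the mixed, scratched area

# The scratch sample resembles BOTH references, consistent with paint
# transfer between the two cars.
print(signature_similarity(scratch, honda) > 0.5)    # True
print(signature_similarity(scratch, suspect) > 0.5)  # True
```

A sample from an undamaged area, by contrast, should score high against only its own car’s reference trace.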


After finding the gash on her Accord (top left), Yu left a note (top right) on the car that she suspected of having caused the accident. Under an optical microscope, samples from damaged areas on the cars show evidence of the same two kinds of paint (bottom). Yu used pump-probe microscopy to confirm that the pigments in the paint samples matched.

The samples from the undamaged areas gave her the characteristic pigment signatures from both of the two vehicles.

She then looked at the paint samples taken from the scratched areas. She found clear evidence of paint pigment from the suspected car on her Honda, and clear evidence of paint pigment from her Honda on the suspected car. This was like DNA evidence, of the automotive variety.

Fortunately, the owner of the suspect vehicle contacted Yu to confess and pay to have her car fixed, without demanding the results of the paint analysis. “But it was reassuring to have some scientific evidence in case she denied the accident,” Yu said.

Yu says she had no interest in forensic science when she started the investigation, but the experience has certainly piqued her curiosity.

“I had never imagined that I can use pump-probe microscopy for forensic science before this car accident happened,” Yu said. “But I think it shows some interesting possibilities.”

Kara J. Manke, PhD

Post by Kara Manke
