Duke Research Blog

Following the people and events that make up the research community at Duke.

Category: Visualization (Page 1 of 11)

3D Virus Cam Catches Germs Red-Handed

A 3D plot of a virus wiggling around

The Duke team used their 3D virus cam to spy on this small lentivirus as it danced through a salt water solution.

Before germs like viruses can make you sick, they first have to make a landing on one of your cells — Mars Rover style — and then punch their way inside.

A team of physical chemists at Duke is building a microscope so powerful that it can spot these minuscule germs in the act of infection.

The team has created a new 3D “virus cam” that can spy on tiny viral germs as they wriggle around in real time. In a video caught by the microscope, you can watch as a lentivirus bounces and jitters through an area a little wider than a human hair.

Next, they hope to develop this technique into a multi-functional “magic camera” that will let them see not only the dancing viruses, but also the much larger cell membranes they are trying to breach.

“Really what we are trying to investigate is the very first contacts of the virus with the cell surface — how it calls receptors, and how it sheds its envelope,” said group leader Kevin Welsher, assistant professor of chemistry at Duke. “We want to watch that process in real time, and to do that, we need to be able to lock on to the virus right from the first moment.”

A 3D plot spells out the name "Duke"

To test out the microscope, the team attached a fluorescent bead to a motion controller and tracked its movements as it spelled out a familiar name.

This isn’t the first microscope that can track real-time, 3D motions of individual particles. In fact, as a postdoctoral researcher at Princeton, Welsher built an earlier model and used it to track a bright fluorescent bead as it got stuck in the membrane of a cell.

But the new virus cam, built by Duke postdoc Shangguo Hou, can track particles that are faster-moving and dimmer compared to earlier microscopes. “We were trying to overcome a speed limit, and we were trying to do so with the fewest number of photons collected possible,” Welsher said.

The ability to spot dimmer particles is particularly important when tracking viruses, Welsher said. These small bundles of proteins and DNA don’t naturally give off any light, so to see them under a microscope, researchers first have to stick something fluorescent on them. But many bright fluorescent particles, such as quantum dots, are pretty big compared to the size of most viruses. Attaching one is kind of like sticking a baseball onto a basketball – there is a good chance it might affect how the virus moves and interacts with cells.

The new microscope can detect the fainter light given off by much smaller fluorescent proteins – which, if the virus is a basketball, are approximately the size of a pea. Fluorescent proteins can also be inserted into the viral genome, which allows them to be incorporated into the virus as it is being assembled.

“That was the big move for us,” Welsher said. “We didn’t need to use a quantum dot, we didn’t need to use an artificial fluorescent bead. As long as the fluorescent protein was somewhere in the virus, we could spot it.” To create their viral video, Welsher’s team enlisted Duke’s Viral Vector Core to insert a yellow fluorescent protein into their lentivirus.

Now that the virus-tracking microscope is up-and-running, the team is busy building a laser scanning microscope that will also be able to map cell surfaces nearby. “So if we know where the particle is, we can also image around it and reconstruct where the particle is going,” Welsher said. “We hope to adapt this to capturing viral infection in real time.”

“Robust real-time 3D single-particle tracking using a dynamically moving laser spot,” Shangguo Hou, Xiaoqi Lang and Kevin Welsher. Optics Letters, June 15, 2017. DOI: 10.1364/OL.42.002390

Post by Kara Manke, PhD

Immerse Yourself in Virtual Reality on the Quad

Open since September 2016, the Virtual Reality Room on the first floor lounge of Edens 1C allows students to experience virtual reality using the HTC Vive headset and controllers.

DURHAM, N.C. — The virtual reality headset looked like something out of a science fiction film. It was tethered by a long cable to a glass-encased PC, which in turn was connected to thick hoses filled with glowing blue coolant.

I slipped the mask over my head and was instantly transported to another world.

In real life, I was in the lower level of Edens residence hall testing out the recently opened BoltVR gaming room during an event hosted by the Duke Digital Initiative (DDI). Virtual reality is one of the technologies that DDI is exploring for its potential in teaching and learning.

Rebekkah Huss shoots invaders with a virtual bow and arrow in Duke’s newest virtual reality space. Open to students 4 p.m. to 10 p.m. on weekdays, noon to midnight on weekends.

BoltVR is a virtual reality space outfitted with the immersive room-scale technology of the HTC Vive, an $800 gaming system consisting of the headset, hand-held controllers and motion sensors in the room. The VR experience is a new addition to the Bolt gaming suite that opened in 2015 for Duke students.

Once I had the headset on, the bare walls and carpet were suddenly replaced by the yellow-lined grid of the Holodeck from Star Trek. It was like nothing I’d ever seen. This is like the home screen for the gaming system, explained Mark-Everett McGill, the designer of the BoltVR game room, as he scrolled through the more than 70 VR experiences downloaded to the BoltVR account on Steam.

McGill chose a story experience so that I could adjust to being able to move around physical objects in a virtual space.

It was like the floor melted away. On a tiny asteroid in front of me, The Little Prince and his rose played out their drama from the cover of the classic children’s book. The stars surrounded me, and I tilted my head back to watch a giant planet fly over.

I could walk around the prince’s tiny asteroid and inspect the little world from all angles, but I found it disorienting to walk with normal stability while my eyes told me that I was floating in space. The HTC Vive has a built-in guidance system called the Chaperone that uses a map of the room to keep me from crashing into the walls, but I still somehow managed to bump into a spectator.

“A lot of people get motion sickness when they use VR because your eyes are sensing the movement, but your ears are telling you that you aren’t doing anything,” McGill said.

Lucky for me, I have a strong stomach and suffered no ill effects while wearing the headset. The HTC Vive also helps counteract motion sickness because its room-scale design allows for normal walking and movement.

There was, however, one part of the experience that felt very odd: the handheld controllers. The controllers are tracked by wall-mounted sensors, so they show up really well in the VR headset. The problem was that in the titles I played, my hands and body were invisible to me.

The headset and controllers themselves are incredibly sensitive and accurate. I think most people would intuitively understand how to use them, especially if they have a gaming background, but I missed having the comfort of my own arms. So while the VR worlds are visually believable and the technology powering them is absolutely fascinating, there is still lots of room for new innovations.

Once I started playing games though, I no longer cared about the limitations of the tech because I was having so much fun!

The most popular student choice in the BoltVR is a subgame of The Lab by Valve: a simple tower defense game where the player uses a bow and arrow to shoot little 2D stickmen and stop their attack.

Everything about using the bow felt pretty realistic, from loading arrows to using angles to control the trajectory of a shot. There was even a torch that I used to light my arrow on fire before launching it at an attacker. With unlimited ammunition, I happily guarded my tower from waves of baddies until I finally had to let someone else have a turn.

To learn more about VR experiences for teaching and learning at Duke, join the listserv at https://lists.duke.edu/sympa/subscribe/vr2learn.

Post by Rebekkah Huss

Cooking Up “Frustrated” Magnets in Search of Superconductivity

Sara Haravifard

A simplified version of Sara Haravifard’s recipe for new superconductors, by the National High Magnetic Field Laboratory

Duke physics professor Sara Haravifard is mixing, cooking, squishing and freezing “frustrated” magnetic crystals in search of the origins of superconductivity.

Superconductivity refers to the ability of electrons to travel endlessly through certain materials, called superconductors, without adding any energy — think of a car that can drive forever with no gas or electricity. And just the way gas-less, charge-less cars would make travel vastly cheaper, superconductivity has the potential to revolutionize the electronics and energy industries.

But superconductors are extremely rare, and are usually only superconductive at extremely cold temperatures — too cold for any but a few highly specialized applications. A few “high-temperature” superconductors have been discovered, but scientists are still flummoxed as to why and how these superconductors exist.

Haravifard hopes that her magnet experiments will reveal the origins of high-temperature superconductivity so that researchers can design and build new materials with this amazing property. In the process, her team may also discover materials that are useful in quantum computing, or even entirely new states of matter.

Learn more about their journey in this fascinating infographic by the National High Magnetic Field Laboratory.

Infographic describing magnetic crystal research

Infographic courtesy of the National High Magnetic Field Laboratory

Post by Kara Manke

Visualizing the Fourth Dimension

Living in a 3-dimensional world, we can easily visualize objects in 2 and 3 dimensions. But for a mathematician, playing with only 3 dimensions is limiting, Dr. Henry Segerman laments. An Assistant Professor in Mathematics at Oklahoma State University, Segerman spoke to Duke students and faculty on visualizing 4-dimensional space as part of the PLUM lecture series on April 18.

What exactly is the 4th dimension?

Let’s break down spatial dimensions into what we know. We can describe a point in 2-dimensional space with two numbers x and y, visualizing an object in the xy plane, and a point in 3D space with 3 numbers in the xyz coordinate system.

Plotting three dimensions in the xyz coordinate system.

While the green right-angle markers are not actually 90 degrees, we are able to infer the 3-dimensional geometry as shown on a 2-dimensional screen.

Likewise, we can describe a point in 4-dimensional space with four numbers – x, y, z, and w – where the purple w-axis is at a right angle to the other three axes; in other words, we can visualize 4 dimensions by squishing them down to three.

Plotting four dimensions in the xyzw coordinate system.

One commonly explored 4D object we can attempt to visualize is known as a hypercube. A hypercube is analogous to a cube in 3 dimensions, just as a cube is to a square.

How do we make a hypercube?

To create a 1D line, we take a point, make a copy, move the copied point parallel to the original some distance away, and then connect the two points with a line.

Similarly, a square can be formed by making a copy of a line and connecting them to add the second dimension.

So, to create a hypercube, we move an identical 3D cube parallel to the original, and then connect corresponding corners with eight lines, as depicted in the image below.

To create an n–dimensional cube, we take 2 copies of the (n−1)–dimensional cube and connect corresponding corners.
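The doubling recipe above can be sketched in a few lines of Python (a generic construction for illustration, not tied to any particular visualization tool): the n-cube’s vertices are all tuples of 0s and 1s, and an edge joins two corresponding corners, i.e. vertices that differ in exactly one coordinate.

```python
from itertools import product

def n_cube(n):
    """Vertices and edges of the n-dimensional cube, built by doubling:
    vertices are all 0/1 coordinate tuples, and an edge connects two
    vertices that differ in exactly one coordinate."""
    vertices = list(product((0, 1), repeat=n))
    edges = [(u, v) for i, u in enumerate(vertices)
             for v in vertices[i + 1:]
             if sum(a != b for a, b in zip(u, v)) == 1]
    return vertices, edges

verts, edges = n_cube(4)
print(len(verts), len(edges))  # a hypercube has 16 corners and 32 edges
```

Running it for n = 3 gives the familiar cube (8 corners, 12 edges), and each step up in dimension doubles the corners.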

Even with a 3D-printed model, trying to visualize the hypercube can get confusing. 

How can we make a better picture of a hypercube? “You sort of cheat,” Dr. Segerman explained. One way to cheat is by casting shadows.

Parallel projection shadows, depicted in the figure below, are caused by rays of light falling at a right angle to the plane of the table. We can see that some of the edges of the shadow are parallel, which is also true of the physical object. However, some of the edges that collide in the 2D cast don’t actually collide in the 3D object, making the projection more complicated to map back to the 3D object.

Parallel projection of a cube on a transparent sheet of plastic above the table.

One way to cast shadows with no collisions is through stereographic projection as depicted below.

The stereographic projection is a mapping (function) that projects a sphere onto a plane. The projection is defined on the entire sphere, except the point at the top of the sphere.
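In coordinates, the map is simple: for a point (x, y, z) on the unit sphere, the ray from the north pole (0, 0, 1) through the point meets the plane z = 0 at (x/(1−z), y/(1−z)). A minimal Python sketch of this standard formula:

```python
def stereographic(x, y, z):
    """Project a point on the unit sphere onto the plane z = 0,
    as seen from a light source at the north pole (0, 0, 1)."""
    if z == 1:
        raise ValueError("the north pole itself has no projection")
    return (x / (1 - z), y / (1 - z))

print(stereographic(0.0, 0.0, -1.0))  # south pole maps to the origin
print(stereographic(1.0, 0.0, 0.0))   # a point on the equator maps to (1.0, 0.0)
```

Points near the north pole are sent far from the origin, which is why the projection is defined everywhere except that single point.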

For the object below, the curves on the sphere cast shadows, mapping them to a straight line grid on the plane. With stereographic projection, each side of the 3D object maps to a different point on the plane so that we can view all sides of the original object.

Stereographic projection of a grid pattern onto the plane. 3D print the model at Duke’s Co-Lab!

Just as shadows of 3D objects are images formed on a 2D surface, our retina has only a 2D surface area to detect light entering the eye, so we actually see a 2D projection of our 3D world. Our minds are computationally able to reconstruct the 3D world around us by using previous experience and information from the 2D images such as light, shade, and parallax.

Projection of a 3D object on a 2D surface.

Projection of a 4D object on a 3D world

How can we visualize the 4-dimensional hypercube?

To use stereographic projection, we radially project the edges of a 3D cube (left of the image below) to the surface of a sphere to form a “beach ball cube” (right).

The faces of the cube radially projected onto the sphere.

Placing a point light source at the north pole of the bloated cube, we can obtain the projection onto a 2D plane as shown below.

Stereographic projection of the “beach ball cube” pattern to the plane. View the 3D model here.

Applied to one dimension higher, we can theoretically blow a 4-dimensional shape up into a ball, and then place a light at the top of the object, and project the image down into 3 dimensions.

Left: 3D print of the stereographic projection of a “beach ball hypercube” to 3-dimensional space. Right: computer render of the same, including the 2-dimensional square faces.

Forming n–dimensional cubes from (n−1)–dimensional renderings.

Thus, the constructed 3D model of the “beach ball cube” shadow is the projection of the hypercube into 3-dimensional space. Here the 4-dimensional edges of the hypercube become distorted cubes instead of strips.

Just as the edges of the top object in the figure can be connected together by folding the squares through the 3rd dimension to form a cube, the edges of the bottom object can be connected through the 4th dimension.

Why are we trying to understand things in 4 dimensions?

As far as we know, the space around us consists of only 3 dimensions. Mathematically, however, there is no reason to limit our understanding of higher-dimensional geometry and space to only 3, since there is nothing special about the number 3 that makes it the only possible number of dimensions space can have.

From a physics perspective, Einstein’s theory of Special Relativity suggests a connection between space and time, so the space-time continuum consists of 3 spatial dimensions and 1 temporal dimension. For example, consider a blooming flower. The flower’s position is not changing: it is not moving up or sideways. Yet we can observe its transformation, which reflects change along an additional dimension: time. Equating time with the 4th dimension is one example, but the 4th dimension can also be positional like the first 3. While it is possible to visualize space-time by examining snapshots of the flower with time as a constant, it is also useful to understand how space and time interrelate geometrically.

Explore more in the 4th dimension with Hypernom or Dr. Segerman’s book “Visualizing Mathematics with 3D Printing”!

Post by Anika Radiya-Dixit.


Data Geeks Go Head to Head

For North Carolina college students, “big data” is becoming a big deal. The proof: signups for DataFest, a 48-hour number-crunching competition held at Duke last weekend, set a record for the third time in a row this year.

DataFest 2017

More than 350 data geeks swarmed Bostock Library this weekend for a 48-hour number-crunching competition called DataFest. Photo by Loreanne Oh, Duke University.

Expected turnout was so high that event organizer and Duke statistics professor Mine Cetinkaya-Rundel was even required by state fire code to sign up for “crowd manager” safety training — her certificate of completion is still proudly displayed on her Twitter feed.

Nearly 350 students from 10 schools across North Carolina, California and elsewhere flocked to Duke’s West Campus from Friday, March 31 to Sunday, April 2 to compete in the annual event.

Teams of two to five students worked around the clock over the weekend to make sense of a single real-world data set. “It’s an incredible opportunity to apply the modeling and computing skills we learn in class to actual business problems,” said Duke junior Angie Shen, who participated in DataFest for the second time this year.

The surprise dataset was revealed Friday night. Just taming it into a form that could be analyzed was a challenge. Containing millions of data points from an online booking site, it was too large to open in Excel. “It was bigger than anything I’ve worked with before,” said NC State statistics major Michael Burton.

DataFest 2017

The mystery data set was revealed Friday night in Gross Hall. Photo by Loreanne Oh.

Because of its size, even simple procedures took a long time to run. “The dataset was so large that we actually spent the first half of the competition fixing our crashed software and did not arrive at any concrete finding until late afternoon on Saturday,” said Duke junior Tianlin Duan.

The organizers of DataFest don’t specify research questions in advance. Participants are given free rein to analyze the data however they choose.

“We were overwhelmed with the possibilities. There was so much data and so little time,” said NCSU psychology major Chandani Kumar.

“While for the most part data analysis was decided by our teachers before now, this time we had to make all of the decisions ourselves,” said Kumar’s teammate Aleksey Fayuk, a statistics major at NCSU.

As a result, these budding data scientists didn’t just write code. They formed theories, found patterns, tested hunches. Before the weekend was over, they also visualized their findings, made recommendations and communicated them to stakeholders.

This year’s participants came from more than 10 schools, including Duke, UNC, NC State and North Carolina A&T. Students from UC Davis and UC Berkeley also made the trek. Photo by Loreanne Oh.

“The most memorable moment was when we finally got our model to start generating predictions,” said Duke neuroscience and computer science double major Luke Farrell. “It was really exciting to see all of our work come together a few hours before the presentations were due.”

Consultants were available throughout the weekend to help with any questions participants might have. Recruiters from both start-ups and well-established companies were also on site for participants looking to network or share their resumes.

“Even as late as 11 p.m. on Saturday we were still able to find a professor from the Duke statistics department at the Edge to help us,” said Duke junior Yuqi Yun, whose team presented their results in a winning interactive visualization. “The organizers treat the event not merely as a contest but more of a learning experience for everyone.”

Caffeine was critical. “By 3 a.m. on Sunday morning, we ended initial analysis with what we had, hoped for the best, and went for a five-hour sleep in the library,” said NCSU’s Fayuk, whose team DataWolves went on to win best use of outside data.

By Sunday afternoon, every surface of The Edge in Bostock Library was littered with coffee cups, laptops, nacho crumbs, pizza boxes and candy wrappers. White boards were covered in scribbles from late-night brainstorming sessions.

“My team encouraged everyone to contribute ideas. I loved how everyone was treated as a valuable team member,” said Duke computer science and political science major Pim Chuaylua. She decided to sign up when a friend asked if she wanted to join their team. “I was hesitant at first because I’m the only non-stats major in the team, but I encouraged myself to get out of my comfort zone,” Chuaylua said.

“I learned so much from everyone since we all have different expertise and skills that we contributed to the discussion,” said Shen, whose teammates were majors in statistics, computer science and engineering. Students majoring in math, economics and biology were also well represented.

At the end, each team was allowed four minutes and at most three slides to present their findings to a panel of judges. Prizes were awarded in several categories, including “best insight,” “best visualization” and “best use of outside data.”

Duke is among more than 30 schools hosting similar events this year, coordinated by the American Statistical Association (ASA). The winning presentations and mystery data source will be posted on the DataFest website in May after all events are over.

The registration deadline for the next Duke DataFest will be in March 2018.

DataFest 2017

Bleary-eyed contestants pose for a group photo at Duke DataFest 2017. Photo by Loreanne Oh.


Post by Robin Smith

Creating Technology That Understands Human Emotions

“If you – as a human – want to know how somebody feels, for what might you look?” Professor Shaundra Daily asked the audience during an ECE seminar last week.

“Facial expressions.”
“Body Language.”
“Tone of voice.”
“They could tell you!”

Over 50 students and faculty gathered over cookies and fruit for Dr. Daily’s talk on designing applications to support personal growth. Dr. Daily is an Associate Professor in the Department of Computer and Information Science and Engineering at the University of Florida, interested in affective computing and STEM education.

Dr. Daily explaining the various types of devices used to analyze people’s feelings and emotions. For example, pressure sensors on a computer mouse helped measure the frustration of participants as they filled out an online form.

Affective Computing

The visual and auditory cues proposed above give a human clues about the emotions of another human. Can we use technology to better understand our mental state? Is it possible to develop software applications that can play a role in supporting emotional self-awareness and empathy development?

Until recently, technologists have largely ignored emotion in understanding human learning and communication processes, partly because it has been misunderstood and hard to measure. Asking the questions above, affective computing researchers use pattern analysis, signal processing, and machine learning to extract affective information from signals that human beings express. This is integral to restoring a proper balance between emotion and cognition in designing technologies to address human needs.

Dr. Daily and her group of researchers used skin conductance as a measure of engagement and memory stimulation. Changes in skin conductance, or the measure of sweat secretion from sweat glands, are triggered by arousal. For example, a nervous person produces more sweat than a sleeping or calm individual, resulting in an increase in skin conductance.

Galvactivators, devices that sense and communicate skin conductivity, are often placed on the palms, which have a high density of the eccrine sweat glands.

Applying this knowledge to the field of education, can we give a teacher physiologically-based information on student engagement during class lectures? Dr. Daily initiated Project EngageMe by placing galvactivators like the one in the picture above on the palms of students in a college classroom. Professors were able to use the results chart to reflect on different parts and types of lectures based on the responses from the class as a whole, as well as analyze specific students to better understand the effects of their teaching methods.

Project EngageMe: Screenshot of digital prototype of the reading from the galvactivator of an individual student.

The project ended up causing quite a bit of controversy, however, due to privacy issues as well as the limits of our understanding of skin conductance. Skin conductance can increase for a variety of reasons – a student watching a funny video on Facebook might display similar levels of conductance as an attentive student. Thus, the results on the graph are not necessarily correlated with events in the classroom.

Educational Research

Daily’s research blends computational learning with social and emotional learning. Her projects encourage students to develop computational thinking through reflecting on the community with digital storytelling in MIT’s Scratch, learning to use 3D printers and laser cutters, and expressing ideas using robotics and sensors attached to their body.

VENVI, Dr. Daily’s latest research, uses dance to teach basic computational concepts. By allowing users to program a 3D virtual character that follows dance movements, VENVI reinforces important programming concepts such as step sequences, ‘for’ and ‘while’ loops of repeated moves, and functions with conditions under which the character performs the steps created.
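The same concepts carry over to any language. A minimal Python sketch of the ideas VENVI teaches (the step names and functions here are invented for illustration, not VENVI’s actual interface): a step sequence, a loop of repeated moves, and a reusable “move” wrapped in a function.

```python
def spin_combo(times):
    # a function wrapping a repeated sequence, like a reusable dance move
    return ["spin", "clap"] * times

def perform(steps):
    # play the routine one step at a time, as the 3D character would
    for step in steps:
        print(step)

# a step sequence, followed by a 'for'-loop-style repeated combo
routine = ["step-left", "step-right"] + spin_combo(2)
perform(routine)
```

The payoff is the same as in VENVI: once a combo is a function, changing one line changes the whole routine.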


Dr. Daily and her research group observed increased interest from students in pursuing STEM fields, as well as a shift in their opinion of computer science. On the first day of Dr. Daily’s Women in STEM camp, students’ drawings depicted computer scientists primarily as frazzled males coding in a small office; drawings made after learning with VENVI included more females and more engagement in collaborative activities.

VENVI is programming software that allows users to direct a virtual character to perform a sequence of steps in a 3D virtual environment!

In human-to-human interactions, we are able to draw on our experiences to connect and empathize with each other. As robots and virtual machines grow to take increasing roles in our daily lives, it’s time to start designing emotionally intelligent devices that can learn to empathize with us as well.

Post by Anika Radiya-Dixit

