The Fashion Trend Sweeping East Campus

During January and February, there was one essential accessory seen on many first-year Duke students’ wrists: the Jawbone. The students were participating in a study posted on DukeList by Ms. Madeleine George, open only to first-year students, about their lives at Duke. The procedures for the study were simple:

  1. Complete a preliminary test involving a game of Cyberball, a game psychologists have adapted for data collection.
  2. Wear the Jawbone for the duration of the study (10 days).
  3. Answer the brief questions sent to your phone every four hours, five times a day.
  4. Answer all of the questions every day (you can miss one of the question times) and get $32.

About a hundred first-year Duke students participated.

Some of the survey questions asked how long you slept, how stressed you felt, what time you woke up, whether you talked to your parents that day, how many texts you sent, and so on. It truly felt like a study on the daily life of Duke students. The study’s focus, however, was narrower.

Ms. Madeleine George

Ms. George is a Ph.D. candidate in developmental psychology in her 5th year at Duke. She is interested in relationships and how daily technology usage and social support such as virtual communication can influence adolescent and young adult well-being.

Her dissertation examines how parents may be able to provide daily support to their children during the first year of college, as face-to-face interactions are increasingly replaced by virtual communication through technology. The work was done in three parts.

The Jawbone study is the third part. George is exploring why these effects occur: whether they are uniquely a response to parents, or whether people can simply feel better from other personal interactions. Taking the data from the surveys, George has been using within-person models that compare each person to themselves, along with basic ANOVA tests that examine the differences between groups. She’s still working on that analysis.
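The two kinds of analysis can be sketched in a few lines. Below is a minimal Python illustration, using made-up daily mood ratings rather than the study's actual data, of within-person centering and a basic one-way ANOVA:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Made-up daily mood ratings (1-7) for three students over the 10-day study
moods = {
    "student_a": rng.integers(3, 8, size=10).astype(float),
    "student_b": rng.integers(1, 6, size=10).astype(float),
    "student_c": rng.integers(2, 7, size=10).astype(float),
}

# Within-person models compare each person to themselves: center each
# student's scores on their own mean, so a day's value is a deviation
# from that student's typical mood.
centered = {s: m - m.mean() for s, m in moods.items()}

# A basic one-way ANOVA examines differences between groups of students.
f_stat, p_value = stats.f_oneway(*moods.values())
print(f"F = {f_stat:.2f}, p = {p_value:.3f}")
```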

In her first study, she found that students who talked to their parents felt worse. But on days when students experienced a stressor, they were in a better mood after talking to their parents. In addition, based on the Cyberball experiment, in which students texted a parent, a stranger, or no one, George infers that texting anyone is better than texting no one, because it can make people feel supported.

So far, George seems to have found that technology doesn’t necessarily take away relationship value and quality. Online relationships tend to reflect offline relationships. While talking with parents might not always make a student feel better, there can be circumstances where it can be beneficial.

Post by Meg Shieh.

Creating Technology That Understands Human Emotions

“If you – as a human – want to know how somebody feels, for what might you look?” Professor Shaundra Daily asked the audience during an ECE seminar last week.

“Facial expressions.”
“Body Language.”
“Tone of voice.”
“They could tell you!”

Over 50 students and faculty gathered over cookies and fruit for Dr. Daily’s talk on designing applications to support personal growth. Dr. Daily is an Associate Professor in the Department of Computer and Information Science and Engineering at the University of Florida interested in affective computing and STEM education.

Dr. Daily explaining the various types of devices used to analyze people’s feelings and emotions. For example, pressure sensors on a computer mouse helped measure the frustration of participants as they filled out an online form.

Affective Computing

The visual and auditory cues proposed above give a human clues about the emotions of another human. Can we use technology to better understand our mental state? Is it possible to develop software applications that can play a role in supporting emotional self-awareness and empathy development?

Until recently, technologists largely ignored emotion in understanding human learning and communication processes, partly because it has been misunderstood and hard to measure. Asking the questions above, affective computing researchers use pattern analysis, signal processing, and machine learning to extract affective information from the signals human beings express. This is integral to restoring a proper balance between emotion and cognition when designing technologies to address human needs.

Dr. Daily and her group of researchers used skin conductance as a measure of engagement and memory stimulation. Changes in skin conductance, a measure of sweat secretion from the sweat glands, are triggered by arousal. For example, a nervous person produces more sweat than a sleeping or calm individual, resulting in an increase in skin conductance.
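As an illustration of the idea, here is a minimal Python sketch of flagging arousal as a rise above baseline. The conductance trace is synthetic and the threshold is arbitrary; this is not Dr. Daily's actual method:

```python
import numpy as np

# Synthetic skin-conductance trace (microsiemens), sampled at 4 Hz:
# a calm baseline plus one arousal response that rises and decays.
t = np.arange(0, 60, 0.25)                          # 60 seconds of samples
rng = np.random.default_rng(1)
baseline = 2.0 + 0.01 * rng.standard_normal(t.size)
response = 0.8 * np.exp(-((t - 30.0) ** 2) / 20.0)  # bump centered at t = 30 s
signal = baseline + response

# A crude arousal detector: flag samples that rise more than a fixed
# threshold above the trace's median (a stand-in for the calm baseline).
threshold = 0.3
aroused = signal > np.median(signal) + threshold
print(f"arousal flagged from {t[aroused][0]:.2f}s to {t[aroused][-1]:.2f}s")
```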

Galvactivators, devices that sense and communicate skin conductivity, are often placed on the palms, which have a high density of the eccrine sweat glands.

Applying this knowledge to the field of education, can we give a teacher physiologically-based information on student engagement during class lectures? Dr. Daily initiated Project EngageMe by placing galvactivators like the one in the picture above on the palms of students in a college classroom. Professors were able to use the results chart to reflect on different parts and types of lectures based on the responses from the class as a whole, as well as analyze specific students to better understand the effects of their teaching methods.

Project EngageMe: Screenshot of digital prototype of the reading from the galvactivator of an individual student.

The project ended up causing quite a bit of controversy, however, due to privacy issues as well as gaps in our understanding of skin conductance. Skin conductance can increase for a variety of reasons – a student watching a funny video on Facebook might display similar levels of conductance as an attentive student. Thus, the results on the graph are not necessarily correlated with events in the classroom.

Educational Research

Daily’s research blends computational learning with social and emotional learning. Her projects encourage students to develop computational thinking through reflecting on the community with digital storytelling in MIT’s Scratch, learning to use 3D printers and laser cutters, and expressing ideas using robotics and sensors attached to their body.

VENVI, Dr. Daily’s latest research, uses dance to teach basic computational concepts. By allowing users to program a 3D virtual character that follows dance movements, VENVI reinforces important programming concepts such as step sequences, ‘for’ and ‘while’ loops of repeated moves, and functions with conditions that determine when the character performs the steps created!
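Those concepts map directly onto ordinary code. Here is a toy Python analogy (VENVI itself is a visual tool, and these move names are invented for illustration):

```python
moves = []

# A step sequence: the character performs moves in order.
moves += ["spin", "clap"]

# A 'for' loop repeats a move a fixed number of times.
for _ in range(3):
    moves.append("jump")

# A 'while' loop repeats a move until a condition is met.
energy = 2
while energy > 0:
    moves.append("kick")
    energy -= 1

# A function with a condition: the character does the steps only when allowed.
def routine(music_on):
    return moves if music_on else []

print(routine(True))
```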


Dr. Daily and her research group observed increased interest from students in pursuing STEM fields as well as a shift in their opinion of computer science. Drawings from Dr. Daily’s Women in STEM camp completed on the first day consisted of computer scientist representations as primarily frazzled males coding in a small office, while those drawn after learning with VENVI included more females and engagement in collaborative activities.

VENVI is a programming software that allows users to program a virtual character to perform a sequence of steps in a 3D virtual environment!

In human-to-human interactions, we are able to draw on our experiences to connect and empathize with each other. As robots and virtual machines take on increasing roles in our daily lives, it’s time to start designing emotionally intelligent devices that can learn to empathize with us as well.

Post by Anika Radiya-Dixit

Seeing Nano

Take pictures at more than 300,000 times magnification with electron microscopes at Duke

Sewer gnat head

An image of a sewer gnat’s head taken through a scanning electron microscope. Courtesy of Fred Nijhout.

The sewer gnat is a common nuisance around kitchen and bathroom drains that’s no bigger than a pea. But magnified thousands of times, its compound eyes and bushy antennae resemble a first place winner in a Movember mustache contest.

Sewer gnats’ larger cousins, horseflies, are known for their painful bite. Zoom in and it’s easy to see how they hold onto their furry livestock prey: the tiny hooked hairs on their feet look like Velcro.

Students in professor Fred Nijhout’s entomology class photograph these and other specimens at more than 300,000 times magnification at Duke’s Shared Materials Instrumentation Facility (SMIF).

There the insects are dried, coated in gold and palladium, and then bombarded with a beam of electrons from a scanning electron microscope, which can resolve structures tens of thousands of times smaller than the width of a human hair.

From a ladybug’s leg to a weevil’s suit of armor, the bristly, bumpy, pitted surfaces of insects are surprisingly beautiful when viewed up close.

“The students have come to treat travels across the surface of an insect as the exploration of a different planet,” Nijhout said.

Horsefly foot

The foot of a horsefly is equipped with menacing claws and Velcro-like hairs that help them hang onto fur. Photo by Valerie Tornini.

Weevil

The hard outer skeleton of a weevil looks smooth and shiny from afar, but up close it’s covered with scales and bristles. Courtesy of Fred Nijhout.

fruit fly wing

Magnified 500 times, the rippled edges of this fruit fly wing are the result of changes in the insect’s genetic code. Courtesy of Eric Spana.

You, too, can gaze at alien worlds too small to see with the naked eye. Students and instructors across campus can use the SMIF’s high-powered microscopes and other state-of-the-art research equipment at no charge with support from the Class-Based Explorations Program.

Biologist Eric Spana’s experimental genetics class uses the microscopes to study fruit flies that carry genetic mutations that alter the shape of their wings.

Students in professor Hadley Cocks’ mechanical engineering 415L class take lessons from objects that break. A scanning electron micrograph of a cracked cymbal once used by the Duke pep band reveals grooves and ridges consistent with the wear and tear from repeated banging.

cracked cymbal

Magnified 3000 times, the surface of this broken cymbal once used by the Duke Pep Band reveals signs of fatigue cracking. Courtesy of Hadley Cocks.

These students are among more than 200 undergraduates in eight classes who benefitted from the program last year, thanks to a grant from the Donald Alstadt Foundation.

You don’t have to be a scientist, either. Historians and art conservators have used scanning electron microscopes to study the surfaces of Bronze Age pottery, the composition of ancient paints and even dust from Egyptian mummies and the Shroud of Turin.

Instructors and undergraduates are invited to find out how they could use the microscopes and other nanotech equipment in the SMIF in their teaching and research. Queries should be directed to Dr. Mark Walters, Director of SMIF, via email at mark.walters@duke.edu.

Located on Duke’s West Campus in the Fitzpatrick Building, the SMIF is a shared use facility available to Duke researchers and educators as well as external users from other universities, government laboratories or industry through a partnership called the Research Triangle Nanotechnology Network. For more info visit http://smif.pratt.duke.edu/.

Scanning electron microscope

This scanning electron microscope could easily be mistaken for equipment from a dentist’s office.


Post by Robin Smith

Acoustic Metamaterials: Designing Plastic to Bend Sound

I recently toured Dr. Steven Cummer’s lab in Duke Engineering to learn about metamaterials, synthetic materials used to manipulate sound and light waves.

Acoustic metamaterials recently bent an incoming sound into the shape of an A, which the researchers called an acoustic hologram.

Cummer’s graduate student Abel Xie first showed me the sound propagator, made of small pieces stacked like Legos into a wall. These acoustic metamaterials were made of plastic and contained many winding pathways that delay sound waves and change their direction. The pieces were configured in certain ways so the researchers could design a sound field, a sort of acoustic hologram.

These metamaterials can be configured to direct a 4 kHz sound wave into the shape of a letter ‘A’. The researchers measured the outgoing sound wave using a 2D sweeping microphone that passed back and forth over the A-shaped sound like a lawnmower, moving to the right, then up, then left, etc. The arrangement of metamaterials that reconfigures sound waves is called a lens, because it can focus sound waves to one or more points like a light-bending lens.
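The lawnmower pattern of the sweeping microphone is easy to sketch. A minimal Python version generating the back-and-forth scan positions (the grid size and step are made up for illustration):

```python
def raster_scan(nx, ny, dx=1.0, dy=1.0):
    """Boustrophedon ("lawnmower") positions for a 2D sweeping microphone:
    sweep right across a row, step up, sweep left, and repeat."""
    positions = []
    for j in range(ny):
        cols = range(nx) if j % 2 == 0 else range(nx - 1, -1, -1)
        positions.extend((i * dx, j * dy) for i in cols)
    return positions

# A made-up 4 x 3 measurement grid with 5 mm steps
print(raster_scan(4, 3, dx=5.0, dy=5.0))
```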

Xie then showed me a version of the acoustic metamaterials ten times smaller that propagated ultrasonic (40 kHz) sound waves. He told me that since 40 kHz is well outside the human range of hearing, it could be a viable option for the wireless, non-contact charging of devices like phones. The smaller wave propagator could direct inaudible sound waves to your device, and another piece of technology, a transducer, would convert the acoustic energy into electrical energy.

This structure, with a microphone in the middle, can perform the “cocktail party” trick that humans can: picking out one voice among many and figuring out where in the room it is coming from.

Now that the waves have been directed, how do we read them? Xie directed me to what looked like a plastic cheesecake in the middle of the table. It was deep and beige and split into many ‘slices,’ each further divided into a unique honeycomb of varying depth. The slices were separated from each other by glass panes, which directed the sound waves across each slice’s honeycomb toward the lone microphone in the middle. The microphone could recognize where a sound was coming from based on how the wave had changed as it passed over the distinctive honeycomb pattern of each slice.
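One way to picture this is as fingerprint matching: each slice stamps its own signature on the sound, and the best-matching signature reveals the direction. Here is a hedged Python sketch using random stand-in signatures, not the device's actual acoustics:

```python
import numpy as np

rng = np.random.default_rng(2)
n_slices, n_freqs = 36, 64  # 36 "slices" around the disk, 64 frequency bins

# Stand-in fingerprints: each slice's honeycomb imprints a distinct
# frequency-dependent signature on sound that passes over it.
signatures = rng.random((n_slices, n_freqs))

def locate(measured):
    """Return the slice whose fingerprint best matches the measured spectrum
    (cosine similarity against every slice's signature)."""
    sims = signatures @ measured / (
        np.linalg.norm(signatures, axis=1) * np.linalg.norm(measured))
    return int(np.argmax(sims))

# A sound arriving over slice 7 carries that slice's fingerprint plus noise.
measured = signatures[7] + 0.1 * rng.standard_normal(n_freqs)
print(locate(measured))  # should recover slice 7
```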

Xie described the microphone’s ability to distinguish where a sound is coming from and comprehend that specific sound as the “cocktail party effect,” or the human ability to pick out one person speaking in a noisy room. This dense plastic sound sensor is able to distinguish up to three different people speaking and determine where they are in relation to the microphone. He explained how this technology could be miniaturized and implemented in devices like the Amazon Echo to make them more efficient.

Dr. Cummer and Abel Xie’s research is changing the way we think about microphones and sound, and may one day improve all kinds of technology ranging from digital assistants to wirelessly charging your phone.

Frank diLustro

Frank diLustro is a senior at the North Carolina School for Science and Math.

 

X-mas Under X-ray

If, like me, you just cannot wait until Christmas morning to find out what goodies are hiding in those shiny packages under the tree, we have just the solution for you: stick them in a MicroCT scanner.

A Christmas present inside a MicroCT scanner.

Our glittery package gets the X-ray treatment inside Duke’s MicroCT scanner. Credit Justin Gladman.

Micro computed-tomography (CT) scanners use X-ray beams and sophisticated visual reconstruction software to “see” into objects and create 3D images of their insides. In recent years, Duke’s MicroCT has been used to tackle some fascinating research projects, including digitizing fossils, reconstructing towers made of stars, peeking inside 3D-printed electronic devices, and creating a gorgeous 3D reconstruction of the organs and muscle tissue inside this Southeast Asian tree shrew.

x-ray-view

A 20 minute scan revealed a devilish-looking rubber duck. Credit Justin Gladman.

But when engineer Justin Gladman offered to give us a demo of the machine last week, we both agreed there was only one object we wanted a glimpse inside: a sparkly holiday gift bag.

While securing the gift atop a small, rotating pedestal inside the device, Gladman explained how the device works. Like the big CT scanners you may have encountered at a hospital or clinic, the MicroCT uses X-rays to create a picture of the density of an object at different locations. By taking a series of these scans at different angles, a computer algorithm can then reconstruct a full 3D model of the density, revealing bones inside of animals, individual circuits inside electronics – or a present inside a box.
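The reconstruction idea can be illustrated with a toy unfiltered backprojection in Python. This is only a sketch of the principle; real CT software uses filtered backprojection or iterative algorithms:

```python
import numpy as np
from scipy import ndimage

# A toy "present": a 2D density map with a dense square hidden inside.
obj = np.zeros((64, 64))
obj[24:40, 24:40] = 1.0

angles = np.arange(0, 180, 10)  # one X-ray view every 10 degrees

# Forward step: each view sums density along one direction (an X-ray shadow).
sinogram = [ndimage.rotate(obj, a, reshape=False).sum(axis=0) for a in angles]

# Reconstruction: smear every shadow back across the image at its angle and
# add them up (unfiltered backprojection: blurry, but the shape emerges).
recon = np.zeros_like(obj)
for a, proj in zip(angles, sinogram):
    recon += ndimage.rotate(np.tile(proj, (64, 1)), -a, reshape=False)

# The hidden square shows up as the brightest region of the reconstruction.
print(np.unravel_index(recon.argmax(), recon.shape))
```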

“Our machine is built to handle a lot of different specimens, from bees to mechanical parts to computer chips, so we have a little bit of a jack-of-all-trades,” Gladman said.

Within a few moments of sticking the package in the beam, a 2D image of the object in the bag appears on the screen. It looks kind of like the Stay Puft Marshmallow Man, but wait – are those horns?

Blue devil ducky in the flesh.

Gladman sets up a full 3D scan of the gift package, and after 20 minutes, the contents of our holiday loot are clear. We have a blue devil rubber ducky on our hands!

Blue ducky is a fun example, but the SMIF lab always welcomes new users, Gladman says, especially students and researchers with creative new applications for the equipment. For more information on how to use Duke’s MicroCT, contact Justin Gladman or visit the Duke SMIF lab at their website, Facebook, Youtube or Instagram pages.

Kara J. Manke, PhD

Post by Kara Manke

Mapping the Brain With Stories

Dr. Alex Huth. Image courtesy of The Gallant Lab.

On October 15, I attended a presentation on “Using Stories to Understand How The Brain Represents Words,” sponsored by the Franklin Humanities Institute and Neurohumanities Research Group and presented by Dr. Alex Huth. Dr. Huth is a neuroscience postdoc who works in the Gallant Lab at UC Berkeley and was here on behalf of Dr. Jack Gallant.

Dr. Huth started off the lecture by discussing how semantic tasks activate huge swaths of the cortex. The semantic system places importance on stories. The issue was in understanding “how the brain represents words.”

To investigate this, the Gallant Lab designed a natural language experiment. Subjects lay in an fMRI scanner and listened to 72 hours’ worth of ten naturally spoken narratives, or stories. They heard many different words and concepts. Using an imaging technique called GE-EPI fMRI, the researchers were able to record BOLD responses from the whole brain.

Dr. Huth explaining the process of obtaining the new colored models that revealed semantic “maps are consistent across subjects.”

Dr. Huth showed a scan and said, “So looking…at this volume of 3D space, which is what you get from an fMRI scan…is actually not that useful to understanding how things are related across the surface of the cortex.” This limitation led the researchers to improve their methods by reconstructing the cortical surface and flattening it into a 2D image that reveals what is going on throughout the brain. This approach allowed them to see where in the brain activity corresponded to what the subject was hearing.

A model was then created that would require voxel interpretation, which “is hard and lots of work,” said Dr. Huth. “There’s a lot of subjectivity that goes into this.” To simplify voxel interpretation, the researchers reduced the data to a lower-dimensional subspace, finding classes of voxels using principal component analysis (PCA). This meant that they took the data, found the important factors that were similar across subjects, and interpreted the meaning of the components. To visualize these components, the researchers sorted words into twelve different categories.
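PCA itself is compact to express. Here is a minimal numpy sketch using random stand-in voxel data (not the lab's actual BOLD-derived weights) of finding the top components:

```python
import numpy as np

rng = np.random.default_rng(3)

# Stand-in data: model weights for 1,000 voxels across 12 word categories.
voxels = rng.standard_normal((1000, 12))

# PCA via the singular value decomposition: center the data, then keep the
# top components, i.e. the directions of greatest shared variance.
centered = voxels - voxels.mean(axis=0)
U, S, Vt = np.linalg.svd(centered, full_matrices=False)

n_components = 4                           # e.g. the four "areas" of PC space
scores = centered @ Vt[:n_components].T    # each voxel's position in PC space
explained = (S[:n_components] ** 2) / (S ** 2).sum()
print(scores.shape, explained.round(3))
```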

The Four Categories of Words Sorted in an X,Y-like Axis

These categories were then further simplified into four “areas” on what might resemble an x-y axis. The top right held violent words; the top left, social perceptual words; the lower left, words relating to “social”; the lower right, emotional words. Instead of x and y axis labels, there were PC labels. The words from the study were then colored based on where they appeared in the PC space.

Using this model, the Gallant Lab could identify which patches of the brain were doing different things. Small patches of color showed which “things” the brain was “doing” or “relating.” The researchers found that the complex cortical maps showing semantic information were consistent across subjects.

These responses were then used to create models that could predict BOLD responses from the semantic content in stories. The result of the study was that the parietal cortex, temporal cortex, and prefrontal cortex represent the semantics of narratives.
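Encoding models of this kind are commonly fit with ridge regression. Here is a minimal Python sketch on synthetic data; the Gallant Lab's actual pipeline is far more elaborate:

```python
import numpy as np

rng = np.random.default_rng(4)

# Synthetic encoding model: predict one voxel's BOLD response from the
# semantic content of a story (here, 12 made-up features per time point).
n_timepoints, n_features = 300, 12
X = rng.standard_normal((n_timepoints, n_features))       # semantic features
true_w = rng.standard_normal(n_features)
y = X @ true_w + 0.5 * rng.standard_normal(n_timepoints)  # noisy BOLD signal

# Ridge regression fit: w = (X^T X + lambda * I)^-1 X^T y
lam = 1.0
w = np.linalg.solve(X.T @ X + lam * np.eye(n_features), X.T @ y)

# Model quality: correlation between predicted and observed responses.
r = np.corrcoef(X @ w, y)[0, 1]
print(f"prediction correlation r = {r:.2f}")
```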

Post by Meg Shieh