Duke Research Blog

Following the people and events that make up the research community at Duke.

Category: Neuroscience

Students Share Research Journeys at Bass Connections Showcase

From the highlands of north central Peru to high schools in North Carolina, student researchers in Duke’s Bass Connections program are gathering data in all sorts of unique places.

As the school year winds down, they packed into Duke’s Scharf Hall last week to hear one another’s stories.

Students and faculty gathered in Scharf Hall to learn about each other’s research at this year’s Bass Connections showcase. Photo by Jared Lazarus/Duke Photography.

The Bass Connections program brings together interdisciplinary teams of undergraduates, graduate students and professors to tackle big questions in research. This year’s showcase, which featured poster presentations and five “lightning talks,” was the first to include teams spanning all five of the program’s diverse themes: Brain and Society; Information, Society and Culture; Global Health; Education and Human Development; and Energy.

“The students wanted an opportunity to learn from one another about what they had been working on across all the different themes over the course of the year,” said Lori Bennear, associate professor of environmental economics and policy at the Nicholas School, during the opening remarks.

Students seized the chance, eagerly perusing peers’ posters and gathering for standing-room-only viewings of other teams’ talks.

The different investigations took students from rural areas of Peru, where teams interviewed local residents to better understand the transmission of deadly diseases like malaria and leishmaniasis, to the North Carolina Museum of Art, where mathematicians and engineers worked side-by-side with artists to restore paintings.

Machine learning algorithms created by the Energy Data Analytics Lab can pick out buildings from a satellite image and estimate their energy consumption. Image courtesy Hoël Wiesner.

Students in the Energy Data Analytics Lab didn’t have to look much farther than their smart phones for the data they needed to better understand energy use.

“Here you can see a satellite image, very similar to one you can find on Google maps,” said Eric Peshkin, a junior mathematics major, as he showed an aerial photo of an urban area featuring buildings and a highway. “The question is how can this be useful to us as researchers?”

With the help of new machine-learning algorithms, images like these could soon give researchers oodles of valuable information about energy consumption, Peshkin said.

“For example, what if we could pick out buildings and estimate their energy usage on a per-building level?” said Hoël Wiesner, a second year master’s student at the Nicholas School. “There is not really a good data set for this out there because utilities that do have this information tend to keep it private for commercial reasons.”

The lab has had success developing algorithms that can estimate the size and location of solar panels from aerial photos. Peshkin and Wiesner described how they are now creating new algorithms that can first identify the size and locations of buildings in satellite imagery, and then estimate their energy usage. These tools could provide a quick and easy way to evaluate the total energy needs in any neighborhood, town or city in the U.S. or around the world.
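
The post doesn’t include the lab’s code, but the general shape of such a pipeline is easy to sketch. Below is a minimal illustration in Python, assuming a binary building mask has already been produced by some segmentation model; the pixel area and the per-square-meter energy factor are invented for the example and are not the lab’s actual values.

import numpy as np
from scipy import ndimage

def estimate_building_energy(building_mask, pixel_area_m2=1.0, kwh_per_m2=150.0):
    """Toy per-building energy estimate from a binary building mask.

    building_mask : 2D array of 0/1 values, e.g. the output of a
                    segmentation model run on a satellite image.
    pixel_area_m2 : assumed ground area covered by one pixel.
    kwh_per_m2    : assumed annual energy use per square meter of footprint.
    """
    # Group touching "building" pixels into individual buildings.
    labels, n_buildings = ndimage.label(building_mask)

    estimates = []
    for building_id in range(1, n_buildings + 1):
        footprint_m2 = np.sum(labels == building_id) * pixel_area_m2
        estimates.append({
            "building": building_id,
            "footprint_m2": footprint_m2,
            "energy_kwh": footprint_m2 * kwh_per_m2,  # crude size-based proxy
        })
    return estimates

# Example: a fake 8x8 mask containing two small "buildings".
mask = np.zeros((8, 8), dtype=int)
mask[1:3, 1:4] = 1
mask[5:7, 5:8] = 1
for b in estimate_building_energy(mask, pixel_area_m2=0.25):
    print(b)

Summing the per-building estimates over a whole image would then give the kind of neighborhood- or city-level total the team describes.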

“It’s not just that we can take one city, say Norfolk, Virginia, and estimate the buildings there. If you give us Reno, Tuscaloosa, Las Vegas, Phoenix — my hometown — you can absolutely get the per-building energy estimations,” Peshkin said. “And what that means is that policymakers will be more informed, NGOs will have the ability to best service their community, and more efficient, more accurate energy policy can be implemented.”

Some students’ research took them to the sidelines of local sports fields. Joost Op’t Eynde, a master’s student in biomedical engineering, described how he and his colleagues on a Brain and Society team are working with high school and youth football leagues to sort out what exactly happens to the brain during a high-impact sports game.

While a particularly nasty hit to the head might cause clear symptoms that can be diagnosed as a concussion, the accumulation of lesser impacts over the course of a game or season may also affect the brain. Eynde and his team are developing a set of tools to monitor both these impacts and their effects.

A standing-room only crowd listened to a team present on their work “Tackling Concussions.” Photo by Jared Lazarus/Duke Photography.

“We talk about inputs and outputs — what happens, and what are the results,” Eynde said. “For the inputs, we want to actually see when somebody gets hit, how they get hit, what kinds of things they experience, and what is going on in the head. And the output is we want to look at a way to assess objectively.”

The tools include surveys to estimate how often a player is impacted, an in-ear accelerometer called the DASHR that measures the intensity of jostles to the head, and tests of players’ performance on eye-tracking tasks.
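
The DASHR’s actual signal processing isn’t described in the talk, but the basic idea of flagging head impacts in an accelerometer trace can be sketched as follows; the sampling rate and the 10 g threshold are assumptions chosen purely for illustration.

import numpy as np

def detect_impacts(accel_xyz, fs=1000.0, threshold_g=10.0):
    """Flag candidate head impacts in a 3-axis accelerometer trace.

    accel_xyz   : array of shape (n_samples, 3), acceleration in g.
    fs          : sampling rate in Hz (assumed value).
    threshold_g : resultant-acceleration threshold for an "impact" (assumed).
    """
    # Resultant (magnitude) acceleration at every sample.
    magnitude = np.linalg.norm(accel_xyz, axis=1)

    above = magnitude > threshold_g
    # The first sample of each run above threshold marks one candidate impact.
    starts = np.flatnonzero(above & ~np.roll(above, 1))

    return [{"time_s": i / fs, "onset_g": float(magnitude[i])} for i in starts]

# Example: two seconds of quiet signal with one brief simulated hit.
rng = np.random.default_rng(0)
trace = rng.normal(0.0, 0.5, size=(2000, 3))
trace[500:505] += np.array([12.0, 3.0, 1.0])
print(detect_impacts(trace))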

“Right now we are looking on the scale of a season, maybe two seasons,” Eynde said. “What we would like to do in the future is actually follow some of these students throughout their career and get the full data for four years or however long they are involved in the program, and find out more of the long-term effects of what they experience.”


Post by Kara Manke

Mental Shortcuts, Not Emotion, May Guide Irrational Decisions

If you participate in a study in my lab, the Huettel Lab at Duke, you may be asked to play an economic game. For example, we may give you $20 in house money and offer you the following choice:

  1. Keep half of the $20 for sure
  2. Flip a coin: heads you keep all $20; tails you lose all $20

In such a scenario, most participants choose 1, preferring a sure win over the gamble.

Now imagine this choice, again starting with $20 in house money:

  1. Lose half of the $20 for sure
  2. Flip a coin: heads you keep all $20; tails you lose all $20

In this scenario, most participants prefer the gamble over a sure loss.

If you were paying close attention, you’ll have noticed that both examples are actually numerically identical (keeping half of $20 is the same as losing half of $20), but changing whether the sure option is framed as a gain or a loss results in different decisions to play it safe or take a risk. This phenomenon is known as the Framing Effect. The behavior it elicits is weird, or as psychologists and economists would say, “irrational,” so we think it’s worth investigating!
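
The equivalence of the two frames is easy to verify with a few lines of arithmetic; here is a minimal sketch comparing the sure option in each frame with the gamble’s expected value.

endowment = 20.0

# Gain frame: "keep half for sure" vs. a 50/50 coin flip for all or nothing.
sure_gain = endowment / 2                      # $10 kept
# Loss frame: "lose half for sure" vs. the same coin flip.
sure_loss = endowment - endowment / 2          # $20 - $10 = $10 kept

# The gamble is identical in both frames: 50% keep $20, 50% keep $0.
gamble_expected = 0.5 * endowment + 0.5 * 0.0  # $10 expected

print(sure_gain, sure_loss, gamble_expected)   # 10.0 10.0 10.0

All three quantities come out to $10, so a purely “rational” chooser should treat the two scenarios identically; the interesting finding is that people do not.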

Brain activity when people make choices consistent with (hot colors) or against (cool colors) the Framing Effect.


In a study published March 29 in the Journal of Neuroscience, my lab used brain imaging data to test two competing theories for what causes the Framing Effect.

One theory is that framing is caused by emotion, perhaps because the prospect of accepting a guaranteed win feels good while accepting a guaranteed loss feels scary or bad. Another theory is that the Framing Effect results from a decision-making shortcut. It may be that a strategy of accepting sure gains and avoiding sure losses tends to work well, and adopting this blanket strategy saves us from having to spend time and mental effort fully reasoning through every single decision and all of its possibilities.

Using functional magnetic resonance imaging (fMRI), we measured brain activity in 143 participants as they each made over a hundred choices between various gambles and sure gains or sure losses. Then we compared our participants’ choice-related brain activity to brain activity maps drawn from Neurosynth, an analysis tool that combines data from over 8,000 published fMRI studies to generate neural maps representing brain activity associated with different terms, such as “emotions,” “resting,” or “working.”

As a group, when our participants made choices consistent with the Framing Effect, their average brain activity was most similar to the brain maps representing mental disengagement (i.e., “resting” or “default”). When they made choices inconsistent with the Framing Effect, their average brain activity was most similar to the brain maps representing mental engagement (i.e., “working” or “task”). These results supported the theory that the Framing Effect results from a lack of mental effort, or using a decision-making shortcut, and that spending more mental effort can counteract the Framing Effect.

Then we tested whether we could use individual participants’ brain activity to predict participants’ choices on each trial. We found that the degree to which each trial’s brain activity resembled the brain maps associated with mental disengagement predicted whether that trial’s choice would be consistent with the Framing Effect. The degree to which each trial’s brain activity resembled brain maps associated with emotion, however, was not predictive of choices.
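
The exact analysis pipeline isn’t reproduced here, but the general logic of this kind of trial-level prediction can be sketched roughly as follows: score each trial’s activity map by its spatial similarity to a Neurosynth-style template, then test whether that score predicts frame-consistent choices. Every array below is a random placeholder standing in for real data.

import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(42)
n_trials, n_voxels = 120, 5000

# Placeholder data: one "disengagement" template map and one activity map per trial.
disengagement_map = rng.normal(size=n_voxels)
trial_maps = rng.normal(size=(n_trials, n_voxels))
frame_consistent = rng.integers(0, 2, size=n_trials)   # 1 = choice followed the frame

def spatial_similarity(trial_map, template):
    # Spatial similarity as a Pearson correlation across voxels.
    return np.corrcoef(trial_map, template)[0, 1]

similarity = np.array([spatial_similarity(m, disengagement_map) for m in trial_maps])

# Does similarity to the "disengaged" map predict frame-consistent choices?
model = LogisticRegression().fit(similarity.reshape(-1, 1), frame_consistent)
print("Accuracy on placeholder data:", model.score(similarity.reshape(-1, 1), frame_consistent))

On real data, similarity to emotion-related maps could be scored the same way and entered as a competing predictor, which is the comparison described above.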

Our findings support the theory that the biased decision-making seen in the Framing Effect is due to a lack of mental effort rather than due to emotions.

This suggests potential strategies for prompting people to make better decisions. Instead of trying to appeal to people’s emotions – likely a difficult task requiring tailoring to different individuals – we would be better off taking the easier and more generalizable approach of making good decisions quick and easy for everyone to make.

Guest post by Rosa Li

What is Money Really Worth?

“Yesterday, I was at an event and I sat next to an economist,” Brian Hare told my class. “I asked him: how old is money? He was completely lost.”

I was in Hare’s class on a Monday at noon, laughing at his description of the interaction. Through his course on Human Cognitive Evolution, we had so far been exploring the origins of humans’ particular ways of making sense of the world, and now we were faced with a slide establishing the industrial period as less than 200 years old. Compared to a hunting and gathering lifestyle, this stretch of time is minuscule on an evolutionary scale.

Slide from Dr. Hare’s class. Reproduced with permission.

Why, then, do so many studies employ money as a proxy when measuring human behaviors that have been shaped over hundreds of thousands of years? This kind of research is trying to get at “prosociality” (the ability to be altruistic and cooperative toward others), empathy, and guilt aversion, to name a few.

I had started to wonder about this months before, as a summer intern at the University of Tokyo. As I listened to a graduate student describe an experiment that used money to understand how humans behave cooperatively, I grew puzzled. I eventually asked: Why was money used in this experiment? The argument was that money was a strong enough motivator for this sample of college students that if they chose to share it, the choice must mean something.

During a panel discussion about prosociality at the American Association for the Advancement of Science meeting in Boston last month, my chance came to ask the question again. Alan Sanfey, professor at the Donders Institute for Brain, Cognition and Behavior, used experimental paradigms that rewarded participants with money to tease out the particular effects of guilt on generous behavior.

“Is money a good proxy for understanding evolutionarily ancient behavior?” I asked. Robin Dunbar, professor of evolutionary psychology at Oxford University, took a dig at my question and mentioned that the barter system would likely have been the best ancient representative of money. However, the barter system likely arose during the agricultural period, which itself is less than 10,000 years old.

Dollar bills. Public domain.

Stephen Pluháček, an attendee at the event and a senior scholar at the University of New Hampshire, said in a followup email to me that he “was interested in [my] question to the panel and disappointed by their response — which I found indicative of the ways we can become so habituated to a way of looking at things that we find it difficult to even hear questions that challenge our foundational assumptions.”

“As I said in our brief conversation, I am not convinced that money can stand as a proxy for prosocial behavior (trust, generosity) in humans prior to the advent of agriculture,” Pluháček wrote. “And even barter or gift exchange may be limited in their applicability to early humans (as well as to modern humans prior to the cognitive revolution).” 

So, I’m not alone in my skepticism. However, in my discussion with Leonard White, my advisor and associate director for education in the Duke Institute for Brain Sciences, he pointed out:

“The brain is remarkably facile. We have this amazing capacity for proxy substitution.”

In essence, this would mean that our brain would be able to consider money as a reward just like any reward that might have mediated the evolution of our behavior over time. We would thus be able to test subjects with “modern” stimuli, it appears.

It is clear that an evolutionary narrative is important to creating a more complete picture of contemporary human behavior. But sometimes the proxies we choose for these measurements don’t fit very well with our long history.

By Shanen Ganapathee

 

Creating Technology That Understands Human Emotions

“If you – as a human – want to know how somebody feels, for what might you look?” Professor Shaundra Daily asked the audience during an ECE seminar last week.

“Facial expressions.”
“Body Language.”
“Tone of voice.”
“They could tell you!”

Over 50 students and faculty gathered over cookies and fruit for Dr. Daily’s talk on designing applications to support personal growth. Dr. Daily is an associate professor in the Department of Computer and Information Science and Engineering at the University of Florida, with research interests in affective computing and STEM education.

Dr. Daily explaining the various types of devices used to analyze people’s feelings and emotions. For example, pressure sensors on a computer mouse helped measure the frustration of participants as they filled out an online form.

Affective Computing

The visual and auditory cues proposed above give a human clues about the emotions of another human. Can we use technology to better understand our mental state? Is it possible to develop software applications that can play a role in supporting emotional self-awareness and empathy development?

Until recently, technologists have largely ignored emotion in understanding human learning and communication processes, partly because it has been misunderstood and hard to measure. Asking the questions above, affective computing researchers use pattern analysis, signal processing, and machine learning to extract affective information from the signals human beings express. This is integral to restoring a proper balance between emotion and cognition in designing technologies that address human needs.

Dr. Daily and her group of researchers used skin conductance as a measure of engagement and memory stimulation. Changes in skin conductance, a measure of sweat secretion from the sweat glands, are triggered by arousal. For example, a nervous person produces more sweat than a sleeping or calm individual, resulting in an increase in skin conductance.

Galvactivators, devices that sense and communicate skin conductivity, are often placed on the palms, which have a high density of eccrine sweat glands.

Applying this knowledge to the field of education, can we give a teacher physiologically-based information on student engagement during class lectures? Dr. Daily initiated Project EngageMe by placing galvactivators like the one in the picture above on the palms of students in a college classroom. Professors were able to use the results chart to reflect on different parts and types of lectures based on the responses from the class as a whole, as well as analyze specific students to better understand the effects of their teaching methods.
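
Project EngageMe’s actual processing isn’t detailed here, but one simple way to turn a raw skin conductance trace into a coarse per-window engagement score is to smooth the signal and count arousal-related response peaks. The sketch below is only an illustration; the sampling rate, peak-prominence threshold, and window length are assumed values.

import numpy as np
from scipy.signal import find_peaks

def engagement_score(conductance, fs=4.0, window_s=60.0):
    """Crude per-window arousal score from a skin conductance trace.

    conductance : 1D array in microsiemens.
    fs          : sampling rate in Hz (assumed).
    window_s    : window length in seconds over which peaks are counted.
    """
    # Light smoothing with a moving average to suppress sensor noise.
    kernel = np.ones(int(fs)) / int(fs)
    smoothed = np.convolve(conductance, kernel, mode="same")

    # Skin conductance responses show up as small transient peaks.
    peaks, _ = find_peaks(smoothed, prominence=0.05)

    window = int(window_s * fs)
    n_windows = len(smoothed) // window
    # Count responses falling into each window.
    return [int(np.sum((peaks >= i * window) & (peaks < (i + 1) * window)))
            for i in range(n_windows)]

# Example: five minutes of synthetic data with one simulated response.
rng = np.random.default_rng(1)
signal = 2.0 + 0.02 * rng.normal(size=int(5 * 60 * 4))
signal[300:305] += np.linspace(0, 0.3, 5)
print(engagement_score(signal))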

Project EngageMe: Screenshot of digital prototype of the reading from the galvactivator of an individual student.

The project ended up causing quite a bit of controversy, however, due to privacy concerns as well as the limits of what skin conductance can reveal. Skin conductance can increase for a variety of reasons: a student watching a funny video on Facebook might display conductance levels similar to those of an attentive student. Thus, the readings on the graph are not necessarily correlated with events in the classroom.

Educational Research

Daily’s research blends computational learning with social and emotional learning. Her projects encourage students to develop computational thinking through reflecting on the community with digital storytelling in MIT’s Scratch, learning to use 3D printers and laser cutters, and expressing ideas using robotics and sensors attached to their body.

VENVI, Dr. Daily’s latest research project, uses dance to teach basic computational concepts. By allowing users to program a 3D virtual character that follows dance movements, VENVI reinforces important programming concepts such as step sequences, ‘for’ and ‘while’ loops for repeated moves, and functions whose conditions determine which steps the character performs.
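
VENVI itself is a visual, block-based environment, but the concepts it teaches map directly onto ordinary code. The hypothetical Python sketch below, with step names and a perform routine invented for illustration, shows how a short dance routine exercises sequences, loops, and functions.

def perform(step):
    # Stand-in for the 3D character executing one move.
    print(f"character does: {step}")

def chorus(times):
    """A named routine (function) the dancer can reuse."""
    for _ in range(times):          # a 'for' loop: repeat a fixed number of moves
        perform("spin")
        perform("clap")

# Step sequence: moves happen in the order they are written.
perform("step left")
perform("step right")

chorus(times=2)                     # call the routine like a function

# A 'while' loop with a condition: keep jumping until enough jumps are done.
jumps = 0
while jumps < 3:
    perform("jump")
    jumps += 1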

 

 

Dr. Daily and her research group observed increased interest among students in pursuing STEM fields, as well as a shift in how they pictured computer science. Drawings from the first day of Dr. Daily’s Women in STEM camp depicted computer scientists mostly as frazzled men coding alone in small offices, while drawings made after learning with VENVI included more women and more collaborative activity.

VENVI is programming software that allows users to program a virtual character to perform a sequence of steps in a 3D virtual environment!

In human-to-human interactions, we are able to draw on our experiences to connect and empathize with each other. As robots and virtual machines take on increasing roles in our daily lives, it’s time to start designing emotionally intelligent devices that can learn to empathize with us as well.

Post by Anika Radiya-Dixit

Science Meets Policy, and Maybe They Even Understand Each Other!

As we’ve seen many times, when complex scientific problems like stem cells, alternative energy or mental illness meet the policy world, things can get a little messy. Scientists generally don’t know much about law and policy, and very few policymakers are conversant with the specialized dialects of the sciences.

A screenshot of SciPol’s handy news page.

Add the recent rapid emergence of autonomous vehicles, artificial intelligence and gene editing, and you can see things aren’t going to get any easier!

To try to help, Duke’s Science and Society initiative has launched an ambitious policy analysis group called SciPol that aims to shed light on the intersection of scientific knowledge and policymaking. Their goal is to be a key source of unbiased, high-quality information for policymakers, academics, commercial interests, nonprofits and journalists.

“We’re really hoping to bridge the gap and make science and policy accessible,” said Andrew Pericak, a contributor and editor of the service who has a 2016 master’s in environmental management from the Nicholas School.

The program also will serve as a practical training ground for students who aspire to live and work in that rarefied space between two realms, and will provide them with published work to help them land internships and jobs, said SciPol director Aubrey Incorvaia, a 2009 master’s graduate of the Sanford School of Public Policy.

Aubrey Incorvaia chatted with law professor Jeff Ward (center) and Science and Society fellow Thomas Williams at the kickoff event.

SciPol launched quietly in the fall with a collection of policy development briefs focused on neuroscience, genetics and genomics. Robotics and artificial intelligence coverage began at the start of January. Nanotechnology will launch later this semester and preparations are being made for energy to come online later in the year. Nearly all topics are led by a PhD in that field.

“This might be a different type of writing than you’re used to!” Pericak told a meeting of prospective undergraduate and graduate student authors at an orientation session last week.

Some courses will be making SciPol brief writing a part of their requirements, including law professor Jeff Ward’s section on the frontier of robotics law and ethics. “We’re doing a big technology push in the law school, and this is a part of it,” Ward said.

Because the research and writing is a learning exercise, briefs are published only after a rigorous process of review and editing.

A quick glance at the latest offerings shows in-depth policy analyses of aerial drones, automated vehicles, genetically modified salmon, sports concussions and dietary supplements that claim to boost brain power.

To keep up with the latest developments, the SciPol staff maintains searches on WestLaw, the Federal Register and other sources to see where science policy is happening. “But we are probably missing some things, just because the government does so much,” Pericak said.

Post by Karl Leif Bates

Brain Makes Order From Disorder

A team of scientists from Duke, the National Institutes of Health and Johns Hopkins biomedical engineering has found that the formation and retrieval of new memories rely on disorganized brain waves, not organized ones, which is somewhat contrary to what neuroscientists have previously believed. Brain waves, or oscillations, are the brain’s way of organizing activity and are known to be important to learning, memory, and thinking.

Alex Vaz is a Duke MD/PhD student and biomedical engineering alumnus.

Although brain waves have been measured and studied for decades, neuroscientists still aren’t sure what they mean and whether or not they help cognition, said Alex Vaz, an M.D.-Ph.D. student at Duke who is the first author on the paper.

In a study appearing Jan. 6 in NeuroImage, the neuroscientists showed that brain activity became less synchronized during the formation and retrieval of new memories. This was particularly true in a brain region known as the medial temporal lobe, a structure thought to play a critical role in the formation of both short-term and long-term memories.

Excessive synchronization of brain oscillations has been implicated in Parkinson’s disease, epilepsy, and even psychiatric disorders. Decreasing brain wave synchronization by electrical stimulation deep in the brain has been found to decrease the tremors of Parkinson’s. But the understanding of brain waves in movement disorders is ahead of the understanding of human memory.
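
The study itself quantifies coupling between oscillations (see the citation at the end of this post), but as a rough illustration of what “synchronization of brain oscillations” means in practice, here is a minimal sketch of a phase-locking value between two signals. The signals and parameters are synthetic, and this is not the authors’ analysis.

import numpy as np
from scipy.signal import hilbert

def phase_locking_value(x, y):
    """Phase-locking value between two signals: values near 1 mean tightly
    synchronized phases; values near 0 mean more 'disorganized' activity."""
    phase_x = np.angle(hilbert(x))
    phase_y = np.angle(hilbert(y))
    return np.abs(np.mean(np.exp(1j * (phase_x - phase_y))))

# Synthetic example: two 8 Hz oscillations, one pair locked, one pair drifting.
fs = 500
t = np.arange(0, 2, 1 / fs)
rng = np.random.default_rng(0)
a = np.sin(2 * np.pi * 8 * t)
b = np.sin(2 * np.pi * 8 * t + 0.3)                                     # fixed phase offset
c = np.sin(2 * np.pi * 8 * t + np.cumsum(rng.normal(0, 0.1, t.size)))   # drifting phase

print("locked pair:  ", phase_locking_value(a, b))   # close to 1
print("drifting pair:", phase_locking_value(a, c))   # noticeably lower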

The researchers had neurosurgeons at the National Institutes of Health implant recording electrodes onto the brain surface of 33 epilepsy patients undergoing seizure evaluation, and then asked the patients to form and retrieve memories of unrelated word pairs, such as ‘dog’ and ‘lime.’

They found that during memory formation, brain activity became more disorganized in the frontal lobe, an area involved in executive control and attention, and in the temporal lobe, an area more implicated in memory and language.

A graphical abstract from Alex’s paper.

“We think this study, and others like it, provide a good starting point for understanding possible treatments for memory disorders,” Vaz said. “The aging American population will be facing major neurocognitive disorders such as Alzheimer’s disease and vascular dementia and will be demanding more medical attention.”

CITATION: “Dual origins of measured phase-amplitude coupling reveal distinct neural mechanisms underlying episodic memory in the human cortex,” Alex P. Vaz, Robert B. Yaffe, John H. Wittig, Sara K. Inati, Kareem A. Zaghloul. NeuroImage, Online Jan. 6, 2017. DOI: 10.1016/j.neuroimage.2017.01.001

http://www.sciencedirect.com/science/article/pii/S1053811917300010

Post by Karl Leif Bates


