Duke Research Blog

Following the people and events that make up the research community at Duke.

Category: Art (Page 1 of 3)

Sizing Up Hollywood’s Gender Gap

DURHAM, N.C. — A mere seven-plus decades after she first appeared in comic books in the early 1940s, Wonder Woman finally has her own movie.

In the two months since it premiered, the film has brought in more than $785 million worldwide, making it the highest grossing movie of the summer.

But if Hollywood has seen a number of recent hits with strong female leads, from “Wonder Woman” and “Atomic Blonde” to “Hidden Figures,” it doesn’t signal a change in how women are depicted on screen — at least not yet.

Those are the conclusions of three students who spent ten weeks this summer compiling and analyzing data on women’s roles in American film, through the Data+ summer research program.

The team relied on a measure called the Bechdel test, popularized by the cartoonist Alison Bechdel in a 1985 comic strip.

The “Bechdel test” asks whether a movie features at least two women who talk to each other about anything besides a man. Surprisingly, a lot of films fail. Art by Srravya [CC0], via Wikimedia Commons.

To pass the Bechdel test, a movie must satisfy three basic requirements: it must have at least two named women in it, they must talk to each other, and their conversation must be about something other than a man.

It’s a low bar. The female characters don’t have to have power, or purpose, or buck gender stereotypes.

Even a movie in which two women only speak to each other briefly in one scene, about nail polish — as was the case with “American Hustle” — gets a passing grade.

And yet more than 40 percent of all U.S. films fail.

The team used data from the bechdeltest.com website, a user-compiled database of over 7,000 movies in which volunteers rate films against the Bechdel criteria. A film’s Bechdel score is the number of criteria it passes.
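For readers who like to see the logic spelled out, here is a minimal sketch of how such a score could be computed. The record format and field names below are hypothetical, for illustration only — they are not bechdeltest.com’s actual data schema:

```python
def bechdel_score(film):
    """Return 0-3: the number of Bechdel criteria a film passes."""
    criteria = [
        film["has_two_named_women"],            # at least two named women
        film["women_talk_to_each_other"],       # who talk to each other
        film["about_something_besides_a_man"],  # about something besides a man
    ]
    score = 0
    for passed in criteria:
        if not passed:
            break  # each criterion builds on the previous one
        score += 1
    return score

wonder_woman = {
    "has_two_named_women": True,
    "women_talk_to_each_other": True,
    "about_something_besides_a_man": True,
}
print(bechdel_score(wonder_woman))  # 3: passes all three criteria
```

Because each criterion builds on the one before it, the count stops at the first failure: a film with no two named women scores zero no matter what its characters discuss.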

“Spider-Man,” “The Jungle Book,” “Star Trek Beyond” and “The Hobbit” all fail at least one of the criteria.

Films are more likely to pass today than they were in the 1970s, according to a 2014 study by FiveThirtyEight, the data journalism site created by Nate Silver.

The authors of that study analyzed 1,794 movies released between 1970 and 2013. They found that the share of passing films rose steadily from 1970 to 1995 but then began to stall.

In the past two decades, the proportion of passing films hasn’t budged.

Since the mid-1990s, the proportion of films that pass the Bechdel test has flatlined at about 50 percent.

The Duke team was also able to obtain data from a 2016 study of the gender breakdown of movie dialogue in roughly 2,000 screenplays.

Men played two out of three top speaking roles in more than 80 percent of films, according to that study.

Using data from the screenplay study, the students plotted the relationship between a movie’s Bechdel score and the number of words spoken by female characters. Perhaps not surprisingly, films with higher Bechdel scores were also more likely to achieve gender parity in terms of speaking roles.

“The Bechdel test doesn’t really tell you if a film is feminist,” but it’s a good indicator of how much women speak, said team member Sammy Garland, a Duke sophomore majoring in statistics and Chinese.

Previous studies suggest that men do twice as much talking in most films — a proportion that has remained largely unchanged since 1995. The reason, researchers say, is not because male characters are more talkative individually, but because there are simply more male roles.

“To close the gap of speaking time, we just need more female characters,” said team member Selen Berkman, a sophomore majoring in math and computer science.

Achieving that, they say, ultimately comes down to who writes the script and chooses the cast.

The team did a network analysis of patterns of collaboration among 10,000 directors, writers and producers. Two people are joined whenever they worked together on the same movie. The 13 most influential and well-connected people in the American film industry were all men, whose films had average Bechdel scores ranging from 1.5 to 2.6 — meaning no top producer is regularly making films that pass the Bechdel test.
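The idea behind that network analysis can be sketched with a toy example. The films and names below are made up, and counting distinct collaborators is only a crude stand-in for the centrality measures a real analysis would use:

```python
from collections import defaultdict
from itertools import combinations

# Toy credits list: film -> people credited on it (all hypothetical).
credits = {
    "Film A": ["Director 1", "Writer 1", "Producer 1"],
    "Film B": ["Director 1", "Producer 2"],
    "Film C": ["Writer 1", "Producer 2"],
    "Film D": ["Director 1", "Writer 2"],
}

# Build an undirected collaboration graph: an edge joins two people
# whenever they worked together on the same movie.
graph = defaultdict(set)
for people in credits.values():
    for a, b in combinations(people, 2):
        graph[a].add(b)
        graph[b].add(a)

# Degree (number of distinct collaborators) as a crude proxy for
# how well-connected a person is in the industry.
degree = {person: len(neighbors) for person, neighbors in graph.items()}
print(max(degree, key=degree.get))  # "Director 1": most distinct collaborators
```

On the real data, one would then average the Bechdel scores of each well-connected person’s films, which is how the team arrived at the 1.5-to-2.6 range quoted above.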

“What this tells us is there is no one big influential producer who is moving the needle. We have no champion,” Garland said.

Men and women were equally represented in fewer than 10 percent of production crews.

But assembling a more gender-balanced production team in the early stages of a film can make a difference, research shows. Films with more women in top production roles have female characters who speak more too.

“To better represent women on screen you need more women behind the scenes,” Garland said.

Dollar for dollar, making an effort to close the Hollywood gender gap can mean better returns at the box office too. Films that pass the Bechdel test earn $2.68 for every dollar spent, compared with $2.45 for films that fail — a 23-cent better return on investment, according to FiveThirtyEight.

Other versions of the Bechdel test have been proposed to measure race and gender in film more broadly. The advantage of analyzing the Bechdel data is that thousands of films have already been scored, said English major and Data+ team member Aaron VanSteinberg.

“We tried to watch a movie a week, but we just didn’t have time to watch thousands of movies,” VanSteinberg said.

A new report on diversity in Hollywood from the University of Southern California suggests the same lack of progress is true for other groups as well. In nearly 900 top-grossing films from 2007 to 2016, disabled, Latino and LGBTQ characters were consistently underrepresented relative to their makeup in the U.S. population.

Berkman, Garland and VanSteinberg were among more than 70 students selected for the 2017 Data+ program, which included data-driven projects on photojournalism, art restoration, public policy and more.

They presented their work at the Data+ Final Symposium on July 28 in Gross Hall.

Data+ is sponsored by Bass Connections, the Information Initiative at Duke, the Social Science Research Institute, the departments of mathematics and statistical science and MEDx. 

Writing by Robin Smith; video by Lauren Mueller and Summer Dunsmore

Students Share Research Journeys at Bass Connections Showcase

From the highlands of north central Peru to high schools in North Carolina, student researchers in Duke’s Bass Connections program are gathering data in all sorts of unique places.

As the school year wound down, they packed into Duke’s Scharf Hall last week to hear one another’s stories.

Students and faculty gathered in Scharf Hall to learn about each other’s research at this year’s Bass Connections showcase. Photo by Jared Lazarus/Duke Photography.

The Bass Connections program brings together interdisciplinary teams of undergraduates, graduate students and professors to tackle big questions in research. This year’s showcase, which featured poster presentations and five “lightning talks,” was the first to include teams spanning all five of the program’s diverse themes: Brain and Society; Information, Society and Culture; Global Health; Education and Human Development; and Energy.

“The students wanted an opportunity to learn from one another about what they had been working on across all the different themes over the course of the year,” said Lori Bennear, associate professor of environmental economics and policy at the Nicholas School, during the opening remarks.

Students seized the chance, eagerly perusing peers’ posters and gathering for standing-room-only viewings of other teams’ talks.

The different investigations took students from rural areas of Peru, where teams interviewed local residents to better understand the transmission of deadly diseases like malaria and leishmaniasis, to the North Carolina Museum of Art, where mathematicians and engineers worked side-by-side with artists to restore paintings.

Machine learning algorithms created by the Energy Data Analytics Lab can pick out buildings from a satellite image and estimate their energy consumption. Image courtesy Hoël Wiesner.

Students in the Energy Data Analytics Lab didn’t have to look much farther than their smartphones for the data they needed to better understand energy use.

“Here you can see a satellite image, very similar to one you can find on Google maps,” said Eric Peshkin, a junior mathematics major, as he showed an aerial photo of an urban area featuring buildings and a highway. “The question is how can this be useful to us as researchers?”

With the help of new machine-learning algorithms, images like these could soon give researchers oodles of valuable information about energy consumption, Peshkin said.

“For example, what if we could pick out buildings and estimate their energy usage on a per-building level?” said Hoël Wiesner, a second-year master’s student at the Nicholas School. “There is not really a good data set for this out there because utilities that do have this information tend to keep it private for commercial reasons.”

The lab has had success developing algorithms that can estimate the size and location of solar panels from aerial photos. Peshkin and Wiesner described how they are now creating new algorithms that can first identify the size and locations of buildings in satellite imagery, and then estimate their energy usage. These tools could provide a quick and easy way to evaluate the total energy needs in any neighborhood, town or city in the U.S. or around the world.

“It’s not just that we can take one city, say Norfolk, Virginia, and estimate the buildings there. If you give us Reno, Tuscaloosa, Las Vegas, Phoenix — my hometown — you can absolutely get the per-building energy estimations,” Peshkin said. “And what that means is that policy makers will be more informed, NGOs will have the ability to best service their community, and more efficient, more accurate energy policy can be implemented.”

Some students’ research took them to the sidelines of local sports fields. Joost Op’t Eynde, a master’s student in biomedical engineering, described how he and his colleagues on a Brain and Society team are working with high school and youth football leagues to sort out what exactly happens to the brain during a high-impact sports game.

While a particularly nasty hit to the head might cause clear symptoms that can be diagnosed as a concussion, the accumulation of lesser impacts over the course of a game or season may also affect the brain. Eynde and his team are developing a set of tools to monitor both these impacts and their effects.

A standing-room only crowd listened to a team present on their work “Tackling Concussions.” Photo by Jared Lazarus/Duke Photography.

“We talk about inputs and outputs — what happens, and what are the results,” Eynde said. “For the inputs, we want to actually see when somebody gets hit, how they get hit, what kinds of things they experience, and what is going on in the head. And the output is we want to look at a way to assess objectively.”

The tools include surveys to estimate how often a player is impacted, an in-ear accelerometer called the DASHR that measures the intensity of jostles to the head, and tests of players’ performance on eye-tracking tasks.

“Right now we are looking on the scale of a season, maybe two seasons,” Eynde said. “What we would like to do in the future is actually follow some of these students throughout their career and get the full data for four years or however long they are involved in the program, and find out more of the long-term effects of what they experience.”

Kara J. Manke, PhD

Post by Kara Manke

Visualizing the Fourth Dimension

Living in a 3-dimensional world, we can easily visualize objects in 2 and 3 dimensions. But as a mathematician, playing with only 3 dimensions is limiting, Dr. Henry Segerman laments. An assistant professor of mathematics at Oklahoma State University, Segerman spoke to Duke students and faculty on visualizing 4-dimensional space as part of the PLUM lecture series on April 18.

What exactly is the 4th dimension?

Let’s break down spatial dimensions into what we know. We can describe a point in 2-dimensional space with two numbers x and y, visualizing an object in the xy plane, and a point in 3D space with 3 numbers in the xyz coordinate system.

Plotting three dimensions in the xyz coordinate system.

While the green right-angle markers are not actually 90 degrees, we are able to infer the 3-dimensional geometry as shown on a 2-dimensional screen.

Likewise, we can describe a point in 4-dimensional space with four numbers – x, y, z, and w – where the purple w-axis is at a right angle to the other three axes; in other words, we can visualize 4 dimensions by squishing the fourth down into the three we can see.

Plotting four dimensions in the xyzw coordinate system.

One commonly explored 4D object we can attempt to visualize is known as a hypercube. A hypercube is analogous to a cube in 3 dimensions, just as a cube is to a square.

How do we make a hypercube?

To create a 1D line, we take a point, make a copy, move the copy some distance away, and then connect the two points with a line.

Similarly, a square can be formed by making a copy of a line and connecting them to add the second dimension.

So, to create a hypercube, we move two identical 3D cubes parallel to each other, and then connect their corresponding corners with eight lines, as depicted in the image below.

To create an n–dimensional cube, we take 2 copies of the (n−1)–dimensional cube and connect corresponding corners.
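That copy-and-connect recipe translates directly into a short recursive construction — a sketch, with vertices represented as 0/1 coordinate tuples:

```python
def n_cube(n):
    """Build the n-dimensional cube by the copy-and-connect rule:
    take two copies of the (n-1)-cube and join corresponding corners.

    Returns (vertices, edges), with each vertex a 0/1 coordinate tuple.
    """
    if n == 0:
        return [()], []  # a single point, no edges
    verts, edges = n_cube(n - 1)
    copy0 = [v + (0,) for v in verts]  # the original copy
    copy1 = [v + (1,) for v in verts]  # the copy moved along the new axis
    new_edges = (
        [(a + (0,), b + (0,)) for a, b in edges]   # edges inside copy 0
        + [(a + (1,), b + (1,)) for a, b in edges]  # edges inside copy 1
        + list(zip(copy0, copy1))                   # join corresponding corners
    )
    return copy0 + copy1, new_edges

verts3, edges3 = n_cube(3)  # the familiar cube
verts4, edges4 = n_cube(4)  # the hypercube
print(len(verts3), len(edges3), len(verts4), len(edges4))  # 8 12 16 32
```

The counts match the geometry: a square has 4 corners and 4 edges, a cube 8 and 12, and a hypercube 16 and 32 — each step doubles the corners and adds one new edge per old corner.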

Even with a 3D-printed model, trying to visualize the hypercube can get confusing. 

How can we make a better picture of a hypercube? “You sort of cheat,” Dr. Segerman explained. One way to cheat is by casting shadows.

Parallel projection shadows, depicted in the figure below, are cast by rays of light falling at a right angle to the plane of the table. Some edges of the shadow are parallel, just as they are in the physical object. However, some edges that appear to intersect in the 2D shadow don’t actually meet in the 3D object, making the projection harder to map back to the original.

Parallel projection of a cube on a transparent sheet of plastic above the table.

One way to cast shadows with no collisions is through stereographic projection as depicted below.

The stereographic projection is a mapping (function) that projects a sphere onto a plane. The projection is defined on the entire sphere, except the point at the top of the sphere.
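In coordinates, projecting from the north pole of the unit sphere onto the plane z = 0 takes a simple closed form — this is the standard textbook formula, not anything specific to Segerman’s models:

```python
def stereographic(x, y, z):
    """Project a point on the unit sphere (other than the north pole
    (0, 0, 1)) from the north pole onto the plane z = 0."""
    if z == 1.0:
        raise ValueError("the north pole itself has no image")
    # The line from (0, 0, 1) through (x, y, z) hits the plane here:
    return (x / (1 - z), y / (1 - z))

print(stereographic(0, 0, -1))  # south pole -> origin: (0.0, 0.0)
print(stereographic(1, 0, 0))   # equator maps to itself: (1.0, 0.0)
```

Points near the south pole land near the origin, while points approaching the north pole are flung arbitrarily far away — which is why the map is undefined at the pole itself.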

For the object below, the curves on the sphere cast shadows, mapping them to a straight line grid on the plane. With stereographic projection, each side of the 3D object maps to a different point on the plane so that we can view all sides of the original object.

Stereographic projection of a grid pattern onto the plane. 3D print the model at Duke’s Co-Lab!

Just as shadows of 3D objects are images formed on a 2D surface, our retina has only a 2D surface with which to detect light entering the eye, so we actually see a 2D projection of our 3D world. Our minds reconstruct the 3D world around us using previous experience and cues in the 2D images such as light, shade, and parallax.

Projection of a 3D object on a 2D surface.

Projection of a 4D object on a 3D world

How can we visualize the 4-dimensional hypercube?

To use stereographic projection, we radially project the edges of a 3D cube (left of the image below) to the surface of a sphere to form a “beach ball cube” (right).

The faces of the cube radially projected onto the sphere.

Placing a point light source at the north pole of the bloated cube, we can obtain the projection onto a 2D plane as shown below.

Stereographic projection of the “beach ball cube” pattern to the plane. View the 3D model here.

Applying the same trick one dimension higher, we can theoretically blow a 4-dimensional shape up into a ball, place a light at the top of the object, and project the image down into 3 dimensions.

Left: 3D print of the stereographic projection of a “beach ball hypercube” to 3-dimensional space. Right: computer render of the same, including the 2-dimensional square faces.

Forming n–dimensional cubes from (n−1)–dimensional renderings.

Thus, the constructed 3D model of the “beach ball cube” shadow is the projection of the hypercube into 3-dimensional space. Here the cubical cells of the hypercube become distorted cubes instead of strips.

Just as the edges of the top object in the figure can be connected by folding the squares through the 3rd dimension to form a cube, the edges of the bottom object can be connected through the 4th dimension.

Why are we trying to understand things in 4 dimensions?

As far as we know, the space around us consists of only 3 dimensions. Mathematically, however, there is no reason to limit our understanding of higher-dimensional geometry and space to only 3, since there is nothing special about the number 3 that makes it the only possible number of dimensions space can have.

From a physics perspective, Einstein’s theory of special relativity suggests a connection between space and time, so the space-time continuum consists of 3 spatial dimensions and 1 temporal dimension. For example, consider a blooming flower. The flower’s position is not changing: it is not moving up or sideways. Yet we can observe its transformation — evidence of change along an additional dimension, time. Equating time with the 4th dimension is one example, but the 4th dimension can also be positional like the first 3. While it is possible to visualize space-time by examining snapshots of the flower with time held constant, it is also useful to understand how space and time interrelate geometrically.

Explore more in the 4th dimension with Hypernom or Dr. Segerman’s book “Visualizing Mathematics with 3D Printing”!

Post by Anika Radiya-Dixit.

Seeing Nano

Take pictures at more than 300,000 times magnification with electron microscopes at Duke

An image of a sewer gnat’s head taken through a scanning electron microscope. Courtesy of Fred Nijhout.

The sewer gnat is a common nuisance around kitchen and bathroom drains that’s no bigger than a pea. But magnified thousands of times, its compound eyes and bushy antennae resemble a first place winner in a Movember mustache contest.

Sewer gnats’ larger cousins, horseflies, are known for their painful bite. Zoom in and it’s easy to see how they hold onto their furry livestock prey: the tiny hooked hairs on their feet look like Velcro.

Students in professor Fred Nijhout’s entomology class photograph these and other specimens at more than 300,000 times magnification at Duke’s Shared Material & Instrumentation Facility (SMIF).

There the insects are dried, coated in gold and palladium, and then bombarded with a beam of electrons from a scanning electron microscope, which can resolve structures tens of thousands of times smaller than the width of a human hair.

From a ladybug’s leg to a weevil’s suit of armor, the bristly, bumpy, pitted surfaces of insects are surprisingly beautiful when viewed up close.

“The students have come to treat travels across the surface of an insect as the exploration of a different planet,” Nijhout said.

The foot of a horsefly is equipped with menacing claws and Velcro-like hairs that help them hang onto fur. Photo by Valerie Tornini.

The hard outer skeleton of a weevil looks smooth and shiny from afar, but up close it’s covered with scales and bristles. Courtesy of Fred Nijhout.

Magnified 500 times, the rippled edges of this fruit fly wing are the result of changes in the insect’s genetic code. Courtesy of Eric Spana.

You, too, can gaze at alien worlds too small to see with the naked eye. Students and instructors across campus can use the SMIF’s high-powered microscopes and other state-of-the-art research equipment at no charge with support from the Class-Based Explorations Program.

Biologist Eric Spana’s experimental genetics class uses the microscopes to study fruit flies that carry genetic mutations that alter the shape of their wings.

Students in professor Hadley Cocks’ mechanical engineering 415L class take lessons from objects that break. A scanning electron micrograph of a cracked cymbal once used by the Duke pep band reveals grooves and ridges consistent with the wear and tear from repeated banging.

Magnified 3000 times, the surface of this broken cymbal once used by the Duke Pep Band reveals signs of fatigue cracking. Courtesy of Hadley Cocks.

These students are among more than 200 undergraduates in eight classes who benefitted from the program last year, thanks to a grant from the Donald Alstadt Foundation.

You don’t have to be a scientist, either. Historians and art conservators have used scanning electron microscopes to study the surfaces of Bronze Age pottery, the composition of ancient paints and even dust from Egyptian mummies and the Shroud of Turin.

Instructors and undergraduates are invited to find out how they could use the microscopes and other nanotech equipment in the SMIF in their teaching and research. Queries should be directed to Dr. Mark Walters, Director of SMIF, via email at mark.walters@duke.edu.

Located on Duke’s West Campus in the Fitzpatrick Building, the SMIF is a shared use facility available to Duke researchers and educators as well as external users from other universities, government laboratories or industry through a partnership called the Research Triangle Nanotechnology Network. For more info visit http://smif.pratt.duke.edu/.

This scanning electron microscope could easily be mistaken for equipment from a dentist’s office.

Post by Robin Smith

When Art Tackles the Invisibly Small

Huddled in a small cinderblock room in the basement of Hudson Hall, visual artist Raewyn Turner and mechatronics engineer Brian Harris watch as Duke postdoc Nick Geitner positions a glass slide under the bulky eyepiece of an optical microscope.

To the naked eye, the slide is completely clean. But after some careful adjustments of the microscope, a field of technicolor spots splashes across the viewfinder. Each point shows light scattering off one of the thousands of silver nanoparticles spread in a thin sheet across the glass.

“It’s beautiful!” Turner said. “They look like a starry sky.”

A field of 10-nanometer diameter silver nanoparticles (blue points) and clusters of 2-4 nanoparticles (other colored points) viewed under a dark-field hyperspectral microscope. The clear orbs are cells of live chlorella vulgaris algae. Image courtesy Nick Geitner.

Turner and Harris, New Zealand natives, have traveled halfway across the globe to meet with researchers at the Center for the Environmental Implications of Nanotechnology (CEINT). Here, they are learning all they can about nanoparticles: how scientists go about detecting these unimaginably small objects, and how these tiny bits of matter interact with humans, with the environment and with each other.

The mesocosms, tucked deep in the Duke Forest, currently lie dormant.

The team hopes the insights they gather will inform the next phases of Steep, an ongoing project with science communicator Maryse de la Giroday which uses visual imagery to explore how humans interact with and “sense” the nanoparticles that are increasingly being used in our electronics, food, medicines, and even clothing.

“The general public, including ourselves, we don’t know anything about nanoparticles. We don’t understand them, we don’t know how to sense them, we don’t know where they are,” Turner said. “What we are trying to do is see how scientists sense nanoparticles, how they take data about them and translate it into sensory data.”

Duke Professor and CEINT member Mark Wiesner, who is Geitner’s postdoctoral advisor, serves as a scientific advisor on the project.

“Imagery is a challenge when talking about something that is too small to see,” Wiesner said. “Our mesocosm work provides an opportunity to visualize how we are investigating the interactions of nanomaterials with living systems, and our microscopy work provides some useful, if not beautiful, images. But Raewyn has been brilliant in finding metaphors, cultural references, and accompanying images to get points across.”

Graduate student Amalia Turner describes how she uses the dark-field microscope to characterize gold nanoparticles in soil. From left: Amalia Turner, Nick Geitner, Raewyn Turner, and Brian Harris.

On Tuesday, Geitner led the pair on a soggy tour of the mesocosms, 30 miniature coastal ecosystems tucked into the Duke Forest where researchers are finding out where nanoparticles go when released into the environment. After that, the group retreated to the relative warmth of the laboratory to peek at the particles under a microscope.

Even at 400 times magnification, the silver nanoparticles on the slide can’t really be “seen” in any detail, Geitner explained.

“It is sort of like looking at the stars,” Geitner said. “You can’t tell what is a big star and what is a small star because they are so far away, you just get that point of light.”

But the image still contains loads of information, Geitner added, because each particle scatters a different color of light depending on its size and shape: particles on their own shine a cool blue, while particles that have joined together in clusters appear green, orange or red.

During the week, Harris and Turner saw a number of other techniques for studying nanoparticles, including scanning electron microscopes and molecular dynamics simulations.

An image from the Steep collection, which uses visual imagery to explore how humans interact with the increasingly abundant gold nanoparticles in our environment. Credit: Raewyn Turner and Brian Harris.

“What we have found really, really interesting is that the nanoparticles have different properties,” Turner said. “Each type of nanoparticle is different to each other one, and it also depends on which environment you put them into, just like how a human will behave in different environments in different ways.”

Geitner says the experience has been illuminating for him, too. “I have never in my life thought of nanoparticles from this perspective before,” Geitner said. “A lot of their questions are about really, what is the difference when you get down to atoms, molecules, nanoparticles? They are all really, really small, but what does small mean?”

Kara J. Manke, PhD

Post by Kara Manke

Meet the New Blogger: Shanen Ganapathee

Hi y’all! My name is Shanen and I am from the deep, deep South… of the globe. I was born and raised in Mauritius, a small island off the coast of Madagascar, once home to the now-extinct Dodo bird.

Shanen Ganapathee is a senior who wishes to be ‘a historian of the brain’

The reason I’m at Duke has to do with a desire to do what I love most — exploring art, science and their intersection. You will often find me writing prose inspired by lessons in neuroanatomy, or casting a DNA strand as the main character in a short story.

I’m excited about Africa, and the future of higher education and research on the continent. I believe in ideas, especially when they are big and bold. I’m a dreamer, an idealist but some might call me naive. I am deeply passionate about research but above all how it is made accessible to a wide audience.

I am currently a senior pursuing a Program II in Human Cognitive Evolution, a major I designed in my sophomore year with the help of my advisor, Dr. Leonard White, whom I had the luck to meet through the Neurohumanities Program in Paris.

This semester, I am working on a thesis project under the guidance of Dr. Greg Wray, inspired by an independent study with Dr. Steven Churchill in which we examined differences between early human and Neandertal cognition and behavior. I am interested in using ancient DNA genomics to answer the age-old question: what makes us human? My claim is that the advent of artistic ventures truly shaped the beginning of behavioral modernity. In a sense, I want to be a historian of the brain.

My first exposure to the world of genomics was through the FOCUS program — Genome in our Lives — my freshman fall. Ever since, I have been fascinated by what the human genome can teach us. It is a window into our collective pasts as much as it informs us about our present and future. I am particularly intrigued by how the forces of evolution have shaped us to become the species we are.

I am excited about joining the Duke Research blog and sharing some great science with you all.
