What Makes a Face? Art and Science Team Up to Find Out

From the man in the moon to the slots of an electrical outlet, people can spot faces just about everywhere.

As part of a larger Bass Connections project exploring how our brains make sense of faces, a Duke team of students and faculty is using state-of-the-art eye-tracking to examine how the presence of faces — from the purely representational to the highly abstract — influences our perception of art.

The Making Faces exhibit is on display in the Nasher Museum of Art’s Academic Focus Gallery through July 24th.

The artworks they examined are currently on display at the Nasher Museum of Art in an installation titled "Making Faces: At the Intersection of Art and Neuroscience."

“Faces really provide the most absorbing source of information for us as humans,” Duke junior Sophie Katz said during a gallery talk introducing the installation last week. “We are constantly attracted to faces and we see them everywhere. Artists have always had an obsession with faces, and recently scientists have also begun grappling with this obsession.”

Katz said our preoccupation with faces evolved because they provide us with key social cues, including information about another individual’s gender, identity, and emotional state. Studies using functional Magnetic Resonance Imaging (fMRI) even indicate that we have a special area of the brain, called the fusiform face area, that is specifically dedicated to processing facial information.

The team used eye-tracking in the lab and newly developed eye-tracking glasses in the Nasher Museum as volunteers viewed artworks featuring both abstract and representational images of faces. From these data they created "heat maps" showing where viewers gazed most on each piece, to explore how our bias toward faces might influence our perception of art.
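The post doesn't show the team's analysis code, but the basic recipe for a gaze heat map is straightforward. Here is a minimal Python sketch, under the assumption of a simple CSV of fixation coordinates; the file names and smoothing bandwidth are placeholders rather than details from the study.

```python
# Minimal sketch: turn eye-tracking fixations into a heat map overlay.
# Assumes a CSV of fixation coordinates ("x", "y", in pixels) and an
# image of the artwork -- file names here are placeholders, not the
# team's actual data.
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from scipy.ndimage import gaussian_filter

artwork = plt.imread("artwork.png")            # H x W x channels
fixations = pd.read_csv("fixations.csv")       # columns: x, y

h, w = artwork.shape[:2]

# Bin fixations into a pixel grid, then blur so isolated points
# spread into smooth "hot spots".
counts, _, _ = np.histogram2d(
    fixations["y"], fixations["x"],
    bins=[h, w], range=[[0, h], [0, w]],
)
heat = gaussian_filter(counts, sigma=25)       # bandwidth is a guess

plt.imshow(artwork)
plt.imshow(heat, cmap="hot", alpha=0.5)        # translucent overlay
plt.axis("off")
plt.savefig("heatmap_overlay.png", dpi=200)
```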

This interactive website created by the team lets you observe these eye-tracking patterns firsthand.

When looking at faces straight-on, most people direct their attention to the eyes and the mouth, forming a triangular pattern. Katz said the team was surprised to find that this pattern held even when the faces became very abstract.

“Even in a really abstract representation of a face, people still scan it like they would a face. They are looking for the same social information regardless of how abstract the work is,” said Katz.


A demonstration of the eye-tracking technology used to track viewers' gaze at the Nasher Museum of Art. Credit: Shariq Iqbal, John Pearson Lab, Duke University.

Sophomore Anuhita Basavaraju pointed out how a Lonnie Holley piece titled “My Tear Becomes the Child,” in which three overlapping faces and a seated figure emerge from a few contoured lines, demonstrates how artists are able to play with our facial perception.

“There really are very few lines being used, but at the same time it’s so intricate, and generates the interesting conversation of how many lines are there, and which face you see first,” said Basavaraju. “That’s what’s so interesting about faces. Because human evolution has made us so drawn towards faces, artists are able to create them out of really very few contours in a really intricate way.”


Sophomore Anuhita Basavaraju discusses different interpretations of the face in Pablo Picasso’s “Head of a Woman.”

In addition to comparing ambiguous and representational faces, the team also examined how subtle changes to a face, like altering the color contrast or applying a mask, might influence our perception.

Sophomore Eduardo Salgado said that while features like the eyes, nose, and mouth are the primary components that allow our brains to construct a face, masks may remove the subtler dimensions of facial expression that we rely on for social cues.

For instance, participants viewing a painting titled “Decompositioning” by artist Jeff Sonhouse, which features a masked man standing before an exploding piano, spent most of their time dwelling on the man’s covered face, despite the violent scene depicted on the rest of the canvas.

“When you cover a face, it’s hard to know what the person is thinking,” Salgado said. “You lack information, and that calls more attention to it. If he wasn’t masked, the focus on his face might have been less intense.”

In connection with the exhibition, Nasher MUSE, DIBS, and the Bass Connections team will host visiting illustrator Hanoch Piven this Thursday, April 7th, and Friday, April 8th, for a lunchtime conversation and hands-on workshop about his work creating portraits with found objects.

Making Faces will be on display in the Nasher Museum of Art’s Academic Focus Gallery through July 24th.

Post by Kara J. Manke, PhD

The Art of Asking Questions at DataFest 2016

Students engaged in intense collaboration during DataFest 2016, a stats and data analysis competition held from April 1-3 at Duke. Image courtesy of Rita Lo.

On Saturday night, while most students were fast asleep or out partying, Duke junior Callie Mao stayed up until the early hours of the morning pushing and pulling a real-world data set to see what she could make of it — for fun. Callie and her team had planned months in advance to take part in DataFest 2016, a statistical analysis competition held from April 1 to 3.

A total of 277 students, hailing from schools as disparate as Duke, UNC Chapel Hill, NCSU, Meredith College, and even one high school, the North Carolina School of Science and Mathematics, gathered in the Edge to extract insight from a mystery data set. The camaraderie was palpable, as students animatedly sketched out their ideas on whiteboard walls and chatted while devouring mountains of free food.

Duke junior Callie Mao ponders which aspects of the data to include in her analysis.

Callie observed that the challenges students faced at DataFest were unlike anything in the classroom: "The most difficult part of DataFest is coming up with an idea. In class, we get specific problems, but at DataFest, we are thrown a massive data set and must figure out what to do with it. We originally came up with a lot of ideas, but the data set just didn’t have enough information to fully visualize though."

At its core, the task was inverted: instead of answering questions posed in class, Callie and her team had to come up with innovative and insightful questions of their own. With virtually no guidance, the team chose which aspects of the data to include and which to exclude.

Another principal consideration across all categories was which tools to use to quickly and clearly represent the data. Callie and her team used R to parse the relevant data, converted it into JSON files, and used D3, a JavaScript library, to code graphics that visualized the data. Other groups, however, used Tableau, a drag-and-drop interface that offered an expedited route to beautiful graphics.
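The post doesn't include the team's scripts, but the pipeline they describe is easy to picture. As a rough sketch of the middle step, reshaping tabular data into JSON records that a D3 chart can load with d3.json(), here is a small Python version; the file and column names are hypothetical, and the team's actual work was done in R.

```python
# Sketch of the "convert the relevant data to JSON for D3" step.
# Column and file names are hypothetical; the DataFest team used R
# for this, but the idea is the same: filter/aggregate, then dump
# a list of records that d3.json() can load directly.
import json
import pandas as pd

df = pd.read_csv("datafest_data.csv")          # placeholder file name

# Example aggregation: count records per category (hypothetical column).
summary = (
    df.groupby("category")
      .size()
      .reset_index(name="count")
      .sort_values("count", ascending=False)
)

records = summary.to_dict(orient="records")    # [{"category": ..., "count": ...}, ...]

with open("summary.json", "w") as f:
    json.dump(records, f, indent=2)
```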

Mentors assisted participants with formulating insights and presenting their results. Image courtesy of Rita Lo.

On Sunday afternoon, students presented their findings to their attentive peers and to a panel of judges composed of industry professionals, statistics professors from various universities, and representatives from Data and Visualization Services at Duke Libraries. Judges commended projects for qualities such as the incorporation of outside data sources like Google AdWords, the comprehensibility of the presentation, and the applicability of findings in a real industry setting.

Students competed in four categories:  best use of outside data, best data insight, best visualization, and best recommendation. The Baeesians, pictured below, took first place in best outside data, the SuperANOVA team won best data insight, the Standard Normal team won best visualization, and the Sample Solution team won best recommendation. The winning presentations will be available to view by May 2 at http://www2.stat.duke.edu/datafest/.

The Baeesians, winners of the Best Outside Data category at DataFest 2016: Rahul Harikrishnan, Peter Shi, Qian Wang, and Abhishek Upadhyaya. (Not pictured: Justin Wang.) Image courtesy of Rita Lo.

Post by student writer Olivia Zhu

When the Data Get Tough, These Researchers Go Visual

Ever wondered what a cleaner shrimp can see?

Or how the force of a footstep moves from particle to particle through a layer of sand?

How about what portion of our renewable energy comes from wind versus solar power?

The winning submission, created by Nicholas School PhD candidate Brandon Morrison, illustrates the flow of agricultural and forestry crops from raw materials to consumer products. The colors correspond to the type of crop – brown for wood, green for vegetables, etc. – and the width of the lines correspond to the quantity of the crop. You can check out the full image and caption on the Duke Data Visualization Flickr Gallery.

The answers to these questions and more are stunningly rendered in the entries to the 2016 Student Data Visualization Contest, which you can check out now on the Duke Data Visualization Flickr Gallery.

“Visualizations take advantage of our powerful ability to detect and process shapes to reveal detailed trends that you otherwise wouldn’t be able to see,” said Angela Zoss, Data Visualization Coordinator at Duke Data and Visualization Services (DVS), who runs the contest. “This year’s winners were all able to take very complex topics and use visualization to make them more accessible.”

One winner and two finalists were selected from the 14 submissions on the basis of five criteria: insightfulness, broad appeal, aesthetics, technical merit, and novelty. The submissions represent data from all areas of research at Duke – from politics and health to fundamental physics and biology.

“This year’s entrants showed a lot of sophistication and advanced scholarship,” Zoss said.  “We’re seeing more advanced graduate work and multi-year research projects that are really benefiting from visualization.”

Eric Monson, a Data Visualization Analyst with DVS, hopes the contest will inspire more students to consider data visualization when grappling with intricate data sets.

“A lot of this work only gets shared within courses or small academic communities, so it’s exciting to give people this opportunity to have their work reach a broader audience,” Monson said.

Posters of the winning submissions will soon be on display in the Brandaleone Lab for Data and Visualization Services in The Edge on the first floor of Bostock Library.

The second-place entry, by Art History PhD student Katherine McCusker, depicts an archaeological site in Viterbo, Italy. The colored lines indicate the likely locations of buried structures like walls, platforms, and pavement, based on an interpretation of data from ground-penetrating radar (represented by a dark red, yellow, white colormap). You can check out the full image and caption on the Duke Data Visualization Flickr Gallery.

Post by Kara J. Manke, PhD

Geography and the Web: A new frontier for data visualization

A GIS Day earth cake made by the Collegiate Baker

You might be forgiven if you missed GIS Day at the Levine Science Research Center on Nov. 18, but it was your loss. Students and faculty enjoyed a delightful geography-themed afternoon of professional panels, lightning talks, and even a geospatial research-themed cake contest.

What is GIS and why is it important?

Geographic information systems (GIS) give us the power to visualize, question, analyze, and interpret data to understand relationships, patterns, and trends in the world around us. Those who work with data and analytics have a responsibility to use that power to help us make the right decisions for our future. As noted during ESRI’s 2015 User Conference in the video below, “We have a unique ability to impact and shape the world around us. [Yet] for all of our wisdom, our vast intellectual marvels, we still choose a path of unsustainability and continue to make decisions that negatively impact the Earth and ourselves. […] We must accept our responsibility as stewards of the Earth. […] We must apply our best technology, our best thinking, our best values. Now is the time to act. Now is the time for change.”

How does GIS help?

Doreen Whitley Rogers, Geospatial Information Officer for the National Audubon Society, led a lively discussion about GIS and the World Wide Web at Duke’s GIS Day. She said GIS is essential to understand what is happening in the geographic space around us. As GIS becomes increasingly web-based, efficiently distributing the system to other people is crucial in a time when new data about the environment is being created every second.

3D map displaying the height of buildings at which birds fly into windows in Charlotte, NC

Rogers and her team aim to move authoritative GIS data to the web for visualization and to create a centralized system with the potential to change our culture and transform the world. As the technology manager, she is working to bring that information to people with proper security and integrity.

To get people to use GIS data in a generalized way, Rogers needed to implement several core capabilities for those integrating GIS into their workflows. These include socializing GIS as a technology to everybody, creating mobile apps that work with data in real time, and building 3D maps such as the one above of bird strikes in downtown Charlotte.

Case Studies

ClimateWatch helps us predict the seasonal behaviour of plants and animals.

Mobile apps connecting to the GIS platform promise a strong “return on mission” because of the vast number of people using maps on their phones. By mobilizing everyone to use GIS and to contribute data about birds and geography in their area, the platform can quickly scale over millions of acres. In the Bahamas, an app allows users to take pictures to support bird protection programs.

ClimateWatch is an app that gives us a better understanding of how bird habitats are affected during temperature and rainfall variations – motivating people to speak up and act towards minimizing anthropogenic climate change. Developed by Earthwatch with the Bureau of Meteorology and The University of Melbourne, the app enables every Australian to be involved in collecting and recording data to help shape the country’s scientific response to climate change.

Virtual simulation of scenic flights from the perspective of an endangered bird.

Apps such as the 3-D flight map give users the vicarious thrill of cruising through natural landscapes from the view of endangered birds.

With movements toward cleaner air and water in our communities, our planet’s birds will once again live in healthier habitats. As the Audubon Society likes to say: “Where birds thrive, people prosper.”

For more information about bird-friendly community programs, you can visit Audubon's site or send them a message.

Doreen Rogers after her presentation on National GIS Day.

To learn more about data visualization in GIS, you can contact Doreen Whitley Rogers via email here.


Post by Anika Radiya-Dixit

HTC Vive: A New Dimension of Creativity

“I just threw eggs at the robot!” grad student Keaton Armentrout said to Amitha Gade, a fellow biomedical engineering master’s student.


“He just said, ‘Thank you for the egg, human. Give me another one.’ It was really fun.”

In what world does one throw eggs at grateful robots? In the virtual world of the HTC Vive, a 360-degree, room-scale virtual reality experience created by HTC and Valve (the company behind Steam) that is offering demos on the Duke campus from November 9 to 13. There is a noticeable buzz about the Vive throughout campus.

I stepped into the atrium of Fitzpatrick CIEMAS expecting a straightforward demonstration of how to pick up objects and look around in virtual reality. Instead, I found myself standing on the bow of a realistic ship, face to face with a full-size blue whale.

A Tiltbrush drawing I created with HTC Vive during my internship at Google. (Tiltbrush was acquired by Google/Alphabet).

Peering over the side of the shipwreck into a deep ravine, I seriously pondered what would happen if I jumped over the railing – even though both my feet were planted firmly on the ground of CIEMAS.

Armentrout observed that the Vive differentiates itself from other VR devices like Oculus by allowing a full range of motion of the head: “I could actually bend down and look at the floorboards of the ship.”

In Valve’s Aperture Science demo, based on their game Portal, I attempted to repair a broken robot so real it was terrifying. I was nearly blown to bits by my robot overseer when I failed at my task. In total, I progressed through four modules, including the shipwreck, robot repair, a cooking lesson, and Tiltbrush, a three-dimensional drawing experience.

Game developers are naturally pursuing virtual reality, but technologies like the HTC Vive have implications far beyond the gaming realm. One application of the Vive, a company representative explained, could be virtual surgeries in medical schools. Schools could conserve cadavers by having medical students learn operations on virtual bodies instead of human ones. The virtual bodies would ideally provide the same experience as the operating room itself, revolutionizing the teaching of hands-on surgical skills.

Gade brainstormed further potential applications, such as using robots controlled by virtual reality to navigate search-and-rescue situations after a crisis, reducing danger to rescue crews.

The first time I tried the HTC Vive was not at Duke; it was at a Tiltbrush art show in San Francisco.

HTC Vive Tiltbrush masterpiece displayed at the San Francisco Tiltbrush art show

On the stage, an artist moved her limbs in grand arcs as she painted the leaves of trees and brushed the ground to create a sparkling river. A large screen projected her virtual 3-D masterpiece for the audience.

Gilded frames on stands emphasized the interactive Vive devices, each of which housed a Tiltbrush masterpiece created by a local artist trained in the technique. Well-dressed attendees marvelled at seemingly invisible waterfalls and starry skies in the virtual reality paintings. Clearly, the Vive, by opening another dimension of artistic creation, is changing our notions of space and pushing the bounds of creativity.

Post by Olivia Zhu

Spice up learning with interactive visualizations

Hannah Jacobs is a multimedia analyst at the Duke Wired! Lab who aims to transform how students learn the humanities, much to the excitement of the students and faculty packed into the Visualization Friday Forum on Oct. 16. Using visualization as a tool to show connections between space, time, and culture, she hopes to augment the humanities classroom by supplementing lectures with interactive maps and timelines.

The virtual maps created for Professor Caroline Bruzelius’ Art History 101 course were built using Neatline, an Omeka plugin and “geotemporal exhibit builder that allows [the user] to create beautiful, complex maps, image annotations, and narrative sequences,” such as the satellite view below.

Demo Neatline visualization

Using the simple interface, Jacobs created a syllabus with an outline of units and individual lectures, each course point connected to written information, one or more points on the map, and a period or span on the timeline.

Syllabus using Neatline interface

Jacobs also implemented clickable points on the map to display supplementary information, such as the trade routes used for certain raw materials, video clips, and even links to recent, pertinent articles. With such an interface, students are better able to understand how the different lectures move backward and forward in time and to make connections with previously learned topics.
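Neatline handles this kind of interaction inside Omeka, and its interface isn't reproduced here. As a rough stand-in for the same idea, clickable map points that pop up supplementary material, here is a minimal Python sketch using the folium mapping library; folium, the coordinates, and the linked article are illustrative choices, not part of the Wired! Lab workflow.

```python
# Minimal stand-in for "clickable points that reveal supplementary
# information": a web map with markers whose popups hold text, links,
# or embedded media. folium is used here for illustration; the Wired!
# Lab work described above was built with Neatline/Omeka instead.
# Coordinates and the linked article are placeholders.
import folium

m = folium.Map(location=[41.9, 12.5], zoom_start=5)  # centered on Italy

popup_html = (
    "<b>Hypothetical trade route stop</b><br>"
    "Raw material: timber<br>"
    '<a href="https://example.org/article">Related article</a>'
)

folium.Marker(
    location=[40.85, 14.27],                 # Naples, as an example point
    popup=folium.Popup(popup_html, max_width=250),
    tooltip="Click for details",
).add_to(m)

m.save("course_map.html")                    # open in a browser to explore
```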

Supplementary video clips

For the Art History 101 class, Professor Bruzelius assigned her students a project in which they used Neatline to map the movement of people and materials for a specific culture. One student graphed the Athenian use and acquisition of timber, accompanied by an essay with hyperlinks highlighting various parts of the map; another visualized the development of Greek coinage with mapped points of mining locations.

Visualization accompanied by essay

Displaying development of Greek coinage

The students were excited to use the interactive software and found that they learned history more thoroughly than by completing purely paper assignments. Their impressive projects can be viewed on the Art History website.

As we continue to create interactive visualizations for learning, students in the future may study space, time, and culture using a touchscreen display like the one below.

Interactive learning of the future


Hannah joined the Wired! Lab in September 2014 after studying Digital Humanities at King’s College London. Previously, she obtained a BA in English/Theatre from Warren Wilson College, and she worked at Duke’s Franklin Humanities Institute from 2011-2013 before departing for London.


Post written by Anika Radiya-Dixit