Duke Research Blog

Following the people and events that make up the research community at Duke.

Category: Computers/Technology

Immerse Yourself in Virtual Reality on the Quad

Open since September 2016, the Virtual Reality Room in the first-floor lounge of Edens 1C lets students experience virtual reality using the HTC Vive headset and controllers.

DURHAM, N.C. — The virtual reality headset looked like something out of a science fiction film. It was tethered by a long cable to a glass-encased PC, which in turn was connected to thick hoses filled with glowing blue coolant.

I slipped the mask over my head and was transported to another world.

In real life, I was in the lower level of Edens residence hall testing out the recently opened BoltVR gaming room during an event hosted by the Duke Digital Initiative (DDI). Virtual reality is one of the technologies that DDI is exploring for its potential in teaching and learning.

Rebekkah Huss shoots invaders with a virtual bow and arrow in Duke’s newest virtual reality space. Open to students 4 p.m. to 10 p.m. on weekdays, noon to midnight on weekends.

BoltVR is a virtual reality space outfitted with the immersive room-scale technology of the HTC Vive, an $800 gaming system consisting of the headset, hand-held controllers and motion sensors in the room. The VR experience is a new addition to the Bolt gaming suite that opened in 2015 for Duke students.

Once I had the headset on, the bare walls and carpet were suddenly replaced by the yellow-lined grid of the Holodeck from Star Trek. It was like nothing I’d ever seen. “This is like the home screen for the gaming system,” explained Mark-Everett McGill, the designer of the BoltVR game room, as he scrolled through the more than 70 VR experiences downloaded to the BoltVR account on Steam.

McGill chose a story experience so that I could adjust to being able to move around physical objects in a virtual space.

It was like the floor melted away. On a tiny asteroid in front of me, The Little Prince and his rose played out their drama from the cover of the classic children’s book. The stars surrounded me, and I tilted my head back to watch a giant planet fly over.

I could walk around the prince’s tiny asteroid and inspect the little world from all angles, but I found it disorienting to walk normally while my eyes told me I was floating in space. The HTC Vive has a built-in guidance system called the Chaperone that uses a map of the room to keep me from crashing into the walls, but I still somehow managed to bump into a spectator.

“A lot of people get motion sickness when they use VR because your eyes are sensing the movement but your ears are telling you that you aren’t doing anything,” McGill said.

Lucky for me, I have a strong stomach and suffered no ill effects while wearing the headset. The HTC Vive also helps counteract motion sickness because its room-scale design allows for normal walking and movement.

There was, however, one part of the experience that felt very odd: the handheld controllers. The controllers are tracked by wall-mounted sensors, so they show up clearly in the VR headset. The problem was that in the titles I played, my hands and body were invisible to me.

The headset and controllers themselves are incredibly sensitive and accurate. I think most people would intuitively understand how to use them, especially if they have a gaming background, but I missed the comfort of seeing my own arms. So while the VR worlds are visually believable and the technology powering them is absolutely fascinating, there is still plenty of room for innovation.

Once I started playing games though, I no longer cared about the limitations of the tech because I was having so much fun!

The most popular student choice in the BoltVR is a minigame within Valve’s The Lab: a simple tower defense game in which the player uses a bow and arrow to shoot little 2D stick figures and stop their attack.

Everything about using the bow felt realistic, from loading arrows to using angles to control the trajectory of a shot. There was even a torch that I used to light my arrows on fire before launching them at attackers. With unlimited ammunition, I happily guarded my tower from waves of baddies until I finally had to let someone else have a turn.

To learn more about VR experiences for teaching and learning at Duke, join the listserv at https://lists.duke.edu/sympa/subscribe/vr2learn.

Post by Rebekkah Huss

Trapping Light to Enhance Material Properties

Professor Mikkelsen is the Nortel Networks Assistant Professor of Electrical and Computer Engineering and Assistant Professor of Physics at Duke University.

A version of this article appeared in Pratt’s 2017 DukEngineer magazine.

Professor Maiken H. Mikkelsen uses optics to tailor the properties of materials, making them stronger and lighter than anything found in nature. This distinguished researcher also teaches my ECE 340: Optics and Photonics course, giving me a wonderful opportunity to ask about her research and experience at the Photonics Asia conference held in China in October 2016.

Below is an edited transcript of our interview.

Q: What sparked your interest in optics and photonics?
I was really excited about doing hands-on research where you could actually probe nanoscale and quantum phenomena from optical experiments. I started out looking into condensed matter and quantum information science and currently observe delicately designed nanostructures. Optics is, to some extent, a tool to modify the properties of materials.

Q: What does your lab do and how do students contribute?
During the last few years, my students and I have been structuring materials on the nanoscale to modify the local electromagnetic environment, which makes these materials behave in new ways. Students play a key role in all aspects of the research, from nanofabrication to performing optical experiments and presenting the results to the scientific community at conferences all over the world. The lab uses tiny metal structures to concentrate the incoming electromagnetic field of light to very small volumes — a research area known as plasmonics. Placing other materials in the near field of this modified environment causes the electrons to behave completely differently.

Platform based on metal nanostructures that allows the lab to dramatically enhance the radiative properties of emitters and other materials.

By controlling how these electrons behave and modifying the geometry of the material, we can gain a deeper understanding of light-matter interactions. Combining these techniques with our optical experiments reveals modifications to material properties that are much stronger than anything seen before. It’s been very exciting!

Q: And this research is what you presented at the Photonics Asia conference?
Yes. With this knowledge, we can enhance the properties of materials significantly, which in the future could lead to ultra-fast and much better LEDs, more efficient photodetectors, or more efficient solar cells and sensors. In Beijing, China, I gave an overview of this research at the leading meeting for the photonics and optics industries in Asia, as well as several other conferences and universities. It was very fulfilling to see how the research I do in a dark lab actually gets noticed around the world. It is always deeply inspiring to learn about recent research breakthroughs from other research groups.

Q: What is the main purpose of trying to find these improved materials?
I am motivated by furthering our fundamental understanding, such as how light and matter interact at really small scales and how this interaction can be leveraged to achieve useful properties. I believe you often achieve the biggest technological breakthroughs when you’re not trying to solve one particular problem, but creating new materials that could lay the groundwork for a wide range of new technologies. For example, semiconductor materials, with a set of properties found in nature, are the cornerstone of most modern technologies. But if you imagine that you instead have an entirely new set of building blocks with tailored properties, we could revolutionize a lot of different technologies down the road.

The Mikkelsen Research Group. Back row, left to right: Qixin Shen, Andrew Traverso, Maiken Mikkelsen, Guoce Yang, Jon Stewart, Andrew Boyce. Front row, left to right: Wade Wilson, Daniela Cruz, Jiani Huang, Tamra Nebabu.

By improving or completely changing the fabrication technique of these light-matter interactions, new properties begin to emerge. Generally, there’s always a big desire to have something that’s lighter, smaller, more efficient and more flexible. One of the applications we’re targeting with this research is ultrafast LEDs. While future devices might not use this exact approach, the underlying physics will be crucial.

About a year ago, Facebook contacted me and was interested in utilizing our research for omnidirectional detectors that could be ultrafast and detect signals from a large range of incidence angles. This has led to a fruitful collaboration and is one example of how fundamental research can have applications in a wide range of areas — some that you may not even have imagined when you started!


Q: What would be your advice to young researchers still trying to decide on a career path, or to those interested in optics and photonics?
What really helped me was starting to do undergraduate research. I listened to talks by different faculty, asked them to do undergraduate research, and worked on a volunteer basis in their labs. I think that’s really a great way to see if you’re interested in research — use the amazing opportunities both at Duke and around the country. Doing research requires a lot of patience, but I think no two days are the same; there’s always a lot of creativity involved while troubleshooting new problems. After all, if it was easy or if we knew how to do it, it would have already been done. But it hasn’t, so we have to figure it out — I think that is a lot of fun. Doing internships in optics and photonics companies is also another option to learn more about research and development in the industry. Get as many experiences as possible and give things a chance!

Professor Mikkelsen is best known for the first demonstration of nondestructive readout of a single electron spin, ultrafast manipulation of a single spin using all-optical techniques, and extreme radiative decay engineering using nanoantennas.

Mikkelsen has received numerous accolades, including the Cottrell Scholar Award, the Maria Goeppert Mayer Award, and a “triple crown” of Young Investigator Awards from the Air Force, Army and Navy. Her work has been published in the journals Science, Nature Photonics, and Nature Physics, to name a few. Professor Mikkelsen enjoys hiking, gardening, playing tennis, and traveling in her free time.

Learn more at mikkelsen.pratt.duke.edu.

Written by Anika Radiya-Dixit

Visualizing the Fourth Dimension

Living in a 3-dimensional world, we can easily visualize objects in 2 and 3 dimensions. But as a mathematician, playing with only 3 dimensions is limiting, Dr. Henry Segerman laments. An assistant professor of mathematics at Oklahoma State University, Segerman spoke to Duke students and faculty on visualizing 4-dimensional space as part of the PLUM lecture series on April 18.

What exactly is the 4th dimension?

Let’s break down spatial dimensions into what we know. We can describe a point in 2-dimensional space with two numbers x and y, visualizing an object in the xy plane, and a point in 3D space with 3 numbers in the xyz coordinate system.

Plotting three dimensions in the xyz coordinate system.

While the green right-angle markers are not actually drawn at 90 degrees, we can still infer the 3-dimensional geometry shown on the 2-dimensional screen.

Likewise, we can describe a point in 4-dimensional space with four numbers – x, y, z, and w – where the purple w-axis is at a right angle to the other three axes; in other words, we can visualize 4 dimensions by squishing them down to three.

Plotting four dimensions in the xyzw coordinate system.
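As a minimal sketch of that squishing idea (my own illustration, not from the talk), a point is just a list of coordinates, and the w-axis can be drawn as some chosen direction inside ordinary 3D space, just as a 2D drawing renders the z-axis as a slanted line. The direction vector below is an arbitrary assumption:

```python
import numpy as np

# A point is just a tuple of coordinates; its dimension is the number of entries.
p2 = np.array([1.0, 2.0])            # (x, y)
p3 = np.array([1.0, 2.0, 3.0])       # (x, y, z)
p4 = np.array([1.0, 2.0, 3.0, 4.0])  # (x, y, z, w)

def squish_4d_to_3d(p, w_direction=np.array([0.5, 0.5, 0.5])):
    """Draw the w-axis as a chosen direction in 3D space, the same trick
    a 2D drawing uses to show a slanted z-axis (w_direction is a made-up
    choice, not a canonical one)."""
    x, y, z, w = p
    return np.array([x, y, z]) + w * w_direction

print(squish_4d_to_3d(p4))  # a 3D stand-in for the 4D point
```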

One commonly explored 4D object we can attempt to visualize is known as a hypercube. A hypercube is analogous to a cube in 3 dimensions, just as a cube is to a square.

How do we make a hypercube?

To create a 1D line, we take a point, make a copy, move the copied point some distance away, and then connect the two points with a line.

Similarly, a square can be formed by making a copy of a line, moving it in a perpendicular direction, and connecting the corresponding endpoints to add the second dimension.

So, to create a hypercube, we move two identical 3D cubes parallel to each other, and then connect corresponding corners with eight lines, as depicted in the image below.

To create an n–dimensional cube, we take 2 copies of the (n−1)–dimensional cube and connect corresponding corners.
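That recipe translates directly into code. Here is a short recursive sketch (my own, not from the talk) that builds the vertices and edges of an n-dimensional cube exactly this way:

```python
def cube(n):
    """Vertices and edges of the n-cube: two copies of the (n-1)-cube,
    plus edges connecting corresponding corners."""
    if n == 0:
        return [()], []                       # a single point, no edges
    verts, edges = cube(n - 1)
    k = len(verts)
    new_verts = [v + (0,) for v in verts] + [v + (1,) for v in verts]
    new_edges = (edges                                     # first copy
                 + [(a + k, b + k) for a, b in edges]      # second copy
                 + [(i, i + k) for i in range(k)])         # connectors
    return new_verts, new_edges

verts, edges = cube(4)
print(len(verts), len(edges))  # 16 vertices and 32 edges for the hypercube
```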

Even with a 3D-printed model, trying to visualize the hypercube can get confusing. 

How can we make a better picture of a hypercube? “You sort of cheat,” Dr. Segerman explained. One way to cheat is by casting shadows.

Parallel projection shadows, depicted in the figure below, are cast by rays of light falling at a right angle to the plane of the table. Some of the edges of the shadow are parallel, just as they are on the physical object. However, some edges that cross in the 2D shadow don’t actually touch in the 3D object, making the projection harder to map back to the 3D object.

Parallel projection of a cube on a transparent sheet of plastic above the table.

One way to cast shadows with no collisions is through stereographic projection as depicted below.

The stereographic projection is a mapping (function) that projects a sphere onto a plane. The projection is defined on the entire sphere, except the point at the top of the sphere.
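In coordinates, for the unit sphere with the projection point at the north pole (0, 0, 1), the map sends a point (x, y, z) on the sphere to (x/(1−z), y/(1−z)) on the plane. As z approaches 1 the image shoots off to infinity, which is exactly why that one point at the top must be excluded.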

For the object below, the curves on the sphere cast shadows, mapping them to a straight line grid on the plane. With stereographic projection, each side of the 3D object maps to a different point on the plane so that we can view all sides of the original object.

Stereographic projection of a grid pattern onto the plane. 3D print the model at Duke’s Co-Lab!

Just as shadows of 3D objects are images formed on a 2D surface, our retina has only a 2D surface area to detect light entering the eye, so we actually see a 2D projection of our 3D world. Our minds are computationally able to reconstruct the 3D world around us by using previous experience and information from the 2D images such as light, shade, and parallax.

Projection of a 3D object on a 2D surface.

Projection of a 4D object into a 3D world.

How can we visualize the 4-dimensional hypercube?

To use stereographic projection, we radially project the edges of a 3D cube (left of the image below) to the surface of a sphere to form a “beach ball cube” (right).

The faces of the cube radially projected onto the sphere.

Placing a point light source at the north pole of the bloated cube, we can obtain the projection onto a 2D plane as shown below.

Stereographic projection of the “beach ball cube” pattern to the plane. View the 3D model here.

Applied to one dimension higher, we can theoretically blow a 4-dimensional shape up into a ball, and then place a light at the top of the object, and project the image down into 3 dimensions.

Left: 3D print of the stereographic projection of a “beach ball hypercube” to 3-dimensional space. Right: computer render of the same, including the 2-dimensional square faces.

Forming n–dimensional cubes from (n−1)–dimensional renderings.

Thus, the constructed 3D model of the “beach ball cube” shadow is the projection of the hypercube into 3-dimensional space. Here the cubical cells of the hypercube appear as distorted cubes rather than flat strips.
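The recipe is compact enough to write down. Below is a sketch of the underlying math only (assuming the cube() helper from the earlier sketch; this is not the code behind the actual printed models): push each hypercube vertex onto the unit 3-sphere, then project away from the pole at w = 1.

```python
import numpy as np

def stereographic_4d_to_3d(p):
    """Project a point on the unit 3-sphere from the pole (0, 0, 0, 1)
    into 3D space: (x, y, z, w) -> (x, y, z) / (1 - w)."""
    x, y, z, w = p
    return np.array([x, y, z]) / (1.0 - w)

verts, edges = cube(4)  # hypercube vertices, from the earlier sketch
for v in verts:
    v4 = 2.0 * np.array(v, dtype=float) - 1.0  # recenter 0/1 coords to -1/+1
    on_sphere = v4 / np.linalg.norm(v4)        # "blow it up into a ball"
    print(stereographic_4d_to_3d(on_sphere))   # its shadow in 3D space
```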

Just as the edges of the top object in the figure can be connected by folding the squares through the 3rd dimension to form a cube, the edges of the bottom object can be connected through the 4th dimension to form a hypercube.

Why are we trying to understand things in 4 dimensions?

As far as we know, the space around us consists of only 3 dimensions. Mathematically, however, there is no reason to limit our understanding of higher-dimensional geometry and space to only 3, since there is nothing special about the number 3 that makes it the only possible number of dimensions space can have.

From a physics perspective, Einstein’s theory of special relativity suggests a connection between space and time, so the space-time continuum consists of 3 spatial dimensions and 1 temporal dimension. For example, consider a blooming flower. The flower’s position is not changing: it is not moving up or sideways. Yet we can observe its transformation, which reflects change along an additional dimension: time. Equating time with the 4th dimension is one example, but the 4th dimension can also be spatial like the first 3. While it is possible to visualize space-time by examining snapshots of the flower with time held constant, it is also useful to understand how space and time interrelate geometrically.

Explore more in the 4th dimension with Hypernom or Dr. Segerman’s book “Visualizing Mathematics with 3D Printing”!

Post by Anika Radiya-Dixit.


Data Geeks Go Head to Head

For North Carolina college students, “big data” is becoming a big deal. The proof: signups for DataFest, a 48-hour number-crunching competition held at Duke last weekend, set a record for the third time in a row this year.

More than 350 data geeks swarmed Bostock Library this weekend for a 48-hour number-crunching competition called DataFest. Photo by Loreanne Oh, Duke University.

Expected turnout was so high that event organizer and Duke statistics professor Mine Cetinkaya-Rundel was even required by state fire code to sign up for “crowd manager” safety training — her certificate of completion is still proudly displayed on her Twitter feed.

Nearly 350 students from 10 schools across North Carolina, California and elsewhere flocked to Duke’s West Campus from Friday, March 31 to Sunday, April 2 to compete in the annual event.

Teams of two to five students worked around the clock over the weekend to make sense of a single real-world data set. “It’s an incredible opportunity to apply the modeling and computing skills we learn in class to actual business problems,” said Duke junior Angie Shen, who participated in DataFest for the second time this year.

The surprise dataset was revealed Friday night. Just taming it into a form that could be analyzed was a challenge. Containing millions of data points from an online booking site, it was too large to open in Excel. “It was bigger than anything I’ve worked with before,” said NC State statistics major Michael Burton.

The mystery data set was revealed Friday night in Gross Hall. Photo by Loreanne Oh.

Because of its size, even simple procedures took a long time to run. “The dataset was so large that we actually spent the first half of the competition fixing our crashed software, and we did not arrive at any concrete finding until late afternoon on Saturday,” said Duke junior Tianlin Duan.

The organizers of DataFest don’t specify research questions in advance. Participants are given free rein to analyze the data however they choose.

“We were overwhelmed with the possibilities. There was so much data and so little time,” said NCSU psychology major Chandani Kumar.

“While for the most part data analysis was decided by our teachers before now, this time we had to make all of the decisions ourselves,” said Kumar’s teammate Aleksey Fayuk, a statistics major at NCSU.

As a result, these budding data scientists don’t just write code. They form theories, find patterns, test hunches. Before the weekend is over they also visualize their findings, make recommendations and communicate them to stakeholders.

This year’s participants came from more than 10 schools, including Duke, UNC, NC State and North Carolina A&T. Students from UC Davis and UC Berkeley also made the trek. Photo by Loreanne Oh.

“The most memorable moment was when we finally got our model to start generating predictions,” said Duke neuroscience and computer science double major Luke Farrell. “It was really exciting to see all of our work come together a few hours before the presentations were due.”

Consultants were available throughout the weekend to help with any questions participants might have. Recruiters from both start-ups and well-established companies were also on site for participants looking to network or share their resumes.

“Even as late as 11 p.m. on Saturday we were still able to find a professor from the Duke statistics department at the Edge to help us,” said Duke junior Yuqi Yun, whose team presented their results in a winning interactive visualization. “The organizers treat the event not merely as a contest but more of a learning experience for everyone.”

Caffeine was critical. “By 3 a.m. on Sunday morning, we ended initial analysis with what we had, hoped for the best, and went for a five-hour sleep in the library,” said NCSU’s Fayuk, whose team DataWolves went on to win best use of outside data.

By Sunday afternoon, every surface of The Edge in Bostock Library was littered with coffee cups, laptops, nacho crumbs, pizza boxes and candy wrappers. White boards were covered in scribbles from late-night brainstorming sessions.

“My team encouraged everyone to contribute ideas. I loved how everyone was treated as a valuable team member,” said Duke computer science and political science major Pim Chuaylua. She decided to sign up when a friend asked if she wanted to join their team. “I was hesitant at first because I’m the only non-stats major in the team, but I encouraged myself to get out of my comfort zone,” Chuaylua said.

“I learned so much from everyone since we all have different expertise and skills that we contributed to the discussion,” said Shen, whose teammates were majors in statistics, computer science and engineering. Students majoring in math, economics and biology were also well represented.

At the end, each team was allowed four minutes and at most three slides to present their findings to a panel of judges. Prizes were awarded in several categories, including “best insight,” “best visualization” and “best use of outside data.”

Duke is among more than 30 schools hosting similar events this year, coordinated by the American Statistical Association (ASA). The winning presentations and mystery data source will be posted on the DataFest website in May after all events are over.

The registration deadline for the next Duke DataFest will be March 2018.

Bleary-eyed contestants pose for a group photo at Duke DataFest 2017. Photo by Loreanne Oh.

Post by Robin Smith

The Fashion Trend Sweeping East Campus

During the months of January and February, there was one essential accessory seen on many first-year Duke students’ wrists: the Jawbone. The students were participating in a study about their lives at Duke, listed on DukeList by Ms. Madeleine George and open only to first-year students. The procedures for the study were simple:

  1. Complete a preliminary test involving Cyberball, a virtual ball-tossing game psychologists have adapted for data collection.
  2. Wear the Jawbone for the duration of the study (10 days).
  3. Answer the brief questions sent to your phone every four hours, five times a day.
  4. Answer all of the questions every day (you can miss one of the question times) and get $32.

About a hundred first-year Duke students participated.

Some of the survey questions asked how long you slept, how stressed you felt, what time you woke up, whether you talked to your parents that day, how many texts you sent, and so on. It truly felt like a study on the daily life of Duke students. This study, however, had a narrower focus.

Ms. Madeleine George

Ms. George is a Ph.D. candidate in developmental psychology in her 5th year at Duke. She is interested in relationships and how daily technology usage and social support such as virtual communication can influence adolescent and young adult well-being.

Her dissertation is about how parents may be able to provide daily support to their children in their first year of college, as face-to-face interactions are replaced by virtual communication in modern society. The work was done in three pieces.

The Jawbone study is the third piece. George is exploring why these effects occur: whether they are uniquely a response to parents, or whether people simply feel better after other personal interactions. Taking the data from the surveys, George has been using within-person models, which compare each person to themselves over time, and basic ANOVA tests, which examine the differences between groups. She’s still working on that analysis.
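For the group-comparison piece, a basic one-way ANOVA takes only a few lines in Python. The numbers below are invented purely to show the shape of the test, not George's data:

```python
from scipy import stats

# Hypothetical daily mood ratings (1-7) under three texting conditions
texted_parent   = [5.1, 4.8, 5.5, 4.9, 5.2]
texted_stranger = [4.7, 4.9, 4.4, 5.0, 4.6]
texted_no_one   = [4.0, 4.3, 3.9, 4.5, 4.1]

f_stat, p_value = stats.f_oneway(texted_parent, texted_stranger, texted_no_one)
print(f"F = {f_stat:.2f}, p = {p_value:.3f}")  # do the group means differ?
```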

In her first study, she found that students who talked to their parents tended to feel worse. But on days when students faced a stressor, they were in a better mood after talking to their parents. In addition, based on the Cyberball experiment, in which students texted a parent, a stranger, or no one, George infers that texting anyone is better than texting no one because it can make people feel supported.

So far, George seems to have found that technology doesn’t necessarily take away relationship value and quality. Online relationships tend to reflect offline relationships. While talking with parents might not always make a student feel better, there can be circumstances where it can be beneficial.

Post by Meg Shieh.

Creating Technology That Understands Human Emotions

“If you – as a human – want to know how somebody feels, for what might you look?” Professor Shaundra Daily asked the audience during an ECE seminar last week.

“Facial expressions.”
“Body Language.”
“Tone of voice.”
“They could tell you!”

Over 50 students and faculty gathered over cookies and fruit for Dr. Daily’s talk on designing applications to support personal growth. Dr. Daily is an associate professor in the Department of Computer and Information Science and Engineering at the University of Florida, interested in affective computing and STEM education.

Dr. Daily explaining the various types of devices used to analyze people’s feelings and emotions. For example, pressure sensors on a computer mouse helped measure the frustration of participants as they filled out an online form.

Affective Computing

The visual and auditory cues proposed above give a human clues about the emotions of another human. Can we use technology to better understand our mental state? Is it possible to develop software applications that can play a role in supporting emotional self-awareness and empathy development?

Until recently, technologists have largely ignored emotion in understanding human learning and communication processes, partly because it has been misunderstood and hard to measure. Asking the questions above, affective computing researchers use pattern analysis, signal processing, and machine learning to extract affective information from the signals human beings express. This is integral to restoring a proper balance between emotion and cognition in designing technologies that address human needs.

Dr. Daily and her group of researchers used skin conductance as a measure of engagement and memory stimulation. Changes in skin conductance, a measure of sweat secretion from the sweat glands, are triggered by arousal. For example, a nervous person produces more sweat than a sleeping or calm individual, resulting in an increase in skin conductance.
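As a rough illustration of how such a signal might be used (a sketch, not Dr. Daily's actual pipeline), one could flag moments when conductance rises well above its baseline:

```python
import numpy as np

def arousal_flags(conductance, threshold=1.2):
    """Flag samples where skin conductance (in microsiemens) rises well
    above the recording's median baseline, a crude proxy for arousal."""
    signal = np.asarray(conductance, dtype=float)
    baseline = np.median(signal)
    return signal > threshold * baseline

# Hypothetical 1 Hz recording: calm, a burst of arousal, then calm again
trace = [2.0] * 30 + [3.5] * 10 + [2.1] * 20
print(np.where(arousal_flags(trace))[0])  # indices 30-39 are flagged
```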

Galvactivators, devices that sense and communicate skin conductivity, are often placed on the palms, which have a high density of the eccrine sweat glands.

Applying this knowledge to the field of education, can we give a teacher physiologically-based information on student engagement during class lectures? Dr. Daily initiated Project EngageMe by placing galvactivators like the one in the picture above on the palms of students in a college classroom. Professors were able to use the results chart to reflect on different parts and types of lectures based on the responses from the class as a whole, as well as analyze specific students to better understand the effects of their teaching methods.

Project EngageMe: Screenshot of digital prototype of the reading from the galvactivator of an individual student.

The project ended up causing quite a bit of controversy, however, due to privacy issues as well as the ambiguity of skin conductance readings. Skin conductance can increase for a variety of reasons: a student watching a funny video on Facebook might display similar levels of conductance as an attentive student. Thus, the results on the graph are not necessarily correlated with events in the classroom.

Educational Research

Daily’s research blends computational learning with social and emotional learning. Her projects encourage students to develop computational thinking through reflecting on the community with digital storytelling in MIT’s Scratch, learning to use 3D printers and laser cutters, and expressing ideas using robotics and sensors attached to their body.

VENVI, Dr. Daily’s latest research project, uses dance to teach basic computational concepts. By letting users program a 3D virtual character to follow dance movements, VENVI reinforces important programming concepts such as step sequences, ‘for’ and ‘while’ loops of repeated moves, and functions with conditions under which the character performs the steps you create.
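VENVI programs are assembled visually, but the concepts map directly onto ordinary code. A hypothetical Python rendering of a VENVI-style routine (the move names and the step() helper are invented for illustration) might look like this:

```python
def step(move):
    """Stand-in for sending one dance move to the virtual character."""
    print(f"The character does: {move}")

def chorus(times):
    """A function wrapping a repeated step sequence (a 'for' loop)."""
    for _ in range(times):
        step("spin")
        step("clap")

# A step sequence, a loop, and a function call: the ideas VENVI teaches
step("bow")
chorus(times=3)
step("jump")
```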


Dr. Daily and her research group observed increased interest among students in pursuing STEM fields, as well as a shift in their opinion of computer science. Drawings completed on the first day of Dr. Daily’s Women in STEM camp depicted computer scientists primarily as frazzled males coding alone in small offices, while those drawn after learning with VENVI included more females and more collaborative activities.

VENVI is programming software that allows users to program a virtual character to perform a sequence of steps in a 3D virtual environment!

In human-to-human interactions, we are able to draw on our experiences to connect and empathize with each other. As robots and virtual machines take on increasing roles in our daily lives, it’s time to start designing emotionally intelligent devices that can learn to empathize with us as well.

Post by Anika Radiya-Dixit
