Following the people and events that make up the research community at Duke

Students exploring the Innovation Co-Lab


Democracy Threatened: Can We Depolarize Digital Spaces?


“Israeli Mass Slaughter.” “Is Joe Biden Fit to be President?” Each time we log on to social media, potent headlines encircle us, as do the unwavering and charged opinions that fill the comment spaces. Each like, repost, or slight interaction we have with social media content is devoured by the “algorithm,” which tailors the space to our demonstrated beliefs.

So, where does this leave us? In our own personal “echo chamber,” claim the directors of Duke’s Political Polarization Lab in a recent panel.

Founded in 2018, the lab’s 40 scholars conduct cutting-edge research on politics and social media. This unique intersection demands a diverse team, one spanning seven disciplines and a range of career stages. The research has proven valuable: beneficiaries include government policy-makers, non-profit organizations, and social media companies.

The lab’s recent research project sought to probe the underlying mechanisms of our digital echo chambers: environments where we connect only with like-minded individuals. Do we have the power to shatter the glass and expand perspectives? Researchers used bots to generate social media content reflecting opposing party views. The content was intermixed with subjects’ typical feeds, and participants were evaluated to see whether their views would gradually moderate.

The results demonstrated that the more attention people paid to the bots, the more entrenched in their viewpoints, and the more polarized, they became.

Clicking the iconic Twitter bird or new “X” logo signifies a step onto the battlefield, where posts are ambushed by a flurry of rebuttals upon release.

Chris Bail, Professor of Political and Data Science, shared that 90% of these tweets are generated by a meager 6% of Twitter’s users. Those 6% identify as either very liberal or very conservative, rarely settling in the middle. Their commitment to propagating their opinions is rewarded by the algorithm, which thrives on engagement. When reactive comments filter in, the post is boosted even more. The result is a distorted perception of social media’s community, when in truth the bulk of users are moderate and watching from the sidelines.

Graphic from the Political Polarization Lab presentation at Duke’s 2024 Research & Innovation Week
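To see how engagement-driven ranking can amplify a vocal fringe, consider a toy simulation (my own illustrative sketch, not the lab’s model): 6% of users are prolific partisans, the rest are quiet moderates, and the feed ranks posts purely by engagement, which here is assumed to scale with ideological extremity.

```python
import random

random.seed(42)

# Toy population: 6% prolific partisans, 94% quiet moderates.
users = [{"ideology": random.choice([-1.0, 1.0]), "posts": 15}
         for _ in range(6)]
users += [{"ideology": random.gauss(0.0, 0.3), "posts": 1}
          for _ in range(94)]

# Each user writes posts carrying their ideology.
posts = [{"ideology": u["ideology"]}
         for u in users for _ in range(u["posts"])]

# Engagement-based ranking: assume reactions scale with extremity,
# so charged posts collect more engagement and get boosted.
for p in posts:
    p["engagement"] = abs(p["ideology"]) * random.uniform(0.5, 1.5)

feed = sorted(posts, key=lambda p: p["engagement"], reverse=True)[:10]
extreme = sum(abs(p["ideology"]) > 0.8 for p in feed)
print(f"Extreme posts in the top-10 feed: {extreme} of {len(feed)}")
```

Under these assumptions the top of the feed is dominated by the partisan 6%, even though moderates form the overwhelming majority: exactly the distorted picture Bail describes.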

Can this be changed? Bail described the lab’s exploration of new incentives for social media users: rewarding content that appeals to both sides while fighting off the “trolls” who wreak havoc on public forums. Enter a new strategy: using bots to retweet top content creators who receive engagement from both parties.

X’s (formerly Twitter’s) Community Notes feature allows users to annotate tweets that they find misleading. After researchers found that notes tended to cluster on the most polarized tweets, the strategy grew to include boosting notes that annotate bipartisan creators.

 The results were hard to ignore: misinformation decreased by 25-35%, said Bail, saving companies millions of dollars.

Social media is democracy’s public square

Christopher Bail

Instead of simply bashing younger generations’ fixation on social media, Bail urged the audience to consider the bigger picture.

“What do we want to get out of social media? What’s the point, and how can it be made more productive?”

On a mission to answer these questions, the Polarization Lab has set out to develop evidence-based social media by building custom platforms. To test the platforms, researchers prompted A.I. to create “digital twins” of real people to simulate users.

Co-Director Alex Volfovsky described the thought process that led to this idea: Running experiments on existing social media often requires dumping data into an A.I. system and interpreting results. But by building an engaging social network, researchers were able to manipulate conditions and observe causal effects.

How can the presence of a “like button” or “repost” feature affect our activity on platforms? On LinkedIn, an experiment that merely tweaked recommended connections showed that people gain the most value from semi-distant ones.
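That is the power of running one’s own platform: features can be assigned at random, so differences between groups can be read causally. Below is a minimal sketch (my illustration, not the lab’s code) of such a randomized feature experiment with simulated users.

```python
import random
import statistics

random.seed(0)

def simulate_user(has_like_button: bool) -> float:
    """A crude 'digital twin': posting activity responds to social feedback."""
    activity = 1.0
    for _day in range(30):                      # 30 simulated days
        feedback = random.random() if has_like_button else 0.0
        activity += 0.1 * feedback - 0.02       # likes reinforce; novelty fades
    return activity

# Random assignment is what licenses a causal reading of the difference.
treated = [simulate_user(True) for _ in range(500)]
control = [simulate_user(False) for _ in range(500)]
effect = statistics.mean(treated) - statistics.mean(control)
print(f"Estimated effect of the like button on activity: {effect:+.2f}")
```

Because users are randomly assigned, the mean difference between the two arms estimates the causal effect of the feature rather than a mere correlation.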

In this exciting new field, unanswered questions ring loud. It can be frightening to place our trust in ambiguous algorithms for content moderation, especially when social media usage is at an all-time high.

After all, the media I consume has clearly trickled into my day-to-day decisions. I eat at restaurants I see on my Instagram feed, I purchase products that I see influencers promote, and I tend to read headlines that are spoon-fed to me. As a frequent social media user, I face the troubling reality of being susceptible to manipulation.

Amidst the fear, panelists stress that their research will help create a safer and more informed culture surrounding social media in pressing efforts to preserve democracy.

Post by Ana Lucia Ochoa, Class of 2026

Your AI Survival Guide: Everything You Need to Know, According to an Expert


What comes to your mind when you hear the term ‘artificial intelligence’? Scary, sinister robots? Free help on assignments? Computers taking over the world?

Pictured: Media Architect Stephen Toback

Well, on January 24, Duke Media Architect Stephen Toback hosted a lively conversation on all things AI. An expert in the field of technology and media production, Toback discussed some of the practical applications of artificial intelligence in academic and professional settings.

According to Toback, enabling machines to think like humans is the essence of artificial intelligence. He views AI as a humanities discipline — an attempt to understand human intelligence. “AI is really a digital brain. You can’t digitize it unless you know how it actually works,” he began. Although AI has been around since 1956, the past year has seen an explosion in usage. ChatGPT, for example, became the fastest-growing user application in the world in less than 6 months. “One thing I always talk about is that AI is not gonna take your job, but someone using AI will.”

During his presentation, he referenced five dominant AI platforms on the market. The first is ChatGPT, created by OpenAI. Released to the public in November 2022, it has over 100 million users every single month. The second is Bard, created by Google in March 2023. Although newer to the market, the chatbot has gained significant traction online.

Pictured: Toback explaining the recent release of Meta’s AI “Characters.”

Next, we have Llama, owned by tech giant Meta. Last September, Meta launched AI ‘characters’ based on famous celebs including Paris Hilton and Snoop Dogg, which users could chat with online. “They’ve already started commercializing AI,” Toback explained.

Then there’s Claude, by Anthropic. Claude is an AI assistant for a variety of digital tasks. “Writers tend to use Claude,” Toback said. “Its language models are more attuned to text.”

And finally on Toback’s list is Microsoft Copilot, which is changing the AI game. “It’s integrating ChatGPT into the apps that we use every day. And that’s the next step in this evolution of AI tools.” Described on Microsoft’s website as ‘AI for everything you do,’ Copilot embeds artificial intelligence models into the entire Microsoft 365 suite (which includes apps such as Word, Excel, PowerPoint, and Outlook). “I don’t have to copy and paste into ChatGPT and come back. It’s built right into the app.” It’s also the first AI tool on the market that provides integration into a suite of applications, instead of just one.

Pictured: A presentation created by Toback using Copilot in PowerPoint

He outlined several features of the software: summarizing and responding to email threads in Outlook, creating intricate presentations from a simple text document in PowerPoint, and generating interview questions and resume comparisons in Word. “There’s a great example of using AI for something that I have to do… but now I can do it a little bit better and a little bit faster.”

Throughout his presentation, Toback also touched on the practical use of ChatGPT. “AI is not perfect,” he began. “If you just ask it a question, you’re like ‘Oh that sounds reasonable’, and it might not be right.” He cited the platform’s rapidly changing nature, inherent biases, and incorrect data and information as challenges for practical use.

“Rather than saying I don’t know, it acts a lot like a middle schooler and says it knows everything and gives you a very convincing answer.”

Stephen Toback

These challenges have been felt nationwide. In early 2023, for example, lawyers for a federal court case used ChatGPT to find previous claims in an attempt to show precedent. However, after presenting the claims to a judge, the court found that the claims didn’t actually exist. “It cited all of these fake cases that look like real citations, and then the judge considered sanctions,” said Toback. ‘AI hallucinations’ such as this one have caused national controversy over the use and accuracy of AI-generated content. “You need to be able to double-check and triple-check anything that you’re using through ChatGPT,” Toback said.

So how can we use ChatGPT more accurately? According to Toback, there are a variety of approaches, but the main one is called prompt engineering: the process of structuring text so that it can be understood by an AI model. “Prompts are really the key to all of this,” he revealed. “The better formed your question is, the more data you’re giving ChatGPT, the better the response you’re going to get.” Below is Toback’s 6-step template to make sure you are engineering prompts correctly for ChatGPT.

Pictured: Toback’s template for ChatGPT prompt engineering
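Toback’s exact six steps are pictured above. As a rough illustration of the same principle (my sketch, not his template), a well-formed prompt supplies a role, context, task, format, constraints, and an example rather than a bare question. The snippet assumes the OpenAI Python client (v1+), an API key in the environment, and a placeholder model name.

```python
# A structured prompt: role, context, task, format, constraints, example.
from openai import OpenAI

prompt = """Role: You are an experienced university media producer.
Context: I am preparing a 10-minute talk introducing students to AI tools.
Task: Draft a three-point outline of practical ChatGPT tips.
Format: A numbered list, one sentence per point.
Constraints: Plain language, no jargon, under 100 words total.
Example tip: 'Ask the model to state its assumptions.'"""

client = OpenAI()
response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder model name
    messages=[{"role": "user", "content": prompt}],
)
print(response.choices[0].message.content)
```

The point is less the specific wording than the structure: the more context the prompt carries, the less room the model has to guess.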

So there you have it — your 2024 AI survival guide. It’s clear from the past few years that artificial intelligence is here to stay, and with that comes a need for improved understanding and use. As AI expert Oren Etzioni proclaims, “AI is a tool. The choice about how it gets deployed is ours.”

Have more questions about AI tools such as ChatGPT? Reach out to the Duke Office of Information Technology.

Written by Skylar Hughes, Class of 2025

Computer Science Students Say: Let’s Talk About Microaggressions


Soon after taking a seat in her high-level computer science class, Duke student Kiara de Lande surveyed the room. The realization that she was one of only three women of color washed over her. It left a pang of discomfort and confusion. In her gut, she knew that she was capable of success. But then, why were there so few students who looked like her? Doubt ensued: perhaps this was not a place for her.

de Lande was one of five members of the student advisory board for AiiCE (Alliance for Identity-Inclusive Computing Education) who reflected on their experiences as minority students in computer science in a virtual panel held Jan. 23.

As de Lande shared her story, undergraduate Kianna Bolante nodded in agreement. She too, felt that she had to “second-guess her sense of belonging and how she was perceived.” 

Berkeley ’24 graduate Bridget Agyare added that group work is crucial to success in CS classes, stressing the need for inclusion. The panel also raised the harm of peer microaggressions, emphasizing the danger of stifling minority voices: “When in groups of predominantly males,” de Lande said, “my voice is on the back-burner.”

To not feel heard is to feel isolated, compounding the weight of under-confidence. Small comments here and there. Anxiety trickling in when the professor announces a group project. Peers delegating to you the “front-end” or “design” aspects, leaving the more intricate back-end components for themselves. It’s subtle. It feels like nothing glaring enough to bring attention to. So you shove the feelings to the side.

“No one reaches this level of education by mistake,” said Duke CS graduate student Jabari Kwesi. But over time, these subtle slights chip away at the assurance in your capabilities. 

Kwesi remembers the first time he spoke to a Black female professional software engineer (SWE). “Finally,” he said, “someone who understands what you’re talking about for your experience in and outside academia.”

He made this connection in a Duke course structured to facilitate conversations between students and professionals in the technology industry. In similar efforts, the Duke organization DTech is devoted to supporting non-male students in tech, offering mentors, peer advisors, social gatherings, and recruiter connections. It also provides access to a database of internships, guiding members through competitive job-hunting cycles.

As university support continues to grow, students have not shied away from taking action. Bolante, for example, created her own social computing curriculum, focused on connecting students’ identities to the course material. The initiative reflects her personal realization of the value in her voice.

“My personal experiences, opinions, ideas are things no one can take away from me. My voice is my strongest asset and power,” she said. 

As I listened to the declaration, I felt the resilience behind her words. It was evident that the AiiCE panelists are united in their passion for an inclusive and action-driven community. 

Kwesi highlighted the concept of “intentionality.” Professors have to be conscious of their commitment to improvement, which includes making themselves available to students and accepting feedback. Suggestions from the panel included “spotlights” on impactful minorities in CS and mandating a societal-impact section in every technical class. Technology does not exist in a vacuum: deployment affects real people. Algorithms power tools like resume scanners and medical evaluations, and they are susceptible to biases against certain groups. These are not just lines of code; people’s livelihoods are at stake. With the surge of developments in artificial intelligence, technology is advancing more rapidly than ever, and assembling interdisciplinary teams can help keep bias in check by ensuring diverse perspectives.

Above all, we must be willing to continue this conversation. There is no singular curriculum or resource that will permanently correct inequities. Johns Hopkins ’25 graduate Rosa Gao reminded the audience that inclusivity efforts are “a practice,” and “a way of moving through space” for professors and peers alike.

It can be as simple as a quick self-evaluation. As a peer: “Am I being dismissive?” “Am I holding everyone’s opinions at an equal weight?” As a professor: “How can I create assignments that will leverage the student voice?”

Each individual experience must be valued, and even successful initiatives should continue to be reinvented. As minorities, to create positive change, we must take up space. As a greater community, we must continue to care, to discuss, and to evolve. 

By Ana Lucia Ochoa, Class of 2026

How Do Animals – Alone or in Groups – Get Where They’re Going?

Note: Each year, we partner with Dr. Amy Sheck’s students at the North Carolina School of Science and Math to profile some unsung heroes of the Duke research community. This is the fourth of eight posts.

In the intricate world of biology, where the mysteries of animal behavior unfold, Dr. Jesse Granger emerges as a passionate and curious scientist with a Ph.D. in biology and a penchant for unraveling the secrets of how animals navigate their surroundings.

Her journey began in high school when she posed a question to her biology teacher about the effect of eye color on night vision. Unable to find an answer, they embarked together on a series of experiments, igniting a passion that would shape Granger’s future in science.

Jesse Granger in her lab at Duke

Granger’s educational journey was marked by an honors thesis at the College of William & Mary that delved into the potential of diatoms, single-cell algae known for their efficiency in capturing light, to enhance solar panel efficiency. This early exploration of light structures paved the way for a deeper curiosity about electricity and magnetism, leading to her current research on how animals perceive and use the electromagnetic spectrum.

Currently, Granger is involved in projects that explore the dynamics of animal group navigation. She is investigating how animals travel in groups to find food, focusing on collective movement and decision-making.

Among her many research endeavors, one project holds a special place in Granger’s heart. The study involved creating a computational model to explore the dynamics of group travel among animals. She found that agents (computational entities that mimic the behavior of an individual animal) are far better at getting where they are going as part of a group than agents traveling alone.

Granger’s daily routine in the Sönke Johnsen Lab revolves around computational work. While it may not seem like a riveting adventure to an outsider, to her, the glow of computer screens harbors the key to unlocking the secrets of animal behavior. Coding becomes her toolkit, enabling her to analyze data, develop models, and embark on simulations that mimic the complexities of the natural world.

Granger’s expertise in coding extends to using R for data wrangling and NetLogo, an agent-based modeling program, for simulations. She describes the simulation process as akin to creating a miniature world where coded animals follow specific rules, giving rise to emergent properties and valuable insights into their behavior. This skill set seamlessly intertwined with her favorite project, where the exploration of group dynamics and navigation unfolded within the intricate landscapes of her simulated miniature world.
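Her models run in NetLogo, but the core result is easy to reproduce in a few lines. Below is a minimal Python analogue (my sketch, not Granger’s actual model) of the “many wrongs” effect: every agent has a noisy compass, and a group that averages its members’ headings tracks the true direction far better than a loner.

```python
import math
import random

random.seed(1)
STEPS, NOISE = 200, 0.8          # steps per journey; compass noise (radians)

def eastward_progress(group_size: int) -> float:
    """Distance covered toward a goal due east by a group of agents.

    Each step, every agent draws a noisy heading around the true direction;
    the group moves along the average of those headings ('many wrongs').
    """
    x = 0.0
    for _ in range(STEPS):
        headings = [random.gauss(0.0, NOISE) for _ in range(group_size)]
        hx = sum(math.cos(h) for h in headings) / group_size
        hy = sum(math.sin(h) for h in headings) / group_size
        norm = math.hypot(hx, hy) or 1.0
        x += hx / norm           # one unit step along the consensus heading
    return x

print(f"solo agent:  {eastward_progress(1):6.1f}")
print(f"group of 10: {eastward_progress(10):6.1f}")
```

Averaging ten noisy headings cancels most of the error, so the group covers substantially more distance toward the goal than the solo agent over the same number of steps.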

In the tapestry of scientific exploration, Jesse Granger emerges as a weaver of knowledge, blending biology, physics, and computation to unravel the mysteries of animal navigation. Her journey, marked by curiosity and innovation, not only enriches our understanding of the natural world but also inspires the next generation of  scientists to embark on their unique scientific odysseys.      

Guest Post by Mansi Malhotra, North Carolina School of Science and Math, Class of 2025.

Putting Stronger Guardrails Around AI

AI regulation is ramping up worldwide. Duke AI law and policy expert Lee Tiedrich discusses where we’ve been and where we’re going.

DURHAM, N.C. — It’s been a busy season for AI policy.

The rise of ChatGPT unleashed a frenzy of headlines around the promise and perils of artificial intelligence, and raised concerns about how AI could impact society without more rules in place.

Consequently, government intervention entered a new phase in recent weeks as well. On Oct. 30, the White House issued a sweeping executive order regulating artificial intelligence.

The order aims to establish new standards for AI safety and security, protect privacy and equity, stand up for workers and consumers, and promote innovation and competition. It’s the U.S. government’s strongest move yet to contain the risks of AI while maximizing the benefits.

“It’s a very bold, ambitious executive order,” said Duke executive-in-residence Lee Tiedrich, J.D., who is an expert in AI law and policy.

Tiedrich has been meeting with students to unpack these and other developments.

“The technology has advanced so much faster than the law,” Tiedrich told a packed room in Gross Hall at a Nov. 15 event hosted by Duke Science & Society.

“I don’t think it’s quite caught up, but in the last few weeks we’ve taken some major leaps and bounds forward.”

Countries around the world have been racing to establish their own guidelines, she explained.

The same day the executive order was issued, leaders from the Group of Seven (G7) — which includes Canada, France, Germany, Italy, Japan, the United Kingdom and the United States — announced that they had reached agreement on a set of guiding principles on AI and a voluntary code of conduct for companies.

Both actions came just days before the first-ever global summit on the risks associated with AI, held at Bletchley Park in the U.K., during which 28 countries including the U.S. and China pledged to cooperate on AI safety.

“It wasn’t a coincidence that all this happened at the same time,” Tiedrich said. “I’ve been practicing law in this area for over 30 years, and I have never seen things come out so fast and furiously.”

The stakes for people’s lives are high. AI algorithms do more than just determine what ads and movie recommendations we see. They help diagnose cancer, approve home loans, and recommend jail sentences. They filter job candidates and help determine who gets organ transplants.

Which is partly why we’re now seeing a shift in the U.S. from what has been a more hands-off approach to “Big Tech,” Tiedrich said.

Tiedrich presented Nov. 15 at an event hosted by Duke Science & Society.

In the 1990s when the internet went public, and again when social media started in the early 2000s, “many governments — the U.S. included — took a light touch to regulation,” Tiedrich said.

But this moment is different, she added.

“Now, governments around the world are looking at the potential risks with AI and saying, ‘We don’t want to do that again. We are going to have a seat at the table in developing the standards.’”

Power of the Purse

Biden’s AI executive order differs from laws enacted by Congress, Tiedrich acknowledged in a Nov. 3 meeting with students in Pratt’s Master of Engineering in AI program.

Congress continues to consider various AI legislative proposals, such as the recently introduced bipartisan Artificial Intelligence Research, Innovation and Accountability Act, “which creates a little more hope for Congress,” Tiedrich said.

What gives the administration’s executive order more force is that “the government is one of the big purchasers of technology,” Tiedrich said.

“They exercise the power of the purse, because any company that is contracting with the government is going to have to comply with those standards.”

“It will have a trickle-down effect throughout the supply chain,” Tiedrich said.

The other thing to keep in mind is “technology doesn’t stop at borders,” she added.

“Most tech companies aren’t limiting their market to one or two particular jurisdictions.”

“So even if the U.S. were to have a complete change of heart in 2024” and the next administration were to reverse the order, “a lot of this is getting traction internationally,” she said.

“If you’re a U.S. company, but you are providing services to people who live in Europe, you’re still subject to those laws and regulations.”

From Principles to Practice

Tiedrich said a lot of what’s happening today in terms of AI regulation can be traced back to a set of guidelines issued in 2019 by the Organization for Economic Cooperation and Development, where she serves as an AI expert.

These include commitments to transparency, inclusive growth, fairness, explainability and accountability.

For example, “we don’t want AI discriminating against people,” Tiedrich said. “And if somebody’s dealing with a bot, they ought to know that. Or if AI is involved in making a decision that adversely affects somebody, say if I’m denied a loan, I need to understand why and have an opportunity to appeal.”

“The OECD AI principles really are the North Star for many countries in terms of how they develop law,” Tiedrich said.

“The next step is figuring out how to get from principles to practice.”

“The executive order was a big step forward in terms of U.S. policy,” Tiedrich said. “But it’s really just the beginning. There’s a lot of work to be done.”

By Robin Smith

Leveraging Google’s Technology to Improve Mental Health

Last Tuesday, October 10, was World Mental Health Day. To mark the occasion, the Duke Institute for Brain Sciences, in partnership with other student wellness organizations, welcomed Megan Jones Bell, PsyD, the clinical director of consumer and mental health at Google, to discuss mental health. Bell was formerly chief strategy and science officer at Headspace and helped guide Headspace through its transformation from a meditation app into a comprehensive digital mental health platform, Headspace Health. Bell also founded one of the first digital mental health start-ups, Lantern, where she pioneered blended mental health interventions leveraging software and coaching. In her conversation with Dr. Murali Doraiswamy, Duke professor of psychiatry and behavioral sciences, and Thomas Szigethy, Associate Dean of Students and Director of Duke’s Student Wellness Center, Bell described the actions Google is taking to improve the health of the billions of people who use its platforms.

She began by defining mental health, paraphrasing the World Health Organization’s definition: “Mental health, to me, is a state of wellbeing in which the individual realizes his or her or their own abilities, can cope with the normal stresses of life, work productively and fruitfully, and can contribute to their own community.” Rather than taking a medicalized approach, she argued, mental health should be recognized as something that we all have. Critically, she said that mental health is not just mental disorders; the first step to improving mental health is recognition and upstream intervention.

Underlining the critical role Google plays in global mental health, Bell cited multiple statistics: three out of four people turn to the internet first for health information; Google Search handles 100 million health-related searches every day; and YouTube boasts 25 billion views of mental health content. Given those billions of users, Bell stressed Google’s huge responsibility to provide people with accurate, authoritative, and empathetic information. The company’s mental health goals are specific to different communities, and Bell described them for three principal audiences: consumers, caregivers, and communities.

Google’s consumer-facing focus is providing access to high-quality information and tools for users to manage their health. With regard to caregivers, Google strives to build strong partnerships that create solutions to transform care delivery. In terms of community health, the company works with public health organizations worldwide, focusing on social determinants of health and aiming to open up data and insights to the public health community.

Szigethy followed by launching a discussion of Google’s efforts to protect adolescents. He referenced the growing and urgent mental health crisis among adolescents: what is Google doing to protect them?

Bell mentioned multiple projects across different platforms meant to provide youth with safer online experiences. Key to these projects is the desire to promote their mental health by default. On Google Search, this takes the form of the SafeSearch feature, which is on by default and filters out explicit or inappropriate results. On YouTube, default policies include various prevention measures, one of which automatically removes content that is considered “imitable.” Bell used the example of disordered-eating content to explain the policy: in accordance with its prevention approach, YouTube removes dangerous eating-related content containing anything that a viewer could copy. YouTube also has age-restricted videos, unavailable to users under 18, as well as certain product features that can be blocked. Google also created an eating disorder hotline with experts online 24/7.

Jokingly, Bell assured the Zoom audience that Google wouldn’t be creating a therapist chatbot anytime soon; she asserted that digital tools are not “either or.” When the conversation veered towards generative AI, Bell admitted that AI has enormous potential for helping billions of people, but maintained that it needs to be developed in a responsible way. At Google, the greatest service AI provides is scalability. Google.org, Bell said, recently worked with The Trevor Project and ReflexAI on a crisis hotline for veterans called HomeTeam. Google used AI that simulated crises to help scale up training for volunteers. Bell said, “The human is still on the other side of the phone, and AI helped achieve that.”

Next, Bell tackled the question of health information and misinformation, what she called a significant area of focus for Google. Before diving in, however, Bell clarified, “It’s not up to Google to decide what is accurate and what is not accurate.” Rather, she said that anchoring to trusted organizations is critical to embedding mental health into the culture of a community. When it comes to health information and misinformation, Bell encapsulated Google’s philosophy in this phrase: “define, operationalize, and elevate high quality information.” To combat misinformation on its platforms, Google asked the National Academy of Medicine to help define what accurate medical sources are. The Academy then put together a framework of authoritative health information, which the WHO then adapted internationally. YouTube then launched its “health sources” feature, which surfaces videos from the framework first: in effect, the highest-quality information is raised to the top of the page when you make a search. Videos in this framework also carry a visible badge on the watch panel with a phrase like “from a healthcare professional” or “from an organization with a healthcare professional.” Bell suggested that this also helps people remember where their information is coming from, acting as a guardrail in itself. Additionally, Google continues to fight medical misinformation with an updated medical misinformation policy, which enables it to remove content that contradicts medical authorities or medical consensus.

Near the end of the conversation, Szigethy asked Bell if she would recommend any behaviors for embracing wellbeing. A prevention researcher by background, Bell stressed the importance of early and regular action. Our biggest leverage point for changing mental health, she asserted, is upstream intervention and embracing routines that foster our mental health. She breaks these down into five dimensions of wellbeing: mindfulness, sleep, movement and exercise, nutrition, and social connection. Her advice is to ask the question: what daily/weekly routines do I have that foster each of these? Make a list, she suggests, and try to incorporate a daily routine that addresses each of the five dimensions. 

Before concluding, Bell advocated that the best thing that we can do is to approach mental health issues with humility and listen to a community first. She shared that, at Headspace, her team worked with the mayor’s office and community organizations in Hartford, Connecticut to co-define their mental health goals and map the strengths and assets of the community. Then, they could start to think about how to contextualize Headspace in that community. Bell graciously entered the Duke community with the same humility, and her conversation was a wonderful commemoration of World Mental Health Day. 

By Isa Helton, Class of 2026

My Face Belongs to The Hive (and Yours Does Too)

Imagine having an app that could identify almost anyone using only a photograph of their face. For example, you could take a photograph of a stranger in a dimly lit restaurant and know within seconds who they are.

This technology exists, and Kashmir Hill has reported on several companies that offer these services.

An investigative journalist with the New York Times, Hill visited Duke Law Sept. 27 to talk about her new book, Your Face Belongs To Us.

The book is about a company that developed powerful facial recognition technology based on images harnessed from our social media profiles. To learn more about Clearview AI, the unlikely duo who were behind it, and how they sold it to law enforcement, I highly recommend reading this book.

Hill demonstrated for me a facial recognition app that provides subscribers with up to 25 face searches a day. She offered to let me see how well it worked.

Screenshot of the search app with Hill’s quick photo of me.

She snapped a quick photo of my face in dim lighting. Within seconds (3.07 to be exact), several photos of my face appeared on her phone.

The first result (top left) is unsurprising. It’s the headshot I use for the articles I write on the Duke Research Blog. The second result (top right) is a photo of me at my alma mater in 2017, where I presented at a research conference. The school published an article about the event, and I remember the photographer coming around to take photos. I was able to easily figure out exactly where on the internet both results had been pulled from.

The third result (second row, left) unsettled me. I had never seen this photo before.

A photo of me sitting between friends. Their faces have been blurred out.

After a quick search of the watermark on the photo (which has been blurred for safety), I discovered that the photograph was from an event I attended several years ago. Apparently, the venue had used the image for marketing on their website. Using these facial recognition results, I was able to easily find out the exact location of the event, its date, and who I had gone with.

What is Facial Recognition Technology?

Researchers have been trying for decades to produce a technology that could accurately identify human faces. The advent of neural-network artificial intelligence has made it possible for computer algorithms to do this with increasing accuracy and speed. However, this technology requires large sets of data (in this case, hundreds of thousands of examples of human faces) to work.
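Concretely, most modern systems reduce each face to a numeric “embedding” vector and identify people by nearest-neighbor search. Here is a minimal sketch of that matching step (my illustration; random vectors stand in for a real network’s embeddings):

```python
import numpy as np

rng = np.random.default_rng(0)

def best_match(query: np.ndarray, gallery: np.ndarray) -> tuple[int, float]:
    """Index and cosine similarity of the gallery face closest to the query."""
    g = gallery / np.linalg.norm(gallery, axis=1, keepdims=True)
    q = query / np.linalg.norm(query)
    sims = g @ q                    # cosine similarity against every face
    best = int(np.argmax(sims))
    return best, float(sims[best])

# Stand-ins: in a real system, each row would be a neural network's
# embedding of one scraped photo, and `query` the embedding of a new photo.
gallery = rng.standard_normal((10_000, 128))
query = gallery[42] + 0.1 * rng.standard_normal(128)  # noisy shot of face 42
print(best_match(query, gallery))                     # -> (42, ~0.99)
```

A single matrix product compares one photo against the whole gallery, which is why a search over millions of scraped faces can return results in seconds.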

Just think about how many photos of you exist online. There are the photos that you have taken and shared, or that your friends and family have taken of you. Then there are photos that you’re unaware you’re in; perhaps you walked by as someone snapped a picture and accidentally ended up in the frame. I don’t consider myself a heavy user of social media, but I am sure there are thousands of pictures of my face out there. I’ve uploaded and tagged hundreds of photos of myself across platforms like Facebook, Instagram, LinkedIn, and even Venmo.

The developers behind Clearview AI recognized the potential in all these publicly accessible photographs and compiled them to create a massive training dataset for their facial recognition AI. They did this by scraping the social media profiles of hundreds of thousands of people. In fact, they got something like 2.1 million images of faces from Venmo and Tinder (a dating app) alone.

Why does this matter?

Clearly, there are major privacy concerns for this kind of technology. Clearview AI was marketed as being only available to law enforcement. In her book, Hill gives several examples of why this is problematic. People have been wrongfully accused, arrested, detained, and even jailed for the crime of looking (to this technology) like someone else.

We also know that AI has problems with bias. Facial recognition technology was first developed by mostly white, mostly male researchers, using photographs of mostly white, mostly male faces. The result of this has had a lasting effect. Marginalized communities targeted by policing are at increased risk, leading many to call for limits on the use of facial recognition by police.

It’s not just government agencies who have access to facial recognition. Other companies have developed off-the-shelf products that anyone can buy, like the app Hill demonstrated to me. This technology is now available to anyone willing to pay for a subscription. My own facial recognition results show how easy it is to find out a lot about a person (like their location, acquaintances, and more) using these apps. It’s easy to imagine how this could be dangerous.

There remain reasons to be optimistic about the future of privacy, however. Hill closed her talk by reminding everyone that with every technological breakthrough, there is opportunity for ethical advancement reflected by public policy. With facial recognition, policy makers have previously relied on private companies to make socially responsible decisions. As we face the results of a few radical actors using the technology maliciously, we can (and should) respond by developing legal restraints that safeguard our privacy.

On this front, Europe is leading by example. It’s likely that the actions of Clearview AI are already illegal in Europe, and privacy rights are expanding with the European Commission’s (EC) proposed Artificial Intelligence (AI) regulation. These rules include requirements for technology developers to certify the quality of their processes, rather than algorithm performance, which would mitigate some of these harms. The regulation aims to take a technology-neutral approach and stratifies facial recognition technology by its potential risk to people’s safety, livelihoods, and rights.

Post by Victoria Wilson, MA Bioethics and Science Policy, 2023

New Blogger Isa Helton: Asking AND Listening

When I studied abroad in Paris, France, this summer, I became very familiar with the American tendencies that French people collectively despise. As I sat in a windowless back room of the school I would be studying at in the sixth arrondissement of Paris, the program director carefully warned us of the biggest faux-pas that would make our host families regret welcoming a foreign student into their home and the habitudes that would provoke irritated second glances on the street.

La Seine at dusk with Tour Eiffel.

One: American people are loud. Don’t be loud. We are loud when we talk on the phone, loud putting on our shoes, loud stomping around the Haussmanian apartment built in the 1800s with creaky parquet flooring.

Two: Americans smile too much. Don’t smile at people on the street. No need for a big, toothy grin at every passerby and at every unsuspecting dog-walker savoring the few tourist-free morning hours.

Three: Why do Americans love to ask questions without any intention of sticking around to hear the response? When French people ask you how you’re doing – Comment ça va? – how you slept – Vous avez bien dormi? – how the meal was – Ça vous a plu? – they stand there and wait for an answer after asking the question. So when Americans exchange a jolly “How are you today!” in passing, it drives French people crazy. Why ask a question if you don’t even want an answer?

This welcome post feels a little bit like that American “How are you today!” Not to say that you, reader, are not a patient, intrigued Frenchman or woman, who is genuinely interested in a response –  I am well-assured that the readers of Duke’s Research Blog are just the opposite. That is to say that the question of “who are you?” is quite complicated to answer in a single, coherent blog post. I will proudly admit that I am still in the process of figuring out who I am. And isn’t that what I’m supposed to be doing in college, anyway?

I can satisfyingly answer a few questions about me, though, starting with where I am from. I’m lucky enough to call Trabuco Canyon, California, my home: a medium-sized city about fifteen minutes from the beach, and smack-dab in the middle between San Diego and Los Angeles. Demographically, it’s fairly uninteresting: 68% White, 19% Hispanic, and 8% Asian. I’ve never moved, so I suppose this would imply that most of my life has been fairly unexposed to cultural diversity. However, I think one of the things that has shaped me the most has been experiencing different cultures in my travels growing up.

My dad is a classically-trained archaeologist turned environmental consultant, and I grew up observing his constant anthropological analysis of people and situations in the countries we traveled to. I learned from him the richness of a compassionate, empathetic, multi-faceted life that comes from traveling, talking to people, and being curious. I am impassioned by discovering new cultures and uncovering new schools of thought through breaking down linguistic barriers, which is one of the reasons I am planning on majoring in French Studies.

Perhaps from my Korean mother I learned perseverance, mental strength, and toughness. I also gained practicality, which explains my second major, Computer Science. Do I go crazy over coding a program that creates a simulation of the universe (my latest assignment in one of my CS classes)? Not particularly. But, you have to admit, the degree is a pretty good security blanket.

Why blog? Writing is my therapy and has always been one of my passions. Paired with an unquenchable curiosity and a thirst to converse with people different from me, writing for the Duke Research Blog gives me what my boss Karl Bates – Executive Director, Research Communications – calls “a license to hunt.”

Exclusive, top-researcher-only, super-secret conference on campus about embryonics? I’ll be making a bee-line to the speakers with my notepad in hand, thank you. Completely-sold-out talk by the hottest genome researcher on the academic grapevine? You can catch me in the front row. In short, blogging on Duke Research combines multiple passions of mine and gives me the chance to flex my writing muscles.

Thus, I am also cognizant of the privilege and the responsibility that this license to hunt endows me with. It must be said that elite universities are famously, and in reality, extremely gated off from the rest of society. While access to Duke’s physical space may still be exclusive, the knowledge within is anyone’s for the taking.

In this blog, I hope to dismantle the barrier between you and what can sometimes seem like intimidating, high-level research that is being undertaken on Duke’s campus. I hope to make my blogs a mini bi-monthly revelation that can enrich your intellect and widen your perspective. And don’t worry – when it comes to posing questions to researchers, I plan to stick around to hear the response.


Post by Isabella Helton, Class of 2026

Shifting from Social Comparison to “Social Savoring” Seems to Help

Image by geralt, via Pixabay.

The literature is clear: there is a dark side to engaging with social media. Links to depressive symptoms, a sense of social isolation, and dampened self-esteem have recently surfaced in the global discourse as alarming potential harms.

Underlying the pitfalls of social media usage is social comparison—the process of evaluating oneself relative to another person—to the extent that those who engage in more social comparison are at a significantly higher risk of negative health outcomes linked to their social media consumption.

Today, 72 percent of Americans use some type of social media, with most engaging daily with at least one platform.(1) Particularly for adolescents and young adults, interactions on social media are an integral part of building and maintaining social networks.(2-5) While the potential risks to psychosocial well-being posed by chronic engagement with these platforms have increasingly come to light within the past several years, mitigating these adverse downstream effects poses a novel and ongoing challenge to researchers and healthcare professionals alike.

The intervention aimed to supplant college students’ habitual social comparison … with social savoring: experiencing joyful emotions about someone else’s experiences.

A team of researchers led by Nancy Zucker, PhD, professor in Psychiatry & Behavioral Sciences and director of graduate studies in psychology and neuroscience at Duke University, recently investigated this issue and found promising results for a brief online intervention targeted at altering young adults’ manner of engagement with social media. The intervention aimed to supplant college students’ habitual social comparison when active on social media with social savoring: experiencing joyful emotions about someone else’s experiences.

Image from Andrade et al.

Zucker’s team followed a final cohort of 55 college students (78 percent female, 42 percent White, with an average age of 19.29) over a two-week period, first taking baseline measures of their mental well-being, connectedness, and social media usage before the students returned to daily social media usage. On day 8, a randomized group of students received the experimental intervention: an instructional video on the skill of social savoring. These students were then told to implement this new skill when active on social media throughout days 8 to 14, before being evaluated with the rest of the cohort at the two-week mark.

For those taught how and why to socially savor their daily social media intake, shifting focus from social comparison to social savoring measurably increased their performance self-esteem (their positive evaluation of their own capabilities) compared with the control group, who received no instructional video. Consciously practicing social savoring even seemed to enable students to toggle their self-esteem levels up or down: those in the intervention group reported significantly higher levels of self-esteem on days during which they engaged in more social savoring.

Encouragingly, the students who received the educational intervention on social media engagement also opted to practice more social savoring over time, suggesting they found this mode of digesting their daily social media feeds to be enduringly preferable to that of social comparison. The team’s initial findings suggest a promising future for targeted educational interventions as an effective way to improve facets of young adults’ mental health without changing the quantity or quality of their media consumption.

Of course, the radical alternative—forgoing social media platforms altogether in the name of improved well-being—looms in the distance as an appealing yet often unrealistic option for many; therefore, thoughtfully designed, evidence-based interventions such as this research team’s program seem to offer a more realistic path forward.

Read the full journal article.

References

1. Auxier B, Anderson M. Social media use in 2021: A majority of Americans say they use YouTube and Facebook, while use of Instagram, Snapchat and TikTok is especially common among adults under 30. 2021.
2. McKenna KYA, Green AS, Gleason MEJ. Relationship formation on the Internet: What’s the big attraction? J Soc Issues. 2002;58(1):9-31.
3. Blais JJ, Craig WM, Pepler D, Connolly J. Adolescents online: The importance of Internet activity choices to salient relationships. J Youth Adolesc. 2008;37(5):522-536.
4. Valkenburg PM, Peter J. Preadolescents’ and adolescents’ online communication and their closeness to friends. Dev Psychol. 2007;43(2):267-277.
5. Michikyan M, Subrahmanyam K. Social networking sites: Implications for youth. In: Encyclopedia of Cyber Behavior, Vols. I-III. Information Science Reference/IGI Global; 2012:132-147.

Guest Post by Eleanor Robb, Class of 2023

When Art and Science Meet as Equals

Artists and scientists in today’s world often exist in their own disciplinary silos. But the Laboratory Art in Practice Bass Connections team hopes to rewrite this narrative by engaging Duke students from a range of disciplines in a two-semester series of courses designed to join “the artist studio, the humanities seminar room, and the science lab bench.” Their work culminated in “re:process” – an exhibition of student artwork on Friday, April 28, in the lobby of the French Family Science Center. Rather than science simply engaging artistic practice for the sake of science, or vice versa, the purpose of these projects was to offer an alternate reality where “art and science meet as equals.”

The re:process exhibition

Liuren Yin, a junior double-majoring in Computer Science and Visual and Media Studies, developed an art project focused on the experience of prosopagnosia, or face blindness. Individuals with this condition are unable to tell two distinct faces apart, including their own, and often rely on body language, clothing, and the sound of a person’s voice to determine someone’s identity. Using her experience in computer science, she developed an algorithm that takes in distinct faces and outputs the way those faces are perceived by someone who has prosopagnosia.

Yin’s project exploring prosopagnosia

Next to the computer and screen flashing between indistinguishable faces, she’s propped up a mirror for passers-by to look at themselves and contemplate the questions that inspired her to create this piece. Yin says that as she learned about prosopagnosia, where every face looks the same, she found herself wondering, “how am I different from a person that looks like me?” Interrogating the link between our physical appearance and our identity is at the root of Yin’s piece. Especially in an era where much of our identity exists online and appearance can be curated any way one wants, Yin considers this artistic piece especially timely. She writes in her program note that “my exposure to technologies such as artificial intelligence, generative algorithms, and augmented reality makes me think about the combination and conflict between human identity and these futuristic concepts.”
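Yin’s algorithm itself isn’t published. As a loose, hypothetical sketch of the perceptual effect she describes (not her actual method), one could blend every input face toward the dataset’s average face until distinct faces become hard to tell apart:

```python
import numpy as np

def perceived(faces: np.ndarray, strength: float = 0.8) -> np.ndarray:
    """Blend each face toward the mean face to mimic face blindness.

    faces: (n, height, width, 3) uint8 array of aligned face photos.
    strength: 0 leaves faces unchanged; 1 collapses them all to one face.
    """
    mean_face = faces.mean(axis=0)                    # the "average" face
    blended = (1.0 - strength) * faces + strength * mean_face
    return blended.astype(np.uint8)

# Hypothetical usage with Pillow, assuming pre-aligned crops of equal size:
# from PIL import Image
# faces = np.stack([np.asarray(Image.open(p)) for p in photo_paths])
# for i, face in enumerate(perceived(faces)):
#     Image.fromarray(face).save(f"perceived_{i}.png")
```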

Eliza Henne, a junior majoring in Art History with a concentration in Museum Theory and Practice, focused more on the biological world in her project, which used a lavender plant in different forms to ask questions like “what is truthful, and what do we consider real?” By displaying a live plant, an illustration of a plant, and pressings from a plant, she invites viewers to consider how every rendition of a commonly used model organism in scientific experiments omits some information about the reality of the organism.

Junior Eliza Henne

For example, lavender pressings have materiality, but there’s no scent or dimension to the plant. A detailed illustration is able to capture even the way light illuminates the thin veins of the leaf, but is merely an illustration of a live being. The plant itself, which is conventionally real, can only further be seen in this sort of illustrative detail under a microscope or in a diagram.

In walking through the lobby of FFSC, where these projects and more are displayed, you’re surrounded by conventionally scientific materials, like circuit boards, wires, and petri dishes, which, in an unusual turn of events are being used for seemingly unscientific endeavors. These endeavors – illustrating the range of human emotion, showcasing behavioral patterns like overconsumption, or demonstrating the imperfection inherent to life – might at first glance feel more appropriate in an art museum or a performing arts stage.

But the students and faculty involved in this exhibition see that as the point. Maybe it isn’t so unnatural to build a bridge between the arts and the sciences – maybe, they are simply two sides of the same coin.

Post by Meghna Datta, Class of 2023

