CASBS Symposium Spotlights Room for Further Progress for Women in Tech
In recent years we’ve seen many efforts to counter biases against women working in tech.
But despite some gains, the industry continues to seek new insights and solutions to advance gender equity. Important questions remain: Where are we still stuck, why, and what can we do about it?
These questions were front and center on January 31, as the Center for Advanced Study in the Behavioral Sciences (CASBS) at Stanford hosted the second installment in its 2016-17 symposium series. The symposium featured presentations by 2016-17 CASBS fellow Sapna Cheryan, a social psychologist at the University of Washington; and 2015-16 CASBS fellow Shelley Correll, a sociologist and director of the Michele R. Clayman Institute for Gender Research at Stanford. The Clayman Institute co-sponsored the event.
The two scholars’ presentations served as excellent complements, as one focused on cultural stereotypes that discourage women’s and girls' entry into the field, while the other focused on gender biases that affect the performance assessment of women already in the field.
Stereotype and Entry
Observing an upward trend over four decades in the percentage of women earning bachelor’s degrees in certain science fields, Cheryan called out computer science for its notable downward trend. Ample research shows that social constraints can lead people to make different choices than they otherwise would have, thus leading to underrepresentation (in the workforce, for example). Many of these constraints – socialization effects, discrimination, low availability of role models, and perception of work-family conflict, among others – apply to many fields. But there’s “something different or more potent” going on in explaining gender disparity in computer science compared with fields like biology and chemistry, according to Cheryan. What?
Initially inspired by her own experience interviewing for summer jobs while a Stanford graduate student, Cheryan identified a “geek factor” stereotype – a perception fueled in large part by popular culture media and imagery – that is very powerful, even if inaccurate or divorced from reality, and which deters women more than men from entering computer science. Drawing on results from dozens of experiments she and her collaborators performed in university and corporate settings, Cheryan showed that physically manipulating computer science environments in a stereotypical or non-stereotypical manner disproportionately discourages women from expressing interest in computer science.
Cheryan calls the phenomenon “ambient belonging” – the comfort people feel with the material components of an environment and the people imagined to occupy that environment. She and collaborators have conducted a series of role model experiments involving simple two-minute interactions between subjects and role models (who appear to fit or not fit a computer science stereotype) that reinforce the finding.
Though the stereotype deters many women from entering computer science, Cheryan does not prescribe eradicating “geek culture”; rather, she argues, we should seek to broaden the image of computer science.
“Expanding this image to include other types of people that belong in the field can be a powerful way to get women in the field,” said Cheryan.
Once we get women in the field, how can we make sure they will want to stay? While diversifying imagery to reduce disparities is important and helpful, it’s not enough. Environmental change must be accompanied by change in the broader culture. Accordingly, Cheryan previewed her CASBS fellowship project that extends beyond altering physical environments to encompass changes in policies, practices, and interaction styles. To help achieve this, she is developing a “culture bias tool” – an assessment tool that tech companies and departments can use to examine their cultures and determine whether they may be unwelcoming to women.
Stereotype and Ascent
Cheryan's project dovetails nicely with recent work by Shelley Correll, who presented on barriers to women's advancement, with an analysis of the gendered language of performance assessment. She focused on tech because research shows that women in that sector are more dissatisfied with promotion and pay-setting systems than their counterparts in other STEM fields.
To address barriers to women in tech, many companies employ now-popular “unconscious bias” and diversity training programs. According to Correll, such education efforts are necessary but not sufficient to institute meaningful, lasting change. Tech companies continue to see progress stall at the executive and management levels. Women continue to face the "classic double bind," in which it is challenging to be seen as simultaneously competent and likable. (The two attributes correlate negatively for women but not for men.)
Correll’s ongoing project – a two-and-a-half-year “deep dive” into a large tech company (unidentified for confidentiality) in Silicon Valley with employee gender diversity typical of the region – shows that the biasing effects of stereotypes are greater when criteria for evaluation are ambiguous. As Correll repeated for emphasis, “Ambiguity opens the door to bias.”
The implication: in addition to training individuals, the evaluation process itself is a prime target for change.
Correll’s team accessed and analyzed a sample of actual performance reviews (stripped of gendered names and pronouns as part of the blinded research design), coding dozens of terms that could be distilled into a few linguistic themes. The data, presented in rich detail in Correll’s slides, reveal that men are more likely to be described with language of distinction (e.g., “take-charge,” “visionary,” “game-changer”); women are more likely to be described with communal language (e.g., “helpful,” “dedicated,” “loyal”) yet are criticized more often for negative or aggressive communication styles. Moreover, men’s performance reviews included more specific developmental feedback, while women’s reviews (both praise and criticism) contained more vague feedback. In short, language analysis reveals gender differences in both performance descriptions and types of feedback.
These gender differences in language codes matter. When mapped against the company’s own rating system, language of distinction helps men move up the ratings ladder more than women. Take-charge-type language ascribed to women does not help them attain the highest rating, and therefore does not provide the same payoffs. Vague feedback precipitates a steeper drop-off along the ratings scale for women than for men. In short, the relationship between language and ratings differs in a way that disadvantages women in real-world pay and promotion outcomes.
“Women aren’t getting the kind of developmental feedback that would help them advance their career,” said Correll. “This should be very concerning,” both to the company under consideration and other tech firms.
In response, Correll is piloting an intervention with teams of managers aimed at reducing ambiguity in the performance assessment process. This involves, among other things, a performance review checklist that’s customizable for each team. None of this suggests that managers are bad people; rather, “they’re in a situation where they don’t know what to do, and when you don’t know what to do gender stereotypes affect your decision making more,” Correll said.
Early results indicate that, as a result of the intervention, managers indeed are developing more confidence in the evaluation process. Even those managers who at first don’t buy into the initiative are throwing themselves into the new problem-solving role. Correll is so encouraged, in fact, that she is now hopeful that “we can create change agents, even among those not committed to gender diversity.”
In the post-presentation Q&A session, both presenters agreed that their findings – on ambient belonging in Cheryan’s case and gendered language in Correll’s – yield “immediately actionable” improvements that can make an impact in the real world. And that’s the whole point.
“I think the whole reason [Sapna and I] do the kind of work we do is we believe we can make a difference,” said Correll.
View full video of Sapna Cheryan and Shelley Correll’s presentations and joint Q&A session above or on the CASBS YouTube channel.
The cover story of the April 2017 issue of The Atlantic, "Why Is Silicon Valley So Awful to Women?," features Shelley Correll, among others.
Going Further: Related Works Authored or Coauthored by Sapna Cheryan and Shelley Correll
- Why are Some STEM Fields More Gender Balanced Than Others?
- Computing Whether She Belongs: Stereotypes Undermine Girls' Interest and Sense of Belonging in Computer Science
- Enduring Influence of Stereotypical Computer Science Role Models on Women's Academic Aspirations
- Ambient Belonging: How Stereotypical Cues Impact Gender Participation in Computer Science
- A New Study Shows How Star Trek Jokes and Geek Culture Make Women Feel Unwelcome in Computer Science
- Masculine Culture Responsible for Keeping Women Out of Computer Science, Engineering
- To Succeed in Tech, Women Need More Visibility
- Vague Feedback is Holding Women Back
- Gendering the Election (video)
- Leveling the Playing Field (video)