Several months ago, I read an article in the San Francisco Chronicle about Facebook advertising. No, this wasn’t an article about privacy or data security; it was an article that illuminated the challenges of Facebook’s advertising artificial intelligence algorithm, which interprets “explicit” images. Now, I will be frank. Normally I tune out as soon as someone says “artificial intelligence.” I know this is practically blasphemy here in Silicon Valley, but it has always seemed too ethereal or irrelevant to me since I live in the social service world and have never fashioned myself a techie…until I read this article.
To paraphrase, the challenge in front of Facebook was that their advertising algorithm interpreted male and female bodies differently in advertising images, rejecting some ads while accepting others. For example, when a romance book featured a woman with a bare back on its cover, the ad was rejected. But a similar ad, this time featuring a man baring his torso, was accepted.
We could have an extensive discussion about Facebook’s Community Standards and why certain anatomical parts are deemed appropriate or inappropriate, or how Facebook’s human reviewers interpret those standards. But what I am interested in goes beyond how Facebook filters their ads. This goes all the way back to when the algorithm was created, when the computer code was written.
Think about this for a moment. Computer code is developed by human beings. We like to think of artificial intelligence as completely objective – it’s a machine, after all. But at some point, there was a conversation among Facebook software engineers about the parameters to put in place so that their code could efficiently interpret the vast amounts of imagery that advertisers submit for approval. For the purposes of discussion, I will assume that there was a thoughtful conversation among these hypothetical engineers and experts about wanting to make sure that sexually explicit graphics were rejected, which most people can broadly agree is important. But how do they decide what is considered “sexually explicit”? Is any bare skin on any body considered sexually explicit? What if it’s a bare hand or foot? What about different colors of skin? What is considered “sexually suggestive”? How does the computer code interpret male bodies and female bodies? Is it based on standard prototypes of male and female bodies? How does it know the difference between a nipple on a male body and a nipple on a female body? How do age, weight, height, and other factors play in? The list of questions is practically endless.
The point here isn’t to criticize Facebook for their advertising algorithm. I’m certain that this is a challenge every social media company faces, not just with advertising, but with filtering the vast amounts of other imagery they encounter. The point is to highlight the fact that artificial intelligence is developed by human beings, which means that we are programming our social norms into the algorithms that control all of our AI systems.
So what can we do about this? And what does it have to do with sex ed?
First, I think this highlights the importance of having diverse software engineering teams who aren’t afraid to have thoughtful and intentional conversations about how gender expectations and norms about sexuality are being programmed into the vast amounts of computer code we use every day. Teams composed of the same types of people – whether all male, all female, all one color of skin, all English-speaking, all from large metropolitan areas, take your pick – are simply not going to be able to think of all of the possible interpretations or outcomes of particular programming choices.
We need a variety of perspectives on those engineering teams to ask why a female back is considered explicit and a male torso is not, and to ensure that the bias implicit in all of us doesn’t take over.
And, yes, sex ed has a role to play in this, too. Health Connected spends significant time in our classes asking students to consider their own values when it comes to gender norms, to listen and accept that different people have different values about bodies and sexuality, and to ask questions about the ways in which different genders are portrayed in the media. What do male and female power look like in popular songs, shows, and movies? How is sex portrayed in those songs and shows? Do all bodies look like the ones portrayed in movies, shows, and, yes, advertising? We are teaching them to think critically about the imagery that is provided to them through the vast amounts of media that they consume.
Many of these young people we teach will eventually go on to become software engineers 5, 10, or 15 years from now. These are the people we need on the engineering teams that are creating the artificial intelligence algorithms of tomorrow. Perhaps in the not-too-distant future we will get to the point where our machines can think beyond the limitations of their programmers, which raises its own set of complicated ethical questions. But for now, it is the students of today who will ask the critical questions about social norms that we need to ask, so that our artificial intelligence shifts our social norms rather than amplifying them.