Leonard
Leonard Cortana is a Ph.D. candidate in Cinema Studies at New York University’s Tisch School of the Arts and an affiliate researcher at the Berkman Klein Center for Internet & Society at Harvard University. In his work, Leonard researches the intersection of digital networks, race and technology, and activism, with a particular emphasis on digital visual activism, the memorialization of assassinated activists, and the online protection of human rights defenders.
How would you describe your general relationship with the various forms of technology you use?
I have friends all over the world, and I feel very grateful to be able to keep up with them and to use platforms such as WhatsApp or Telegram to call areas that might not have reliable phone coverage. This ability to connect easily is a real asset of technological development. I did have a life before the massive boom of technology, and sometimes I would like to alternate between the quietness of the offline world and the often vibrant and addictive virtual world.
As a college professor, seeing the obsession of Gen Zers who are constantly on their phones, I worry because we don’t have any solutions to these compulsive and obsessive relationships with technology. Many of my students comment on the pressure and loneliness they feel on widespread social networks and on the obligation to be hyper-visible in order to be “marketable.”
Do you consider yourself to be concerned with data and privacy when it comes to utilizing technology?
When you research technology, you’re more aware than worried – sometimes a bit disillusioned.
I have become more and more suspicious of the spread of misinformation through algorithmic logic and other distracting tactics. I find it surprising that even though I follow many Black, Brown, and Asian leaders, the algorithm often brings me to a whitened world of “experts.” I would expect [my feed] to be much more open to other profiles when I am researching experts in disability studies, for instance. The way the algorithm amplifies some voices and excludes others is very clear to me. It is not a question of identity politics per se but rather of a plurality of experiences that needs to be represented.
What identity groups do you identify with? Do these groups impact your experience navigating tech?
I identify as a first-generation Black man and a migrant, and I am somewhat young; I am still working on my doctoral dissertation, so the algorithm brings up content related to financial instability. I worry that not all identities intersect online; I will receive a bunch of information as a Ph.D. candidate or on racism, [but] I see very little of those identities crossing.
Regarding anti-racist practices, when you work on transnational Black activism and cultures, the risk that violent content circulating online will re-traumatize communities is real. In some of the countries I work with, showing evidence of violent treatment is a way to fight the silencing of news in mainstream media. However, platforms can do better to warn their viewers because, despite efforts after George Floyd’s assassination, too many graphic images circulate freely when we deal with archival images of genocide or colonization, for instance.
This brings us to the utopian desire for a “healing algorithm.” I receive very little content about self-care or strategies to cope with anxiety. Being a migrant in the United States and researching what I do is very stressful, but this is never reflected in my algorithm. Self-care, at least in the United States, is connected to a market and to money. Right now, the algorithm knows that I do not come from an upper-class upbringing and cannot afford the prices.
Also, the way that self-care is often taught does not reflect my work. For me, self-care would include a reflection on discrimination, questions of access, and finding media that proposes something different from what we have. It is not necessarily doing yoga or eating quinoa. I feel that the self-care logic sometimes erases what our society is doing to people and what needs to be changed systemically. There is a need for self-care because we live in an oppressive society with highly capitalistic standards that often bring people to burnout or depression, and where racism and xenophobia exacerbate elitist spaces. Yet this disconnect between that need and what the algorithm offers me is quite astonishing.
What shifts or changes do you want to see in the tech industry? How can we make tech safer for marginalized communities?
I think tech companies are highly responsible for their simplistic visual representation and their lack of co-creation with historically marginalized communities. I am very much against the typical “idealistic” representation: “we will put an Asian person and a Black person and a Native American person and a white person together, and they will smile.” We are screaming that this is not working – it is a way for them to say, here is representation and inclusion. It is a long-lasting debate, but marginalized communities – including disabled individuals – should be part of designing these visual campaigns and social promos if we do not want these mistakes repeated.
There is no inclusion without accountability and reparation first. To go back to anti-racist communities: many of my peers have shared the same experience when posting on themes related to systemic racism. We know, as people of color, that we are the ones more likely to be “canceled.” Is it ethical to unfollow so easily someone who speaks truth to power, or at least raises awareness of issues that are often swept under the rug to make everyone more comfortable?
Is there anything else you’re noticing relating to the tech industry?
I’m noticing that more and more platforms are using emotions to simplify and often invalidate feelings. You can see it on LinkedIn, Twitter, and Facebook. I have many doubts about options like “I am angry. I am happy,” as if these single emotions could define a person. Sometimes you read the worst news and then “heart” it, when what you want to say is, “How can I help?” This is a misuse of emotion.
We know that social movements are born out of emotion; you have an emotional response, then you go to the streets. A specific event victimizes someone in your surroundings, and you engage in the fight against injustice. Usually, you have a chance to channel these emotions through the community you work with, and the power of the collective often helps to heal the pain.
It’s also very Western-oriented; some countries and communities do not have the same binary relationship with emotion, unhappy versus happy. They have a different way of dealing with emotions; we could learn from many Indigenous spiritual practices. Nevertheless, when we are on social media, we cannot go beyond these curated options, so we become a bubble of emotion. I would like to see a media outlet that fights these normative practices, with something like a “meditation button”: you get shocking news, and it is time to meditate on it, instead of automatically needing a response.