Modern technology has major benefits, and some major pitfalls as well. Translation is one of the many perks of contemporary technology, and almost every major platform has incorporated automatic translation into its interface. This translation software isn't always perfect, though, and occasionally produces serious, indefensible errors.
A Major Contributor To The Internet
The parent company Meta has become a massive cultural influence over the last decade. Originally the owner of Facebook, it now also owns Instagram and its Twitter-like spinoff, Threads. Meta shapes much of the public conversation, which can be to its detriment.
This can especially be a problem when the conversation is about something as controversial as the conflict between Israel and Palestine. While few would dispute that what is happening is a tragedy all around, some hold very strong opinions about the conflict. Such strong opinions can lead to errors in judgment, and sometimes errors in action or policy.
A Bigger Boo-Boo Than Usual
A recent error on the part of Meta stirred some intense discourse on the internet. Shortly after the Israel and Palestine conflict began, some users of Instagram started to notice something interesting about the auto translation on their profile descriptions.
Now, if you've ever translated a phrase from English into another language and back again, you'll know that auto-translation software is far from perfect. It's a genuine convenience of modern technology, but it comes with downsides.
Not The Right Word To Use
One of these downsides became starkly apparent when some Palestinian activists and Palestinian users of Instagram used the auto-translate feature on their bios. They noticed that a specific word had been added to the translation, a word that left many people up in arms and rightfully offended.
Auto-translation on Instagram, in what appeared to be an error in the software code, was suddenly adding the word "terrorist" to the profiles of Palestinian activists. This, of course, was a terrible look for the company, which had already been accused of shadowbanning Palestinian activists and supporters since the start of the conflict.
Immediate Red Flags
The issue was first reported by 404 Media, which noticed that the word had been added to profiles containing the Palestinian flag or the phrase "Praise to Allah" in Arabic. Some TikTok users tested the theory and found that even if they weren't located in Israel or Palestine, the translation software still produced the same "error."
There was, of course, immediate backlash against the company for what was perceived as a grievous error. Many people felt that this was a direct insult to those who are suffering in the Middle East. Instagram responded quickly that it was an error and that it had been resolved, but gave no direct answer as to how it happened.
Not The First Complaint About Meta
This wasn't the first time users had noticed social media companies suppressing content or showing favoritism one way or the other. Instagram users noticed that their stories about Palestine and the conflict in the Middle East were getting fewer views than others.
These lower views led some users to complain of being "shadow banned," a practice in which a social media platform suppresses content so that it appears in fewer user feeds and less prominently in search results. All social media platforms have this ability, which has caused plenty of problems for them in the past.
A Swift Response
Meta, of course, responded swiftly to the accusations of shadow banning pro-Palestine content. In a statement, the company responded that there had been a small “glitch” that had been suppressing views of certain stories on the platform. They emphasized that the content of the posts had nothing to do with these lower views, though.
They assured users that the bug had been fixed as quickly as possible. This wasn't a strong enough response for some users, though, and the subsequent error in Arabic translation only cemented some people's belief that Meta is biased as a company.
An Earlier, Inadequate Response In The Conflict
This is not the first time the company has come under fire for potential bias regarding the Israel-Palestine conflict. In May of 2021, Instagram was accused by Human Rights Watch of removing posts about the conflict.
The company responded that some posts had been removed for "hate speech or symbols," but promised a further independent review of its moderation of content related to the Israel-Palestine conflict. A separate report concluded that Meta's moderation had prevented Palestinians from sharing information freely.
What’s The Point Of Social Media?
The conversation around Meta and content moderation has raised bigger questions about the role of social media in current events. Some platforms, like Twitter/X, consider themselves a haven for conversation and the sharing of information.
Meta, on the other hand, has sought to be a neutral space away from politically charged discussion. When so much of the conversation centers on people's human rights, though, a question arises: what is social media's responsibility for information, and for ensuring that accurate details are shared?
Greater Responsibility Must Be Taken
Even if accidental, the translation mistake was a major fumble on Meta's part. People come to social media to feel safe and informed, and when the platform itself appears to turn against some people's very humanity, it can feel very much like an attack.
While content moderation can be a burden, for major companies such as Meta it's a must. Twitter has come under fire for lax content moderation many times since Elon Musk took over. If errors like the Palestinian translation mix-up keep happening, Meta, too, is guaranteed to come under greater scrutiny for its inability to protect its users.