Insights from Our Advisory Board Members: Angelo Bani, "Navigating the Digital Realm of Misinformation"

Published on 7th June, 2024



Continuing our series "Insights from Our Advisory Board Members", today it’s my turn to propose an article.


I am a former police DCI (prehistoric image on the side) and Interpol intelligence officer, and now the Managing Director of Protective Intelligence Network in Singapore; my company, among other activities, specializes in social media intelligence security exploitation. We identify and flag adverse content that may represent security or reputational threats to our clients.

Given our expertise, I believe we are well-positioned to discuss fake news, misinformation, and disinformation.

With ongoing conflicts in Ukraine and Gaza, tensions in the South China Sea, issues surrounding Taiwan and Iran, and the approaching U.S. elections, now is the perfect time to address these topics. I hope you find this article informative and timely.

Thank you for reading.

-

Navigating the Digital Realm of Misinformation

We live in an era of rapid news and information overload, which can make it challenging to verify news and ensure it comes from a trustworthy source. Social media is a double-edged sword in this pursuit of truth – providing users with real-time updates from a host of sources (both independent and institutional).


From the emergence of Wikileaks and Edward Snowden to news delivered via Twitter – we’re seeing a major shift in how audiences consume news and whether they trust journalists and media outlets. Consequently, we’ve seen just how quickly conspiracies and misinformation are disseminated and become disruptive at scale.

So how do individuals and organisations verify what they’re viewing and reading? How does anyone determine ‘truth’ in this age of ‘fake news’?

Operating in the intelligence sector, I’ve seen how important it is to discern truth from misinformation and propaganda. Within our industry, it always comes down to setting up a baseline, understanding the anomalies, what threats are posed, and leveraging the right sources and solutions to determine if the information is trustworthy or false.

This can feel like an impossible task when every information source has ulterior motives and vested interests. From mainstream media to social media platforms – there are always financial and political motivations that influence what information is shared (or omitted). Some pundits have labelled this era as one of ‘permacrisis’ where people are continually faced with historic events and differing states of emergency – from geopolitical to environmental.

Navigating these crises becomes more challenging when faced with malicious actors who deliberately spread misinformation designed to stoke fear or lull people into complacency.

The atrocities in Palestine are difficult to watch, but what makes following the news developments even more challenging is that many media outlets and social media accounts are echoing propaganda content that is completely unrelated to the ongoing conflict.

  • For example, in October 2023, a TikTok video, seen by more than 300,000 users and reviewed by CNN, promoted conspiracy theories about the origins of the Hamas attacks – including false claims that they were orchestrated by the media.

  • One video that claimed to depict a Hamas soldier shooting down an Israeli helicopter was actually footage from a video game.

  • A second viral clip, which purported to show Israeli airstrikes in Gaza, was actually taken from a fireworks show in Algeria. Clips like these have fuelled consequent upticks in both Islamophobic and antisemitic narratives.

Social media has been instrumental in eroding audiences’ trust and their ability to think critically about news and propaganda. Next time, we’ll explore the misinformation proliferated through social media platforms and how these channels are evolving to address these challenges.


Understanding Social Media Changes in Combatting Misinformation

When Elon Musk took over Twitter in late 2022, he made various policy changes that amplified the spread of false, harmful, and inflammatory content. Almost immediately, he terminated a large share of content moderators and shut down the advisory Trust and Safety Council. Musk also withdrew the platform from the European Union’s voluntary Code of Practice on Disinformation – reneging on the company’s previous pledge (under former CEO Jack Dorsey) to uphold transparency standards, demonetise disinformation, and improve media literacy across the EU.

Meanwhile, Meta and YouTube both engaged in widespread layoffs of their trust and safety workers last year, deepening their reliance on algorithmic methods. Although Meta stated, days after the attacks in Palestine, that it was employing Hebrew- and Arabic-speaking content reviewers, the scale of that investment remains unclear.

  • In 2021, whistleblower Frances Haugen revealed that Meta spent 87% of its misinformation resources on English-language content, despite only 9% of its user base speaking English primarily.

  • According to the news site Politico, in 2021 just 6% of Arabic-language hate content was detected before it made its way onto Instagram.

Meta is undergoing a major shift in its relationship with the news media amid recent laws like Canada’s Online News Act, Australia’s News Media Bargaining Code, and the EU’s Copyright Directive, which aim to require qualifying technology platforms to pay to host news articles. Furthermore, the attacks in Gaza occurred just two weeks after the EU’s Digital Services Act came into effect, requiring large social media platforms to publicly explain how their content moderation algorithms work, act on user complaints of illegal content, and mitigate “societal and economic” risks to fundamental rights in their design.

The Power of Intelligence Analysis and Monitoring Activities

At Protective Intelligence Network, we utilise state-of-the-art systems and human analysis to support our social media intelligence (SOCMINT) security exploitation. Our expertise enables organisations to monitor social channels and conversations, respond effectively to social signals, and synthesise data points into actionable insights. We transform open-source information collected from the web (surface, deep, and dark) and social media into meaningful intelligence, helping to discern the authenticity of information and enhance security measures.

These services and solutions help our clients identify emerging threats, monitor brand sentiment, track public perception, and promptly detect any potential security breaches or malicious activities.

We also provide foundational and intensive training on open-source intelligence (OSINT) and social media intelligence exploitation, aimed at improving the collection, evaluation, collation, analysis, and distribution of information gathered through open-source intelligence activities. The training course is designed to enhance participants’ practical capacity to integrate OSINT tools into their investigations. We provide a comprehensive overview of the work processes and general techniques necessary for OSINT, exploring openly available information to sharpen general OSINT techniques and, ultimately, to uncover misleading and fake content.


Deepfake Detection

Deepfakes initially came to prominence in the realm of entertainment with videos of major figures performing out of character, like politicians singing pop songs or saying outlandish things. However, with AI technology becoming more advanced and accessible, the disruptive potential of deepfakes has only become more concerning.

In the corporate sphere, deepfake technology has been utilised to fabricate videos featuring high-profile executives, leading to stock market fluctuations and reputational damage. Meanwhile, the use of deepfakes to mimic family members or colleagues in online scams adds a deeply personal dimension to the threat, as it preys on people’s compassion and fears to extract information or payment. For example, in February 2024, a multinational firm in Hong Kong lost HK$200 million (USD 25.5 million) to a deepfake video meeting impersonating its leadership.

The influence of deepfakes extends into the political arena, where the dissemination of fabricated content can sow discord, manipulate public opinion, and even disrupt democratic processes. In December 2023, a deepfake video of Singapore's Prime Minister emerged, promoting a scam investment product.

From detecting fake human faces in social media profiles to uncovering realistic face swaps in video content, our Deepfake Detection Service leverages new AI technology to ensure a higher level of safety and authenticity for image and video files. By harnessing state-of-the-art algorithms and advanced machine learning, our solution delivers unmatched precision in detecting and addressing deepfake content across digital platforms.


AI and OCR technology

Most recently, Protective Intelligence Network added a new tool to our suite of services offered in Singapore. This cutting-edge service merges advanced AI with Optical Character Recognition technology to deliver precision in document verification.

Through our partnership with one of the top organisations in The Netherlands, we’ve gained access to over 11,000 document formats that have been classified and analysed across more than 200 countries. This rigorous process of verifying identity documents is essential not only for combating terrorism and enhancing national security, but also for protecting individuals and organisations against scams and fraud.

The challenges of verifying information are complex and multifaceted, which is why Protective Intelligence Network provides a range of services, solutions, and training options to help organisations remain ahead of scams, ‘fake news’, and other misleading content that can damage a brand’s reputation.

We understand what it takes to establish a baseline, analyse various sources, and help our clients discern genuine insights from misinformation.


In conclusion, the digital age has brought unparalleled access to information, but it has also ushered in an era of misinformation and disinformation.

This article, in addition to its visibility and marketing objectives, aims to emphasize the importance of establishing reliable baselines, understanding anomalies and threats, and leveraging accurate sources to distinguish between fact and fiction.

As individuals and organizations navigate this complex landscape, vigilance, knowledge, and critical thinking are paramount.