2023 in Review: Pivotal Moments in AI for Trust, Safety, and Fraud

10 minute read

Welcome to AI for Trust, Safety and Fraud, your guide to the latest insights on machine learning for trust, safety, fraud detection, and risk management. As 2023 draws to a close, we reflect on a year marked by significant advances and challenges across these fields. This edition of our newsletter offers a comprehensive overview of the pivotal developments that shaped them over the year.
 

Major Developments in 2023


GenAI: A Double-Edged Sword in the Digital Fraud Battlefield


In 2023, the world of fraud witnessed a dramatic transformation, primarily driven by advancements in AI technologies. This year signified a pivotal shift in the fraud landscape, characterized by the rising influence of generative AI.

On one side, fraudsters employed AI to orchestrate increasingly sophisticated scams. By leveraging generative AI tools, they crafted synthetic identities and engineered convincing phishing schemes. One hallmark of a phishing email, for example, used to be its shoddy appearance: the poor grammar, spelling mistakes, and fake-looking designs we were used to seeing from scammers. With easily available GenAI tools, fraudsters now make phishing messages seem more credible and fake identities appear more authentic, and they can iterate on their bot software in significantly less time. These AI-fueled frauds adeptly mimic genuine interactions, making it increasingly difficult for companies to distinguish authentic communications from fraudulent ones. Notably, the top five identity fraud types in 2023 included AI-powered fraud, money muling networks, fake IDs, account takeovers, and forced verification [1].

Concurrently, the realm of cybersecurity and financial services saw AI as a crucial ally. While fraudsters adapted AI to collect data and create believable identities, AI's defensive role also evolved. GenAI gives fraud fighters the ability to analyze vast amounts of data from multiple sources, including customer profiles, historical claims data, and external databases. Companies like Poste Italiane reported a drop in their fraud ratio by 50% in just three months after launching an anti-fraud service, showcasing the potential of AI in combating fraud. [2]

The impact of generative AI in this domain was also one of the center stage topics during International Fraud Awareness Week, Nov. 12-18, 2023. This dynamic interplay in 2023 highlighted AI’s dual nature in the fraud world - a potent tool for deception and a formidable force for protection. As we advance, the role of AI in fraud is not merely a technological issue but a reflection of the broader struggle between innovation and integrity. The year was a testament to this ongoing saga, where AI stood at the crossroads of trust and deceit, shaping the future of digital security and fraud prevention.

Bots on Twitter: An Ongoing Battle

In the whirlwind year of 2023, the tech world was still buzzing over Elon Musk's high-profile acquisition of Twitter. However, it wasn't just the takeover that captured headlines – it was Musk's fervent claims about the platform's bot traffic. Musk, a vocal critic of Twitter's handling of bots, propelled the issue to the forefront of public discourse, sparking debates and studies around the true extent of bot presence on the platform.

Before Musk's takeover, Twitter's own reports suggested a relatively modest percentage of bots – figures that Musk openly challenged. Post-acquisition, the billionaire entrepreneur initiated a crusade to cleanse the platform of these automated entities. The goal was simple yet ambitious: to enhance the authenticity of interactions on Twitter and restore user trust.

Twitter has long leveraged machine learning algorithms to dissect and analyze this data, focusing on patterns and behaviors typical of bots such as high-frequency tweeting, repetitive messages, and a lack of personalized interaction. This ML-driven approach provides a more nuanced understanding of bot activity, going beyond mere numbers to assess the impact on user experience and engagement.
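Behavioral signals like these can be turned into simple numeric features for a classifier or rule. The sketch below is purely illustrative and is not Twitter's actual system; the feature names and thresholds are our own assumptions.

```python
from collections import Counter

def bot_signal_features(tweets):
    """Compute simple bot-likeness features from a list of
    (timestamp_seconds, text) tuples for one account."""
    if len(tweets) < 2:
        return {"tweets_per_hour": 0.0, "repetition_ratio": 0.0}
    timestamps = sorted(t for t, _ in tweets)
    span_hours = max((timestamps[-1] - timestamps[0]) / 3600.0, 1e-9)
    counts = Counter(text for _, text in tweets)
    # Fraction of tweets that are exact duplicates of another tweet.
    repeated = sum(c for c in counts.values() if c > 1)
    return {
        "tweets_per_hour": len(tweets) / span_hours,
        "repetition_ratio": repeated / len(tweets),
    }

def looks_like_bot(features, rate_threshold=20.0, repeat_threshold=0.5):
    """Toy rule: very high posting rate or mostly repeated messages."""
    return (features["tweets_per_hour"] > rate_threshold
            or features["repetition_ratio"] > repeat_threshold)
```

In practice, systems like Botometer combine hundreds of such features (timing, network, language, profile metadata) in a trained model rather than hand-set thresholds.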

Interestingly, preliminary analyses in 2023 suggested a slight increase in sophisticated bot activities compared to previous years. These bots, likely powered by generative AI, exhibited more human-like behaviors, making them harder to detect. The challenge was not just about counting bots, but discerning their influence on content dissemination and public opinion.

However, the battle against bots wasn't confined to Twitter's internal efforts. The wider AI and ML community took a keen interest, with independent researchers conducting their own studies. These external analyses provided valuable insights into the evolution of bots on Twitter, offering a more comprehensive view of the problem.

As the year progressed, the narrative around bots on Twitter evolved from a mere technical challenge to a broader discussion about the integrity of digital platforms. Musk's efforts highlighted the critical role of AI in maintaining digital authenticity, while also underscoring the evolving complexity of bot detection in an age where AI is accessible to both protectors and perpetrators of digital fraud.

In conclusion, 2023 marked a pivotal year in the fight against digital deception on social media platforms. Musk's takeover of Twitter and the subsequent focus on bot traffic not only shed light on the prevalence of bots but also catalyzed advancements in AI and ML for more effective detection and management of these digital actors. This saga, still unfolding, serves as a testament to the ongoing battle between technological innovation and the need for digital integrity.

Want to learn more about how ML is traditionally used to tackle bots? Start with the Botometer 101 paper.

Government Initiatives for AI Safety: The Course of Ethical AI

In 2023, the global landscape saw significant efforts by various governments to address the challenges and opportunities posed by AI, focusing on safety and ethical implications. These initiatives reflect a collective understanding that AI technology, while offering immense benefits, also requires careful oversight and ethical considerations.

One notable development was the European Union's advancement in AI regulation. The EU proposed new rules aimed at ensuring AI systems used in the EU are safe, transparent, and accountable. This included provisions for high-risk AI systems, setting a precedent for AI governance that balances innovation with fundamental rights and safety [3].

In Asia, countries like Japan and South Korea made strides in integrating ethical considerations into their AI policies. Japan's approach to AI governance emphasized the importance of human-centric AI, promoting transparency, user privacy, and data security. South Korea, meanwhile, invested in ethical AI research, focusing on developing guidelines and frameworks to ensure the responsible use of AI technologies [4].

In the realm of AI and public health, the World Health Organization (WHO) released guidelines on creating ethical AI systems in healthcare. These guidelines aimed to ensure that AI technologies used in health settings are designed and used in ways that respect human rights and promote health equity [5].

The Biden Administration also released draft policy guidance on U.S. Government use of AI. This policy mandates federal departments and agencies to conduct AI impact assessments and manage risks, particularly in sensitive areas impacting public rights and safety. It signifies a concerted effort to integrate ethical considerations into governmental AI applications [6].

The year also saw collaborations between governments and private sectors to promote AI ethics. Initiatives such as the AI Partnership for Defense, led by the United States Department of Defense, brought together international partners to share best practices and collaborate on responsible AI use in defense and security [7].

In conclusion, 2023 marked a milestone year in the global journey toward responsible AI. Governments worldwide took significant steps to create frameworks and policies that prioritize safety, transparency, and ethical considerations in AI development and deployment. These efforts are crucial in shaping an AI-driven future that aligns with societal values and global standards.

Interplay Between AI and Cryptocurrency


In 2023, AI also influenced cryptocurrency security, especially after the FTX crisis in late 2022. As crypto trading volumes rebounded, so did the risk of fraud, leading researchers to focus on AI-driven solutions for enhancing blockchain security and identifying various risk types, including market, cyber, and liquidity risks.

Common signals used by researchers at the AI-crypto intersection include sudden large deposits or withdrawals, transactions sourced from a variety of IP addresses, and inconsistencies or deviations from a user's typical transaction behavior, timing, frequency, and associated network activity.
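Two of these signals – amounts that deviate sharply from a user's typical behavior, and activity spread across many IP addresses – can be sketched as a simple rule-based check. All function names, field names, and thresholds below are illustrative assumptions, not any specific production system.

```python
from statistics import mean, stdev

def flag_transaction(tx, history, ip_limit=3, z_threshold=3.0):
    """Return a list of risk reasons for a transaction, given the
    user's transaction history (each a dict with 'amount' and 'ip')."""
    reasons = []
    amounts = [h["amount"] for h in history]
    if len(amounts) >= 2:
        mu, sigma = mean(amounts), stdev(amounts)
        # Z-score of the new amount against the user's history.
        if sigma > 0 and (tx["amount"] - mu) / sigma > z_threshold:
            reasons.append("amount deviates from typical behavior")
    # Count distinct IPs across recent activity plus this transaction.
    recent_ips = {h["ip"] for h in history[-20:]} | {tx["ip"]}
    if len(recent_ips) > ip_limit:
        reasons.append("transactions from many IP addresses")
    return reasons
```

Real systems would feed such features into trained anomaly-detection models and combine them with on-chain and network signals rather than fixed thresholds.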

Researchers also started leveraging user discussions on social media to gauge cryptocurrency risks. This approach utilized the widespread influence of social media, capturing user perceptions and behavioral intentions to bolster security measures [8].

There were also initiatives to use AI to identify smart contract vulnerabilities and to improve user authentication through biometric verification. These efforts were crucial in preventing financial losses and unauthorized access, addressing the dynamic nature of crypto fraud and safeguarding investments in this volatile market.




From combating digital fraud to enhancing cybersecurity in cryptocurrencies, AI has shown immense potential and versatility. As we look ahead, the developments of 2023 remind us of the importance of vigilance, innovation, and ethical considerations in AI applications. The journey into 2024 will undoubtedly bring new challenges and opportunities, and we remain committed to exploring these with you.

References

[1] Identity Fraud Types Now Include AI-powered Fraud, Money Muling Networks, Fake IDs, Account Takeovers, Forced Verification – Report
[2] Hope or hazard? Generative AI takes center stage for Fraud Week 2023
[3] Europe fit for the Digital Age: Commission proposes new rules and actions for excellence and trust in Artificial Intelligence
[4] Japan’s Approach to AI Regulation and Its Impact on the 2023 G7 Presidency
[5] WHO calls for safe and ethical AI for health
[6] FACT SHEET: Vice President Harris Announces New U.S. Initiatives to Advance the Safe and Responsible Use of Artificial Intelligence
[7] DoD Joint AI Center holds fifth International Dialogue for AI in Defense
[8] Enhancing Cryptocurrency Security Using AI Risk Management Model

The newsletter includes the author's interpretation of news and research works, and can have errors. Please feel free to share feedback, errors or questions, if any.
 

Copyright © 2023 AI for Trust, Safety and Fraud, All rights reserved.

