ChatGPT, a rapidly growing app powered by a large language model (LLM), a form of artificial intelligence (AI) trained on a massive body of internet text, has transformed the way many people work and seek information online. While the technology has inspired excitement about the promise of future AI, it has also sparked concerns about the changes it is bringing. One fear surrounding AI tools like ChatGPT is how criminals and other malicious actors will exploit this new power.
The European Union’s law enforcement agency, Europol, examined this issue in its recent report titled ‘The impact of Large Language Models on Law Enforcement.’ According to the report, ChatGPT, which is built on OpenAI’s large language model technology, could make it “significantly easier for malicious actors to better understand and subsequently carry out various types of crime.” Although the information ChatGPT is trained on is already available on the internet, the technology can condense it into step-by-step instructions on virtually any topic when a user asks the right contextual questions.
Europol has identified several types of crime that chatbots and LLMs such as ChatGPT could help criminals commit. One is fraud: because LLMs can produce human-like writing on any topic from a prompt, criminals can imitate a celebrity’s writing style, or feed in sample text and have the model generate more text in the same style. In phishing scams, this makes it far easier to convincingly impersonate a person or an organization.
Fraud and Propaganda
Europol also warns that ChatGPT can lend legitimacy to various types of online fraud, for example by generating masses of fake social media content to promote a fraudulent investment offer. Obvious spelling and grammar mistakes in emails or social media messages have traditionally been a red flag for fraud. With the power of LLMs at their disposal, however, even criminals with little command of English can produce polished content that raises no such suspicions.
Beyond fraud, the technology is also open to abuse by those looking to create and spread propaganda and disinformation, as it can craft persuasive arguments and narratives at great speed. So while ChatGPT and other LLMs offer remarkable benefits, it is essential to examine and address the risks and vulnerabilities they pose.