Content warning: discussion of topics linked to suicide and child exploitation
Recently in the UK we’ve seen the suicide of a reality TV presenter, and in March 2019, the father of Molly Russell urged the government to introduce regulation on social media platforms in response to his 14-year-old daughter taking her own life. She was found to have viewed content related to depression and suicide on Instagram before her death. While neither of these tragic instances can be solely attributed to social media, many are discussing the arguably significant role that online media played in both cases.
Following on from these instances, in early 2020 the government announced that Ofcom - the communications regulator - was to be given the power to fine social media companies in a bid to protect children from harmful online content. Ofcom will not only be able to fine companies that fail to remove illegal content - such as the promotion of terrorism or child pornography - but online platforms will also be required to stipulate what behaviour and content is acceptable on their sites, and enforce those rules consistently and transparently.
Clearly, the priority for Ofcom and social media firms has to be removing the most immediately harmful content - that which promotes suicide and child exploitation being the most critical. However, there is also an argument for Ofcom and the social media firms to ensure that other types of content are correctly classified as illegal and therefore removed. Chief among these, we at FINTRAIL believe, is content pertaining to financial crime, which should be more heavily moderated and removed.
From our consulting experience in the FinTech sector, we have seen a multitude of financial crime cases where the schemes start on social media. In fact, many low-level criminal activities are facilitated by social media and would not be effective without it. A few examples include:
Scammers promote the sale of goods on social media platforms; victims agree to purchase the goods and transfer the money to an account (often in the scammer's own name, using their real identity), but the goods never materialise. This is also known as advance fee fraud.
Money launderers recruit people - and pay them for access to their bank accounts - via social media profiles. The launderers use the access to the recruits’ accounts to wash the proceeds of crime. This is known as money muling, and is worryingly common, even among young people.
Scammers also advertise purported investment schemes online, attracting potentially thousands of users and defrauding them of large sums of money, sometimes even their life savings.
Images taken from social media are used to entice individuals into money muling and other get-rich-quick scams.
Even in isolation, the results of these scams and schemes are incredibly harmful to the individuals involved: in some cases they are blacklisted by banks (for having laundered money through their accounts, in the case of money mules), in others they lose significant amounts of money. However, it is also important to recognise the wider harm that such behaviour has on the rest of society.
Firstly, many of the fraud and laundering schemes detailed above connect back into wider organised crime, involving predicate offences such as illegal drug sales, human trafficking, corruption, arms trafficking, kidnapping and extortion, all of which carry an enormous human cost. Secondly, the cost of financial crime to the global economy is conservatively estimated at between USD 1.6 trillion and USD 2.2 trillion. A Global Financial Integrity report from 2017 underscores how transnational crime undermines economies, societies and governments, particularly in developing countries, often preventing those who are most vulnerable from getting the support they need and, ironically, increasing the chances that they too become embroiled in a life of crime.
So, it’s with these huge human, societal and economic costs in mind that Ofcom needs to work closely not only with the social media firms themselves, but also with financial services firms, financial services regulators and law enforcement to determine what content should be categorised as illegal and harmful, and seek to include this in its regulatory scope.
In the same way that financial services firms are heavily regulated - because of the harm the provision of their services can cause - social media firms should also be required to take more proactive steps to prevent, deter and detect illegal and harmful content pertaining to financial crime from appearing on their sites.
Practical steps social media firms can take mimic those applied in the financial services sector, such as Know Your Customer (KYC) processes - including identity verification so that repeat offenders can be blocked more effectively - and more intelligent activity (transaction) monitoring, so that higher-risk profiles can be proactively identified and subjected to enhanced monitoring. This doesn’t have to be an arduous process: the FinTech sector has demonstrated that frictionless processes can exist whilst maintaining compliance and gathering an appropriate level of customer due diligence.
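To make the analogy concrete, the sketch below shows what a very simple, rule-based "activity monitoring" check might look like if translated to a social media context. It is purely illustrative: the signals (account age, scam reports, posts containing payment details, links to previously banned devices) and the thresholds are hypothetical choices for the sake of the example, not a description of any platform's actual systems, and a real implementation would need far richer data and careful calibration.

```python
# Illustrative, rule-based risk scoring for social media profiles, loosely
# modelled on transaction-monitoring rules from financial services.
# All signal names and thresholds are hypothetical.
from dataclasses import dataclass


@dataclass
class ProfileActivity:
    account_age_days: int             # how long the profile has existed
    scam_reports: int                 # user reports mentioning fraud or scams
    posts_with_payment_details: int   # posts containing bank or crypto details
    previously_banned_device: bool    # device/identity linked to a banned account


def risk_score(p: ProfileActivity) -> int:
    """Return a simple additive risk score; higher means riskier."""
    score = 0
    if p.account_age_days < 30:
        score += 1                    # new accounts are cheap to burn and recreate
    score += min(p.scam_reports, 5)   # cap so a single signal cannot dominate
    if p.posts_with_payment_details > 3:
        score += 2                    # repeatedly soliciting payments
    if p.previously_banned_device:
        score += 3                    # likely repeat offender (KYC-style linkage)
    return score


def needs_enhanced_monitoring(p: ProfileActivity, threshold: int = 4) -> bool:
    """Flag profiles above the threshold for human review, not automatic removal."""
    return risk_score(p) >= threshold


# Example: a week-old profile, repeatedly reported, posting bank details.
suspect = ProfileActivity(account_age_days=7, scam_reports=4,
                          posts_with_payment_details=5,
                          previously_banned_device=False)
print(needs_enhanced_monitoring(suspect))  # True -> route to enhanced review
```

The point of the sketch is the design choice, not the specific rules: flagged profiles are routed to enhanced monitoring and human review rather than removed automatically, mirroring how financial institutions escalate unusual transactions before taking action.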
Clearly, any of these processes would have to be implemented proportionately, particularly to preserve freedom of expression and speech. However, considering the harm that social media appears to be, if not causing, then at least amplifying, investing the time, effort and money to combat these issues - and to get that proportionality right - is key to the ongoing success and safe use of social media platforms in today’s society.
Get in Touch
If you are interested in speaking to the FINTRAIL team about the topics discussed here or any other anti-financial crime topics in an increasingly digital FinTech world, please feel free to get in touch with one of our team or at contact@fintrail.co.uk