You may have seen Meta come up a lot in the news lately.
Firstly, the AI bots. In September 2023, Meta created a range of AI-generated profiles that would auto-post and interact with real users. Over the Dec/Jan break, some unflattering screenshots of these profiles went viral, so Meta is now removing them.
While this is not likely to affect your page directly, it highlights just how much social media engagement these days is auto-generated and inauthentic. It would not surprise us if the major social networks have other live projects that essentially amplify bots, to say nothing of the thousands of bots operated by third parties without the networks’ knowledge or consent. Your social moderation strategy should probably assume that some of the comments you’re getting are from bots, even if you operate in a “mild” or non-contentious industry.
There are much larger changes taking shape at Meta this year, with users debating updates to its hate speech policy and fact-checking program. Meta announced this week that it will abandon its fact-checking program, starting in the United States. The program was aimed at preventing the spread of online lies among the more than 3 billion people who use Meta’s social media platforms, including Facebook, Instagram and Threads. Instead of relying on professional fact checkers to moderate content, the tech giant will now adopt a “community notes” model, similar to the one used by X, which relies on other social media users to add context or caveats to a post. That model is currently under investigation by the European Union over its effectiveness. In 2023, in Australia alone, the fact-checking program led to Meta displaying warnings on over 9.2 million distinct pieces of content on Facebook (posts, images and videos) and over 510,000 posts on Instagram, including reshares.
On Tuesday, Meta also rolled back much of its hate speech policy, meaning users are now allowed to post and comment things far more disturbing than were previously allowed. The particulars of the update make for a disheartening read, but here is a good summary by Mashable.
For marketers, these changes bring two main concerns. First, if hate speech and misleading content increase too much on the platform, it may no longer be a popular choice with many of our audiences. Second, if we choose to continue marketing on Meta, we need to be mindful that there is a greater chance our branding and content will be shown alongside material that may harm our brand, or that our content, pages and communities on Meta will be engaged with in negative ways from which we were previously protected.
The start of a new year is always a good time to revisit protective measures like our social media policies, privacy policies and community guidelines. If these are readily available to our customers and others, we can refer to them when making decisions to remove content or users from our digital assets. It would also be worth revisiting your internal disaster management plans to make sure they cover incidents on social media, and confirming that organisational legal teams are up to date with changes on the platforms.
