Category Archives: AI

Facebook and Teen-Targeting Ads

[Disclaimer: you’ll probably see ads under and possibly incorporated into articles on this blog. I don’t choose them and I don’t approve them: that’s the price I pay for not being able to afford to pay for all my blogs…]

[An extract from the forthcoming book ‘Facebook: Sins & Insensitivities’]

The Tech Transparency Project claims to be ‘Holding Big Tech Accountable’ and tracks issues with Facebook, X, Google, Apple, Amazon et al.

On 30th January 2024 it published a report – Meta Approves Harmful Teen Ads with Images from its Own AI Tool – about test ads that used harmful images generated by Facebook’s own AI image generator, clearly targeted the 13–17 age group, and were nonetheless approved almost immediately. I’ve mentioned this report elsewhere in the book, with reference to claims by Frances Haugen.

According to the report:

Meta approved ads promoting drug parties, alcoholic drinks, and eating disorders that used images generated by the company’s own AI tool and allowed them to be targeted at children as young as 13 … showing how the platform is failing to identify policy-violating ads produced by its own technology.

TTP noted that it cancelled the ads before they were due to be published, so they didn’t actually appear on Facebook.

https://www.techtransparencyproject.org/articles/meta-approves-harmful-teen-ads-with-images-from-its-own-ai-tool

Facecrooks points out that this came at a particularly embarrassing time for Meta when Zuckerberg, among other social media oligarchs, was defending social media implementation of claimed policies before Congress, with particular reference to mental health and young people.

https://facecrooks.com/Internet-Safety-Privacy/facebook-approves-pro-anorexia-and-drug-ads-made-with-its-own-ai-tool.html

Crypto-Gram Ruminations

I’m not Bruce Schneier’s biggest fan. (Some would say that would be him…) He does, I think, suffer from the speech defect that most of us in the security community fall prey to from time to time – an inability to say “I’m not qualified to comment on that.” Well, that’s obviously not a condition unique to the security and journalistic communities. Still, he certainly knows much more than I ever did about many areas of security (not least cryptology, which has always been one of my weaker areas), and he is, in my not-always-humble opinion, particularly good on the social implications of technological issues. Which is probably why I’ve never got around to unsubscribing from his Crypto-Gram newsletter, even though I long ago stopped describing myself as any sort of security expert. (Long before I left the industry, I realized that the more I learned, the less capable I became of filling the gaps in my knowledge.) Anyway…

The latest issue of the newsletter to hit my mailbox addresses – and doesn’t claim to resolve – several issues that should concern us all.

Detecting AI-Generated Text highlights the fact that there is no reliable automated way to distinguish human-written text from AI-generated text. Though it occurs to me that those commentators who regard AI as the death knell of mankind might wonder whether The Algorithms would allow us awareness of such an automated ability if it did exist. As it happens, I’ve been doing a little informal – not to say flippant – research into that area myself, though in areas of creativity in which I’m more comfortable these days. Here’s an article that may yet be expanded into something larger and possibly more academic: AI, creativity and music. A brief snapshot

But back to Bruce Almighty…

Political Disinformation and AI addresses critical issues in a world that is, perhaps, politically even less stable than at any previous time in my lifetime (the official Cold War included). The assertion that “Disinformation campaigns in the AI era are likely to be much more sophisticated than they were in 2016” seems particularly apposite (not to mention frightening) in juxtaposition with the next item, Deepfake Election Interference in Slovakia, which suggests that deepfake audio recordings likely to influence voting patterns were a tryout for interference in future elections – particularly next year’s presidential election in the US. There’s much more about the implications of the Slovakian deepfakery in the Wired article Slovakia’s Election Deepfakes Show AI Is a Danger to Democracy, not least as regards the difficulties faced by fact-checkers for Meta (and therefore Facebook et al.) in detecting and countering such fakery.

After these chilling discussions, the summary of various viewpoints on AI Risks comes almost as light relief, but the subject is not one to be taken lightly. In fact, none of us can afford to ignore these issues, though most of us will – not least those of us most vulnerable to media and social media manipulation.

David Harley