Category Archives: Facebook

New book: ‘Facebook: Sins & Insensitivities’

[You’ll probably see ads under and possibly incorporated into articles on this blog. I don’t choose them and I don’t approve them: that’s the price I pay for not being able to afford to pay for all my blogs…]

I’m amused to see that Amazon has excised the word ‘Facebook’ from the ordering details of the latest book. I’m not sure whether that’s because of corporate mistrust of competitors, nervousness because it isn’t complimentary about Meta, or just that I’ve breached some unwritten rule of titling. But at least the title survives on the book cover.

Available as Kindle eBook and as paperback.

“Sadly, while it would be entertaining (for me, but maybe less for you) to write a more academic book tracing the historical aspects and trends in Facebookland, that will have to wait. Here, my primary aim is to provide an overview of Facebook-related issues that will be of more use to the everyday Facebook user than to academics and security mavens. However, the links to articles in the Appendix, covering issues such as the Cambridge Analytica shambles, may be useful to researchers wanting to go deeper into those issues that I haven’t covered in an in-depth article here. (Or even that I have covered, but not in depth!)”

Facebook fake videos


I have spent a not-very-happy time this morning, besieged by Facebook group posts passed off as porn videos and trying to get rid of them. In fact, it’s unlikely that they’re either porn or videos: they’re bot postings of malicious links that are probably intended to steal credentials. It’s not just fake porn that infests Facebook groups, by the way: there are all those fake ‘sad news’ links about celebrities alleged to be dead, ill or maimed, for instance, or scams based on fake ‘special offers’, or ‘bait and switch’ posts about lost/found dogs.

Obviously, this stealing of credentials exposes the legitimate account owner to losing control of their account, but that is usually just a stepping stone to other malicious activities that may range from scam distribution to ‘denial of service’ attacks, from ‘Londoning’ to distribution of political propaganda, from clickjacking to spurious advertising.

Facebook users: bots post all sorts of material to public groups. If it isn’t relevant to the community, it’s probably dangerous. Unfortunately, that doesn’t mean that material that is relevant is safe, but that’s a discussion for another time. I don’t, of course, advise you to follow links like those mentioned above – sadly, there will be other scam links that I haven’t seen or remembered… But do use the option to advise group admins: do it often enough and they may even be inspired to tighten up their group settings.

Facebook group admins: I can understand when people don’t want to make a group private, because that’s likely to hamper growth. However, you don’t have to let anyone (or anybot) post anything. Some of the facilities formerly only available in private groups have recently become available to public groups, too. In particular, turning on participant approval may add to your administrative workload, but it does make a big difference. (That’s what I do on groups I set up, but I don’t feel able to enforce it on groups where I’m a co-admin but which aren’t really ‘mine’.)

Don’t rely on Facebook to sort this out for you. Apart from the fact that the platform doesn’t always act in good faith, there are ways that scammers can avoid Meta’s checking: for instance, by showing Meta’s detection systems an innocuous page while normal FB users see something quite different. (Other malware uses similar techniques to avoid probing by security companies and law enforcement agencies.) If Facebook tells you that a clearly offensive or malicious post doesn’t offend community standards, the likelihood is that its detection has been subverted by this or a similar deception.
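For readers curious how this kind of ‘cloaking’ works in principle, here is a minimal, hypothetical sketch. The crawler user-agent substrings (‘facebookexternalhit’, ‘facebot’) are ones Facebook’s link crawler is known to identify itself with; the page contents and function name are invented purely for illustration of why a crawler-side review can pass while human visitors see something else entirely.

```python
# Hypothetical sketch of user-agent 'cloaking'. A malicious server checks
# the User-Agent header of each request: Meta's link crawler gets an
# innocuous page, while an ordinary visitor gets the dangerous one.

# Substrings Facebook's crawler uses to identify itself.
CRAWLER_SIGNATURES = ("facebookexternalhit", "facebot")

def serve_page(user_agent: str) -> str:
    """Return different HTML depending on who appears to be asking."""
    ua = user_agent.lower()
    if any(sig in ua for sig in CRAWLER_SIGNATURES):
        # What Meta's detection systems see: something harmless.
        return "<html><body>Harmless recipe collection</body></html>"
    # What a normal user clicking the link sees.
    return "<html><body>Fake login form harvesting credentials</body></html>"
```

The point, from a defender’s perspective, is that a ‘this post doesn’t breach community standards’ verdict may only mean that the reviewing system was shown the harmless version.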

[Addendum]

The day after originally posting this, I was encouraged to find that:

  1. If I report a fake pornographic video to Admin as being sexual exploitation (as indeed it is, since it exploits fake porn to capture credentials), it actually gets reported to Meta for review. It isn’t clear whether Meta’s review systems actually look at a post when it’s been deleted and the user (normally a bot/fake profile) removed. So now I’m ‘reporting to admin’ even on groups where I am an admin, before removing the offensive post.
  2. Facebook actually advised me that it was removing the post of a video that I’d previously reported from other posts. It seems that Meta’s Machine Learning is, in fact, sometimes capable of learning. Unfortunately, so are malicious algorithms, so this won’t necessarily last indefinitely, but after a weekend dominated by unattractive renditions of the human body – AI seems to have a curious idea of how perspective and human anatomy correspond – I’m happy for this tiny victory. And no, I’m not puritanical by nature, but this stuff is not only ugly but dangerous.

Facebook and Teen-Targeting Ads


[An extract from the forthcoming book ‘Facebook: Sins & Insensitivities’]

The Tech Transparency Project claims to be ‘Holding Big Tech Accountable’ and tracks issues with Facebook, X, Google, Apple, Amazon et al.

On 30th January 2024 it published a report – Meta Approves Harmful Teen Ads with Images from its Own AI Tool – about test ads using harmful images generated by Facebook’s own AI image generator that clearly targeted the 13-17 age group but were approved almost immediately. I’ve mentioned this report elsewhere in the book, with reference to claims by Frances Haugen.

According to the report:

Meta approved ads promoting drug parties, alcoholic drinks, and eating disorders that used images generated by the company’s own AI tool and allowed them to be targeted at children as young as 13 … showing how the platform is failing to identify policy-violating ads produced by its own technology.

TTP noted that it cancelled the ads before they were due to be published, so they didn’t actually appear on Facebook.

https://www.techtransparencyproject.org/articles/meta-approves-harmful-teen-ads-with-images-from-its-own-ai-tool

Facecrooks points out that this came at a particularly embarrassing time for Meta when Zuckerberg, among other social media oligarchs, was defending social media implementation of claimed policies before Congress, with particular reference to mental health and young people.

https://facecrooks.com/Internet-Safety-Privacy/facebook-approves-pro-anorexia-and-drug-ads-made-with-its-own-ai-tool.html

Still feeling a bit like a security researcher…


Thank you, Virus Bulletin, for linking on Twitter/X to my review of Frances Haugen’s book on exposing Facebook.

Rather nice to be described as if I were still a security researcher (well, I suppose I am a bit) and VB regular. (Sadly, I doubt if I’ll ever do another VB paper!)

Image showing VB's tweet

The (Face-)Book of Mammon [book review]

David Harley

I have, at best, an uneasy relationship with Facebook. To paraphrase something that I’m writing at the moment (more about that shortly):

I first subscribed to Facebook because I was working in IT security research and needed to find out more about it, so I signed up to see how it worked from a user’s point of view. However, friends and colleagues in the security industry – who may well have signed up for similar reasons – quickly found me there and invited me to befriend them, and why wouldn’t I? Then relatives and friends from outside the security industry also sent me invitations, and it would have been churlish to ignore them. Having been partially assimilated I found myself looking for people I knew, especially those I’d lost touch with and with whom I hoped to resume contact. Several years on, I followed various groups and pages aligned with my own interests and activities. So yes, I’m currently willing to accept the trade-off between the social advantages and Facebook’s unwelcome intrusions.

That doesn’t mean, of course, that I’ve resisted the urge to write about Facebook, its shortcomings, and those who take advantage of them: in fact, FB and other social media platforms have supplied me with much blogging material (and hypertension) over the years, to the point where I’ve recently felt obliged to upcycle some of that material into a book project. (If that sounds interesting, you can probably assume that if it’s ever completed, it will be announced on this blog at some point.) I’d already mentioned the whistleblower Frances Haugen in the first draft when I learned that she’d written about her experiences in a book originally called The Power of One: How I Found the Strength to Tell the Truth and Why I Blew the Whistle on Facebook (Little, Brown and Company: published in the UK in 2023 by Hodder and Stoughton as The Power of One – Blowing the Whistle on Facebook). So, naturally, I had to read it.

The first thing to say is that this book has no direct connection that I can see with the 1989 novel The Power of One by Bryce Courtenay, or the slightly later film adaptation. Frances Haugen is best known (and to many of us only known) for having disclosed the contents of 22,000 pages of internal Facebook documents to the Wall Street Journal:

https://www.wsj.com/articles/the-facebook-files-11631713039

Subsequently, she revealed her own identity in September 2021, ahead of an interview on 60 Minutes.

https://www.nbcnews.com/tech/social-media/facebook-whistleblower-reveals-identity-accuses-platform-betrayal-democracy-n1280668

Additionally, she has testified before or otherwise engaged with a number of bodies in the US, Europe and the UK. These included a sub-committee of the US Senate Commerce Committee, the Securities and Exchange Commission, the UK Parliament, and the European Parliament. I’m not always the biggest fan of Wikipedia as a source of accurate information, but there seem to be quite a few useful supporting links here:

https://en.wikipedia.org/wiki/Frances_Haugen

The next thing to say is that this is absolutely not a technical guide to defending your privacy and security from Facebook/Meta, its sponsors, or its abusers, though if you happen to believe that Facebook is an example of all being for the best in the best of all possible Metaverses, the doubts that reading this book might raise may well lead to your wanting to find ways to improve your safety on Facebook and in social media in general. Without commenting on the accuracy of individual claims, I think that’s a Good Thing. But if you aren’t already gifted with a reasonable amount of healthy scepticism, I suppose you probably won’t be reading the book, let alone my less-than-famous blog. As for accuracy: much of what Haugen says and what others have said about her makes a lot of sense to me, as a long-time Facebook watcher and commentator, but I haven’t ploughed through the Facebook Files myself and am not likely to. If I did, I wouldn’t have the resources to verify everything.

The third point to make is that while Haugen makes good points about the need for increased responsibility, transparency, and accountability in social media, this is not an exhaustive guide to ‘fixing’ Meta, let alone other platforms. Judging from her frequent interaction with governmental bodies, she is content to provide information from which they can draw conclusions to drive their future policies and legislation, not push a policy agenda of her own. As she herself writes:

‘Any plan to move forward that’s premised on me personally proposing the solution is a plan that’s doomed to fall short. The “problem” with social media is not a specific feature or a set of design choices. The larger problem is that Facebook is allowed to operate in the dark.’

Elsewhere, she writes about the European Union’s Digital Services Act that:

‘I like to think of laws like the DSA as nutrition labels. In the United States the government does not tell you what you can put in your mouth at dinnertime, but it does require that food producers provide you with accurate information about what you’re eating.’

In fact, the book is by no means focused entirely on the exposure of Facebook. While it begins with Haugen’s presence at President Biden’s first State of the Union address, earning an individual citation as ‘the Facebook whistleblower’, a very large proportion of the subsequent chapters trace the steps that led her to Facebook and beyond from ‘When I Was Young in Iowa’, through Junior High, the Franklin W. Olin College of Engineering and MIT, Google, Harvard Business School, Pinterest, and so on. We hear about her issues with coeliac disease, divorce, victimization by sexist fellow-students, and other negative issues. We don’t, perhaps, need to know about these issues in order to assess the importance of her assertions and allegations, but they’re clearly important to her, and to our understanding of what drives her. (And perhaps even in response to pushback from Facebook?)

What are those assertions and allegations? Well, in general terms, she evidently sees herself as having been ‘a voice from inside Facebook who could authoritatively connect the company’s pernicious algorithms and lies to its corporate culture … [without which] Facebook’s gaslighting and lies might still prevail.’

We’ve been told in recent years that she filed a large number of complaints against Facebook with the Securities and Exchange Commission (at least eight) ‘alleging that the company is hiding research about its shortcomings from investors and the public’, but I was unable to find a direct reference to those complaints in the book.

https://edition.cnn.com/2021/10/03/tech/facebook-whistleblower-60-minutes/index.html

In her statement to the Senate Subcommittee on Consumer Protection, Product Safety, and Data Security, however, she claimed that Facebook’s products “harm children, stoke division and weaken our democracy” and prioritize profit rather than moral responsibility.

https://edition.cnn.com/business/live-news/facebook-senate-hearing-10-05-21/index.html

In the book she touches on a great many issues of concern, including:

  • The rise and fall of the Civic Integrity team ‘spun up’ in the wake of the 2016 US election, with its subsequent defanging and dispersal.
  • The Macedonian misinformation model (1. Build a ‘news’ site 2. Add political articles 3. Post links back from a Facebook page 4. ‘Watch the [Google] AdSense dollars roll in.’)
  • Reluctance to reactivate ‘Break The Glass’ measures after the 2020 election, such as requiring a group with a score of hate speech strikes above a certain limit to apply moderation. Haugen clearly links the January 6th actions and ‘Stop The Steal’ to the absence of such ‘friction-adding’ measures.
  • Recognition of and inadequate handling of ‘Adversarial Harmful Movements’.
  • Refusal to share even basic data relating to inconvenient research.
  • Cambridge Analytica data capture as facilitated by Facebook. Cambridge Analytica doesn’t get a lot of wordage in the book, but Haugen does remind us that Facebook was fined $5 billion in 2019 for misleading the public on how much data could be accessed by developer APIs.
  • The effective caps on the number of fact-checking articles commissioned from Facebook’s partners and, crucially, paid for. (Later addressed by the BBC here: https://www.bbc.co.uk/news/technology-47779782).
  • The trade-off between ‘short-term concrete costs’ and the long-term hypothetical risks of an expensive fiasco like the Cambridge Analytica disaster.

These are issues that deserve and need wider exposure and discussion, and that’s why Haugen’s book is important, even though it’s not always well-written: after all, we don’t all have access to the detailed information given to governmental bodies.

Here’s a specific issue about the quality of the writing that caused me to grind my teeth quite a lot. There’s an inconsistency here in the way jargon is addressed. Early in the book, Haugen makes the occasional attempt to clarify coding/algorithmic concepts, even such basics as importing a library. (Though I have a certain amount of empathy with the story of how she was told she needed more instruction on modern software engineering: I went through a similar episode many years ago, when I was told by my manager that my (actually functional, but not necessarily elegant) C code was impenetrable…)

Unfortunately, however, she happily includes many examples of unexplained MBAspeak. Having spent some of the last few years of my working life providing consultancy services to North American companies, I’m not unfamiliar with some of the staples of business communications, and am fully prepared to reach out and circle the wagons in pursuit of an appropriate blue-skying box to think outside. (If Dilbert hadn’t already been invented, he would have had to exist.) Still, I’m (not very) grateful to have been introduced to some new ones (that is, I had to resort to a search engine to find out what they meant in the context in which they were used).

  • ‘Hockey sticking’ describes a fairly flat line on a graph that suddenly shows a dramatic upward turn like a hockey stick handle.
  • ‘Single-player mode’ is when someone on social media posts more than they read.
  • ‘Red ocean’ is a Harvard Business School concept – it describes similarly-qualified ‘sharks’ competing in a blood-filled ocean.

But my favourite is when she complains of having been put in ‘an awkward onboarding position.’ I have a feeling I’ll be borrowing that one.

To be fair, the photo below illustrates a semi-meaningless cliché that I didn’t see in Haugen’s book, but I’m sure you know the one, and might enjoy this take on it.


Perhaps it’s unfair to make such a big deal out of this authoring blemish, but it does make me wonder for whom exactly she’s writing. Not, perhaps, a wide audience, so much as other corporate techies, executives, politicians and other policy makers, influencers and, most of all, potential whistleblowers – at any rate, people who might be concerned enough about the age of corporate-driven AI and the amoral algorithm to do their best to apply the brakes. And if she reaches such people, she deserves applause as much for that as for what she has told us about one specific and particularly problematic social media platform.

https://edition.cnn.com/2021/10/06/tech/facebook-frances-haugen-testimony/index.html

Group Therapy – security and privacy in Facebook groups


Having found myself roped into assisting as co-administrator of a couple of Facebook groups with security/privacy issues, I thought I should, perhaps, share what little I know about defending your group against scam and spam posts and comments by tightening up group settings.

Caveat: I’ve never really wanted to spend a lot of time administering Facebook groups – in fact I’ve only created one myself that is still active, and I’ll tell you why later – and I haven’t made a lifetime study of the subject. Not even Facebook’s lifetime, let alone my own, which at present is many times longer than Facebook’s. It’s possible, therefore, that I’m not always accurate in my assumptions, and also that an assumption that was accurate when I wrote this was rendered false by changes made by Facebook the day after. But I’ll be as painstakingly accurate as I can. As usual.

Facebook tends to assume that your main ambition and purpose in life is to grow your group at all costs, and preferably devote several hours a day to that task. In fact, there are two main types of groups: private and public.

https://www.facebook.com/help/220336891328465/

Private Groups

A private group is one where only members of the group can see posted content and who the admins are. Furthermore, a private group can be hidden (secret) so that (hopefully) no one can see the group unless they’re already members, or are invited to join. This gives the administrator(s) something close to absolute control over who posts and what is posted, and is particularly appropriate for groups where sensitive information is exchanged. The more tightly controlled the group is, the harder it is for fake profiles to join.

That said, it’s a good idea to remember that Facebook sees everything (or can if it wants to), and is not always scrupulous when it comes to maintaining your privacy: even if/when that’s the company’s intention, it can make mistakes, and its policies and algorithms are generally opaque.

https://www.facebook.com/help/220336891328465/

The trade-off with a private group is that if you’re intending to grow your group, it’s harder for someone who might be interested and an appropriate potential member to happen across it and apply to join.

If you’re attracted by the privacy advantages of a private group and are considering making your public group private, bear in mind that once you’ve gone that route, you can’t revert it to a public group, because that constitutes a breach of the group members’ privacy.

https://www.facebook.com/help/286027304749263?helpref=faq_content

Formerly, this restriction only applied to groups with over 5,000 members, but now applies wholesale.

I don’t administer any private groups, so I shan’t risk any hostages to fortune by considering their privacy settings in detail. It’s worth noting, though, that while even Facebook’s own help pages sometimes contradict each other, it does seem as though there are other restrictions on large (5,000+) groups, such as how often and how many privacy changes can be made.

If this page – https://www.facebook.com/help/214260548594688/ – is still accurate, the settings you can change include enforcing membership approval by an admin or moderator for each subscription request. You can also require the requester to answer one or more questions and base your decision on whether or how the question(s) is or are answered.

Public Groups

Fortunately, since I was first pressganged into helping administer a group, some of the privacy settings formerly unique (as far as I know) to private groups are now available to public groups. While the enforced changes caused some confusion and consternation at first, they seem to me to be an improvement, on the whole. (Gosh, am I saying something positive about Facebook???) Since public groups are, by definition, easier to find, join and share than closed or secret groups, even the most open-by-intent group needs to think about its privacy settings if it’s to avoid some of the unpleasant spam/scam material that may be posted to a group if settings allow. Such material includes, but is certainly not limited to, the following, more often than not posted from fake or cloned profiles:

  • Sympathy scams like the posts described here: https://chainmailcheck.wordpress.com/2023/05/13/abusing-communities/
  • Pornographic images, often masquerading as videos, that may attract group members to unhealthy links. These may be intended to trick you into giving away sensitive information, but they may also be intended to upload malware to your device.
  • Fake news about dead or disabled celebrities, again leading to dangerous links.
  • Posts about alleged offers by retailers such as supermarkets giving away coupons or even cash.
  • Recommendations for product links that are at best irrelevant, possibly malicious.

And much more, but I’m not making a special effort to track all these: the above examples are just items that have crossed my radar recently.

When I actually created a group – at any rate, one that is still active – it was in order to replace a page that was becoming increasingly frustrating to administer due to changes introduced by Meta that were overcomplicated, bug-ridden, and based on the assumption that I was running it as a commercial enterprise and constantly needed reminding to take actions that would increase my visibility and non-existent profits (usually by paying Meta for a service I didn’t want). Fortunately, I discovered that I could maintain some visibility (in fact, a public group is required to be visible, not secret) and still get most of the control I wanted. Sorry, but if you want more information on maintaining the security and privacy of Facebook pages, you’ll have to look elsewhere. (Life’s too short: well, mine is probably going to be, and there are other things I want to write about.)

Here’s a selection of the most relevant settings.

  • Participant Approval – if this is off, anyone on Facebook can post or comment, and group members can join chats. (One of the issues I’ve seen kill a group recently was fake profiles posting porn/scam links to chats linked to the group.) If it’s on, however, members and visitors must be approved to post or comment, and only (approved) members can participate in chats.
  • You can also allow both profiles and pages to contribute, or else just profiles. Since some scams are driven by pages masquerading as profiles (only an admin can post to a page, so it’s difficult to flag a scam actually posted on the page), there’s something to be said for not allowing pages. But profiles can, of course, be fake.
  • You can ask up to three questions and invite anyone requesting approval as a member or visitor to answer them: if they don’t answer or answer inappropriately, you can decline to approve them, if Participant Approval is on.
  • You can choose whether or not to allow anonymous posts and edits. My guess is that this will be more desirable in some groups than others: sometimes it’s fair to be reluctant to be identified, but sometimes that privilege can be abused.
  • You can require an administrator or moderator to approve all posts. Clearly, this could be a lot of work in a popular group, but allows control of obviously malicious posts.
  • You can set it so that potential spam posts and comments are held for your approval as an admin.
  • You can set it so that edits to posts must be approved: this helps to address cases where an approved post is edited maliciously by changing a link from something innocuous to something harmful.
  • You can set it so only admins and moderators create chats, or you can set it so that approved members can also create chats.
  • You can allow or disallow the posting of events, event tagging, polls and GIFs.

NB: the more relaxed your settings, the more you’ll need to set your notifications so that you get to see everything incoming and remove as necessary. Irritating if you happen to have a life outside Facebook, but there it is.

Note also that you can notify Facebook in many cases for them to run a review: however, if their algorithms are not up-to-scratch (impossible, do I hear you say?) you may find that the thing pops up again and you get a message telling you that the post or comment didn’t contravene their community standards. Sigh…

David Harley
Reluctant FB Group Administrator

A Facebook Tagging Scam

I’ve been seeing an unpleasant scam attack being spread via a friend’s account today: I think from a cloned rather than hacked account, but I can’t be sure.
  1. If you see a link to an article that apparently describes how Elon Musk is going to make UK residents rich… well, I doubt if anyone really believes in Musk’s charitable impulses, but I’d suggest you resist the temptation to click on it to see where it goes.
  2. If you realize your account is being misused and use the words ‘hack’ or ‘hacked’ in a post, the chances are that you’ll get a flood of bots advising you to contact a ‘helpful’ person who’ll get your account back for you. Obviously, don’t take any notice. (Yes, I know I’m risking such a flood attached to this post, but if I do, that will tell me something useful about my current settings.)
  3. The attack is slightly different to what I’ve seen before: the attacker is tagging everyone in my friend’s Friends List and telling them to check a scam link that sooner or later appears in the comments. I’ll be looking into this further. No, not the scam link, the technique.
  4. If you set your Friends List so that only you can see it, it makes this sort of attack significantly less effective.
Picture of Facebook tagging scam notification

Facebook tagging scam image.

Tag scam second image

Tag scam link

Scam Interceptors (again)


I’ve just noticed that the BBC is screening another series of this programme. On the whole, I think it does more good than harm, but I’m a little concerned that Rav Wilding, when talking to a victim or potential victim, tends to ring off and promise to ring back a little later. Granted, the programme does encourage people to use a known-safe contact number for banks etc., such as the helpline number on the back of a bank card. And it might not be safe to expect that some of the more confused victims they contact will call the programme back. But there doesn’t seem to be a published direct number for the victim to call the programme back on.

I can see why publishing a number for the programme on its web page might be inviting trouble, but promising a call back is one of the things that scammers do: after all, they may not want to risk a victim not calling back either, though for less laudable reasons. It seems to me that if you’re going to set yourself up as a go-to source of help, you have to accept that you might get some hassle from the ungodly by phone, email etc. In the security industry, it goes with the territory for reputable vendors and service providers, and perhaps the programme owes that risk to those it intends to help.

On the bright side, it was good to hear a victim wanting proof of Wilding’s identity and the programme’s bona fides, even if that was due to the prompting by their bank.

Facebook – abysmal algorithms and customer disservice


Facecrooks nails Facebook/Meta on (at least) two of its less attractive attributes.

Firstly, its reliance on artificial intelligence, in this case using a faulty algorithm to correct a faulty algorithm. Presumably because AI works out cheaper than human eyes for fact-checking.

Secondly, its lack of commitment to customer service. Its refusal to consider issues where it’s at fault after an arbitrary period of time is not news to me: I was previously alerted to it by a friend who cannot regain her account from the hands of a scammer because she didn’t report it quickly enough. (In both these cases, the victim simply hadn’t been aware of the problem in time to make the arbitrary cutoff date.)

I can see that there’s a difficulty in that Facebook apparently doesn’t keep data after 180 days, so the cutoff date reflects the fact that there is ‘no evidence’ on which to re-examine the case. But this doesn’t excuse inaction on FB’s part because ‘nothing can be done’. In the case of an account takeover, surely the ongoing use of the hijacked account to send scam messages is sufficiently clear to justify remedial action. In the case of the algorithmic confusion – the victim teaches the programming language Python and the related programming library Pandas, so the fact-checking algorithm assumed him to be trading exotic fauna – the original page data may be lost, but surely the lifetime ban on his using Meta for advertising could have been corrected?

Reuven M. Lerner’s article, as cited by Facecrooks, is here: I’m banned for life from advertising on Meta. Because I teach Python.

David Harley

(Anti-)Social Media updates 24th August 2018

Updates to Anti-Social Media 

Richi Jennings for TechBeacon’s Security Blogwatch: It’s election hacking season: Are you a target? A selection of commentary from a variety of sources. “Allegedly, Russia and Iran have been phishing, hacking, and building fake profiles on Facebook, Twitter, and YouTube…With the midterms just a few months away, the froth is building.”


Graham Cluley for BitDefender: Facebook pulls its VPN from the iOS App Store after data-harvesting accusations – “Facebook has withdrawn its Onavo Protect VPN app from the iOS App Store after Apple determined that it was breaking data-collection policies.”

John Leyden for The Register: Facebook pulls ‘snoopy’ Onavo VPN from Apple’s App Store after falling foul of rules


Rebecca Hill for The Register: Chap asks Facebook for data on his web activity, Facebook says no, now watchdog’s on the case – “Info collected on folk outside the social network ‘not readily accessible’ … Facebook’s refusal … is to be probed by the Irish Data Protection Commissioner … Under the General Data Protection Regulation … people can demand that organisations hand over the data they hold on them.”


Lisa Vaas for Sophos: Facebook’s rating you on how trustworthy you are – a good analysis of the difficulties Facebook and other social media face in addressing the problem of fake news.

David Harley