Here’s an interesting article by Tristan Greene for The Next Web: Academic expert says Google and Facebook’s AI researchers aren’t doing science. The expert in question is Simon DeDeo, and he’s an astrophysicist rather than a practitioner in AI. But he’s speaking as a scientist and an academic when he points out – rightly, in my opinion – that “Machine learning is an amazing accomplishment of engineering. But it’s not science. Not even close. It’s just 1990, scaled up. It has given us *literally* no more insight than we had twenty years ago.”
He also remarks that “They said they did social science, but it was nothing of the sort. It was homo economicus spread out over 50 GPUs.” Which reminds me very much of Facebook’s dabbling in psychological manipulation and emotional contagion. Well, I’ve been fairly scathing from time to time about Facebook’s reliance on algorithms that presumably work well enough for its paying customers but may be irritating or even painful to its product: those of us who trade its intrusiveness and willingness to share our data for its social advantages. And I’m not even going to mention Cambridge Analytica.
I will quote one more of DeDeo’s tweets, though: “The real subjectivity is in ML, which spends all its time developing new techniques to optimize a subjectively-chosen goal function on a subjectively-chosen test set.” I could draw a parallel there with the way in which some so-called next-gen security companies still cite their use of machine learning as if it were their very own magic fairy dust that detects all malware (yeah, right…) while propagating a series of myths about how mainstream products work. (Relying on signatures? Which century are you living in, Help Net? You know better than that, and so does Cylance…)
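DeDeo’s point about subjectivity is easy to demonstrate concretely. Here’s a minimal sketch with entirely hypothetical numbers (the detectors, counts, and metrics are my own illustration, not anyone’s real product data): which of two detectors “wins” depends purely on which goal function you subjectively decide to optimize.

```python
def precision(tp, fp):
    """Fraction of flagged samples that were actually malicious."""
    return tp / (tp + fp)

def recall(tp, fn):
    """Fraction of malicious samples that were flagged."""
    return tp / (tp + fn)

# Hypothetical confusion counts for two detectors on the same test set.
a_tp, a_fp, a_fn = 80, 2, 20    # cautious: few false alarms, more misses
b_tp, b_fp, b_fn = 98, 30, 2    # aggressive: few misses, more false alarms

print(f"A: precision={precision(a_tp, a_fp):.3f}, recall={recall(a_tp, a_fn):.3f}")
print(f"B: precision={precision(b_tp, b_fp):.3f}, recall={recall(b_tp, b_fn):.3f}")
# Judged on precision, A wins; judged on recall, B wins. The choice of
# metric (and of the test set that produced these counts) is the
# subjective part of the exercise.
```

Swap the test set for one with a different mix of samples and the numbers shift again, which is exactly why marketing claims built on a vendor’s own chosen benchmark deserve skepticism.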
In fact, as I may have mentioned before, machine learning is used by mainstream companies to sift through the ludicrously high volumes of potentially malicious samples we see on a daily basis and prioritize them for other analytical techniques. But we – and the black hats behind malware – are all too aware of the risks of relying purely on machine learning to distinguish between Good and Evil samples. I don’t think I’ll go further into that yet again at this point, though.
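To make that triage role concrete, here’s a minimal sketch (the sample names, scores, and threshold are all hypothetical, and real pipelines are vastly more elaborate): the model’s score decides which samples get deeper analysis first, but it issues no final verdict on its own.

```python
# Hypothetical (name, suspicion score) pairs from an ML classifier;
# higher scores mean "look at this sooner".
samples = [
    ("a.exe", 0.91),
    ("b.dll", 0.12),
    ("c.exe", 0.55),
    ("d.js",  0.78),
]

# The cut-off is itself a subjective choice, assumed here for illustration.
DEEP_ANALYSIS_THRESHOLD = 0.5

# Queue the most suspicious samples first for sandboxing or human analysis;
# nothing below the threshold is declared clean by the model alone.
queue = sorted(
    (s for s in samples if s[1] >= DEEP_ANALYSIS_THRESHOLD),
    key=lambda s: s[1],
    reverse=True,
)
print([name for name, _ in queue])   # ['a.exe', 'd.js', 'c.exe']
```

The point of the design is that the classifier only orders the workload; the Good/Evil decision still rests with the slower, more reliable techniques further down the line.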