[work in progress – I’m updating it gradually]
Machine Learning
Google Apologizes After Photos App Autotags Black People as ‘Gorillas’ – a very upsetting and embarrassing misclassification. Flickr’s system did the same thing but in a less visible way.
How Vector Space Mathematics Reveals the Hidden Sexism in Language – very interesting work analysing Word2vec, and in particular the mechanisms the authors propose for fixing the problem
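A toy sketch of the vector arithmetic involved – the vectors below are invented 3-d stand-ins (real embeddings have hundreds of dimensions), and the projection step is my reading of the "remove the gender direction" idea, not the paper's exact procedure:

```python
import numpy as np

def normalize(v):
    return v / np.linalg.norm(v)

# Invented toy vectors, chosen so the analogy works out.
vocab = {
    "man":        normalize(np.array([0.9, 0.1, 0.3])),
    "woman":      normalize(np.array([0.1, 0.9, 0.3])),
    "king":       normalize(np.array([0.9, 0.1, 0.8])),
    "queen":      normalize(np.array([0.1, 0.9, 0.8])),
    "programmer": normalize(np.array([0.8, 0.3, 0.9])),  # carries a "male" skew
}

def analogy(a, b, c):
    """Solve 'a is to b as c is to ?' by vector arithmetic (b - a + c)."""
    target = normalize(vocab[b] - vocab[a] + vocab[c])
    # nearest remaining vocabulary word by cosine similarity
    scores = {w: float(v @ target) for w, v in vocab.items() if w not in (a, b, c)}
    return max(scores, key=scores.get)

print(analogy("man", "king", "woman"))  # -> "queen" with these toy vectors

# One of the fixes analysed: project the he-she ("gender") direction out of
# words that should be gender-neutral.
gender = normalize(vocab["man"] - vocab["woman"])
vocab["programmer"] = normalize(
    vocab["programmer"] - (vocab["programmer"] @ gender) * gender
)
```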
There is a blind spot in AI research – Kate Crawford and Ryan Calo, Nature, October 2016 – a call for a practical and broadly applicable “social-systems analysis” that “thinks through all the possible effects of AI systems on all parties”
a ProPublica investigation in May 2016 found that the proprietary algorithms widely used by judges to help determine the risk of reoffending are almost twice as likely to mistakenly flag black defendants as white defendants
…
As a first step, researchers — across a range of disciplines, government departments and industry — need to start investigating how differences in communities’ access to information, wealth and basic services shape the data that AI systems train on.
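To make the “almost twice as likely to mistakenly flag” claim concrete, this is the arithmetic involved – a false positive rate computed per group. The counts below are invented for illustration, chosen only to echo the rough two-to-one gap ProPublica reported:

```python
# False positive rate: share of defendants who did NOT reoffend but were
# still flagged as high risk. All counts here are invented for illustration.
def false_positive_rate(mistakenly_flagged, total_non_reoffenders):
    return mistakenly_flagged / total_non_reoffenders

groups = {
    # group: (mistakenly flagged, all defendants who did not reoffend)
    "black defendants": (450, 1000),
    "white defendants": (230, 1000),
}

for group, (flagged, negatives) in groups.items():
    print(f"{group}: FPR = {false_positive_rate(flagged, negatives):.0%}")
# A two-to-one disparity shows up as roughly 45% vs 23% here.
```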
Maciej Cegłowski – SASE Panel – Maciej on why it is problematic that we cannot understand the mechanisms by which ML systems arrive at their results, or as he puts it:
“Instead of relying on algorithms, which we can be accused of manipulating for our benefit, we have turned to machine learning, an ingenious way of disclaiming responsibility for anything. Machine learning is like money laundering for bias.”
All it takes to steal your face is a special pair of glasses – report on a paper experimentally tricking a commercial face recognition system into misidentifying people as specific individuals. This relies on a property of some DNNs whereby small, hardly perceptible perturbations to an image can produce misclassifications – as described in the next paper (a code sketch follows the quote):
Second, we find that deep neural networks learn input-output mappings that are fairly discontinuous to a significant extent. We can cause the network to misclassify an image by applying a certain hardly perceptible perturbation, which is found by maximizing the network’s prediction error. In addition, the specific nature of these perturbations is not a random artifact of learning: the same perturbation can cause a different network, that was trained on a different subset of the dataset, to misclassify the same input.
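A minimal sketch of the idea, using the fast gradient sign method – a later, cheaper way of finding such perturbations than the optimisation used in the paper itself. It assumes PyTorch, a batched image/label pair, and any differentiable classifier `model`:

```python
import torch

def adversarial_example(model, image, label, epsilon=0.01):
    """Nudge `image` a tiny step in the direction that raises the loss."""
    image = image.clone().detach().requires_grad_(True)
    loss = torch.nn.functional.cross_entropy(model(image), label)
    loss.backward()
    # one small step along the sign of the loss gradient w.r.t. the pixels;
    # epsilon controls how "hardly perceptible" the change is
    perturbed = image + epsilon * image.grad.sign()
    return perturbed.clamp(0, 1).detach()
```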
Filter bubbles
How the Internet Is Loosening Our Grip on the Truth –
In a recent Pew Research Center survey, 81 percent of respondents said that partisans not only differed about policies, but also about “basic facts.”
[…]
Psychologists and other social scientists have repeatedly shown that when confronted with diverse information choices, people rarely act like rational, civic-minded automatons. Instead, we are roiled by preconceptions and biases, and we usually do what feels easiest — we gorge on information that confirms our ideas, and we shun what does not.
The spreading of misinformation online. Del Vicario, Michela and Bessi, Alessandro and Zollo, Fabiana and Petroni, Fabio and Scala, Antonio and Caldarelli, Guido and Stanley, H. Eugene and Quattrociocchi, Walter. Proceedings of the National Academy of Sciences, 113 (3), pp. 554–559. ISSN 1091-6490 (2016)
Many mechanisms cause false information to gain acceptance, which in turn generate false beliefs that, once adopted by an individual, are highly resistant to correction.
[…]
Our findings show that users mostly tend to select and share content related to a specific narrative and to ignore the rest. In particular, we show that social homogeneity is the primary driver of content diffusion, and one frequent result is the formation of homogeneous, polarized clusters.
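The finding lends itself to a toy simulation – this is not the paper's model, and the graph, opinions and threshold below are all invented – but it shows the mechanism: if users only reshare to like-minded neighbours, cascades stay inside homogeneous clusters.

```python
import random

random.seed(0)
N = 200
opinions = [random.random() for _ in range(N)]            # stance in [0, 1]
neighbours = [random.sample(range(N), 5) for _ in range(N)]
THRESHOLD = 0.2                                           # homophily cutoff

def spread(story_stance, seed_node):
    """Share outward, but only to neighbours close to the story's stance."""
    reached, frontier = {seed_node}, [seed_node]
    while frontier:
        node = frontier.pop()
        for nb in neighbours[node]:
            if nb not in reached and abs(opinions[nb] - story_stance) < THRESHOLD:
                reached.add(nb)
                frontier.append(nb)
    return reached

cascade = spread(story_stance=0.9, seed_node=0)
print(f"reached {len(cascade)}/{N} users, mean stance of those reached: "
      f"{sum(opinions[i] for i in cascade) / len(cascade):.2f}")
```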
The End of the Echo Chamber – Farhad Manjoo, Feb 2012. Summary of Facebook’s large-scale 2010 experiments in which EdgeRank (the algorithm that decides what appears in the Facebook news feed) selectively withheld links from users’ feeds; a toy EdgeRank sketch follows the quotes below.
If an algorithm like EdgeRank favors information that you’d have seen anyway, it would make Facebook an echo chamber of your own beliefs. But if EdgeRank pushes novel information through the network, Facebook becomes a beneficial source of news rather than just a reflection of your own small world.
[…]
… it doesn’t address whether those stories differ ideologically from our own general worldview. If you’re a liberal but you don’t have time to follow political news very closely, then your weak ties may just be showing you lefty blog links that you agree with—even though, under Bakshy’s study, those links would have qualified as novel information
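For reference, EdgeRank was publicly described at the time as summing, over the interactions ("edges") attached to a story, affinity × edge-type weight × time decay. A toy version of that formula – the weights and decay curve here are invented:

```python
import time

EDGE_WEIGHTS = {"like": 1.0, "comment": 4.0, "share": 6.0}  # invented values

def edgerank_score(edges, now=None):
    """edges: list of (affinity, edge_type, created_unix_seconds) tuples."""
    now = now if now is not None else time.time()
    score = 0.0
    for affinity, edge_type, created in edges:
        age_hours = max((now - created) / 3600.0, 0.0)
        decay = 1.0 / (1.0 + age_hours)  # older interactions count for less
        score += affinity * EDGE_WEIGHTS[edge_type] * decay
    return score
```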
What’s wrong with Big Data – some interesting examples from pharmacology and chess, but the overall argument is a bit unclear.
Applications
Artificial Intelligence Is Helping The Blind To Recognize Objects
UK Hospitals Are Feeding 1.6 Million Patients’ Health Records to Google’s AI
Speak, Memory – When her best friend died, she rebuilt him using artificial intelligence – a chatbot version of a real person based on chatlogs. You could probably do that with me…
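The crudest way to do that is retrieval: answer a new message with the person's logged reply to the most similar past message. A sketch using scikit-learn TF-IDF – the system in the article used neural sequence models, not this, and the log lines below are invented:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# (message someone sent them, how they actually replied) -- invented examples
log_pairs = [
    ("how are you", "surviving, as usual"),
    ("what are you up to this weekend", "probably hiding from the weather"),
    ("did you see that article", "link it, I'm sceptical already"),
]

prompts = [p for p, _ in log_pairs]
vectorizer = TfidfVectorizer().fit(prompts)
prompt_matrix = vectorizer.transform(prompts)

def reply(message):
    """Return the logged reply whose original prompt best matches `message`."""
    sims = cosine_similarity(vectorizer.transform([message]), prompt_matrix)
    return log_pairs[int(sims.argmax())][1]

print(reply("how are you doing"))  # -> "surviving, as usual"
```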
Hi Libby – a few more for you: https://www.diigo.com/user/bobducharme?query=machinelearning
thanks Bob…this is a WIP, will add more as I find them. Much appreciated.