jack: (Default)
[personal profile] jack
They're clearly horrible but can someone give the cliff notes version of which things Cambridge Analytica actually did?

Date: 2018-03-21 04:24 pm (UTC)
seekingferret: Two warning signs one above the other. 1) Falling Rocks. 2) Falling Rocs. (Default)
From: [personal profile] seekingferret
As I have heard it, might be half-wrong:

Cambridge Analytica created a Facebook app which, if people signed up, administered some cutesy, memey personality quiz. When people signed up, they also gave the app permission to access their profiles and, I guess, the profiles of their friends (or maybe just whatever profile information their friends had granted them?). Cambridge Analytica gathered this data and used it to target ads, including political ads. This may or may not have violated the Facebook TOS, and may or may not have violated various jurisdictions' rules about data privacy, which may or may not have the force of law, and which Facebook may or may not accept as binding on them. In any case, some users feel that regardless of whether what Cambridge Analytica did was legal, they should have been informed about how their personal data was being used with more clarity than they were. And since Cambridge Analytica worked for the Trump campaign, all of this is somehow connected to the investigation into ethical and criminal violations that occurred during that campaign.

Date: 2018-03-21 06:01 pm (UTC)
hilarita: stoat hiding under a log (Default)
From: [personal profile] hilarita
Cambridge Analytica did not create the app. A researcher created an app which sucked up a shitton of data, because of dubious FB privacy settings. This researcher then shared the data with Cambridge Analytica (possibly he sold them the data). The researcher was breaking FB TOS when he did so; FB later asked CA to delete the data they had obtained. Any UK data that was transferred was not lawfully acquired by CA: while the initial capture of FB data complied with FB's TOS, that data could not lawfully be sold on to a third party under Data Protection law.

Date: 2018-03-21 07:06 pm (UTC)
seekingferret: Two warning signs one above the other. 1) Falling Rocks. 2) Falling Rocs. (Default)
From: [personal profile] seekingferret
Thanks for the clarifications.

Date: 2018-03-21 08:22 pm (UTC)
wild_irises: (not cynical enough)
From: [personal profile] wild_irises
Excellent short summary videos here and here.

I would say it was somewhat worse than described above, because of the fine microtargeting of app respondents with the direct intent of moving their U.S. election votes.

Date: 2018-03-23 04:07 pm (UTC)
seekingferret: Two warning signs one above the other. 1) Falling Rocks. 2) Falling Rocs. (Default)
From: [personal profile] seekingferret
Is fine microtargeting of Facebook users with the intent of selling them a political candidate fundamentally different from fine microtargeting of Facebook users with the intent of selling them shoes?

Date: 2018-03-23 05:48 pm (UTC)
wild_irises: (not cynical enough)
From: [personal profile] wild_irises
I think it is, especially if the fine microtargeting involves defaming the other candidate and spreading lies. The lies spread to sell shoes are, in my opinion, of a different character from the lies spread with the direct intent of selling fear. Would you disagree?

Date: 2018-03-23 06:02 pm (UTC)
seekingferret: Two warning signs one above the other. 1) Falling Rocks. 2) Falling Rocs. (Default)
From: [personal profile] seekingferret
I'm of mixed opinion about it. First of all, if it's the lies that are the problem, that's a different question to me than if the microtargeting is the problem. I think? Maybe microtargeted lies are fundamentally different than television ad lies because journalists are more able to factcheck the TV ad lies? But fundamentally I feel like we already have (reasonable, probably subject to fine-tuning) remedies in place for confronting libel as an act.

But I mean, I know some techbro types have been loudly handwringing for the past year or so about whether the Facebook microtargeted ad model is essentially immoral because it's capable of algorithmically manipulating people's brains at a deeper level than older types of ad delivery systems, or something. That feels to me like it gives too much credit to the tech, in a way that's pretty typical for techbros, but on the other hand they spend more time in the guts of that tech than I do, so maybe they're right.

There's certainly an argument that because electoral decisions impact the broader world in a different way than shoe shopping decisions, there ought to be a recognition that powers capable of manipulating electoral decisions need to be exercised more carefully, but I'm not sure how cleanly that interacts with Free Speech principles and the idea that ideas and candidates should have to win out in an open marketplace of ideas.