
"And I beheld, and heard the voice of one eagle flying through the midst of heaven,
saying with a loud voice: Woe, woe, woe to the inhabitants of the earth...."
[Apocalypse (Revelation) 8:13]

Sunday, March 18, 2018

(SURVEILLANCE STATE) Snowden: Facebook Is A Surveillance Company Rebranded As "Social Media"


NSA whistleblower and former CIA employee Edward Snowden slammed Facebook in a Saturday tweet following the suspension of Strategic Communication Laboratories (SCL) and its political data analytics firm, Cambridge Analytica, over what Facebook says was improper use of collected data.
In a nutshell, in 2015 Cambridge Analytica bought data from a University of Cambridge psychology professor, Dr. Aleksandr Kogan, who had developed an app called "thisisyourdigitallife" that vacuumed up loads of information on users and their contacts. After making Kogan and Cambridge Analytica promise to delete the data the app had gathered, Facebook received reports (from sources it would not identify) claiming that not all the data had been deleted - which led the social media giant to suspend the accounts of Cambridge Analytica and its parent company, SCL.



“By passing information on to a third party, including SCL/Cambridge Analytica and Christopher Wylie of Eunoia Technologies, he violated our platform policies. When we learned of this violation in 2015, we removed his app from Facebook and demanded certifications from Kogan and all parties he had given data to that the information had been destroyed. Cambridge Analytica, Kogan and Wylie all certified to us that they destroyed the data.” -Facebook
Of note, Cambridge Analytica worked for Ted Cruz and Ben Carson during the 2016 election before contracting with the Trump campaign. Cruz stopped using CA after their data modeling failed to identify likely supporters. 
Cambridge Analytica has vehemently denied any wrongdoing in a statement.
In response to the ban, Edward Snowden fired off two tweets on Saturday criticizing Facebook, and claimed social media companies were simply "surveillance companies" who engaged in a "successful deception" by rebranding themselves.
Snowden isn't the first big name to call out Silicon Valley companies over their data collection and monitoring practices, or their notorious intersection with the U.S. Government.
In his 2014 book: When Google Met WikiLeaks, Julian Assange describes Google's close relationship with the NSA and the Pentagon.
Around the same time, Google was becoming involved in a program known as the “Enduring Security Framework” (ESF), which entailed the sharing of information between Silicon Valley tech companies and Pentagon-affiliated agencies “at network speed.” Emails obtained in 2014 under Freedom of Information requests show Schmidt and his fellow Googler Sergey Brin corresponding on first-name terms with NSA chief General Keith Alexander about ESF. Reportage on the emails focused on the familiarity in the correspondence: “General Keith . . . so great to see you . . . !” Schmidt wrote. But most reports overlooked a crucial detail. “Your insights as a key member of the Defense Industrial Base,” Alexander wrote to Brin, “are valuable to ensure ESF’s efforts have measurable impact.” -Julian Assange
Kim Dotcom has also opined on social media's close ties to the government, tweeting in February "Unfortunately all big US Internet companies are in bed with the deep state. Google, Facebook, YouTube, Twitter, etc. are all providing backdoors to your data."
In 2013, the Washington Post and The Guardian revealed that the NSA has backdoor access to all major Silicon Valley social media firms, including Microsoft, Yahoo, Google, Facebook, PalTalk, AOL, Skype, YouTube, and Apple - all through the notorious PRISM program which began in 2007 under the Protect America Act. PRISM's existence was leaked by Edward Snowden before he entered into ongoing asylum in Moscow. Microsoft was the first company to join the PRISM program.

The NSA has the ability to pull any sort of data it likes from these companies, but it claims that it does not try to collect it all. The PRISM program goes above and beyond the existing laws that state companies must comply with government requests for data, as it gives the NSA direct access to each company's servers — essentially letting the NSA do as it pleases. -The Verge

After PRISM's existence was leaked by Snowden, the Director of National Intelligence issued a statement asserting that the only people targeted by the program are "outside the United States," and that the program "does not allow" the targeting of citizens within US borders.
In 2006, Wired magazine published evidence from a retired AT&T communications technician, Mark Klein, that revealed a secret room used to "split" internet data at a San Francisco office as part of the NSA's bulk data collection techniques used on millions of Americans.

During the course of that work, he learned from a co-worker that similar cabins were being installed in other cities, including Seattle, San Jose, Los Angeles and San Diego, he said.
The split circuits included traffic from peering links connecting to other internet backbone providers, meaning that AT&T was also diverting traffic routed from its network to or from other domestic and international providers, Klein said. -Wired
"They are collecting everything on everybody," Klein said.

Pentagon and DARPA Seek Predictive A.I. to Uncover Enemy Thoughts

Source: Activist Post

I’ve recently been covering the widening use of predictive algorithms in modern-day police work, which frequently has been compared to the “pre-crime” we have seen in dystopian fiction. However, what is not being discussed as often are the many examples of how faulty this data still is.
All forms of biometrics, for example, use artificial intelligence to match identities against centralized databases. However, in the UK we saw police roll out a test of facial recognition at a festival late last year that resulted in 35 false matches and only one accurate identification. Although this extreme inaccuracy is the worst case I’ve come across, many experts are concerned about the expansion of biometrics and artificial intelligence in police work when various studies have concluded that these systems may not be reliable enough to serve as the basis for any system of justice.
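To put that festival trial in perspective, the precision of the system - the share of its matches that were actually correct - can be computed directly from the two reported figures (a rough sketch; it assumes the 35 false matches and one accurate identification were the trial's only matches):

```python
# Reported results of the UK festival facial-recognition trial:
# 36 matches flagged in total, of which only 1 was accurate.
true_positives = 1
false_positives = 35

# Precision = correct matches / all matches flagged.
precision = true_positives / (true_positives + false_positives)
print(f"Precision: {precision:.1%}")  # roughly 2.8%
```

In other words, on these numbers, a person flagged by the system had less than a 3% chance of actually being the person sought.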
The type of data collected above is described as “physical biometrics” – however, there is a second category which is also gaining steam in police work that primarily centers on our communications; this is called “behavioral biometrics.”
The analysis of behavior patterns leads to the formation of predictive algorithms which claim to be able to identify “hotspots” in the physical or virtual world that might indicate the potential for crime, social unrest, or any other pattern outside the norm. It is the same mechanism that is at the crux of what we are seeing emerge online to identify terrorist narratives and the various forms of other speech deemed to “violate community guidelines.” It is also arguably what is driving the current social media purge of nonconformists. Yet, as one recent prominent example illustrates, the foundation for determining “hate speech” is shaky at best. And, yet, people are losing their free speech and even their livelihoods solely based on the determinations of these algorithms.
The Anti-Defamation League (ADL) recently announced an artificial intelligence program being developed in partnership with Facebook, Google, Microsoft and Twitter to “stop cyberhate.” In their video, you can hear the ADL’s Director of the Center for Technology & Society admit to a “78-85% success rate” in their A.I. program to detect hate speech online. I actually heard that as a 15-22% failure rate. And they are defining the parameters. That is a disturbing margin of error, even for a system that presumes to define a nebulous concept and to know exactly what it is looking for.
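The failure-rate figure above is simply the complement of the claimed success rate at each end of the reported range:

```python
# The ADL reports a 78-85% "success rate" for hate-speech detection;
# the implied failure rate is the complement of each bound.
success_low, success_high = 0.78, 0.85

failure_low = 1 - success_high   # best case: 15% of calls wrong
failure_high = 1 - success_low   # worst case: 22% of calls wrong
print(f"Implied failure rate: {failure_low:.0%}-{failure_high:.0%}")
```

At scale, a 15-22% error rate means roughly one in every five or six moderation decisions made by such a system could be wrong.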
The above examples (and there are many more) should force us to imagine how error-prone current A.I. could be once we account for the complexities of military strategies and political propaganda. Of course, one might assume that the U.S. military has access to better technology than what is being deployed by police or social media companies. But these systems all ultimately occupy the same space and overlap in increasingly complex ways that can generate an array of potentially false matches. When it comes to war, this is an existential risk that far surpasses even the gross violations of civil liberties that we see in police work and our online communications.
Nevertheless, according to an article in Defense One, the Pentagon wants to use these potentially flawed algorithms to read enemy intentions and perhaps even to take action based on the findings.  This new system is being called COMPASS. My emphasis added:
This activity, hostile action that falls short of — but often precedes — violence, is sometimes referred to as gray zone warfare, the ‘zone’ being a sort of liminal state in between peace and war. The actors that work in it are difficult to identify and their aims hard to predict, by design.
“We’re looking at the problem from two perspectives: Trying to determine what the adversary is trying to do, his intent; and once we understand that or have a better understanding of it, then identify how he’s going to carry out his plans — what the timing will be, and what actors will be used,” said DARPA program manager Fotis Barlos.
Dubbed COMPASS, the new program will “leverage advanced artificial intelligence technologies, game theory, and modeling and estimation to both identify stimuli that yield the most information about an adversary’s intentions, and provide decision makers high-fidelity intelligence on how to respond - with positive and negative tradeoffs for each course of action,” according to a DARPA notice posted Wednesday.
Source: The Pentagon Wants AI To Reveal Adversaries’ True Intentions
Depending on how those “tradeoffs” are weighed, it could form a justification for military deployment to a “hotspot,” much as we have seen with Chicago police and their “Heat List” to visit marked individuals before any crime has even been committed. In this case, though, the political ramifications could be disastrous for even a single false trigger.
As Defense One rightly suggests, there is a massive gulf between analyzing Big data for shopping patterns or other online activities versus the many dimensions that exist in modern warfare and political destabilization efforts.
Whether or not the COMPASS system ever becomes a reality, it appears at the very least that military intelligence will be seeking more data than ever before from every facet of society as justification for creating more security. That alone should spark heightened debate about how far down this road we are willing to travel.


SOURCE