Here's a fascinating interview with neuroscientist Lisa Feldman Barrett. She argues that many of the key beliefs we have about emotions are wrong: for example, it's not true that we all feel the same things, that anyone can "read" other people's faces, or that emotions are things that happen to us. This is especially relevant to me because Robin Beauvais, the protagonist of dEATH in dAVOS and of the book I'm working on now, mURDER in mACAU, has such a hard time reading faces that she creates an Encyclopedia of Facial Expressions for herself to help her untangle what the people around her are thinking. On the surface, you might think that Robin is somewhere on the spectrum. You might think that . . . but you'd be wrong. There are other things at work within the psyche of Robin Beauvais, things that I only begin to touch on in mURDER in mACAU. Interested in learning more? Check out this article from The Verge.
Neuroscientist Lisa Feldman Barrett explains how emotions are made
We don't all make the same expressions when we're sad
I am known for being hard to read, to the point that friends complain that they can never tell what I'm thinking by looking at my face. But, says neuroscientist Lisa Feldman Barrett, it's possible that they might remain confused even if my face were more expressive.
Barrett, a neuroscientist at Northeastern University, is the author of How Emotions Are Made. She argues that many of the key beliefs we have about emotions are wrong. It's not true that we all feel the same things, that anyone can "read" other people's faces, or that emotions are things that happen to us.
The Verge spoke to Barrett about her new view of emotion, what this means for emotion-prediction startups, and whether we can feel an emotion if we don't have the word for it.
This interview has been lightly edited for clarity.
You argue that emotions are constructed by our brains. How does that differ from what we knew before?
The classical view assumes that emotions happen to you. Something happens, neurons get triggered, and you make these stereotypical expressions you can't control. It says that people scowl when they're angry and pout when they're sad, and that everyone around the world not only makes the same expressions, but that you're born with the capacity to recognize them automatically.
In my view, a face doesn't speak for itself when it comes to emotion, ever. I'm not saying that when your brain constructs a strong feeling there are no physical cues to the strength of your feeling. People do smile when they're happy or scowl when they're sad. What I'm saying is that there's not a single obligatory expression. And emotions aren't some objective thing; they're learned, something our brains construct.
You write about studies where you show someone a face and ask them to identify the emotions, and people consistently get it wrong, like confusing fear with anxiety. But fear and anxiety seem pretty similar to me. Do people also confuse emotions that are really far apart, like happiness and guilt?
It's interesting that you say that guilt and happiness are far apart. I often show people a picture of the top half of my daughter's face and people say she looks sad or guilty or deflated, and then I show the whole image and she's actually in a full-blown episode of pleasure because she's at a chocolate museum.
If you were to pit a face against anything else, it will always lose. If you show a face on its own, versus if you pair it with a voice or a body posture or a scenario, the face is very ambiguous in its meaning. There are studies where they actually took people's whole faces but removed the bodies. The people were expressing negativity or positivity, and viewers misread them all the time without the context. When you take a super positive face and stick it in a negative situation, people experience the face as more negative. They don't just interpret the face as negative, they actually change how they look at the face when you use eye-tracking software.
The expressions that we've been told are the correct ones are just stereotypes; people express emotion in many different ways.
What about things like resting bitch face? That's a topic you hear about a lot, where people say that they can "tell" someone is a bitch, but women protest that their face is "just like that."
We've done research on this, and resting bitch face is a neutral face. When you look at it structurally, there's nothing negative in the face. People are using the context or their knowledge about that person to see more negativity in the face.
I'm curious what all this means for affective computing, or the startups that try to analyze your facial expression to figure out how you're feeling. Does this mean their research is futile?
As they are currently pursuing it, most companies are going to fail. If people use the classical view to guide the development of their technology, that is, if you're trying to build software or technology to identify scowls or frowns and pouts and so on and assume that means anger, good luck.
But if affective computing and other technology in this area were adjusted slightly in their goals, they hold the potential to revolutionize the science of emotion. We need to be able to track people's movements accurately, and it would be so helpful to measure their movements and as much of the external and internal context as possible.
So we know that emotions don't have a universal look. Can you explain more about your argument that emotions are constructed? My understanding is that your claim is like this: you have a basic feeling, like "pleasant" or "unpleasant," and bodily sensations, which are sometimes triggered by the environment. Then we interpret those feelings and physical sensations as certain emotions, like rage or guilt. How does this work?
All brains evolved for the purposes of regulating the body. Any brain has to make decisions about what to invest its resources in: what am I going to spend, and what kind of reward am I going to get? Your brain is always regulating, and it's always predicting what the sensations from your body are to try to figure out how much energy to expend.
When those sensations are very intense, we typically use emotion concepts to make sense of those sensory inputs. We construct emotions.
Let's back up a bit. What are emotion concepts?
It's just what you know about emotion, not necessarily what you can describe but what your brain knows to do and the feelings that come from that knowledge. When you're driving, your brain knows how to do a bunch of things automatically, but you don't need to articulate it or even be aware of it as you're doing it to successfully drive.
When you know an emotion concept, you can feel that emotion. In our culture we have "sadness"; in Tahitian culture they don't have that. Instead they have a word whose closest translation would be "the kind of fatigue you feel when you have the flu." It's not the equivalent of sadness, but that's what they feel in situations where we would feel sad.
Where do we learn those concepts?
At the earliest stage, we are taught these concepts by our parents.
You don't have to teach children to have feelings. Babies can feel distress, they can feel pleasure and they do, and they can certainly be aroused or calm. But emotion concepts, like sadness when something bad happens, are taught to children, not always explicitly. And that doesn't stop in childhood either. Your brain has the capacity to combine past experience in novel ways to create new representations, to experience something new that you've never seen or heard or felt before.
I'm fascinated by the link between language and emotion. Are you saying that if we don't have a word for an emotion, we can't feel it?
Here's an example: you have probably experienced schadenfreude without knowing the word, but your brain would have to work really hard to construct those concepts and make those emotions. You would take a long time to describe it.
But if you know the word, if you hear the word often, then it becomes much more automatic, just like driving a car. It gets triggered more easily and you can feel it more easily. And in fact that's how schadenfreude feels to most Americans, because they have a word they've used a lot. It can be conjured up very quickly.
Does understanding that emotions are constructed help us control them?
It's never going to be the case that it's effortless, and never the case that you can snap your fingers and just change how you feel.
But learning new emotion words is good because you can learn to feel more subtle emotions, and that makes you better at regulating your emotions. For example, you can learn to distinguish between distress and discomfort. This is partly why mindfulness meditation is so useful to people who have chronic pain: it lets you separate out the physical discomfort from the distress.
I think understanding how emotions are constructed widens the horizon of control. You realize that if your brain is using your past to construct your present, you can invest energy in the present to cultivate new experiences that then become the seeds for your future. You can cultivate or curate experiences in the now, and if you practice them, they become automated enough that your brain will automatically construct them in the future.
Laptop searches at U.S. borders have risen precipitously over the past two years, from a total of 5,000 searches in 2015 to 25,000 in 2016, and rising to 5,000 in the month of February 2017 alone. To see where your liberties are about to be infringed, look to the edges, literally in this case: U.S. borders. If U.S. authorities are eager to intrude on our privacy to such an egregious degree at our borders, can our general liberties be far behind? I cover these issues at great length in my latest two novels, the techno-thriller 4o4 - A John Decker Thriller, about the surveillance state, recently listed as a Top Ten Amazon Bestseller in Technothrillers, as well as my latest book, dEATH in dAVOS, about a teen serial killer and hacker of Haitian descent who gets herself invited to the World Economic Forum in Davos, Switzerland, ostensibly because of an app she's developed, though in reality to take out nefarious billionaires who have committed unspeakable crimes but whose great wealth and power insulates them from prosecution and punishment. For more on this topic, check out this article from The Intercept.
A LAWSUIT FILED today by the Knight First Amendment Institute, a public interest legal organization based at Columbia University, seeks to shed light on invasive searches of laptops and cellphones by Customs and Border Protection officers at U.S. border crossings.
Documents filed in the case note that these searches have risen precipitously over the past two years, from a total of 5,000 searches in 2015 to 25,000 in 2016, and rising to 5,000 in the month of February 2017 alone. Among other questions, the lawsuit seeks to compel the federal government to provide more information about these searches, including how many of those searched have been U.S. citizens, the number of searches by port of entry, and the number of searches by the country of origin of the travelers.
Civil rights groups have long claimed that warrantless searches of cellphones and laptops by government agents constitute a serious invasion of privacy, due to the wealth of personal data often held on such devices. It is common for private conversations, photographs, and location information to be held on cellphones and laptops, making a search of these items significantly more intrusive than searching a simple piece of luggage.
A number of recent cases in the media have revealed instances of U.S. citizens and others being compelled by CBP agents to unlock their devices for search. In some instances, people have claimed to have been physically coerced into complying, including one American citizen who said that CBP agents grabbed him by the neck in order to take his cellphone out of his possession.
The legality of warrantless device searches at the border remains a contested issue, with the government asserting, over the objections of civil liberties groups, that Fourth Amendment protections do not apply at ports of entry. Some particularly controversial cases of searches at the border have involved journalists whose electronic data contains sensitive information about the identity of sources. Last year, a Canadian journalist was detained for six hours before being denied entry to the United States after refusing to unlock devices containing sensitive information. It has also been alleged that border agents are disproportionately targeting Muslim Americans and people with ties to Muslim-majority countries for both interrogation and device searches.
This February, Sen. Ron Wyden sent a letter to Department of Homeland Security head John Kelly stating that he was "alarmed by recent media reports of Americans being detained by U.S. Customs and Border Protection (CBP) and pressured to give CBP access ... to locked mobile devices." Wyden's letter also indicated plans for legislation that would require agents to obtain a warrant before conducting these searches.
The rapidly growing number of searches has prompted a legal effort to demand constraints and controls on the practice. In a press release issued today announcing the lawsuit, the Knight First Amendment Institute indicated plans to further scrutinize these searches in the future.
"These searches are extremely intrusive, and government agents shouldn't be conducting them without cause," said Jameel Jaffer, the Knight Institute's executive director. "Putting this kind of unfettered power in the hands of border agents invites abuse and discrimination and will inevitably have a chilling effect on the freedoms of speech and association."
Yet another attempt by government to undermine encryption in order to protect its ability to spy on its own citizenry. What government fails to understand is that any backdoors available to it will, eventually, be available to cyber-thieves and other hackers. I cover these issues at great length in my latest two novels, the techno-thriller 4o4 - A John Decker Thriller, about the surveillance state, recently listed as a Top Ten Amazon Bestseller in Technothrillers, as well as my latest book, dEATH in dAVOS, about a teen serial killer and hacker of Haitian descent who gets herself invited to the World Economic Forum in Davos, Switzerland, ostensibly because of an app she's developed, though in reality to take out nefarious billionaires who have committed unspeakable crimes but whose great wealth and power insulates them from prosecution and punishment. For more on this topic, check out this article from Mashable.
The UK wants there to be 'no place for terrorists to hide,' including on WhatsApp
The UK government wants there to be "no place for terrorists to hide," and that includes on encrypted messaging services. The company first on its agenda? WhatsApp.
Speaking on the BBC's Andrew Marr Show on Sunday, Home Secretary Amber Rudd called for companies that provide secure communication apps to work with law enforcement.
"We need to make sure that organisations like WhatsApp, and there are plenty of others like that, don't provide a secret place for terrorists to communicate with each other," she said.
"It used to be that people would steam open envelopes or listen in on phones when they wanted to find out what people were doing, legally, through warrantry [sic]. But in this situation, we need to make sure that our intelligence services have the ability to get into situations like encrypted WhatsApp."
Rudd's comment came after media reports on Sunday that the Westminster Bridge attacker had sent a WhatsApp message prior to the incident that cannot be accessed because it was encrypted.
Fifty-two-year-old Briton Khalid Masood used a car and a knife to carry out an attack in the heart of London on Wednesday that left four people dead. He was killed by law enforcement on the scene.
Rudd said she was not arguing for the government to access all messages on such platforms. Instead, she wants encrypted services to recognise they have a responsibility to engage with law enforcement agencies to counter terrorism.
"They cannot get away with saying 'we are a different situation,'" she said. "They are not."
A WhatsApp spokesperson said the company was horrified at the London attack, adding that it is "cooperating with law enforcement as they continue their investigations."
The most famous case so far has been Apple's tussle with the FBI. In 2016, the security service took on the Silicon Valley giant in an attempt to bypass the lock screen of the iPhone 5C used by San Bernardino gunman Syed Farook.
Farook and his wife killed 14 people and wounded 22 more in San Bernardino, California, in December 2015.
The U.S. Justice Department obtained a court order compelling Apple to assist the FBI in bypassing the phone's security, fearing that too many attempts to guess the passcode would wipe the phone's memory.
Warning that the FBI was seeking a "dangerous power," Apple fought the order, and ultimately the FBI managed to use an undisclosed technique to access the smartphone in question.
Security experts warn that building a backdoor into the iPhone or services like WhatsApp would compromise the safety of users in unintended ways: If UK police can somehow read encrypted messages, for example, what's to prevent law enforcement in countries with a poor human rights record from demanding the same level of access?
The UK-based digital rights advocate Open Rights Group has warned that undermining encryption would make ordinary internet activities more vulnerable.
"Compelling companies to put backdoors into encrypted services would make millions of ordinary people less secure online," the group's executive director, Jim Killock, said in a statement. "We all rely on encryption to protect our ability to communicate, shop and bank safely."
The UK already has extensive laws allowing the government access to the internet footprint of its citizens.
In late 2016, it passed the Investigatory Powers Act, also known as the Snoopers' Charter. The bill creates a quasi-internet history database that's accessible to law enforcement upon request, among other measures.
The rhetoric on Sunday highlighted a clash between digital privacy and national security that has been playing out globally in recent years.
It's practically impossible for even the experts to determine who leaked what. That's why efforts to pin the hacks of DNC servers on Russian officials are basically a fool's errand. This story in The Intercept offers a fascinating glimpse at the latest leak of CIA materials released through Wikileaks, and how attribution is an inexact science at best. I cover these issues at great length in my latest two novels, the techno-thriller 4o4 - A John Decker Thriller, about the surveillance state, recently listed as a Top Ten Amazon Bestseller in Technothrillers, as well as my latest book, dEATH in dAVOS, about a teen serial killer and hacker of Haitian descent who gets herself invited to the World Economic Forum in Davos, Switzerland, ostensibly because of an app she's developed, though in reality to take out nefarious billionaires who have committed unspeakable crimes but whose great wealth and power insulates them from prosecution and punishment. For more on this topic, check out this article from The Intercept.
ATTRIBUTING HACKING ATTACKS to the correct perpetrators is notoriously difficult. Even the U.S. government, for all its technical resources and expertise, took warranted criticism for trying to pin a high-profile 2014 cyberattack on North Korea, and more recently faced skepticism when it blamed Russia for hacks against top Democrats during the 2016 election.
In those cases, government officials said they based their attribution in part on software tools the hackers employed, which had been used in other cyberattacks linked to North Korea and Russia. But that sort of evidence is not conclusive; hackers have been known to intentionally use or leave behind software and other distinctive material linked to other groups as part of so-called false flag operations intended to falsely implicate other parties. Researchers at Russian digital security firm Kaspersky Lab have documented such cases.
On Tuesday, WikiLeaks published a large cache of CIA documents that it said showed the agency had equipped itself to run its own false-flag hacking operations. The documents describe an internal CIA group called UMBRAGE that WikiLeaks said was stealing the techniques of other nation-state hackers to trick forensic investigators into falsely attributing CIA attacks to those actors. According to WikiLeaks, among those from whom the CIA has stolen techniques is the Russian Federation, suggesting the CIA is conducting attacks to intentionally mislead investigators into attributing them to Vladimir Putin.
"With UMBRAGE and related projects, the CIA can not only increase its total number of attack types, but also misdirect attribution by leaving behind the 'fingerprints' of the groups that the attack techniques were stolen from," WikiLeaks writes in a summary of its CIA document dump.
It's a claim that seems intended to shed doubt on the U.S. government's attribution of Russia in the DNC hack; the Russian Federation was the only nation specifically named by WikiLeaks as a potential victim of misdirected attribution. It's also a claim that some media outlets have accepted and repeated without question.
"WikiLeaks said there's an entire department within the CIA whose job it is to 'misdirect attribution by leaving behind the fingerprints' of others, such as hackers in Russia," CNN reported without caveats.
It would be possible to leave such fingerprints if the CIA were reusing unique source code written by other actors to intentionally implicate them in CIA hacks, but the published CIA documents don't say this. Instead, they indicate the UMBRAGE group is doing something much less nefarious.
They say UMBRAGE is borrowing hacking "techniques" developed or used by other actors to use in CIA hacking projects. This is intended to save the CIA time and energy by copying methods already proven successful. If the CIA were actually reusing source code unique to a specific hacking group, this could lead forensic investigators to misattribute CIA attacks to the original creators of the code. But the documents appear to say the UMBRAGE group is writing snippets of code that mimic the functionality of other hacking tools and placing them in a library for CIA developers to draw on when designing custom CIA tools.
"The goal of this repository is to provide functional code snippets that can be rapidly combined into custom solutions," notes a document in the cache that discusses the project. "Rather than building feature-rich tools, which are often costly and can have significant CI value, this effort focuses on developing smaller and more targeted solutions built to operational specifications."
Robert Graham, CEO of Errata Security, agrees that the CIA documents are not talking about framing Russia or other nations.
"What we can conclusively say from the evidence in the documents is that they're creating snippets of code for use in other projects and they're reusing methods in code that they find on the internet," he told The Intercept. "Elsewhere they talk about obscuring attacks so you can't see where it's coming from, but there's no concrete plan to do a false flag operation. They're not trying to say, 'We're going to make this look like Russia.'"
The UMBRAGE documents do mention looking at source code, but these references are to widely available source code for popular tools, not source code unique to, say, Russian Federation hackers. And the purpose of examining the source code seems to be to give the CIA's own developers inspiration for their code, not to let them copy and paste it into CIA tools.
It's not unusual for attackers of all persuasions, nation-state and criminal alike, to copy the techniques of other hackers. Success breeds success. A month after Stuxnet was discovered in June 2010, someone created a copycat exploit to attack the same Windows vulnerability Stuxnet exploited.
Components the UMBRAGE project has borrowed from include keyloggers; tools for capturing passwords and webcam imagery; data-destruction tools; components for gaining escalated privileges on a machine and maintaining stealth and persistent presence; and tools for bypassing anti-virus detection.
Some of the techniques UMBRAGE has borrowed come from commercially available tools. The documents mention Dark Comet, a well-known remote access trojan, or RAT, which can capture screenshots and keystrokes and grab webcam imagery, among other things. The French programmer who created Dark Comet stopped distributing it after stories emerged that the Syrian government was using it to spy on dissidents. Another tool UMBRAGE highlights is RawDisk, made by the commercial software company Eldos, which contains drivers that system administrators can use to securely delete information from hard drives.
But legitimate tools are often used by hackers for illegitimate purposes, and RawDisk is no different. It played a starring role in the Sony hack in 2014, where the attackers used it to wipe data from Sonyâs servers.
It was partly the use of RawDisk that led forensic investigators to attribute the Sony hack to North Korea. That's because RawDisk had been previously used in the 2011 "Dark Seoul" hack attacks that wiped the hard drives and master boot records of three banks and two media companies in South Korea. South Korea blamed the attack on North Korea and China. But RawDisk was also used in the destructive Shamoon attack in 2012 that wiped data from 30,000 systems at Saudi Aramco. That attack wasn't attributed to North Korea, however; instead U.S. officials attributed it to Iran.
All of this highlights how murky attribution can be, particularly when focused only on the tools or techniques a group uses, and how the CIA is not doing anything different than other groups in borrowing tools and techniques.
"Everything they're referencing [in the CIA documents] is extremely public code, which means the Russians are grabbing the same snippets and the Chinese are grabbing them and the U.S. is grabbing," says Graham. "So they're all grabbing the same snippets of code and then they're making their changes to it."
The CIA documents do talk elsewhere about using techniques to thwart forensic investigators and make it hard to attribute attacks and tools to the CIA. But the methods discussed are simply proper operational security techniques that any nation-state attackers would be expected to use in covert operations they don't want attributed to them. The Intercept wasn't able to find documents within the WikiLeaks cache that talk about tricking forensic investigators into attributing attacks to Russia. Instead, they discuss do's and don'ts of tradecraft, such as encrypting strings and configuration data in malware to prevent someone from reverse engineering the code, or removing file compilation timestamps to prevent investigators from making correlations between compilation times and the working hours of CIA hackers in the U.S.
Researchers at anti-virus firms often use compilation times to determine where a malware's creators might be located geographically if their files are consistently compiled during work hours that are distinctive to a region. For example, tools believed to have been created in Israel have shown compilation times on Sunday, which is a normal workday in Israel.
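To make that timestamp point concrete, here is a rough, generic sketch of my own (not a tool from the leaked documents) of how an analyst might bucket Windows compile timestamps by hour of day; the use of the third-party pefile library and the sample folder name are assumptions for the example.

```python
# Illustrative sketch only: bucket PE compile timestamps by hour of day (UTC).
# Assumes the third-party "pefile" package and a hypothetical folder of samples.
import glob
from collections import Counter
from datetime import datetime, timezone

import pefile

hours = Counter()
for path in glob.glob("samples/*.exe"):          # hypothetical sample folder
    pe = pefile.PE(path, fast_load=True)
    ts = pe.FILE_HEADER.TimeDateStamp            # seconds since the Unix epoch
    compiled = datetime.fromtimestamp(ts, tz=timezone.utc)
    hours[compiled.hour] += 1

# A pile-up of compile times in, say, 06:00-14:00 UTC hints at one region's
# workday. It is a weak, spoofable signal: timestamps can be zeroed or forged.
for hour in sorted(hours):
    print(f"{hour:02d}:00 UTC  {'#' * hours[hour]}")
```

The histogram is only ever a hint, which is precisely why stripping or forging those timestamps, as the documents describe, is routine tradecraft.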
The bottom line with the CIA data dump released by WikiLeaks is that journalists and others should take care to examine statements made around it to ensure that theyâre reporting accurately on the contents.
Here's a great clip from The Young Turks where Cenk talks about the latest Wikileaks revelations. Looks like the CIA is doing a whole lot of domestic, NSA-type spying, which is prohibited by its charter and, as Cenk points out, duplicative of the illegal spying the NSA is already doing. So, not only is the CIA performing illegal surveillance of U.S. citizens, it's wasting our money doing so! Hey, your TV is watching you! Could it get any more Orwellian than that? This is exactly the kind of government behavior that I cover in my two most recent novels, the techno-thriller 4o4 - A John Decker Thriller, about the surveillance state, recently listed as a Top Ten Amazon Bestseller in Technothrillers, as well as my latest book, dEATH in dAVOS, about a teen serial killer of Haitian descent who gets herself invited to the World Economic Forum in Davos, Switzerland, ostensibly because of an app she's developed, though in reality to take out nefarious billionaires who have committed unspeakable crimes but whose great wealth and power insulates them from prosecution and punishment...but not from young Robin Beauvais.
If I saw this thing coming toward me, I'd poop my pants. The security industry is changing by the second. Can you imagine these robots protecting the DAPL pipeline? They soon will be. And then what will peaceful protestors do? These are exactly the issues I cover in my recent novel, 4o4 - A John Decker Thriller, about the surveillance state, recently listed as a Top Ten Amazon Bestseller in Technothrillers.
Handle is a research robot that stands 6.5 ft tall, travels at 9 mph and jumps 4 feet vertically. It uses electric power to operate both electric and hydraulic actuators, with a range of about 15 miles on one battery charge. Handle uses many of the same dynamics, balance and mobile manipulation principles found in the quadruped and biped robots Boston Dynamics builds, but with only about 10 actuated joints, it is significantly less complex. Wheels are efficient on flat surfaces while legs can go almost anywhere: by combining wheels and legs, Handle can have the best of both worlds.
Here's another frightening story about how Peter Thiel's Palantir is enabling President Drumpf's era of mass deportation. Notice how Palantir is leveraging our own use of social media to spy on us. In the case of immigrants, this is helping Palantir identify where possible suspects are (e.g. a wedding or party), so that ICE agents can go there and detain them. This use of social media to plan arrests is exactly the kind of government behavior that I cover in my two most recent novels, the techno-thriller 4o4 - A John Decker Thriller, about the surveillance state, recently listed as a Top Ten Amazon Bestseller in Technothrillers, as well as my latest book, dEATH in dAVOS, about a teen serial killer of Haitian descent who gets herself invited to the World Economic Forum in Davos, Switzerland, ostensibly because of an app she's developed, though in reality to take out nefarious billionaires who have committed unspeakable crimes but whose great wealth and power insulates them from prosecution and punishment...but not from young Robin Beauvais. For more on this topic, check out this article from The Intercept.
IMMIGRATION AND CUSTOMS ENFORCEMENT is deploying a new intelligence system called Investigative Case Management (ICM), created by Palantir Technologies, that will assist in President Donald Trump's efforts to deport millions of immigrants from the United States.
In 2014, ICE awarded Palantir, the $20 billion data-mining firm founded by billionaire Trump advisor Peter Thiel, a $41 million contract to build and maintain ICM, according to government funding records. The system is scheduled to arrive at "final operating capacity" by September of this year. The documents identify Palantir's ICM as "mission critical" to ICE, meaning that the agency will not be able to properly function without the program.
ICM funding documents analyzed by The Intercept make clear that the system is far from a passive administrator of ICE's case flow. ICM allows ICE agents to access a vast "ecosystem" of data to facilitate immigration officials in both discovering targets and then creating and administering cases against them. The system provides its users access to intelligence platforms maintained by the Drug Enforcement Administration, the Bureau of Alcohol, Tobacco, Firearms and Explosives, the Federal Bureau of Investigation, and an array of other federal and private law enforcement entities. It can provide ICE agents access to information on a subject's schooling, family relationships, employment information, phone records, immigration history, foreign exchange program status, personal connections, biometric traits, criminal records, and home and work addresses.
"What we have here is a growing network of interconnected databases that together are drawing in more and more information," said Jay Stanley, a privacy expert at the American Civil Liberties Union. "If President Trump's rhetoric on mass deportations is going to be turned into reality, then we're going to see these tools turned in that direction, and these documents show that there are very powerful and intrusive tools that can be used toward that end."
Although ICM appears to have been originally conceived for use by ICE's office of Homeland Security Investigations (HSI), the system appears to be widely available to agents within ICE. Officers of ICE's Enforcement and Removal Office (ERO), the U.S. government's primary deportation force, access the system to gather information for both criminal and civil cases against immigrants, according to a June 2016 disclosure by the Department of Homeland Security, although ERO will use a separate system to manage its civil cases. "HSI and ERO personnel use the information in ICM to document and inform their criminal investigative activities and to support the criminal prosecutions arising from those investigations," states the DHS filing. "ERO also uses ICM data to inform its civil cases."
ICE's Office of the Principal Legal Advisor also uses ICM to represent the office in "exclusion, deportation, and removal proceedings," among other matters, according to the DHS disclosure.
The DHS disclosure states that Homeland Security Investigations is ICM's primary user. Although mainly tasked with investigating serious cross-border crimes like drug smuggling, human trafficking, and child pornography, HSI had also been behind some of the most controversial workplace immigration raids of the Obama administration, which immigrant advocates fear could expand massively under President Trump. HSI provided support to the Enforcement and Removal Office during last month's high-profile enforcement surge, and just last week it was reported that HSI agents spearheaded a controversial sweep of several Asian restaurants in Mississippi that led to the agency apprehending more than 50 immigrants.
The ICM documents offer a detailed reminder of the Obama-era push to upgrade and expand the federal governmentâs tools to track and deport immigrants. Obama not only presided over an unprecedented number of deportations; his administration also oversaw the pronounced expansion of intelligence systems aimed at the countryâs immigrants. Now the sprawling immigrant surveillance apparatus that Obama enhanced is squarely in the hands of Donald Trump to assist in carrying out his promise to rapidly deport millions of immigrants.
A slide from a 2014 Immigration and Customs Enforcement document outlining capabilities required by the agency's proposed Investigative Case Management system.
The ICM documents also underscore the prominent role Palantir will likely play in assisting ICE in this mission.
Notably, two of the primary intelligence systems that ICM relies upon have also been built or supported by Peter Thiel's firm, according to the funding documents. One of these is ICE's FALCON system, a database and analytical platform built by Palantir that HSI agents can use to track immigrants and crunch data on forms of cross-border criminal activity. According to the documents, ICM also provides its users access to U.S. Customs and Border Protection's "Analytical Framework for Intelligence," or AFI, a vast yet little-understood data system that Palantir played a largely secret role in supporting. Some privacy advocates believe that AFI could be used to fuel Trump's "extreme vetting" of those seeking to enter the country.
A slide from a 2014 ICE funding document outlining required data flows for the agency's modernized Investigative Case Management system.
"When Trump uses the term 'extreme vetting', AFI is the black-box system of profiling algorithms that he's talking about," Edward Hasbrouck of the Identity Project, a civil liberties initiative, told me last year. "This is what extreme vetting means."
ICM also provides its users with access to an internal system called the Student and Exchange Visitor Information System (SEVIS), which "includes biographic and immigration status data related to individuals who are temporarily admitted to the United States as students or exchange visitors," according to the DHS. Agents using ICM can also query ACRIMe, an extensive database operated by ERO that compiles data on immigrants in the United States. In addition, the funding documents state that ICM provides agents, through AFI, access to data gathered under the controversial National Security Entry-Exit Registration System, or NSEERS, the now-defunct Bush-era system requiring visa-holders from two-dozen predominately Muslim countries and North Korea to register with the federal government.
One funding document states that ICM provides agents with the ability to simultaneously search information on a given person across a diverse range of government databases, permitting, for example, an address search to query "multiple documents throughout the system, such as the person subject record, financial data (interface), CBP crossing data (interface), and other HSI and CBP subject record types. The user shall be able to conduct a consolidated address search that will match on all addresses regardless of the record type."
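The ICM schema itself is not public, so purely to make the quoted "consolidated address search" requirement concrete, here is a toy sketch of what matching one address across several record types might look like; every record type, field name, and value below is invented for illustration and does not reflect the real system.

```python
# Toy illustration of a "consolidated address search" across record types.
# Record types and fields are invented; they do not reflect ICM's real schema.
from typing import Dict, List

records: Dict[str, List[dict]] = {
    "person_subject": [
        {"id": "P-001", "name": "Jim Doe", "addresses": ["12 Elm St, El Paso, TX"]},
    ],
    "financial": [
        {"id": "F-044", "account": "xxxx-1234", "addresses": ["12 Elm St, El Paso, TX"]},
    ],
    "border_crossing": [
        {"id": "C-107", "port": "Ysleta", "addresses": ["88 Oak Ave, Juarez"]},
    ],
}

def consolidated_address_search(query: str):
    """Yield every record, regardless of type, whose address list matches."""
    q = query.lower()
    for record_type, rows in records.items():
        for row in rows:
            if any(q in addr.lower() for addr in row["addresses"]):
                yield record_type, row

for rtype, row in consolidated_address_search("12 Elm St"):
    print(rtype, row["id"])
```

The point of the sketch is simply that one query fans out over every record type at once, which is what makes the consolidated search so much more revealing than querying any single database.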
Although ICE's enforcement focuses overwhelmingly on immigrants, the ICM funding documents make clear the intelligence tool can also be aimed at U.S. citizens. "Citizenship can be established a variety of ways to include biographical and biometric system checks," one document states. "U.S. Citizens are still subject to criminal prosecution and thus are a part of ICM."
The scope of ICM's use appears to have expanded during the system's development. The hundreds of pages of funding documents from 2014 make no mention whatsoever of ICE's Enforcement and Removal Office (ERO). On the contrary, the 2014 records state that ICM was launched as primarily an HSI initiative and meant for use by HSI agents. Yet by June of last year, this appears to have changed: The recent DHS privacy disclosure repeatedly states that ERO uses ICM to support aspects of its mission.
This is not the only case in which it has remained unclear what kind of limits ICE has on the sorts of missions for which its intelligence systems can be used.
A spokesperson for Palantir declined to provide comment for this story. ICE did not respond to a list of questions, including whether FALCON, ICE's advanced intelligence and analytics system for Homeland Security Investigations, is also made available to ERO agents.
In February, ICE responded to a Freedom of Information Act Request asking for internal rules or restrictions on FALCON's use by stating that no such documents existed, although ICE's response also indicated the agency may have conducted an incomplete search for the records. The 2014 funding records indicate that ERO's use of ICM (which provides its users access to Palantir's FALCON) might also grant the deportation force access to FALCON.
Data sharing between federal agencies is often not governed by concrete legal regulations, according to Anil Kalhan, a professor at Drexel University's Thomas R. Kline School of Law.
"Legislation after 9/11 authorized and encouraged information sharing within the executive branch," Kalhan told The Intercept in December. "There is general authorization, and the scope and limits and constraints upon that authorization have not really been spelled out."
The ICM documents appear to contain information about FALCON that is not otherwise publicly available. One funding document states that FALCON, and thus ICM, can link to a controversial law enforcement database called Black Asphalt, which is maintained by a private firm called Desert Snow and provides information to help police engage in civil and criminal asset forfeiture. Iowa and Kansas have prohibited the use of Black Asphalt by law enforcement agencies because of concerns that it "might not be a legal law enforcement tool," according to the Washington Post. The funding documents also state that FALCON includes access to services provided by Cellebrite, an Israeli company that specializes in software used to breach cellphones.
With its full deployment arriving just in time for the Trump transition, ICM appears well positioned to respond to a new set of demands being placed on ICE by a president elected on promises of deporting immigrants en masse. The agency stipulated that Palantir must build a tool that can handle "no less than 10,000 users accessing the system at the same time" to search tens of millions of subject records.
A slide from a 2014 ICE funding document illustrating a day in the life of a Homeland Security Investigations special agent.
On May 8, 2014, in a meeting with representatives of firms vying to win the ICM contract, ICE screened a slide presentation to show just how ICM's many users would be able to utilize the system. The slides lay out a hypothetical scenario in which an ICE agent uses ICM both to interrogate a suspect at the border and then to shepherd the suspect's case through court proceedings.
The first slide tells of a man named Jim Doe who attempted to enter the country by car but was stopped by CBP at the border and discovered to be carrying contraband. So CBP calls in a square-jawed ICE HSI investigator, who immediately opens ICM and queries its data. This produces records on Doe's vehicle, business dealings, prior arrests, and records detailing his prior crossings of the border.
Armed with this intelligence, the HSI agent then interrogates Doe and learns that he had brought the contraband across the border at the behest of a man Doe knows only by the nickname "Caliber," who also has detailed discoverable information in ICM, which is able to reveal his true name of Calvin Clark by making connections based on a tattoo of Clark's that is included in the system's data.
Once the ICE agent has completed his ICM-backed investigation, he then uses ICM to create a case file. A subsequent chart shows the apparent final stage of ICM's cradle-to-grave services represented in a graphic of a person clutching prison bars with a caption reading: "justice is served."
But the following slide points out that a conviction is not in fact the final step in ICM's intelligence life cycle.
"Even once the case is closed," the document states of the ICM record, "it is available for other agents to discover and link to future investigations, continuing the investigative cycle."
Here's a fascinating story from Europe, where mobile networks will soon be helping "drone" operators drive long-haul "driverless" trucks around the continent. Both of my novels, 4o4 - A John Decker Thriller and dEATH in dAVOS, feature scenes where cars are hijacked and used to kill. The scene in dEATH in dAVOS is particularly creepy:
Does it make me a bad person that I engineered my victory in an international competition, created a health app that won all kinds of awards and saved lives, just so I could arrange to have myself sent as a student observer and journalist to Davos in order to kill that despicable man?
Or does it just make me a good planner?
Kick, push, glide. Kick, push, glide.
I came up over the rise and the Flüelatal valley stretched out before me, white pine hugging the mountains, winding berm in the center. I could see mile after mile along Route 28. I reached back behind me unconsciously for my rifle but I didn't have it with me. Not this time. Just the telescopic sight that I plucked from my jacket and, with it, the stillness of the cold winter air, the impenetrable silence.
I raised the sight and put it to my right eye, tapped the arm of my Glass, taking in the white vista, the trees heavy with snow, the dark smudge of the road, like a track of eyeliner, and then Juan Castillo's blue Lamborghini Zagato as it flew around that bend, the hairpin, flew around it and finally let go of the earth, flattening the rail as if it were nothing, and tumbled down the side of the mountain, rolling over and over in the heavy wet snow until it crashed into a farrago of boulders and burst into brilliant blue flames.
BARCELONA -- The brave new world of remote-controlled cars is now technically possible using wireless technologies which are set to be commonplace early in the next decade, two major telecoms companies said at a test drive staged on Monday during an industry conference in Barcelona.
Spanish networks operator Telefonica joined forces with Swedish network equipment maker Ericsson to demonstrate how a car could be remotely controlled around obstacles on a test track located 70 kilometres away in Tarragona using wireless networks.
The driver of the vehicle took the wheel from the floor of the Fira conference centre in Barcelona, on the first day of the Mobile World Congress, Europe's biggest annual industry gathering.
The remote test drive relied on the latest mobile networks, which are controlled in the cloud and are capable of the quick response times and high data rates needed to make split-second driving decisions from afar.
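A quick back-of-the-envelope calculation shows why latency is the whole game here; the speeds and round-trip times below are illustrative assumptions of mine, not figures from the Barcelona demonstration.

```python
# Back-of-the-envelope: how far a remotely driven car travels before a command
# round-trip completes. Speeds and latencies are illustrative assumptions.
def blind_distance_m(speed_kmh: float, round_trip_latency_ms: float) -> float:
    speed_ms = speed_kmh / 3.6                  # km/h -> m/s
    return speed_ms * (round_trip_latency_ms / 1000.0)

for latency in (100.0, 20.0, 5.0):              # rough 4G vs 4.5G vs 5G ballparks
    print(f"{latency:>5.0f} ms round trip at 50 km/h: "
          f"{blind_distance_m(50, latency):.2f} m travelled before a command lands")
```

At 50 km/h, a 100 ms round trip means the car covers well over a metre before any remote correction can take effect, which is why the operators stress low-latency networks and closed, predictable routes.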
Ericsson and Telefonica worked in partnership with KTH, Sweden's Royal Institute of Technology, and vehicle safety testing company Idiada to organise the demonstration.
Javier Lorca, head of innovation in wireless access networks at Telefonica, said using state-of-the-art wireless networks to remotely control vehicles at a distance has many possible applications, ranging from electric fleets traversing university campuses to, eventually, wide-scale public transport.
But he cautioned that, for the near term, such applications would require travelling only within closed-circuit, predictable routes and in situations where it is otherwise impractical for the driver to be seated behind the wheel of the vehicle itself.
The event was intended to highlight the possibilities of 5G, or fifth-generation, wireless networks, which are expected to begin to become mainstream around the world in the years after 2020.
However, Telefonica said in a statement that current, so-called 4.5G networks could handle most of these demands.
Telefonica has invested 38 billion euros in the last five years to reach millions of homes with its higher-speed fibre fixed-line broadband network, which it considers to be crucial to 5G.
This is one of the most comprehensive looks at Big Data and Machine Learning (whereby bots train themselves over time) that I've read. It's actually a series of essays by a number of Big Data experts, a trifle long but well worth the read. Anybody who has ever tried to get a link either on or off the first page of a Google search knows the issue of "nudging": how algorithms move you in the direction they think you want to go or, more insidiously, the way that they (or their advertisers) want you to go as you search. But as we hand over more and more control to algorithms, we are beginning to see their deep limitations, including the presence of "racist" algorithms, programmed with predominantly "white" data sets, resulting in everything from biased ad delivery to minorities being denied second mortgages. What can we do about it and how can we avoid being the victims of big bad Big Data? Read this series of essays and see for yourself. These are some of the moral issues with which I wrestle at great length in my latest two novels, the techno-thriller 4o4 - A John Decker Thriller, about the surveillance state, recently listed as a Top Ten Amazon Bestseller in Technothrillers, as well as my latest book, dEATH in dAVOS, about a teen serial killer and hacker of Haitian heritage who gets herself invited to the World Economic Forum in Davos, Switzerland, ostensibly because of an app she's developed, though in reality to take out nefarious billionaires who have committed unspeakable crimes but whose great wealth and power insulates them from prosecution and punishment. For more on this topic, check out this article from Scientific American.
Will Democracy Survive Big Data and Artificial Intelligence?
We are in the middle of a technological upheaval that will transform the way society is organized. We must make the right decisions now.
"Enlightenment is man's emergence from his self-imposed immaturity. Immaturity is the inability to use one's understanding without guidance from another."
- Immanuel Kant, "What is Enlightenment?" (1784)
The digital revolution is in full swing. How will it change our world? The amount of data we produce doubles every year. In other words: in 2016 we produced as much data as in the entire history of humankind through 2015. Every minute we produce hundreds of thousands of Google searches and Facebook posts. These contain information that reveals how we think and feel. Soon, the things around us, possibly even our clothing, also will be connected with the Internet. It is estimated that in 10 years' time there will be 150 billion networked measuring sensors, 20 times more than people on Earth. Then, the amount of data will double every 12 hours. Many companies are already trying to turn this Big Data into Big Money.
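The arithmetic behind that "as much as all of history" claim is simply a geometric series; the snippet below uses arbitrary made-up units, chosen only to show the shape of yearly doubling rather than any real measurement.

```python
# If data production doubles every year, year N alone exceeds the total of all
# prior years combined: 2**N > 2**0 + 2**1 + ... + 2**(N-1) = 2**N - 1.
# Units are arbitrary; only the doubling pattern matters.
produced_by_year = [2 ** n for n in range(11)]   # year 0 .. year 10

for year, this_year in enumerate(produced_by_year):
    everything_before = sum(produced_by_year[:year])
    print(f"year {year:2d}: produced {this_year:5d}, all prior years combined {everything_before:5d}")
```

Each new year's output is always one unit more than everything that came before it, which is exactly why "2016 alone exceeded all of history through 2015" follows directly from the doubling assumption.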
Everything will become intelligent; soon we will not only have smart phones, but also smart homes, smart factories and smart cities. Should we also expect these developments to result in smart nations and a smarter planet?
The field of artificial intelligence is, indeed, making breathtaking advances. In particular, it is contributing to the automation of data analysis. Artificial intelligence is no longer programmed line by line, but is now capable of learning, thereby continuously developing itself. Recently, Google's DeepMind algorithm taught itself how to win 49 Atari games. Algorithms can now recognize handwritten language and patterns almost as well as humans and even complete some tasks better than them. They are able to describe the contents of photos and videos. Today 70% of all financial transactions are performed by algorithms. News content is, in part, automatically generated. This all has radical economic consequences: in the coming 10 to 20 years around half of today's jobs will be threatened by algorithms. 40% of today's top 500 companies will have vanished in a decade.
It can be expected that supercomputers will soon surpass human capabilities in almost all areas, somewhere between 2020 and 2060. Experts are starting to ring alarm bells. Technology visionaries, such as Elon Musk from Tesla Motors, Bill Gates from Microsoft and Apple co-founder Steve Wozniak, are warning that super-intelligence is a serious danger for humanity, possibly even more dangerous than nuclear weapons.
Is This Alarmism?
One thing is clear: the way in which we organize the economy and society will change fundamentally. We are experiencing the largest transformation since the end of the Second World War; after the automation of production and the creation of self-driving cars the automation of society is next. With this, society is at a crossroads, which promises great opportunities, but also considerable risks. If we take the wrong decisions it could threaten our greatest historical achievements.
In the 1940s, the American mathematician Norbert Wiener (1894-1964) invented cybernetics. According to him, the behavior of systems could be controlled by means of suitable feedback. Very soon, some researchers imagined controlling the economy and society according to this basic principle, but the necessary technology was not available at that time.
Today, Singapore is seen as a perfect example of a data-controlled society. What started as a program to protect its citizens from terrorism has ended up influencing economic and immigration policy, the property market and school curricula. China is taking a similar route. Recently, Baidu, the Chinese equivalent of Google, invited the military to take part in the China Brain Project. It involves running so-called deep learning algorithms over the search engine data collected about its users. Beyond this, a kind of social control is also planned. According to recent reports, every Chinese citizen will receive a so-called "Citizen Score", which will determine under what conditions they may get loans, jobs, or travel visas to other countries. This kind of individual monitoring would include people's Internet surfing and the behavior of their social contacts (see "Spotlight on China").
With consumers facing increasingly frequent credit checks and some online shops experimenting with personalized prices, we are on a similar path in the West. It is also increasingly clear that we are all the focus of institutional surveillance. This was revealed in 2015 when details of the British secret service's "Karma Police" program became public, showing the comprehensive screening of everyone's Internet use. Is Big Brother now becoming a reality?
Programmed Society, Programmed Citizens
Everything started quite harmlessly. Search engines and recommendation platforms began to offer us personalised suggestions for products and services. This information is based on personal and meta-data that has been gathered from previous searches, purchases and mobility behaviour, as well as social interactions. While officially the identity of the user is protected, it can, in practice, be inferred quite easily. Today, algorithms know pretty well what we do, what we think and how we feel, possibly even better than our friends and family or even ourselves. Often the recommendations we are offered fit so well that the resulting decisions feel as if they were our own, even though they are actually not our decisions. In fact, we are being remotely controlled ever more successfully in this manner. The more is known about us, the less likely our choices are to be free and not predetermined by others.
But it won't stop there. Some software platforms are moving towards "persuasive computing." In the future, using sophisticated manipulation technologies, these platforms will be able to steer us through entire courses of action, be it for the execution of complex work processes or to generate free content for Internet platforms, from which corporations earn billions. The trend goes from programming computers to programming people.
These technologies are also becoming increasingly popular in the world of politics. Under the label of "nudging," and on a massive scale, governments are trying to steer citizens towards healthier or more environmentally friendly behaviour by means of a "nudge", a modern form of paternalism. The new, caring government is not only interested in what we do, but also wants to make sure that we do the things that it considers to be right. The magic phrase is "big nudging", which is the combination of big data with nudging. To many, this appears to be a sort of digital scepter that allows one to govern the masses efficiently, without having to involve citizens in democratic processes. Could this overcome vested interests and optimize the course of the world? If so, then citizens could be governed by a data-empowered "wise king", who would be able to produce desired economic and social outcomes almost as if with a digital magic wand.
Pre-Programmed Catastrophes
But one look at the relevant scientific literature shows that attempts to control opinions, in the sense of their "optimization", are doomed to fail because of the complexity of the problem. The dynamics of the formation of opinions are full of surprises. Nobody knows how the digital magic wand, that is to say the manipulative nudging technique, should best be used. What would have been the right or wrong measure often is apparent only afterwards. During the German swine flu epidemic in 2009, for example, everybody was encouraged to go for vaccination. However, we now know that a certain percentage of those who received the immunization were affected by an unusual disease, narcolepsy. Fortunately, there were not more people who chose to get vaccinated!
Another example is the recent attempt by health insurance providers to encourage increased exercise by handing out smart fitness bracelets, with the aim of reducing the amount of cardiovascular disease in the population; in the end, however, this might result in more hip operations. In a complex system such as society, an improvement in one area almost inevitably leads to deterioration in another. Thus, large-scale interventions can sometimes prove to be massive mistakes.
Regardless of this, criminals, terrorists and extremists will sooner or later try to take control of the digital magic wand - and may well succeed, perhaps even without us noticing. Almost all companies and institutions have already been hacked, even the Pentagon, the White House, and the NSA.
A further problem arises when adequate transparency and democratic control are lacking: the erosion of the system from the inside. Search algorithms and recommendation systems can be influenced. Companies can bid on certain combinations of words to gain more favourable results. Governments are probably able to influence the outcomes too. During elections, they might nudge undecided voters towards supporting them - a manipulation that would be hard to detect. Therefore, whoever controls this technology can win elections - by nudging themselves to power.
This problem is exacerbated by the fact that, in many countries, a single search engine or social media platform has a predominant market share. It could decisively influence the public and interfere with these countries remotely. Even though the European Court of Justice judgment made on 6th October 2015 limits the unrestrained export of European data, the underlying problem still has not been solved within Europe, and even less so elsewhere.
What undesirable side effects can we expect? In order for manipulation to stay unnoticed, it takes a so-called resonance effect: suggestions that are sufficiently customized to each individual. In this way, local trends are gradually reinforced by repetition, leading all the way to the "filter bubble" or "echo chamber" effect: in the end, all you might get is your own opinions reflected back at you. This causes social polarization, resulting in the formation of separate groups that no longer understand each other and find themselves increasingly in conflict with one another. In this way, personalized information can unintentionally destroy social cohesion. This can currently be observed in American politics, where Democrats and Republicans are increasingly drifting apart, so that political compromises become almost impossible. The result is a fragmentation, possibly even a disintegration, of society.
Owing to the resonance effect, a large-scale change of opinion in society can only be produced slowly and gradually. The effects occur with a time lag, but they also cannot be easily undone. It is possible, for example, that resentment against minorities or migrants gets out of control; too much national sentiment can cause discrimination, extremism and conflict.
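To make the resonance effect more tangible, the following minimal sketch (in Python, purely illustrative and not taken from the article) shows how a personalized feed that only ever presents each person with the opinions closest to their own can amplify small local trends into separate, mutually alienated camps. All names, parameters and the update rule are assumptions chosen for the demonstration.

# Illustrative toy model of a "filter bubble": each agent only sees the
# opinions most similar to its own and drifts towards that personalised feed.
import random

def personalised_feed(opinions, i, k=5):
    """Return the average of the k opinions closest to agent i's own."""
    others = sorted(range(len(opinions)), key=lambda j: abs(opinions[j] - opinions[i]))
    neighbours = [j for j in others if j != i][:k]
    return sum(opinions[j] for j in neighbours) / k

def simulate(n_agents=200, steps=300, learning_rate=0.1, seed=1):
    random.seed(seed)
    opinions = [random.uniform(-1.0, 1.0) for _ in range(n_agents)]
    for _ in range(steps):
        # Every agent drifts a little towards what its personalised feed shows it.
        opinions = [o + learning_rate * (personalised_feed(opinions, i) - o)
                    for i, o in enumerate(opinions)]
    return opinions

if __name__ == "__main__":
    final = simulate()
    left = [o for o in final if o < 0]
    right = [o for o in final if o >= 0]
    print(f"camp sizes: {len(left)} vs {len(right)}")
    if left and right:
        print(f"gap between camp averages: {abs(sum(left)/len(left) - sum(right)/len(right)):.2f}")

With the assumed parameters, the initially spread-out opinions typically collapse into a few tight clusters rather than converging on a shared middle ground, which is the polarizing dynamic described above.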
Perhaps even more significant is the fact that manipulative methods change the way we make our decisions. They override the otherwise relevant cultural and social cues, at least temporarily. In summary, the large-scale use of manipulative methods could cause serious social damage, including the brutalization of behavior in the digital world. Who should be held responsible for this?
Legal Issues
This raises legal issues that, given the huge fines against tobacco companies, banks, IT and automotive companies over the past few years, should not be ignored. But which laws, if any, might be violated? First of all, it is clear that manipulative technologies restrict the freedom of choice. If the remote control of our behaviour worked perfectly, we would essentially be digital slaves, because we would only execute decisions that had actually been made by others beforehand. Of course, manipulative technologies are only partly effective. Nevertheless, our freedom is disappearing slowly but surely - in fact, slowly enough that there has been little resistance from the population so far.
The insights of the great Enlightenment thinker Immanuel Kant seem to be highly relevant here. Among other things, he noted that a state that attempts to determine the happiness of its citizens is a despot. However, the right of individual self-development can only be exercised by those who have control over their lives, which presupposes informational self-determination. This is about nothing less than our most important constitutional rights. A democracy cannot work well unless those rights are respected. If they are constrained, this undermines our constitution, our society and the state.
As manipulative technologies such as big nudging function in a similar way to personalized advertising, other laws are affected too. Advertisements must be marked as such and must not be misleading. They are also not allowed to utilize certain psychological tricks such as subliminal stimuli. This is why it is prohibited to show a soft drink in a film for a split-second, because then the advertising is not consciously perceptible while it may still have a subconscious effect. Furthermore, the current widespread collection and processing of personal data is certainly not compatible with the applicable data protection laws in European countries and elsewhere.
Finally, the legality of personalized pricing is questionable, because it could be a misuse of insider information. Other relevant aspects are possible breaches of the principles of equality and non-discriminationâand of competition laws, as free market access and price transparency are no longer guaranteed. The situation is comparable to businesses that sell their products cheaper in other countries, but try to prevent purchases via these countries. Such cases have resulted in high punitive fines in the past.
Personalized advertising and pricing cannot be compared to classical advertising or discount coupons, as the latter are non-specific and also do not invade our privacy with the goal to take advantage of our psychological weaknesses and knock out our critical thinking.
Furthermore, let us not forget that, in the academic world, even harmless decision experiments are considered to be experiments with human subjects, which would have to be approved by a publicly accountable ethics committee. In each and every case the persons concerned are required to give their informed consent. In contrast, a single click to confirm that we agree with the contents of a hundred-page "terms of use" agreement (which is the case these days for many information platforms) is woefully inadequate.
Nonetheless, experiments with manipulative technologies, such as nudging, are performed with millions of people, without informing them, without transparency and without ethical constraints. Even large social networks like Facebook or online dating platforms such as OkCupid have already publicly admitted to undertaking these kinds of social experiments. If we want to avoid irresponsible research on humans and society (just think of the involvement of psychologists in the torture scandals of the recent past), then we urgently need to impose high standards, especially scientific quality criteria and a code of conduct similar to the Hippocratic Oath.
Has Our Thinking, Our Freedom, Our Democracy Been Hacked?
Let us suppose there was a super-intelligent machine with godlike knowledge and superhuman abilities: would we follow its instructions? This seems possible. But if we did that, then the warnings expressed by Elon Musk, Bill Gates, Steve Wozniak, Stephen Hawking and others would have become true: computers would have taken control of the world. We must be clear that a super-intelligence could also make mistakes, lie, pursue selfish interests or be manipulated. Above all, it could not be compared with the distributed, collective intelligence of the entire population.
The idea of replacing the thinking of all citizens with a computer cluster would be absurd, because it would dramatically lower the diversity and quality of the solutions achievable. It is already clear that the problems of the world have not decreased despite the recent flood of data and the use of personalized information - on the contrary! World peace is fragile. The long-term change in the climate could lead to the greatest loss of species since the extinction of the dinosaurs. We are also far from having overcome the financial crisis and its impact on the economy. Cyber-crime is estimated to cause an annual loss of 3 trillion dollars. States and terrorists are preparing for cyberwarfare.
In a rapidly changing world a super-intelligence can never make perfect decisions (see Fig. 1): systemic complexity is increasing faster than data volumes, which are growing faster than the ability to process them, and data transfer rates are limited. This results in disregarding local knowledge and facts, which are important to reach good solutions. Distributed, local control methods are often superior to centralized approaches, especially in complex systems whose behaviors are highly variable, hardly predictable and not capable of real-time optimization. This is already true for traffic control in cities, but even more so for the social and economic systems of our highly networked, globalized world.
Furthermore, there is a danger that the manipulation of decisions by powerful algorithms undermines the basis of "collective intelligence," which can flexibly adapt to the challenges of our complex world. For collective intelligence to work, information searches and decision-making by individuals must occur independently. If our judgments and decisions are predetermined by algorithms, however, this truly leads to a brainwashing of the people. Intelligent beings are downgraded to mere receivers of commands, who automatically respond to stimuli.
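The dependence of collective intelligence on independent judgments can be illustrated with a classic wisdom-of-the-crowds estimation task. The short sketch below is illustrative only; the numbers and the "nudge weight" parameter are assumptions, not results from the article. It compares the error of a crowd's average guess when people estimate independently with the error when an algorithm pulls everyone towards the same suggested answer.

# Toy wisdom-of-the-crowds experiment: how steering everyone towards one
# algorithmic suggestion destroys the accuracy of the crowd's average.
import random

def crowd_error(true_value=100.0, n=1000, noise=20.0, nudge_weight=0.0,
                suggestion=130.0, seed=42):
    random.seed(seed)
    estimates = []
    for _ in range(n):
        independent = random.gauss(true_value, noise)   # each person's own judgment
        # nudge_weight = 0 -> fully independent; 1 -> everyone repeats the suggestion
        estimates.append((1 - nudge_weight) * independent + nudge_weight * suggestion)
    crowd_guess = sum(estimates) / n
    return abs(crowd_guess - true_value)

if __name__ == "__main__":
    for w in (0.0, 0.5, 0.9):
        print(f"nudge weight {w:.1f}: crowd error ~ {crowd_error(nudge_weight=w):.1f}")

As the nudge weight grows, the crowd's collective error approaches the bias of the single suggestion: the aggregate is no wiser than the algorithm that steered it.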
In other words: personalized information builds a "filter bubble" around us, a kind of digital prison for our thinking. How could creativity and thinking "out of the box" be possible under such conditions? Ultimately, a centralized system of technocratic behavioral and social control using a super-intelligent information system would result in a new form of dictatorship. Therefore, the top-down controlled society, which comes under the banner of "liberal paternalism," is in principle nothing else than a totalitarian regime with a rosy cover.
In fact, big nudging aims to bring the actions of many people into line, and to manipulate their perspectives and decisions. This puts it in the arena of propaganda and the targeted incapacitation of the citizen by behavioral control. We expect that the consequences would be fatal in the long term, especially when considering the above-mentioned effect of undermining culture.
A Better Digital Society Is Possible
Despite fierce global competition, democracies would be wise not to cast the achievements of many centuries overboard. In contrast to other political regimes, Western democracies have the advantage that they have already learned to deal with pluralism and diversity. Now they just have to learn how to capitalize on them more.
In the future, the countries that lead will be those that reach a healthy balance between business, government and citizens. This requires networked thinking and the establishment of an information, innovation, product and service "ecosystem." In order to work well, it is not only important to create opportunities for participation, but also to support diversity. This is because there is no way to determine the single best goal function: should we optimize the gross national product per capita or sustainability? Power or peace? Happiness or life expectancy? Often enough, what would have been better is only known after the fact. By allowing the pursuit of various different goals, a pluralistic society is better able to cope with the range of unexpected challenges to come.
Centralized, top-down control is a solution of the past, which is only suitable for systems of low complexity. Therefore, federal systems and majority decisions are the solutions of the present. With economic and cultural evolution, social complexity will continue to rise. Therefore, the solution for the future is collective intelligence. This means that citizen science, crowdsourcing and online discussion platforms are eminently important new approaches to making more knowledge, ideas and resources available.
Collective intelligence requires a high degree of diversity. This is, however, being reduced by today's personalized information systems, which reinforce trends.
Sociodiversity is as important as biodiversity. It fuels not only collective intelligence and innovation, but also resilience - the ability of our society to cope with unexpected shocks. Reducing sociodiversity often also reduces the functionality and performance of an economy and society. This is the reason why totalitarian regimes often end up in conflict with their neighbors. Typical long-term consequences are political instability and war, as have occurred time and again throughout history. Pluralism and participation are therefore not to be seen primarily as concessions to citizens, but as functional prerequisites for thriving, complex, modern societies.
In summary, it can be said that we are now at a crossroads (see Fig. 2). Big data, artificial intelligence, cybernetics and behavioral economics are shaping our society - for better or worse. If such widespread technologies are not compatible with our society's core values, sooner or later they will cause extensive damage. They could lead to an automated society with totalitarian features. In the worst case, a centralized artificial intelligence would control what we know, what we think and how we act. We are at a historic moment where we have to decide on the right path - a path that allows us all to benefit from the digital revolution. Therefore, we urge adherence to the following fundamental principles:
1. to increasingly decentralize the function of information systems;
2. to support informational self-determination and participation;
3. to improve transparency in order to achieve greater trust;
4. to reduce the distortion and pollution of information;
5. to enable user-controlled information filters;
6. to support social and economic diversity;
7. to improve interoperability and collaborative opportunities;
8. to create digital assistants and coordination tools;
9. to support collective intelligence, and
10. to promote responsible behavior of citizens in the digital world through digital literacy and enlightenment.
Following this digital agenda we would all benefit from the fruits of the digital revolution: the economy, government and citizens alike. What are we waiting for?
A Strategy for the Digital Age
Big data and artificial intelligence are undoubtedly important innovations. They have an enormous potential to catalyze economic value and social progress, from personalized healthcare to sustainable cities. It is totally unacceptable, however, to use these technologies to incapacitate the citizen. Big nudging and citizen scores abuse centrally collected personal data for behavioral control in ways that are totalitarian in nature. This is not only incompatible with human rights and democratic principles, but also inappropriate to manage modern, innovative societies. In order to solve the genuine problems of the world, far better approaches in the fields of information and risk management are required. The research area of responsible innovation and the initiative "Data for Humanity" (see "Big Data for the benefit of society and humanity") provide guidance as to how big data and artificial intelligence should be used for the benefit of society.
What can we do now? First, even in these times of digital revolution, the basic rights of citizens should be protected, as they are a fundamental prerequisite of a modern functional, democratic society. This requires the creation of a new social contract, based on trust and cooperation, which sees citizens and customers not as obstacles or resources to be exploited, but as partners. For this, the state would have to provide an appropriate regulatory framework, which ensures that technologies are designed and used in ways that are compatible with democracy. This would have to guarantee informational self-determination, not only theoretically, but also practically, because it is a precondition for us to lead our lives in a self-determined and responsible manner.
There should also be a right to get a copy of personal data collected about us. It should be regulated by law that this information must be automatically sent, in a standardized format, to a personal data store, through which individuals could manage the use of their data (potentially supported by particular AI-based digital assistants). To ensure greater privacy and to prevent discrimination, the unauthorised use of data would have to be punishable by law. Individuals would then be able to decide who can use their information, for what purpose and for how long. Furthermore, appropriate measures should be taken to ensure that data is securely stored and exchanged.
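As one way of picturing what such a personal data store could look like in practice, here is a hypothetical sketch of purpose- and time-limited consent records. The class names, fields and example values are invented for illustration; they do not describe any existing system or legal standard.

# Hypothetical sketch of a personal data store with purpose- and time-limited
# consent records, to make "informational self-determination" concrete.
from dataclasses import dataclass, field
from datetime import date

@dataclass
class Consent:
    recipient: str       # who may use the data, e.g. "my-doctor.example"
    categories: set      # which data, e.g. {"heart_rate", "medication"}
    purpose: str         # what the data may be used for, e.g. "treatment"
    valid_until: date    # how long the permission lasts

@dataclass
class PersonalDataStore:
    owner: str
    consents: list = field(default_factory=list)

    def grant(self, consent):
        self.consents.append(consent)

    def is_allowed(self, recipient, category, purpose, today):
        """Check whether a requested use is covered by an unexpired consent."""
        return any(c.recipient == recipient
                   and category in c.categories
                   and c.purpose == purpose
                   and today <= c.valid_until
                   for c in self.consents)

if __name__ == "__main__":
    store = PersonalDataStore(owner="alice")
    store.grant(Consent("my-doctor.example", {"heart_rate"}, "treatment", date(2026, 12, 31)))
    print(store.is_allowed("my-doctor.example", "heart_rate", "treatment", date(2026, 1, 15)))   # True
    print(store.is_allowed("ad-network.example", "heart_rate", "marketing", date(2026, 1, 15)))  # False

The point of such a design is that every use of the data has to be matched against an explicit, expiring permission granted by the data subject, rather than against a blanket "terms of use" agreement.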
Sophisticated reputation systems considering multiple criteria could help to increase the quality of information on which our decisions are based. If data filters and recommendation and search algorithms were selectable and configurable by the user, we could look at problems from multiple perspectives, and we would be less prone to manipulation by distorted information.
In addition, we need an efficient complaints procedure for citizens, as well as effective sanctions for violations of the rules. Finally, in order to create sufficient transparency and trust, leading scientific institutions should act as trustees of the data and algorithms that currently evade democratic control. This would also require an appropriate code of conduct that, at the very least, would have to be followed by anyone with access to sensitive data and algorithms - a kind of Hippocratic Oath for IT professionals.
Furthermore, we would require a digital agenda to lay the foundation for new jobs and the future of the digital society. Every year we invest billions in the agricultural sector and public infrastructure, schools and universities - to the benefit of industry and the service sector.
Which public systems do we therefore need to ensure that the digital society becomes a success? First, completely new educational concepts are needed. These should focus more on critical thinking, creativity, inventiveness and entrepreneurship than on creating standardised workers (whose tasks, in the future, will be done by robots and computer algorithms). Education should also provide an understanding of the responsible and critical use of digital technologies, because citizens must be aware of how the digital world is intertwined with the physical one. In order to effectively and responsibly exercise their rights, citizens must have an understanding of these technologies, but also of what uses are illegitimate. This is why there is all the more need for science, industry, politics, and educational institutions to make this knowledge widely available.
Secondly, a participatory platform is needed that makes it easier for people to become self-employed, set up their own projects, find collaboration partners, market products and services worldwide, manage resources and pay tax and social security contributions (a kind of sharing economy for all). To complement this, towns and even villages could set up centers for the emerging digital communities (such as fab labs), where ideas can be jointly developed and tested for free. Thanks to the open and innovative approach found in these centers, massive, collaborative innovation could be promoted.
Particular kinds of competitions could provide additional incentives for innovation, help increase public visibility and generate momentum for a participatory digital society. They could be particularly useful in mobilising civil society to ensure local contributions to global problem-solving (for example, by means of "Climate Olympics"). For instance, platforms aiming to coordinate scarce resources could help unleash the huge potential of the circular and sharing economy, which is still largely untapped.
With the commitment to an open data strategy, governments and industry would increasingly make data available for science and public use, to create suitable conditions for an efficient information and innovation ecosystem that keeps pace with the challenges of our world. This could be encouraged by tax cuts, in the same way as they were granted in some countries for the use of environmentally friendly technologies.
Thirdly, building a "digital nervous system," run by the citizens, could open up new opportunities of the Internet of Things for everyone and provide real-time data measurements available to all. If we want to use resources in a more sustainable way and slow down climate change, we need to measure the positive and negative side effects of our interactions with others and our environment. By using appropriate feedback loops, systems could be influenced in such a way that they achieve the desired outcomes by means of self-organization.
For this to succeed we would need various incentive and exchange systems, available to all economic, political and social innovators. This could create entirely new markets and, therefore, also the basis for new prosperity. Unleashing the virtually unlimited potential of the digital economy would be greatly promoted by a pluralistic financial system (for example, functionally differentiated currencies) and new regulations for the compensation for inventions.
To better cope with the complexity and diversity of our future world and to turn it into an advantage, we will require personal digital assistants. These digital assistants will also benefit from developments in the field of artificial intelligence. In the future it can be expected that numerous networks combining human and artificial intelligence will be flexibly built and reconfigured, as needed. However, in order for us to retain control of our lives, these networks should be controlled in a distributed way. In particular, one would also have to be able to log in and log out as desired.
Democratic Platforms
A "Wikipedia of Cultures" could eventually help to coordinate various activities in a highly diverse world and to make them compatible with each other. It would make the mostly implicit success principles of the world's cultures explicit, so that they could be combined in new ways. A "Cultural Genome Project" like this would also be a kind of peace project, because it would raise public awareness for the value of sociocultural diversity. Global companies have long known that culturally diverse and multidisciplinary teams are more successful than homogeneous ones. However, the framework needed to efficiently collate knowledge and ideas from lots of people in order to create collective intelligence is still missing in many places. To change this, the provision of online deliberation platforms would be highly useful. They could also create the framework needed to realize an upgraded, digital democracy, with greater participatory opportunities for citizens. This is important, because many of the problems facing the world today can only be managed with contributions from civil society.
Further Reading:
ACLU: Orwellian Citizen Score, China's credit score system, is a warning for Americans. Computerworld, http://www.computerworld.com/article/2990203/security/aclu-orwellian-citizen-score-chinas-credit-score-system-is-a-warning-for-americans.html
Big data, meet Big Brother: China invents the digital totalitarian state. The worrying implications of its social-credit project. The Economist (December 17, 2016).
Harris, S.: The Social Laboratory. Foreign Policy (29 July 2014), http://foreignpolicy.com/2014/07/29/the-social-laboratory/
Tong, V.J.C.: Predicting how people think and behave. International Innovation, http://www.internationalinnovation.com/predicting-how-people-think-and-behave/
Mnih, V., Kavukcuoglu, K., Silver, D., et al.: Human-level control through deep reinforcement learning. Nature 518, pp. 529-533, 2015.
Frey, B.S. and Gallus, J.: Beneficial and Exploitative Nudges. In: Economic Analysis of Law in European Legal Scholarship. Springer, 2015.
Gigerenzer, G.: On the Supposed Evidence for Libertarian Paternalism. Review of Philosophy and Psychology 6(3), pp. 361-383, 2015.
Grassegger, H. and Krogerus, M.: Ich habe nur gezeigt, dass es die Bombe gibt [I have only shown that the bomb exists]. Das Magazin (3 December 2016), https://www.dasmagazin.ch/2016/12/03/ich-habe-nur-gezeigt-dass-es-die-bombe-gibt/
Hafen, E., Kossmann, D. and Brand, A.: Health data cooperatives - citizen empowerment. Methods of Information in Medicine 53(2), pp. 82-86, 2014.
Helbing, D.: The Automation of Society Is Next: How to Survive the Digital Revolution. CreateSpace, 2015.
Helbing, D.: Thinking Ahead - Essays on Big Data, Digital Revolution, and Participatory Market Society. Springer, 2015.
Helbing, D. and Pournaras, E.: Build Digital Democracy. Nature 527, pp. 33-34, 2015.
van den Hoven, J., Vermaas, P.E. and van de Poel, I.: Handbook of Ethics, Values and Technological Design. Springer, 2015.
Zicari, R. and Zwitter, A.: Data for Humanity: An Open Letter. Frankfurt Big Data Lab, 13 July 2015.
Zwitter, A.: Big Data Ethics. Big Data & Society 1(2), 2014.
Thanks to Big Data, we can now make better, evidence-based decisions. However, the principle of top-down control increasingly fails, since the complexity of society grows in an explosive way as we go on networking our world. Distributed control approaches will become ever more important. Only by means of collective intelligence will it be possible to find appropriate solutions to the complexity challenges of our world.
Our society is at a crossroads: If ever more powerful algorithms were controlled by a few decision-makers and reduced our self-determination, we would fall back into a Feudalism 2.0, as important historical achievements would be lost. Now, however, we have the chance to choose the path to digital democracy or democracy 2.0, which would benefit us all (see also https://vimeo.com/147442522).
Spotlight on China: Is this what the Future of Society looks like?
How would behavioural and social control impact our lives? The concept of a Citizen Score, which is now being implemented in China, gives an idea. There, all citizens are rated on a one-dimensional ranking scale. Everything they do gives plus or minus points. This is not only aimed at mass surveillance. The score depends on an individual's clicks on the Internet and on whether or not their conduct is deemed politically correct, and it determines their credit terms, their access to certain jobs, and travel visas. The Citizen Score is therefore about behavioural and social control. Even the behaviour of friends and acquaintances affects this score; in other words, the principle of clan liability is also applied: everyone becomes both a guardian of virtue and a kind of snooping informant at the same time, and unorthodox thinkers are isolated. (A deliberately simplified sketch of such a scoring mechanism is given after the list below.) Were similar principles to spread in democratic countries, it would be ultimately irrelevant whether it was the state or influential companies that set the rules. In both cases, the pillars of democracy would be directly threatened:
The tracking and measuring of all activities that leave digital traces would create a "naked" citizen, whose human dignity and privacy would progressively be degraded.
Decisions would no longer be free, because a wrong choice from the perspective of the government or company defining the criteria of the points system would have negative consequences. The autonomy of the individual would, in principle, be abolished.
Each small mistake would be punished and no one would be above suspicion. The principle of the presumption of innocence would become obsolete. Predictive Policing could even lead to punishment for violations that have not happened, but are merely expected to occur.
As the underlying algorithms cannot operate completely free of error, the principle of fairness and justice would be replaced by a new kind of arbitrariness, against which people would barely be able to defend themselves.
If individual goals were externally set, the possibility of individual self-development would be eliminated and, thereby, democratic pluralism, too.
Local culture and social norms would no longer be the basis of appropriate, situation-dependent behaviour.
The control of society with a one-dimensional goal function would lead to more conflicts and, therefore, to a loss of security. One would have to expect serious instability, as we have seen in our financial system.
Such control of society would turn away from self-responsible citizens towards individuals as underlings, leading to a Feudalism 2.0. This is diametrically opposed to democratic values. It is therefore time for an Enlightenment 2.0, which would feed into a Democracy 2.0, based on digital self-determination. This requires democratic technologies: information systems that are compatible with democratic principles - otherwise they will destroy our society.
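To make the scoring mechanism sketched above more tangible, here is the deliberately simplified, hypothetical illustration promised earlier: individual actions earn plus or minus points, and a share of each person's score is inherited from the behaviour of their friends ("clan liability"). Every point value and weight below is invented purely for illustration; the rules of the real system are not public.

# Hypothetical, deliberately simplified citizen-score logic. All point values
# and the clan-liability weight are invented for illustration only.
ACTION_POINTS = {
    "praise_government_post": 5,
    "buy_local_products": 2,
    "read_banned_article": -10,
    "late_bill_payment": -3,
}

def own_score(actions):
    return sum(ACTION_POINTS.get(a, 0) for a in actions)

def citizen_score(person, actions_by_person, friends_by_person, clan_weight=0.3):
    """One-dimensional score: own behaviour plus a share of the friends' average score."""
    own = own_score(actions_by_person.get(person, []))
    friends = friends_by_person.get(person, [])
    if not friends:
        return own
    friends_avg = sum(own_score(actions_by_person.get(f, [])) for f in friends) / len(friends)
    return own + clan_weight * friends_avg

if __name__ == "__main__":
    actions = {
        "citizen_a": ["praise_government_post", "buy_local_products"],
        "citizen_b": ["read_banned_article"],
    }
    friends = {"citizen_a": ["citizen_b"], "citizen_b": ["citizen_a"]}
    for person in ("citizen_a", "citizen_b"):
        print(person, round(citizen_score(person, actions, friends), 1))

Even this toy version shows the chilling mechanism: a friend's "wrong" click lowers your own score, so everyone acquires an incentive to police, or drop, unorthodox acquaintances.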
"BIG NUDGING" - ILL-DESIGNED FOR PROBLEM SOLVING
He who has large amounts of data can manipulate people in subtle ways. But even benevolent decision-makers may do more wrong than right, says Dirk Helbing.
Proponents of Nudging argue that people do not make optimal decisions and that it is, therefore, necessary to help them. This school of thinking is known as paternalism. However, Nudging does not take the route of informing and persuading people. Rather, it exploits psychological weaknesses in order to steer us towards certain behaviours; in other words, we are tricked. The scientific approach underlying this is called "behaviorism", which has actually long been out of date.
Decades ago, Burrhus Frederic Skinner conditioned rats, pigeons and dogs by rewards and punishments (for example, by feeding them or applying painful electric shocks). Today one tries to condition people in similar ways. Instead of in a Skinner box, we are living in a "filter bubble": with personalized information, our thinking is being steered. With personalized prices, we may even be punished or rewarded, for example, for (un)desired clicks on the Internet. The combination of Nudging with Big Data has therefore led to a new form of Nudging that we may call "Big Nudging". The increasing amount of personal information about us, which is often collected without our consent, reveals what we think, how we feel and how we can be manipulated. This insider information is exploited to manipulate us into making choices that we would otherwise not make: to buy overpriced products or products that we do not need, or perhaps to give our vote to a certain political party.
However, Big Nudging is not suitable for solving many of our problems. This is particularly true for the complexity-related challenges of our world. Although 90 countries already use Nudging, it has not reduced our societal problems - on the contrary. Global warming is progressing. World peace is fragile, and terrorism is on the rise. Cybercrime is exploding, and the economic and debt crisis remains unsolved in many countries.
There is also no solution to the inefficiency of financial markets, as Nudging guru Richard Thaler recently admitted. In his view, if the state were to control financial markets, this would rather aggravate the problem. But why, then, should one try to control society in a top-down way, when it is even more complex than a financial market? Society is not a machine, and complex systems cannot be steered like a car. This can be understood by considering another complex system: our bodies. To cure diseases, one needs to take the right medicine at the right time in the right dose. Many treatments also have serious side effects and interactions. The same, of course, is to be expected for social interventions by Big Nudging. Often it is not clear in advance what would be good or bad for society. 60 percent of the scientific results in psychology are not reproducible. Therefore, Big Nudging is likely to cause more harm than good.
Furthermore, there is no measure that is good for all people. For example, in recent decades we have seen food advisories changing all the time. Many people also suffer from food intolerances, which can even be fatal. Mass screenings for certain kinds of cancer and other diseases are now viewed quite critically, because the side effects of wrong diagnoses often outweigh the benefits. Therefore, if one decided to use Big Nudging, a solid scientific basis, transparency, ethical evaluation and democratic control would be really crucial. The measures taken would have to guarantee statistically significant improvements, and the side effects would have to be acceptable. Users should be made aware of them (in analogy to a medical leaflet), and the treated persons would have to have the last word.
In addition, applying one and the same measure to the entire population would not be good. But far too little is known to take appropriate individual measures. Not only is it important for society to apply different treatments in order to maintain diversity, but correlations (regarding what measure to take in what particular context) matter as well. For the functioning of society it is essential that people take on different roles, fitting the respective situations they are in. Big Nudging is far from being able to deliver this.
Current Big-Data-based personalization instead creates new problems, such as discrimination. For instance, if we make health insurance rates dependent on certain diets, then Jews, Muslims and Christians, women and men, will have to pay different rates. Thus, a whole raft of new problems arises.
Richard Thaler therefore never tires of emphasizing that Nudging should only be used in beneficial ways. As a prime example of how Nudging should be used, he mentions a GPS-based route guidance system. This, however, is turned on and off by the user. The user also specifies the respective goal. The digital assistant then offers several alternatives, between which the user can freely choose. After that, the digital assistant supports the user as well as it can in reaching the goal and in making better decisions. This would certainly be the right approach to improve people's behaviours, but today the spirit of Big Nudging is quite different.
DIGITAL SELF-DETERMINATION BY MEANS OF A âRIGHT TO A COPYâ
by Ernst Hafen
Europe must guarantee citizens a right to a digital copy of all data about them (Right to a Copy), says Ernst Hafen. A first step towards data democracy would be to establish cooperative banks for personal data that are owned by the citizens rather than by corporate shareholders.
Medicine can profit from health data. However, access to personal data must be controlled by the persons concerned (the data subjects) themselves. The "Right to a Copy" forms the basis for such control.
In Europe, we like to point out that we live in free, democratic societies. We have, however, almost unconsciously become dependent on multinational data firms, whose free services we pay for with our own data. Personal data, now sometimes referred to as a "new asset class" or the oil of the 21st Century, is greatly sought after. However, thus far nobody has managed to extract the maximum use from personal data because it lies in many different data sets. Google and Facebook may know more about our health than our doctor, but even these firms cannot collate all of our data, because they rightly do not have access to our patient files, shopping receipts, or information about our genomic make-up. In contrast to other assets, data can be copied with almost no associated cost. Every person should have the right to obtain a copy of all their personal data. In this way, they can control the use and aggregation of their data and decide themselves whether to give access to friends, another doctor, or the scientific community.
The emergence of mobile health sensors and apps means that patients can contribute significant medical insights. By recording their bodily health on their smartphones, such as medical indicators and the side effects of medications, they supply important data which make it possible to observe how treatments are applied, evaluate health technologies, and conduct evidence-based medicine in general. It is also a moral obligation to give citizens access to copies of their data and allow them to take part in medical research, because it will save lives and make health care more affordable.
European countries should copper-fasten the digital self-determination of their citizens by enshrining the "Right to a Copy" in their constitutions, as has been proposed in Switzerland. In this way, citizens can use their data to play an active role in the global data economy. If they can store copies of their data in non-profit, citizen-controlled, cooperative institutions, a large portion of the economic value of personal data could be returned to society. The cooperative institutions would act as trustees in managing the data of their members. This would result in the democratization of the market for personal data and the end of digital dependence.
DEMOCRATIC DIGITAL SOCIETY
Citizens must be allowed to actively participate
In order to deal with future technology in a responsible way, it is necessary that each one of us can participate in the decision-making process, argues Bruno S. Frey from the University of Basel.
How can responsible innovation be promoted effectively? Appeals to the public have little, if any, effect if the institutions or rules shaping human interactions are not designed to incentivize and enable people to meet these requests.
Several types of institutions should be considered. Most importantly, society must be decentralized, following the principle of subsidiarity. Three dimensions matter.
Spatial decentralization consists in vibrant federalism. The provinces, regions and communes must be given sufficient autonomy. To a large extent, they must be able to set their own tax rates and govern their own public expenditure.
Functional decentralization according to area of public expenditure (for example education, health, environment, water provision, traffic, culture, etc.) is also desirable. This concept has been developed through the proposal of FOCJ, or "Functional, Overlapping and Competing Jurisdictions".
Political decentralization relates to the division of power between the executive (government), the legislature (parliament) and the courts. Public media and academia should be additional pillars.
These types of decentralization will continue to be of major importance in the digital society of the future.
In addition, citizens must have the opportunity to directly participate in decision-making on particular issues by means of popular referenda. In the discourse prior to such a referendum, all relevant arguments should be brought forward and stated in an organized fashion. The various proposals about how to solve a particular problem should be compared, narrowed down to those which seem most promising, and integrated insofar as possible during a mediation process. Finally, a referendum needs to take place, which serves to identify the most viable solution for the local conditions (viable in the sense that it enjoys a diverse range of support in the electorate).
Nowadays, online deliberation tools can efficiently support such processes. This makes it possible to consider a larger and more diverse range of ideas and knowledge, harnessing "collective intelligence" to produce better policy proposals.
Another way to implement the ten proposals would be to create new, unorthodox institutions. For example, it could be made compulsory for every official body to take on an "advocatus diaboli". This lateral thinker would be tasked with developing counter-arguments and alternatives to each proposal. This would reduce the tendency to think along the lines of "political correctness", and unconventional approaches to the problem would also be considered.
Another unorthodox measure would be to choose among the alternatives considered reasonable during the discourse process using random decision-making mechanisms. Such an approach increases the chance that unconventional and generally disregarded proposals and ideas would be integrated into the digital society of the future.
Bruno S. Frey
Bruno Frey (born 1941) is an academic economist and Permanent Visiting Professor at the University of Basel, where he directs the Center for Research in Economics and Well-Being (CREW). He is also Research Director of the Center for Research in Economics, Management and the Arts (CREMA) in Zurich.
DEMOCRATIC TECHNOLOGIES AND RESPONSIBLE INNOVATION
When technology determines how we see the world, there is a threat of misuse and deception. Thus, innovation must reflect our values, argues Jeroen van den Hoven.
Germany was recently rocked by an industrial scandal of global proportions. The revelations led to the resignation of the CEO of one of the largest car manufacturers, a grave loss of consumer confidence, a dramatic slump in share price and economic damage for the entire car industry. There was even talk of severe damage to the "Made in Germany" brand. The compensation payments will be in the range of billions of Euros.
The background to the scandal was a situation whereby VW and other car manufacturers used manipulative software which could detect the conditions under which the environmental compliance of a vehicle was tested. The software algorithm altered the behavior of the engine so that it emitted fewer pollutant exhaust fumes under test conditions than in normal circumstances. In this way, it cheated the test procedure. The full reduction of emissions occurred only during the tests, but not in normal use.
Similarly, algorithms, computer code, software, models and data will increasingly determine what we see in the digital society, and what our choices are with regard to health insurance, finance and politics. This brings new risks for the economy and society. In particular, there is a danger of deception.
Thus, it is important to understand that our values are embodied in the things we create. Otherwise, the technological design of the future will determine the shape of our society ("code is law"). If these values are self-serving, discriminatory or contrary to the ideals of freedom and personal privacy, this will damage our society. Thus, in the 21st Century we must urgently address the question of how we can implement ethical standards technologically. The challenge calls for us to "design for value".
If we lack the motivation to develop the technological tools, science and institutions necessary to align the digital world with our shared values, the future looks very bleak. Thankfully, the European Union has invested in an extensive research and development program for responsible innovation. Furthermore, the EU countries which passed the Lund and Rome Declarations emphasized that innovation needs to be carried out responsibly. Among other things, this means that innovation should be directed at developing intelligent solutions to societal problems, which can harmonize values such as efficiency, security and sustainability. Genuine innovation does not involve deceiving people into believing that their cars are sustainable and efficient. Genuine innovation means creating technologies that can actually satisfy these requirements.
DIGITAL RISK LITERACY
Technology needs users who can control it
Rather than letting intelligent technology diminish our brainpower, we should learn to better control it, says Gerd Gigerenzer - beginning in childhood.
The digital revolution provides an impressive array of possibilities: thousands of apps, the Internet of Things, and almost permanent connectivity to the world. But in the excitement, one thing is easily forgotten: innovative technology needs competent users who can control it rather than be controlled by it.
Three examples:
One of my doctoral students sits at his computer and appears to be engrossed in writing his dissertation. At the same time his e-mail inbox is open, all day long. He is in fact waiting to be interrupted. It's easy to recognize how many interruptions he had in the course of the day by looking at the flow of his writing.
An American student writes text messages while driving:
"When a text comes in, I just have to look, no matter what. Fortunately, my phone shows me the text as a pop up at first⦠so I don't have to do too much looking while I'm driving." If, at the speed of 50 miles per hour, she takes only 2 seconds to glance at her cell phone, she's just driven 48 yards "blind". That young woman is risking a car accident. Her smart phone has taken control of her behaviorâas is the case for the 20 to 30 percent of Germans who also text while driving.
During the parliamentary elections in India in 2014, the largest democratic election in the world with over 800 million potential voters, there were three main candidates: N. Modi, A. Kejriwal, and R. Gandhi. In a study, undecided voters could find out more information about these candidates using an Internet search engine. However, the participants did not know that the web pages had been manipulated: for one group, more positive items about Modi popped up on the first page and negative ones later on. The other groups experienced the same for the other candidates. This and similar manipulative procedures are common practice on the Internet. It is estimated that for candidates who appear on the first page thanks to such manipulation, the number of votes they receive from undecided voters increases by 20 percentage points.
In each of these cases, human behavior is controlled by digital technology. Losing control is nothing new, but the digital revolution has increased the possibility of that happening.
What can we do? There are three competing visions. One is techno-paternalism, which replaces (flawed) human judgment with algorithms. The distracted doctoral student could continue reading his emails and use thesis-writing software; all he would need to do is input key information on the topic. Such algorithms would solve the annoying problem of plagiarism scandals by making them an everyday occurrence.
Although such software is still in the domain of science fiction, human judgment is already being replaced by computer programs in many areas. The BabyConnect app, for instance, tracks the daily development of infants - height, weight, number of times the baby was nursed, how often its diapers were changed, and much more - while newer apps compare the baby with other users' children in a real-time database. For parents, their baby becomes a data vector, and normal discrepancies often cause unnecessary concern.
The second vision is known as "nudging". Rather than letting the algorithm do all the work, people are steered in a particular direction, often without being aware of it. The experiment on the elections in India is an example of that. We know that the first page of Google search results receives about 90% of all clicks, and that half of these go to the first two results. This knowledge about human behavior is taken advantage of by manipulating the order of results so that positive ones about a particular candidate or a particular commercial product appear on the first page. In countries such as Germany, where web searches are dominated by one search engine (Google), this creates endless possibilities to sway voters. Like techno-paternalism, nudging takes over the helm.
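How strongly the ordering alone can steer attention can be illustrated with a toy position-bias model. The sketch below is purely illustrative: the geometric attention model and its decay constant are assumptions, chosen so that, as in the figures quoted above, the first few results absorb most of the clicks.

# Toy position-bias click model: whoever orders the results decides what
# most people see. The attention model and its parameters are assumptions.
def click_probability(rank, decay=0.55):
    """Assumed probability of clicking the result at a given rank (1 = top)."""
    return (1 - decay) * decay ** (rank - 1)   # geometric attention model

def positive_click_share(results):
    """Share of expected clicks that land on results labelled 'positive'."""
    weighted = [(click_probability(rank), label)
                for rank, (_, label) in enumerate(results, start=1)]
    total = sum(p for p, _ in weighted)
    positive = sum(p for p, label in weighted if label == "positive")
    return positive / total

if __name__ == "__main__":
    items = [("story_a", "positive"), ("story_b", "positive"),
             ("story_c", "negative"), ("story_d", "negative"), ("story_e", "negative")]
    positives_first = items                      # the "nudged" ordering
    positives_last = items[2:] + items[:2]       # the same items, buried
    print(f"positives on top:    {positive_click_share(positives_first):.0%} of clicks on positive stories")
    print(f"positives at bottom: {positive_click_share(positives_last):.0%} of clicks on positive stories")

Under these assumptions, simply moving the favourable stories from the bottom of the list to the top shifts the bulk of expected clicks onto them, without changing the content of a single result.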
But there is a third possibility. My vision is risk literacy, where people are equipped with the competencies to control media rather than be controlled by it. In general, risk literacy concerns informed ways of dealing with risk-related areas such as health, money, and modern technologies. Digital risk literacy means being able to take advantage of digital technologies without becoming dependent on or manipulated by them. That is not as hard as it sounds. My doctoral student has since learned to switch on his email account only three times a day, morning, noon, and evening, so that he can work on his dissertation without constant interruption.
Learning digital self-control needs to begin as a child, at school and also from the example set by parents. Some paternalists may scoff at the idea, stating that humans lack the intelligence and self-discipline to ever become risk literate. But centuries ago the same was said about learning to read and write - which a majority of people in industrial countries can now do. In the same way, people can learn to deal with risks more sensibly. To achieve this, we need to radically rethink strategies and invest in people rather than replace or manipulate them with intelligent technologies. In the 21st century, we need less paternalism and nudging and more informed, critical, and risk-savvy citizens. It's time to snatch away the remote control from technology and take our lives into our own hands.
ETHICS: BIG DATA FOR THE COMMON GOOD AND FOR HUMANITY
The power of data can be used for good and bad purposes. Roberto Zicari and Andrej Zwitter have formulated five principles of Big Data Ethics.
In recent times there have been a growing number of voices - from tech visionaries like Elon Musk (Tesla Motors), to Bill Gates (Microsoft) and Steve Wozniak (Apple) - warning of the dangers of artificial intelligence (AI). A petition against automated weapon systems was signed by 200,000 people and an open letter recently published by MIT calls for a new, inclusive approach to the coming digital society.
We must realize that big data, like any other tool, can be used for good and bad purposes. In this sense, the decision by the European Court of Justice against the Safe Harbour Agreement on human rights grounds is understandable.
States, international organizations and private actors now employ big data in a variety of spheres. It is important that all those who profit from big data are aware of their moral responsibility. For this reason, the Data for Humanity Initiative was established, with the goal of disseminating an ethical code of conduct for big data use. This initiative advances five fundamental ethical principles for big data users:
1. âDo no harmâ. The digital footprint that everyone now leaves behind exposes individuals, social groups and society as a whole to a certain degree of transparency and vulnerability. Those who have access to the insights afforded by big data must not harm third parties.
2. Ensure that data is used in such a way that the results will foster the peaceful coexistence of humanity. The selection of content and access to data influences the world view of a society. Peaceful coexistence is only possible if data scientists are aware of their responsibility to provide even and unbiased access to data.
3. Use data to help people in need. In addition to being economically beneficial, innovation in the sphere of big data could also create additional social value. In the age of global connectivity, it is now possible to create innovative big data tools which could help to support people in need.
4. Use data to protect nature and reduce pollution of the environment. One of the biggest achievements of big data analysis is the development of efficient processes and synergy effects. Big data can only offer a sustainable economic and social future if such methods are also used to create and maintain a healthy and stable natural environment.
5. Use data to eliminate discrimination and intolerance and to create a fair system of social coexistence. Social media has created stronger and more tightly connected social networks. These can only lead to long-term global stability if they are built on the principles of fairness, equality and justice.
To conclude, we would also like to draw attention to how interesting new possibilities afforded by big data could lead to a better future: "As more data become less costly and technology breaks barriers to acquisition and analysis, the opportunity to deliver actionable information for civic purposes grows. This might be termed the 'common good' challenge for big data." (Jake Porway, DataKind). In the end, it is important to understand the turn to big data as an opportunity to do good and as a hope for a better future.
MEASURING, ANALYZING, OPTIMIZING: WHEN INTELLIGENT MACHINES TAKE OVER SOCIETAL CONTROL
In the digital age, machines steer everyday life to a considerable extent already. We should, therefore, think twice before we share our personal data, says expert Yvonne Hofstetter.
For Norbert Wiener (1894-1964), the digital era would be a paradise. "Cybernetics is the science of information and control, regardless of whether a machine or a living organism is being controlled", the founder of cybernetics once said in Hanover, Germany in 1960.
Cybernetics, a science which claims ubiquitous importance, makes a strong promise: "Everything is controllable." During the 20th century, both the US armed forces and the Soviet Union applied cybernetics to control the arms race. NATO had deployed so-called C3I systems (Command, Control, Communication and Information), a term for military infrastructure that linguistically leans on Wiener's book Cybernetics: Or Control and Communication in the Animal and the Machine, published in 1948. Control refers to the control of machines as well as of individuals or entire societal systems such as military alliances, NATO and the Warsaw Pact. Its basic requirements are integrating, collecting data and communicating. Connecting people and things to the Internet of Everything is a perfect way to obtain the required data as input for cybernetic control strategies.
With cybernetics, a new scientific concept was proposed: the closed-loop feedback. Feedbackâsuch as the likes we give or the online comments we makeâis another major concept related to digitization. Does this mean that digitization is the most perfect implementation of cybernetics? When we use smart devices, we create an endless data stream disclosing our intentions, geolocation or social environment. While we communicate more thoughtlessly than ever online, in the background, an artificial intelligence (AI) ecosystem is evolving. Today, AI is the sole technology able to profile us and draw conclusions about our future behavior.
An automated control strategy, usually a learning machine, analyses our current state and computes a stimulus that should draw us closer to a more desirable âoptimalâ state. Increasingly, such controllers govern our daily lives. Such digital assistants help us to make decisions among the vast ocean of options and intimidating uncertainty. Even Google Search is a control strategy. When typing a keyword, a user reveals his intentions. The Google search engine, in turn, presents not only a list of the best hits, but also a list of links sorted according to their (financial) value to the company, rather than to the user. By listing corporate offerings at the very top of the search results, Google controls the userâs next clicks. That is a misuse of Googleâs monopoly, the European Union argues.
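To make the closed-loop idea concrete, here is a minimal sketch of such a controller in Python. It is purely illustrative: the engagement value, the target, and the observe/recommend/respond functions are invented stand-ins for whatever signal a real platform actually optimizes.

    # Minimal, hypothetical sketch of a cybernetic closed loop: observe the
    # "user", compare with a target state, emit a stimulus, repeat.
    import random

    TARGET_ENGAGEMENT = 0.8   # the "optimal" state the controller steers toward
    GAIN = 0.5                # how strongly the controller reacts to the error

    def observe(user_state):
        # Sensing: on a real platform this would be clicks, likes, or watch time.
        return user_state["engagement"]

    def recommend(error):
        # Actuation: turn the error into a stimulus, e.g. more notifications.
        return GAIN * error

    def respond(user_state, stimulus):
        # The controlled system: the user reacts (imperfectly) to the stimulus.
        user_state["engagement"] += stimulus + random.uniform(-0.02, 0.02)

    user = {"engagement": 0.3}
    for step in range(10):
        error = TARGET_ENGAGEMENT - observe(user)   # feedback: state vs. target
        respond(user, recommend(error))             # emit stimulus, close the loop
        print(f"step {step}: engagement = {user['engagement']:.2f}")

Real ranking and recommendation systems are vastly more elaborate, but the loop structure is the same: observe, compare with a target, emit a stimulus, repeat.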
But is there any way out? Yes, if we disconnect from the cybernetic loop and simply stop responding to the digital stimulus. Cybernetics fails if the controllable counterpart steps out of the loop. We should remain discreet and frugal with our data, even if that is difficult. However, as digitization escalates further, we may soon have no choice left. Hence, we are called on to fight once again for our freedom in the digital era, particularly against the rise of intelligent machines.
This is a fascinating portrait of Facebook board member, Gawker bankrupter, and Drumpf adviser Peter Thiel's data mining/analytics company Palantir and how it has been helping the NSA and other alphabet agencies spy on U.S. citizens as well as the rest of the world. Like Robert Mercer, backer of Breitbart and Drumpf and, it turns out, of the Brexit movement via Cambridge Analytica, Thiel is another billionaire conservative political influencer who leveraged his early investment in Facebook and other tech startups into yet another data analytics enterprise. And like so many others in the data mining/analytics space, he naturally opted to court one of the biggest buyers of such services, the NSA. The character of Zimmerman in my novel 4o4 - A John Decker Thriller, about the surveillance state, recently listed as a Top Ten Amazon Bestseller in Technothrillers, was based in part on Mercer and Thiel. Check out this great story from The Intercept.
HOW PETER THIEL'S PALANTIR HELPED THE NSA SPY ON THE WHOLE WORLD
DONALD TRUMP HAS inherited the most powerful machine for spying ever devised. How this petty, vengeful man might wield and expand the sprawling American spy apparatus, already vulnerable to abuse, is disturbing enough on its own. But the outlook is even worse considering Trump's vast preference for private sector expertise and new strategic friendship with Silicon Valley billionaire investor Peter Thiel, whose controversial (and opaque) company Palantir has long sought to sell governments an unmatched power to sift and exploit information of any kind. Thiel represents a perfect nexus of government clout with the kind of corporate swagger Trump loves. The Intercept can now reveal that Palantir has worked for years to boost the global dragnet of the NSA and its international partners, and was in fact co-created with American spies.
Peter Thiel became one of the American political mainstream's most notorious figures in 2016 (when it emerged he was bankrolling a lawsuit against Gawker Media, my former employer) even before he won a direct line to the White House. Now he brings to his role as presidential adviser decades of experience as kingly investor and token nonliberal on Facebook's board of directors, a Rolodex of software luminaries, and a decidedly Trumpian devotion to controversy and contrarianism. But perhaps the most appealing asset Thiel can offer our bewildered new president will be Palantir Technologies, which Thiel founded with Alex Karp and Joe Lonsdale in 2004.
Palantir has never masked its ambitions, in particular the desire to sell its services to the U.S. government: the CIA itself was an early investor in the startup through In-Q-Tel, the agency's venture capital branch. But Palantir refuses to discuss or even name its government clientele, despite landing "at least $1.2 billion" in federal contracts since 2009, according to an August 2016 report in Politico. The company was last valued at $20 billion and is expected to pursue an IPO in the near future. In a 2012 interview with TechCrunch, while boasting of ties to the intelligence community, Karp said nondisclosure contracts prevent him from speaking about Palantir's government work.
Alex Karp, co-founder and CEO of Palantir Technologies, speaks during the WSJDLive Global Technology Conference in Laguna Beach, Calif., on Oct. 26, 2016.
"Palantir" is generally used interchangeably to refer to both Thiel and Karp's company and the software that company creates. Its two main products are Palantir Gotham and Palantir Metropolis, more geeky winks from a company whose Tolkien namesake is a type of magical sphere used by the evil lord Sauron to surveil, trick, and threaten his enemies across Middle Earth. While Palantir Metropolis is pegged to quantitative analysis for Wall Street banks and hedge funds, Gotham (formerly Palantir Government) is designed for the needs of intelligence, law enforcement, and homeland security customers. Gotham works by importing large reams of "structured" data (like spreadsheets) and "unstructured" data (like images) into one centralized database, where all of the information can be visualized and analyzed in one workspace. For example, a 2010 demo showed how Palantir Government could be used to chart the flow of weapons throughout the Middle East by importing disparate data sources like equipment lot numbers, manufacturer data, and the locations of Hezbollah training camps. Palantir's chief appeal is that it's not designed to do any single thing in particular, but is flexible and powerful enough to accommodate the requirements of any organization that needs to process large amounts of both personal and abstract data.
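As a rough illustration of that ingestion model, here is a small Python sketch that folds a "structured" spreadsheet-like record and an "unstructured" artifact into one shared store keyed by entity. The field names and records are invented for this example and are not Palantir's actual schema or API.

    # Illustrative only: fold structured rows and unstructured items into one
    # central store, keyed by the entity they mention, so everything known
    # about "lot-4711" can be pulled up in a single workspace.
    from collections import defaultdict

    central_store = defaultdict(list)

    def ingest_structured(rows):
        # rows: spreadsheet-like dicts with a known schema
        for row in rows:
            central_store[row["entity"]].append({"kind": "record", "data": row})

    def ingest_unstructured(items):
        # items: free-form artifacts (images, documents) with minimal metadata
        for item in items:
            central_store[item["entity"]].append({"kind": "artifact", "data": item})

    ingest_structured([{"entity": "lot-4711", "manufacturer": "Acme", "count": 120}])
    ingest_unstructured([{"entity": "lot-4711", "type": "image", "caption": "crate photo"}])

    print(central_store["lot-4711"])   # everything known about one entity, in one place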
A Palantir promotional video.
Despite all the grandstanding about lucrative, shadowy government contracts, co-founder Karp does not shy away from taking a stand in the debate over government surveillance. In a Forbes profile in 2013, he played privacy lamb, saying, "I didn't sign up for the government to know when I smoke a joint or have an affair. ... We have to find places that we protect away from government so that we can all be the unique and interesting and, in my case, somewhat deviant people we'd like to be." In that same article, Thiel lays out Palantir's mission with privacy in mind: to "reduce terrorism while preserving civil liberties." After the first wave of revelations spurred by the whistleblower Edward Snowden, Palantir was quick to deny that it had any connection to the NSA spy program known as PRISM, which shared an unfortunate code name with one of its own software products. The current iteration of Palantir's website includes an entire section dedicated to "Privacy & Civil Liberties," proclaiming the company's support of both:
Palantir Technologies is a mission-driven company, and a core component of that mission is protecting our fundamental rights to privacy and civil liberties. ...
Some argue that society must "balance" freedom and safety, and that in order to better protect ourselves from those who would do us harm, we have to give up some of our liberties. We believe that this is a false choice in many areas. Particularly in the world of data analysis, liberty does not have to be sacrificed to enhance security. Palantir is constantly looking for ways to protect privacy and individual liberty through its technology while enabling the powerful analysis necessary to generate the actionable intelligence that our law enforcement and intelligence agencies need to fulfill their missions.
It's hard to square this purported commitment to privacy with proof, garnered from documents provided by Edward Snowden, that Palantir has helped expand and accelerate the NSA's global spy network, which is jointly administered with allied foreign agencies around the world. Notably, the partnership has included building software specifically to facilitate, augment, and accelerate the use of XKEYSCORE, one of the most expansive and potentially intrusive tools in the NSA's arsenal. According to Snowden documents published by The Guardian in 2013, XKEYSCORE is by the NSA's own admission its "widest reaching" program, capturing "nearly everything a typical user does on the internet." A subsequent report by The Intercept showed that XKEYSCORE's "collected communications not only include emails, chats, and web-browsing traffic, but also pictures, documents, voice calls, webcam photos, web searches, advertising analytics traffic, social media traffic, botnet traffic, logged keystrokes, computer network exploitation targeting, intercepted username and password pairs, file uploads to online services, Skype sessions, and more." For the NSA and its global partners, XKEYSCORE makes all of this as searchable as a hotel reservation site.
But how do you make so much data comprehensible for human spies? As the additional documents published with this article demonstrate, Palantir sold its services to make one of the most powerful surveillance systems ever devised even more powerful, bringing clarity and slick visuals to an ocean of surveillance data.
An office building occupied by the technology firm Palantir in McLean, Va., on Oct. 11, 2014.
PALANTIR'S RELATIONSHIP WITH government spy agencies appears to date back to at least 2008, when representatives from the U.K.'s signals intelligence agency, Government Communications Headquarters, joined their American peers at VisWeek, an annual data visualization and computing conference organized by the Institute of Electrical and Electronics Engineers and the U.S. National Institute of Standards and Technology. Attendees from throughout government and academia gather to network with members of the private sector at the event, where they compete in teams to solve hypothetical data-based puzzles as part of the Visual Analytics Science and Technology (VAST) Challenge. As described in a document saved by GCHQ, Palantir fielded a team in 2008 and tackled one such scenario using its own software. It was a powerful marketing opportunity at a conference filled with potential buyers.
In the demo, Palantir engineers showed how their software could be used to identify Wikipedia users who belonged to a fictional radical religious sect and graph their social relationships. In Palantir's pitch, its approach to the VAST Challenge involved using software to enable "many analysts working together [to] truly leverage their collective mind." The fake scenario's target, a cartoonishly sinister religious sect called "the Paraiso Movement," was suspected of a terrorist bombing, but the unmentioned and obvious subtext of the experiment was the fact that such techniques could be applied to de-anonymize and track members of any political or ideological group. Among a litany of other conclusions, Palantir determined the group was prone to violence because its "Manifesto's intellectual influences include 'Pancho Villa, Che Guevara, Leon Trotsky, [and] Cuban revolutionary Jose Marti,' a list of military commanders and revolutionaries with a history of violent actions."
The delegation from GCHQ returned from VisWeek excited and impressed. In a classified report from those who attended, Palantir's potential for aiding the spy agency was described in breathless terms. "Palantir are a relatively new Silicon Valley startup who are sponsored by the CIA," the report began. "They claim to have significant involvement with the US intelligence community, although none yet at NSA." GCHQ noted that Palantir "has been developed closely internally with intelligence community users (unspecified, but likely to be the CIA given the funding)." The report described Palantir's demo as "so significant" that it warranted its own entry in GCHQ's classified internal wiki, calling the software "extremely sophisticated and mature. ... We were very impressed. You need to see it to believe it."
The report conceded, however, that "it would take an enormous effort for an in-house developed GCHQ system to get to the same level of sophistication" as Palantir. The GCHQ briefers also expressed hesitation over the price tag, noting that "adoption would have [a] huge monetary ... cost," and over the implications of essentially outsourcing intelligence analysis software to the private sector, thus making the agency "utterly dependent on a commercial product." Finally, the report added that "it is possible there may be concerns over security - the company have published a lot of information on their website about how their product is used in intelligence analysis, some of which we feel very uncomfortable about."
A page from Palantir's "Executive Summary" document, provided to government clients.
However anxious British intelligence was about Palantir's self-promotion, the worry must not have lasted very long. Within two years, documents show that at least three members of the "Five Eyes" spy alliance between the United States, the U.K., Australia, New Zealand, and Canada were employing Palantir to help gather and process data from around the world. Palantir excels at making connections between enormous, separate databases, pulling big buckets of information (call records, IP addresses, financial transactions, names, conversations, travel records) into one centralized heap and visualizing them coherently, thus solving one of the persistent problems of modern intelligence gathering: data overload.
A GCHQ wiki page titled "Visualisation," outlining different ways "to provide insight into some set of data," puts Palantir's intelligence value succinctly:
Palantir is an information management platform for analysis developed by Palantir Technologies. It integrates structured and unstructured data, provides search and discovery capabilities, knowledge management, and collaborative features. The goal is to offer the infrastructure, or "full stack," that intelligence organizations require for analysis.
Bullet-pointed features of note included a "Graph View," "Timelining capabilities," and "Geo View."
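To make the data-fusion capability described above more concrete, here is a hedged Python sketch that joins records from separate sources on a shared identifier and then derives a crude graph view and timeline view. The sources, field names, and records are all fabricated for illustration; nothing here reflects Palantir's real data model.

    # Illustrative only: join disparate "databases" on a shared identifier
    # (a phone number), then derive a graph view and a timeline view.
    call_records = [
        {"phone": "+15550001", "called": "+15550002", "when": "2010-03-01T10:00"},
    ]
    travel_records = [
        {"phone": "+15550001", "flight": "XY123", "when": "2010-03-02T08:30"},
    ]
    financial_records = [
        {"phone": "+15550002", "transfer_to": "ACME Ltd", "when": "2010-03-03T16:45"},
    ]

    # Centralize: one heap of events, each tagged with its source.
    events = (
        [dict(e, source="calls") for e in call_records]
        + [dict(e, source="travel") for e in travel_records]
        + [dict(e, source="finance") for e in financial_records]
    )

    # "Graph view": edges between identifiers that appear in the same record.
    edges = [(e["phone"], e.get("called") or e.get("transfer_to") or e.get("flight"))
             for e in events]

    # "Timeline view": the same events ordered by time.
    timeline = sorted(events, key=lambda e: e["when"])

    print(edges)
    print([(e["when"], e["source"]) for e in timeline])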
A GCHQ diagram indicates how Palantir could be used as part of a computer network attack.
Under the Five Eyes arrangement, member countries collect and pool enormous streams of data and metadata collected through tools like XKEYSCORE, amounting to tens of billions of records. The alliance is constantly devising (or attempting) new, experimental methods of prying data out of closed and private sources, including by hacking into computers and networks in non-Five Eyes countries and infecting them with malware.
A 2011 PowerPoint presentation from GCHQ's Network Defence Intelligence & Security Team (NDIST), which, as The Intercept has previously reported, "worked to subvert anti-virus and other security software in order to track users and infiltrate networks," mentioned Palantir as a tool for processing data gathered in the course of its malware-oriented work. Palantir's software was described as an "analyst workspace [for] pulling together disparate information and displaying it in novel ways," and was used closely in conjunction with other intelligence software tools, like the NSA's notorious XKEYSCORE search system. The novel ways of using Palantir for spying seemed open-ended, even imaginative: A 2010 presentation on the joint NSA-GCHQ "Mastering the Internet" surveillance program mentioned the prospect of running Palantir software on "Android handsets" as part of a SIGINT-based "augmented reality" experience. It's unclear what exactly this means or could even look like.
Above all, these documents depict Palantir's software as a sort of consolidating agent, allowing Five Eyes analysts to make sense of tremendous amounts of data that might have been otherwise unintelligible or highly time-consuming to digest. In a 2011 presentation to the NSA, classified top secret, an NDIST operative noted the "good collection" of personal data among the Five Eyes alliance but lamented the "poor analytics," and described the attempt to find new tools for SIGINT analysis, in which it "conducted a review of 14 different systems that might work." The review considered services from Lockheed Martin and Detica (a subsidiary of BAE Systems) but decided on the up-and-comer from Palo Alto.
Palantir is described as having been funded not only by In-Q-Tel, the CIA's venture capital branch, but furthermore created "through [an] iterative collaboration between Palantir computer scientists and analysts from various intelligence agencies over the course of nearly three years." While it's long been known that Palantir got on its feet with the intelligence community's money, it has not been previously reported that the intelligence community actually helped build the software. The continuous praise seen in these documents shows that the collaboration paid off. Under the new "Palantir Model," "data can come from anywhere" and can be "asked whatever the analyst wants."
Along with Palantir's ability to pull in "direct XKS Results," the presentation boasted that the software was already connected to 10 other secret Five Eyes and GCHQ programs and was highly popular among analysts. It even offered testimonials (TWO FACE appears to be a code name for the implementation of Palantir):
[Palantir] is the best tool I have ever worked with. It's intuitive, i.e. idiot-proof, and can do a lot you never even dreamt of doing.
This morning, using TWO FACE rather than XKS to review the activity of the last 3 days. It reduced the initial analysis time by at least 50%.
Enthusiasm runs throughout the PowerPoint: A slide titled "Unexpected Benefits" reads like a marketing brochure, exclaiming that Palantir "interacts with anything!" including Google Earth, and "You can even use it on a iphone or laptop." The next slide, on "Potential Downsides," is really more praise in disguise: Palantir "Looks expensive" but "isn't as expensive as expected." The answer to "What can't it do?" is revealing: "However we ask, Palantir answer," indicating that the collaboration between spies and startup didn't end with Palantir's CIA-funded origins, but that the company was willing to create new features for the intelligence community by request.
On GCHQ's internal wiki page for TWO FACE, analysts were offered a "how to" guide for incorporating Palantir into their daily routine, covering introductory topics like "How do I ... Get Data from XKS in Palantir," "How do I ... Run a bulk search," and "How do I ... Run bulk operations over my objects in Palantir." For anyone in need of a hand, "training is currently offered as 1-2-1 desk based training with a Palantir trainer. This gives you the opportunity to quickly apply Palantir to your current work task." Palantir often sends "forward deployed engineers," or FDEs, to work alongside clients at their offices and provide assistance and engineering services, though the typical client does not have access to the world's largest troves of personal information. For analysts interested in tinkering with Palantir, there was even a dedicated instant message chat room open to anyone for "informally" discussing the software.
The GCHQ wiki includes links to classified webpages describing Palantir's use by the Australian Defence Signals Directorate (now called the Australian Signals Directorate) and to a Palantir entry on the NSA's internal "Intellipedia," though The Intercept does not have access to copies of the linked sites. However, embedded within Intellipedia HTML files available to The Intercept are references to a variety of NSA-Palantir programs, including "Palantir Classification Helper," "[Target Knowledge Base] to Palantir PXML," and "PalantirAuthService." (Internal Palantir documents obtained by TechCrunch in 2013 provide additional confirmation of the NSA's relationship with the company.)
One Palantir program used by GCHQ, a software plug-in named "Kite," was preserved almost in its entirety among documents provided to The Intercept. An analysis of Kite's source code shows just how much flexibility the company afforded Five Eyes spies. Developers and analysts could ingest data locally using either Palantir's "Workspace" application or Kite. When they were satisfied the process was working properly, they could push it into a Palantir data repository where other Workspace users could also access it, almost akin to a Google Spreadsheets collaboration. When analysts were at their Palantir workstation, they could perform simple imports of static data, but when they wanted to perform more complicated tasks like import databases or set up recurring automatic imports, they turned to Kite.
Kite worked by importing intelligence data and converting it into an XML file that could be loaded into a Palantir data repository. Out of the box, Kite was able to handle a variety of types of data (including dates, images, geolocations, etc.), but GCHQ was free to extend it by writing custom fields for complicated types of data the agency might need to analyze. The import tools were designed to handle a variety of use cases, including static data sets, databases that were updated frequently, and data stores controlled by third parties to which GCHQ was able to gain access.
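As a rough sketch of what such an import pipeline does, the Python snippet below converts a few records into an XML payload that could be bulk-loaded into a shared repository. The element names and record fields are invented for illustration; Kite's actual format and custom-field mechanism are not public.

    # Illustrative only: convert tabular records into an XML payload for bulk
    # import into a central data repository, in the spirit of a Kite-style tool.
    import xml.etree.ElementTree as ET

    records = [
        {"type": "geolocation", "lat": "52.52", "lon": "13.40", "seen": "2011-05-01"},
        {"type": "image", "path": "crate.jpg", "seen": "2011-05-02"},
    ]

    def records_to_xml(records):
        root = ET.Element("import")
        for rec in records:
            obj = ET.SubElement(root, "object", {"type": rec["type"]})
            for key, value in rec.items():
                if key == "type":
                    continue
                ET.SubElement(obj, "property", {"name": key}).text = value
        return ET.tostring(root, encoding="unicode")

    payload = records_to_xml(records)
    print(payload)   # hand this XML to whatever loads the shared repository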
This collaborative environment also produced a piece of software called "XKEYSCORE Helper," a tool programmed with Palantir (and thoroughly stamped with its logo) that allowed analysts to essentially import data from the NSA's pipeline, investigate and visualize it through Palantir, and then presumably pass it to fellow analysts or Five Eyes intelligence partners. One of XKEYSCORE's only apparent failings is that it's so incredibly powerful, so effective at vacuuming personal metadata from the entire internet, that the volume of information it extracts can be overwhelming. Imagine trying to search your Gmail account, only the results are pulled from every Gmail inbox in the world.
MAKING XKEYSCORE MORE intelligible, and thus much more effective, appears to have been one of Palantir's chief successes. The helper tool, documented in a GCHQ PDF guide, provided a means of transferring data captured by the NSA's XKEYSCORE directly into Palantir, where presumably it would be far easier to analyze for, say, specific people and places. An analyst using XKEYSCORE could pull every IP address in Moscow and Tehran that visited a given website or made a Skype call at 14:15 Eastern Time, for example, and then import the resulting data set into Palantir in order to identify additional connections between the addresses or plot their positions using Google Earth.
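The workflow in that example (filter a huge event stream down to a handful of addresses, then map them) can be sketched in a few lines of Python. The events, field names, and coordinates are fabricated; only the KML output format is a real standard that Google Earth can open.

    # Illustrative only: filter a large event stream by site and time window,
    # then emit a minimal KML file so the matching points can be plotted in
    # Google Earth. All data here is fabricated.
    events = [
        {"ip": "203.0.113.7", "site": "example.org", "time": "14:15", "lat": 55.75, "lon": 37.62},
        {"ip": "198.51.100.3", "site": "example.org", "time": "09:00", "lat": 35.70, "lon": 51.42},
    ]

    matches = [e for e in events if e["site"] == "example.org" and e["time"] == "14:15"]

    placemarks = "".join(
        f"<Placemark><name>{e['ip']}</name>"
        f"<Point><coordinates>{e['lon']},{e['lat']}</coordinates></Point></Placemark>"
        for e in matches
    )
    kml = ('<?xml version="1.0" encoding="UTF-8"?>'
           '<kml xmlns="http://www.opengis.net/kml/2.2"><Document>'
           f"{placemarks}</Document></kml>")

    with open("matches.kml", "w") as f:
        f.write(kml)   # open in Google Earth to see the plotted addresses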
Palantir was also used as part of a GCHQ project code-named LOVELY HORSE, which sought to improve the agency's ability to collect so-called open source intelligence: data available on the public internet, like tweets, blog posts, and news articles. Given the "unstructured" nature of this kind of data, Palantir was cited as "an enrichment to existing [LOVELY HORSE] investigations ... the content should then be viewable in a human readable format within Palantir."
Palantir's impressive data-mining abilities are well-documented, but so too is the potential for misuse. Palantir software is designed to make it easy to sift through piles of information that would be completely inscrutable to a human alone, but the human driving the computer is still responsible for making judgments, good or bad.
A 2011 document by GCHQ's SIGINT Development Steering Group, a staff committee dedicated to implementing new spy methods, listed some of these worries. In a table listing "risks & challenges," the SDSG expressed a "concern that [Palantir] gives the analyst greater potential for going down too many analytical paths which could distract from the intelligence requirement." What it could mean for analysts to distract themselves by going down extraneous "paths" while browsing the world's most advanced spy machine is left unsaid. But Palantir's data-mining abilities were such that the SDSG wondered if its spies should be blocked from having full access right off the bat and suggested configuring Palantir software so that parts would "unlock ... based on analysts skill level, hiding buttons and features until needed and capable of utilising." If Palantir succeeded in fixing the intelligence problem of being overwhelmed with data, it may have created a problem of over-analysis: the company's software offers such a multitude of ways to visualize and explore massive data sets that analysts could get lost in the funhouse of infographics, rather than simply being overwhelmed by the scale of their task.
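Gating features by skill level, as the SDSG suggested, is an ordinary software pattern. The hypothetical sketch below shows one way to do it; the feature names and tiers are invented and say nothing about how GCHQ or Palantir actually configured anything.

    # Illustrative only: unlock UI features progressively by analyst skill level,
    # hiding advanced tools until the user is trained to use them.
    FEATURES_BY_LEVEL = {
        1: ["search", "timeline"],
        2: ["search", "timeline", "graph_view"],
        3: ["search", "timeline", "graph_view", "bulk_import", "geo_view"],
    }

    def visible_features(skill_level):
        # Clamp unknown levels down to the nearest defined tier.
        level = max((l for l in FEATURES_BY_LEVEL if l <= skill_level), default=1)
        return FEATURES_BY_LEVEL[level]

    print(visible_features(1))   # beginner sees only the basics
    print(visible_features(3))   # experienced analyst sees everything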
If Palantir's potential for misuse occurred to the company's spy clients, surely it must have occurred to Palantir itself, especially given the company's aforementioned "commitment" to privacy and civil liberties. Sure enough, in 2012 the company announced the formation of the Palantir Council of Advisors on Privacy and Civil Liberties, a committee of academics and consultants with expertise in those fields. Palantir claimed that convening the PCAP had "provided us with invaluable guidance as we try to responsibly navigate the often ill-defined legal, political, technological, and ethical frameworks that sometimes govern the various activities of our customers," and continued to discuss the privacy and civil liberties "implications of product developments and to suggest potential ways to mitigate any negative effects." Still, Palantir made clear that the "PCAP is advisory only - any decisions that we make after consulting with the PCAP are entirely our own."
What would a privacy-minded conversation about privacy-breaching software look like? How had a privacy and civil liberties council navigated the fact that Palantir's clientele had directly engaged in one of the greatest privacy and civil liberties breaches of all time? It's hard to find an answer.
Palantir wrote that it structured the nondisclosure agreement signed by PCAP members so that they "will be free to discuss anything that they learn in working with us unless we clearly designate information as proprietary or otherwise confidential (something that we have rarely found necessary except on very limited occasions)." But despite this assurance of transparency, all but one of the PCAP's former and current members either did not return a request for comment for this article or declined to comment, citing the NDA.
The former PCAP member who did respond, Stanford privacy scholar Omer Tene, told The Intercept that he was unaware of "any specific relationship, agreement, or project that you're referring to," and said he was not permitted to answer whether Palantir's work with the intelligence community was ever a source of tension with the PCAP. He declined to comment on either the NSA or GCHQ specifically. "In general," Tene said, "the role of the PCAP was to hear about client engagement or new products and offerings that the company was about to launch, and to opine as to the way they should be set up or delivered in order to minimize privacy and civil liberties concerns." But without any further detail, it's unclear whether the PCAP was ever briefed on the company's work for spy agencies, or whether such work was a matter of debate.
There's little detail to be found on archived versions of Palantir's privacy and civil liberties-focused blog, which appears to have been deleted sometime after the PCAP was formed. Palantir spokesperson Matt Long told The Intercept to contact the Palantir media team for questions regarding the vanished blog at the same email address used to reach Long in the first place. Palantir did not respond to additional repeated requests for comment and clarification.
A GCHQ spokesperson provided a boilerplate statement reiterating the agency's "longstanding policy" against commenting on intelligence matters and asserted that all its activities are "carried out in accordance with a strict legal and policy framework." The NSA did not provide a response.
Anyone worried that the most powerful spy agencies on Earth might use Palantir software to violate the privacy or civil rights of the vast number of people under constant surveillance may derive some cold comfort in a portion of the user agreement language Palantir provided for the Kite plug-in, which stipulates that the user will not violate "any applicable law" or the privacy or the rights "of any third party." The world will just have to hope Palantir's most powerful customers follow the rules.