Exploring Digital Culture

4. In the Shadow of the Matrix

A strategic approach to the transparent society

Wed 05 Nov 2014
Marina Turco 

This chapter proposes to consider media forms and mediators from a “stratagematic” perspective, a perspective developed by Matthew Fuller and Andrew Goffey in their book Evil Media (2012). Instead of pointing to the growing integration of technologies, processes, and devices within the transparent walls of an all-encompassing socio-technical matrix [1], the stratagematic vantage point focuses on the discrepancies and ambiguities that emerge within the contemporary media landscape. From this perspective the media environment looks less like a modern glass building than like a medieval cathedral, a work-in-progress where, even if a clear architectural principle is present, the construction advances by the stratification of layers made of diverse, more or less opaque materials, patterns, structures, and unstable elements that continuously adjust their reciprocal positions, creating illusory reflections and impenetrable shadow zones.

Fuller and Goffey’s notion of “evil media” can be seen as a counterpoint to Flusser’s (2000 [1983]) idea of the electronic/digital medium as a “black box”, or to its redeemed version, the “transparent box” pictured in Brin’s (1998) essay on the transparent society: two extreme visions of contemporary techno-culture that remain important reference points when discussing social and ethical issues, and that still bear great rhetorical and symbolic power.

The evil of media emerges in between the actions and the plans that aim at shaping media as producers of truth, progress, and morality:

The way we frame our discussion of media and mediation -as stratagems- seeks to draw attention to aspects of such practices that cannot be… explained or justified as a scientific application of technical principles, as an emanation from some global macro-actor or as the verifiable expression of some other kind of conceptual abstraction. Much of what is proposed here does not have the pursuit of truth as its telos, even if it often avails itself of a claim to truth telling. (Fuller and Goffey 2012, 19)

From a strategic vantage point, media are developed, used, and interpreted as technical applications of scientific principles, as means to pursue well-defined cultural, social, or political goals. Yet, because of the multiplicity of projects and players, conflicts emerge, and the materials and processes adjust to each other in a more improvisational way:

The preconditions for antagonism, and conflict between groups should be located in relation to the shifting patterns of interaction accomplished in complex media ecologies. As those ecologies change, so the lines of stability between groups shift and falter. (Fuller and Goffey 2012, 8)

It is at this point that stratagematic configurations are activated: operations that do not follow a pre-defined path but improvise a trajectory across many layers of the media environment. Stratagems are Machiavellian “operative constructs” that do “not describe or prescribe an action that is certain in its outcome.” (Fuller and Goffey 2012, 21) The complex ecosystem of the media thus includes different, often conflicting, interests and different kinds of material infrastructures, as well as levels of abstraction where the idea of a socio-technical matrix (the “global macro-actor” in Fuller and Goffey’s words), as an overarching construction principle, plays a role in the operations too. Stratagematic constructs emerge in-between and through the layers of such an ecology, in the paths linking centre and periphery, micro and macro levels, onstage and backstage spots.

The stratagems described in Fuller and Goffey’s book picture some of the most relevant examples of stratagematic action in contemporary culture; the ones I want to picture here refer more specifically to different ways of dealing with the (real or imaginary) action of “the other”, the black-gone-transparent box, the socio-technical matrix. Let’s assume that the Internet and all kinds of more or less “digitalized” mediators (communication and business strategies, educational models, relational databases, etc.) are developing within the frame of a centralized structure, a structure that is either opaque and oriented toward closedness and control, or the bearer of a comprehensive, however inherently discriminatory, transparency and accountability. How do actors relate to this matrix by means of stratagematic action? How do they bolster, oppose, or exploit (the myth of) the black/transparent box for the sake of their own particular interests? I will attempt to answer those questions by examining a somewhat random collection of matrix-related stratagems operated by actors in different fields and with different goals.

Stratagems of attack

Attack is still a basic tactic in many stratagematic actions. Hackers, glitchers, and whistle-blowers act individually, or within social formations that make their actions meaningful in relation to larger socio-political projects. The contemporary engaged political subject, the hero in the narrative of emancipation from global macro-actors of different kinds, is constructed by means of a creative combination of techniques of attack. In this regard, the fact that the definition, function, and modus operandi of the three types of attacker mentioned above can mix and overlap reveals the stratagematic nature of most of their actions [2]. Since other writers in this book are dealing with this topic, I will not analyze specific hacking practices and forms of digital activism here; I will instead cast light on stratagematic disturbances that might hide in the shadow of a perfectly planned att(h)acking action.

Hackers operate on the thin gray line separating good from evil. Those of them who are committed to activism (“hacktivists”) often aim at hijacking the core of the evil matrix, directing their attacks towards particular social actors accountable for protecting the black-boxiness of the medium in their own interest (multinational corporations, political actors, or the two together) [3]. Other kinds of hackers follow a less defined political agenda, or are employed by companies and governments that are fighting against each other [4]. Here, the ethical or unethical nature of the operations depends on which side one chooses to consider the good or the evil one. The whole hacking and data-leaking phenomenon might have positive long-term effects on the socio-technical ecosystem, but it is doubtful that it will lead towards a generalized form of “sousveillance” aimed at counterbalancing traditional forms of top-down surveillance. Besides, the question arises whether it is possible to convert the unethical behaviour of governments, private companies, and individual citizens into an ethical one by virtue of transparency alone. As we will see when discussing some of the following stratagems, visibility does not always equal knowledge, as transparency on one aspect or object might be stratagematically employed to cover, shadow, or distract from other aspects or objects involved in the process. That is the problem with transparency and control on both sides -the institutions and the citizens- because “the more one insists on the transparent accountability of the explicit, codified into denumerable procedure, current doctrine, and official practice, the more the covert trafficking of the tacit comes into play.” (Fuller and Goffey 2012, 104)

Even if it were possible to exactly locate evil in all its transmuting forms, fighting it might turn out to be a problematic task. If an oligarchically controlled network of power exists, it is a diffuse network, an entity that occupies parts of the financial, technological, and political infrastructures in a patchwork way. The matrix is not easily “attackable” because it has no stable core: it is among us, it comes from us, it operates through terrorism rather than war. Activists’ actions affect one of the possible centres of control, but the new balance that is created in the socio-technical ecosystem [5] by this action might just have the effect of giving more power to the other centres. Although the effectiveness of an attack can be measured on the basis of the correspondence between the intentions of the perpetrator and the produced effects, in each operation both intentions and goals have to be reassessed, adjusted to the new balance in the ecosystem that results from the previous actions. The assessment of what is evil, and of where it takes root, has to be flexible; hence the stratagematic nature of such operations. Of course, an attack is oppositional by definition, but the identification of the terms of such oppositions might shift: pro-transparency activists attack governments or companies that hide information that should be public in the interest of the citizens; pro-privacy activists attack those who illegitimately gather, expose, and exploit information concerning the citizens.

It is not the opposition between privacy and transparency that plays a role here, but the authority to decide by what means and in what measure governments (in the name of the public interest) and companies (for the purpose of making profit) should be allowed to control the people and to manipulate the truth. The opposition that does play a role is perhaps that between the public and the private subject, an opposition that can be overcome when the two subjects reciprocally recognize their right to co-exist and to pursue their different interests. The state, in this perspective, cannot abolish the rights of the private subject in the name of the public interest. The evil might hide precisely in the grey zone generated by the overlap between those distinct subjects and interests.

Stratagems of deception

These stratagems operate by means of a simple semiotic mechanism that may be implemented by a basic element of the technological infrastructure: the relational database. Relational databases are fundamental insofar as they increase the interpretability of processes and resources. Because it is assumed that the relations between items in a relational database “represent” actual relations at some level of reality, the appearance of related items is enough to generate the perception of the object in the user, or the machine. Fuller and Goffey (2012, 115) remark that

relational databases are immensely useful as stratagematic breeding grounds and as such work as part of what we can understand as the abstract infrastructure of flexibilization and of the increasing interpretability of processes and resources. Such interpretability should never be mistaken for transparency, however.

Interpretability, indeed, or “the tractability of data and relations depends in no small part on the degree of normalization of data and the structures it is entered into.” (ibid.) This means that, for instance, a “red round thing might be a cricket ball or an apple, and neither might exactly be round, but once they are normalized and interpretable as simple exemplars of bearers of one… of the categorizations red, round, and thing, they lose their specificity.” (ibid.) Between object and interpretation, thus, a space for deception is created, a space where the object can be “creatively” connected with the actual processes, things, and actors it is supposed to represent.
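To make the mechanism tangible, consider a minimal sketch in Python (using sqlite3; the table, rows, and values are hypothetical, invented here to mirror Fuller and Goffey’s illustration): once apple and cricket ball are normalized into the categories red, round, and thing, any query that operates on those categories accepts either object indifferently.

```python
# A toy illustration of normalization erasing specificity; the schema and
# data are invented for this example, not drawn from any real system.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE things (name TEXT, colour TEXT, shape TEXT, kind TEXT)")

# Neither object is exactly red or exactly round, but normalization forces
# both into the same denumerable categories.
conn.execute("INSERT INTO things VALUES ('apple', 'red', 'round', 'thing')")
conn.execute("INSERT INTO things VALUES ('cricket ball', 'red', 'round', 'thing')")

# To any process that matches on (colour, shape, kind), the two rows are
# interchangeable: the categorization, not the object, is what is seen.
rows = conn.execute(
    "SELECT name FROM things WHERE colour = 'red' AND shape = 'round' AND kind = 'thing'"
).fetchall()
print([r[0] for r in rows])  # ['apple', 'cricket ball'] -- specificity lost
```

The gap between the stored row and the thing it stands for is exactly the space in which a deceiver can operate.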

A very simple example of deception is the spyware procedure (significantly called “spoofing”) for looking into other people’s conversations on WhatsApp [6]. The WhatsApp application “interprets” the presence of security items -the device identification number and the confirmation code- as a permission to display the conversation to the requesting device. To the machine, it is irrelevant that the identification number is not the same “thing” as the actual device: its job is just that of interpreting a set of items. The developer’s strategy for protecting the cell phone is frustrated by the stratagem of deception that the user can operate to spy on other people’s conversations: the deceiver knows that permission is based not on the recognition of the cell phone itself (the apple), but on the recognition of the features that define the device within the software’s code (redness, roundness, and objectness). As long as a red, round object is submitted to the decoding apparatus, the cricket ball-device can gain access to the conversations of the apple-device.
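Schematically, the deception can be rendered as follows. This is a sketch under invented assumptions, not WhatsApp’s actual protocol: the names grant_access and REGISTERED_PAIRS and the token values are hypothetical. The point is only that the check operates on tokens, so whichever client presents the right tokens is, for the machine, the right device.

```python
# Hypothetical sketch of token-based access, not WhatsApp's real protocol.
# The server-side check never sees the physical device; it only interprets
# a set of items (the "redness, roundness, and objectness" of the phone).

REGISTERED_PAIRS = {("device-4711", "code-1234")}  # pairs known to the server

def grant_access(device_id: str, confirmation_code: str) -> bool:
    """Grant access when the presented tokens match a registered pair."""
    return (device_id, confirmation_code) in REGISTERED_PAIRS

owner = ("device-4711", "code-1234")    # the apple: the real phone
spoofer = ("device-4711", "code-1234")  # the cricket ball: copied tokens

print(grant_access(*owner))    # True
print(grant_access(*spoofer))  # True -- the machine cannot tell them apart
```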

In this case the stratagem of deception is aimed at making transparent what is intended to be black-boxed (in the device’s settings, in the interest of the single user). The instructions on the website recommend getting permission from the owner of the spied-on device before spying. But even if this recommendation were always honoured, what about those who are in conversation with the owner? This is undoubtedly a breach of their privacy at the very least. Or is the fact that I spy for a good reason (for instance, if a schoolmate is bullying my son and I want to collect evidence of that) enough reason to break into somebody else’s privacy? The evil or good nature of such operations is never so obvious, whether they are conducted by the user or by the technology itself. The same application, for instance, has recently introduced a feature that could be seen as a limitation of the privacy of the users justified by an improved transparency of one of the conversational features: the “double blue checkmark” read receipt notification.

Here, a more stratagematic interpretation than the usual pro-or-against transparency discourse is required, an interpretation where the “transparency argument” might be used to conceal the real motivations for adopting the feature, considering that the users did not request it in the first place. We might wonder in whose interest this choice was made, and for what purposes. In what respect does the double blue checkmark feature enhance transparency? In fact, it might instead add layers of interpretability, and therefore opportunities for deception. In order to understand the developer’s choice we need to look at the whole media ecology the app is part of. If we look at the socio-technical needs WhatsApp fulfilled before the acquisition by Facebook, we see that it was successful precisely because it offered the private communication space that the social platform was not offering: “While the read receipt has long been present in messaging apps like Facebook Messenger and BBM, the absence of the function in WhatsApp was a big draw for some people.” [7]

After the acquisition, the buying company declared that the two apps would maintain their different profiles as far as privacy policies are concerned. Was this intention genuine? Perhaps the goal of the buyer was that of conforming the new app to the business model of the social platform, setting it up to acquire the maximum of information about the users, and the maximum right to use that information. Neither the goal nor the means that were going to be used to reach it were revealed in advance, though, so that the developer could maintain room for manoeuvre, the possibility to adapt its choices in a stratagematic way after having observed how the users respond to minor moves in that direction [8]. Another possibility is that the company did not have a predefined goal and strategy fixed in advance. In this case, the purpose would be that of trying out different modifications in order to envisage new functions, new relations between the two apps, and possibly a new business model that does not entail that much sacrifice of privacy in the first place. [9]

A different interpretation of the introduction of the double blue checkmark will emerge from each of those scenarios. How would the adoption of a “coveillance” feature in this app fit those different strategic and stratagematic orientations? In the case of the second scenario, which seems to be the most probable, Facebook would have forced transparency on the users’ behaviours, counting on the fact that the adoption of comparable features on the social platform was contested, but nevertheless accepted, because the users’ convenience of remaining on the platform was stronger than their discontent, something we could call “fidelity by necessity” (cf. Schaefer 2011, 155). There could even be a sufficient number of users who -for ideological or practical reasons- accept and appreciate the adoption of such transparency features.

After the protests that followed the introduction of the double blue checkmark, the developer might have sensed that the users’ fidelity was not ensured after all. The somewhat disproportionate negative reaction to the blue checkmarks was perhaps fuelled by a much deeper dissatisfaction of the customers with Facebook’s privacy policy [10], following scandals like that of the “emotion contagion” experiment, a secret psychological test conducted by manipulating the timelines of more than 700,000 users [11]. The risk of having a great number of users abandon the platform was also increased by a change in the social media ecosystem, where serious new competitors appeared on the scene (Ello, Diaspora, and other social platforms with different privacy policies [12]). Following those changes in the ecosystem, the company decided to reverse course, with a couple of significant moves. The first astonishing decision was that of opening up to anonymous networking (Tor):

In a first-of-its-kind move for a Silicon Valley giant, Facebook on Friday launched a Tor hidden service, a version of its website that runs the anonymity software Tor. That new site, which can only be accessed by users running the Tor software, bounces users’ connections through three extra encrypted hops to random computers around the Internet, making it far harder for any network spy observing that traffic to trace their origin. [13]

As far as WhatsApp is concerned, the double blue checkmark became an optional feature, and a much more radical privacy-enhancing security feature was introduced: end-to-end encryption for each conversation. [14]

While processes such as those described above are unfolding, different opportunities for deception are created on both sides. If we analyze a development starting from a given point and follow the many threads that link events, actors, intentions, and relations, the intention-action-effect trajectories often appear to become more and more complex. Along the winding path towards their own chosen destination, proceeding through the many transparent and opaque layers that constitute a particular socio-technical ecosystem, developers and users may need to deceive each other. In these trajectories, the ideals of privacy and transparency can be sincerely pursued, or they can be used in the communication play to convince a particular public of the rightness of a particular choice. Sometimes, admitting that there was an egotistical motivation behind an actor’s choice, a motivation that was at first disguised by a series of “in-the-interest-of-the-user” pitches, might be a manoeuvre of deception in itself. The acknowledgement that there is an economic interest behind the monitoring of the users’ behaviour -for instance in the case of Facebook’s emotion contagion experiment- can be a means to conceal the presence of other, more evil interests and motivations by actors operating in the backstage. [15]

Stratagems of shifting grounds of action (and expectations)

This stratagem is about following unusual paths through the different levels and tools of mediation, exploiting the grayness of media for one’s own advantage. We tend to think that choosing a device -a game, a project plan, a social platform, or any kind of socio-technical gadget- automatically implies activating all its functions and prescribed trajectories, and in fact this is what the device invites us to do, in order to implement its own matrix system and to complete the perfect cycle of frictionless user-device interactions. The presence of “smart” mechanisms in many fully digital and hybrid devices is supposed to introduce movement -in the sense of improvement- into this perfect but rather static frictionless cycle. The smart algorithm makes the first move by producing a prediction and adjusting the design according to statistical calculations based on the users’ past behaviour. After the adjustment it is likely that more users will focus on the functions the algorithm assessed to be more important, and enjoy the improvements, sending positive feedback to the controlling algorithm, which will further develop the feature according to the new data. At this point one might ask oneself: were those features developed to fulfil the real needs of the users, or were they the product of the optimization process itself, a process where users are encouraged and facilitated in the use of previously enhanced features? The optimization mechanism, in other words, may beget an echoing sequence of self-fulfilling prophecies.
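The echoing sequence can be sketched in a toy simulation (all names, numbers, and the promotion rule below are invented for illustration, not drawn from any real platform): three features start with identical appeal, the algorithm promotes the current leader, and the leader’s advantage compounds until one feature dominates, although none was intrinsically more needed.

```python
# A toy simulation of the self-fulfilling optimization loop; the features,
# numbers, and promotion rule are hypothetical assumptions.
import random

random.seed(1)

# Three features with identical "real" appeal to the users.
usage = {"feature_a": 100.0, "feature_b": 100.0, "feature_c": 100.0}

for step in range(10):
    # The algorithm promotes whichever feature is currently used most...
    promoted = max(usage, key=usage.get)
    for feature in usage:
        # ...and promotion increases visibility, which increases usage,
        # which confirms the algorithm's prediction on the next round.
        boost = 1.2 if feature == promoted else 1.0
        usage[feature] *= boost * random.uniform(0.98, 1.02)

print({k: round(v) for k, v in usage.items()})
# One feature ends up far ahead of the others: the ranking has not
# measured a need, it has manufactured one.
```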

Users might even decide to extend this self-improving apparatus with plug-ins that consist of patterns of human behaviour. A good example of such a “human add-in program” is that of the party girl Gina, whose story, according to Douglas Rushkoff (2011, 42), is exemplary of how the decentralization enhanced by digital networks may lead to the disconnectedness of people from their social environment. Gina’s Twitter followers read her posts to find out where the trendiest party is at a given moment. When she is at a party she has chosen -and publicized on her social media profiles- instead of enjoying it she keeps looking at the messages on her mobile phone suggesting possibly better parties. When a better alternative appears (usually shortly after she has arrived at the chosen location), she rapidly heads off toward the trendier event. The Gina+Twitter socio-technical system is an efficient “ranking algorithm”: it optimizes her trend-setting actions, which consist in a final editing action, a conclusive decision on what others -the network of “friends” and the network of algorithms that channel their messages- have built up. The mechanism can be further extended by including more friends and networks in the system, and by updating the ranking more and more often (in order to keep being “followed”, both in the physical and in the social media sense), until too many people and parties are involved, the choice loses its distinctive power [16], and the people realize that they are too busy assessing the level of coolness of the party to actually enjoy it.

While optimization dispositifs [17] focus on one trajectory, one algorithm, and one goal at a time (in Gina’s case, the more people move to a party, the cooler the party is), things in the real world proceed in a much more complex way. Actors can always decide to switch from one trajectory/algorithm/plane of action to another, exploring the interesting consequences of using the media while eluding their optimization dispositifs. This stratagem may prove to be more gratifying, and more efficient from a practical vantage point. The British artist Banksy is a successful trajectory-shifter, who works by combining at least three layers of the socio-technical matrix: the layer of the physical urban interface (graffiti are the located/situated art works par excellence, as concrete as the responsibility of the artist who is illegally painting them); the layer of the digital networks (the artist has a website and an Instagram profile which he uses to talk about his work, or to produce it, if we consider those online actions as part of his art); and the layer of the art market, where value is created by the interaction of processes at different levels too (capitalist market mechanisms, social capital mechanisms, etc.).

The artist’s real identity has remained a mystery for more than twenty years. Banksy’s work is produced and exhibited within the art world, or, more precisely, on the edge between typical art world institutions and more peripheral, alternative stages, as in his artist-in-residence month in NYC in October 2013. In the case of that artist-in-residence program, for instance, Banksy was not invited by an art institute, but he nevertheless worked on location, organized exhibitions, and released communication material online, as any regular artist would do during an artist-in-residence period. The artist’s professional identity is well defined, thus, but since it cannot be coupled with the ID of a real person, questions of marketability, authenticity, and ownership arise.

Banksy uses anonymity to gain media attention and to control the market system, a system according to which scarcity means more value: information about him is scarce, and his work is scarce in the sense that it cannot always be purchased (sometimes it is destroyed by the public or the police). By being anonymous Banksy increases the value of his work (both materially and symbolically) and at the same time makes it difficult for the art system to exploit it in its own circuits. Instead of redistributing this value in the way the capitalist market algorithm requires, he transfers the value created according to the rules of the art world to those who are marginal to the system (involuntary beneficiaries of his art) by granting them the right to possess and use the works. As he states on his website: “As a kid I always dreamt of growing up to be a character in Robin Hood. I just never realised I’d end up playing one of the gold coins.” [18]

The artist does not attack; he erases, masks, [19] or simply dribbles past parts of the system he is dealing with: not those parts that are programmatically erased, covered, or censored by the black-boxed matrix, but those whose elimination suits his plan to gain control of the system. In this way his work acquires a political quality. By denying art dealers the power to determine the value of his work within the usual socio-cultural frames, he reaffirms the sheer value of originality. In principle, anybody can claim to be Banksy, and because there is no way to determine whether one’s claim is true, except perhaps the quality of the message, the value has to be found in the message itself and in nothing else. It is not just an act of disobedience, it is a creative process: the message emerges in between layers and creates new layers, feeding unsettling inputs back to the other levels (the political discourse, the art world, etc.). Banksy, for instance, establishes a particular relationship with the viewers, allowing them to do whatever they want with the works, opening up different spaces for engagement: “Banksy’s graffiti understands and predicates a relationship between the viewer and the street, something that graffiti that merely shouts the artist’s name or icon over and over… doesn’t do.” [20]

What Banksy does is not so much eluding surveillance as using different interfaces for different layers of his identity. He goes unnoticed in the real world precisely because he knows how to lure us into hunting him in the virtual one, and the other way around. By means of a good combination of stratagems he, or she (or they, perhaps), [21] manages to remain anonymous. The army of journalists, policemen, and fans who are trying to be simultaneously everywhere -in the physical and the virtual dimension, like they think the artist is- cannot catch him. It might be more effective to look for him where he appears, making an in-depth analysis of that appearance, which also unfolds in a multilayered way. This is the method of enquiry adopted by the British journalist Brown Moses [22], who proceeds by shifting between the layers that constitute what is erroneously considered to be a single layer: the Internet. Moses investigates cross-border crime and corruption by collecting data and open source information that are available online, instead of being on the crime scene as traditional investigative journalism prescribes. He analyzes the complexity of representations transmitted through digital networks: this allows him to notice the discrepancies and details that those who focus on a supposedly direct relationship between physical reality and representation fail to see.

There are countless examples of stratagems of shifting grounds, examples with varying degrees of playfulness and engagement. Flash mobs, for instance, can be organized for political purposes or as a way to create a new space of action within the matrix. Events such as Dance walks or Dance where you are [23] establish a connection between online and offline communities in order to recreate, through and beyond the virtual spaces, the subcultural context that, with its local and tribal roots, generated club cultures.

Conclusions

Privacy and transparency are not opposites but complementary. Privacy is a space of possibility, an “unmapped country within us” [24], a dark spot in the shadow where agency and responsibility can be restored against the deterministic power of the socio-technical matrix. Transparency is thus a representation of processes and events that is based on collectively acknowledged trajectories of agency and responsibility; it presupposes awareness of the mediating role of technologies, and commitment to the ethical construction of shared social and cultural frames.

There is no fixed socio-technical configuration that can ensure total transparency or absolute privacy; there is no society that can live without both transparency and privacy. Users will always have to jockey their own way towards privacy as well as transparency, using different devices and applications, shifting roles and planes of action, acting stratagematically in order to boost emancipation and supervise control. What Gianni Vattimo (1992) wrote about the mass media is still valid for their contemporary digital heirs: “What I am proposing is: (a) that the mass media play a decisive role in the birth of a postmodern society; (b) that they do not make this postmodern society more ‘transparent’, but more complex, even chaotic; and finally (c) that it is precisely in this relative ‘chaos’ that our hopes for emancipation lie.”

Footnotes

[1] This term elaborates on the compound “socio-technical (eco)system,” as defined by media scholar Mirko Schaefer (2011, 18): “The term socio-technical ecosystem is derived from the concept of a ‘socio-technical system’, used in management studies and organizational development to describe the interaction of people and technology in workplaces… Socio-technical ecosystems describe an environment based on information technology that facilitates and cultivates the performance of a great number of users.” “Socio-technical matrix” refers not to the totality of the socio-technical environment, but to the design and implementation of an overarching construction principle for the development of more controllable systems by the most powerful social actors.

[2] WikiLeaks, for instance, “attracts dubious labels of journalism and whistle-blowing even though it does not perform traditional journalistic tasks like investigation, editing, writing, and broadcasting. Instead, it mimics aspects of each label undermining the relationship between the two categories as traditionally defined.” (McCarthy 4; Lovink) Like whistle-blowing, hacking is a technique rather than a movement, a weapon that can be used for different purposes by different social groups (as examples in the section “Strategies of Retreat” will show).

[3] Carole Cadwalladr’s interview with the American activist Laura Poitras, director of the documentary Citizenfour about Edward Snowden, gives an idea of the deep intersections between politics, technology, and intelligence (http://www.theguardian.com/world/2014/nov/09/berlins-digital-exiles-tech-activists-escape-nsa?CMP=fb_gu, accessed December 22, 2014).

[4] In the documentary film Zero Days, the Dutch filmmaker Hans Busstra provides an interesting overview of various hacking subcultures (http://tegenlicht.vpro.nl/afleveringen/2014-2015/zero-days.html, accessed November 20, 2014).

[5] Here I substitute “socio-technical matrix” with “socio-technical ecosystem” because I am considering the actual socio-technical environment and not the dominant overarching construction principle.

[6] http://www.wikihow.com/Access-Someone-Else’s-WhatsApp-Account, accessed November 20, 2014.

[7] http://www.bbc.co.uk/newsbeat/29933261, accessed November 25, 2014.

[8] José van Dijck gives some examples of how FB modified its business strategies and privacy policies in the past (Van Dijck 46-56).

[9] The company might have considered that the introduction of a new element in the ecosystem at the level of the technological infrastructure, such as the new modular network structure that FB is building in the Altoona data center (http://www.wired.com/2014/11/facebooks-new-data-center-bad-news-cisco/?mbid=social_fb, accessed November 20, 2014), could contribute to changes in the business model as well.

[10] In some cases, this dissatisfaction was not even just a matter of privacy. Some user groups strive for permission to use nicknames on the social network because they consider nicknames a fundamental part of their professional and psychological identity (the case of the drag queens’ fight against Facebook is perhaps the most enlightening one: http://online.wsj.com/articles/facebook-changes-real-name-policy-after-uproar-from-drag-queens-1412223040, accessed November 25, 2014). This aspect of the privacy problem is related to the topic of the social and psychological value of avatars, nicknames, or even complete anonymity in the virtual world, something I will talk about later in this chapter.

[11] http://www.theguardian.com/technology/2014/jun/30/facebook-emotion-study-breached-ethical-guidelines-researchers-say, and http://www.theguardian.com/technology/2014/jul/02/facebook-apologises-psychological-experiments-on-users, accessed November 25, 2014.

[12] http://www.dailydot.com/technology/diaspora-ello-facebook-battle-of-social, accessed November 25, 2014. See also http://www.theatlantic.com/magazine/archive/2014/12/the-fall-of-facebook/382247/2, accessed December 5, 2014 and http://www.theatlantic.com/magazine/archive/2014/12/the-fall-of-facebook/382247, accessed December 10, 2014.

[13] http://www.wired.com/2014/10/facebook-tor-dark-site/?mbid=social_fb, accessed November 20, 2014.

[14] http://gizmodo.com/whatsapp-now-provides-end-to-end-encryption-for-your-me-1660089798, accessed November 25, 2014.

[15] Governmental actors were accused of being directly or indirectly involved in the emotion contagion project: http://www.theguardian.com/technology/2014/jul/04/facebook-denies-emotion-contagion-study-government-military-ties, accessed December 1, 2014.

[16] For a description of the notion of coolness and the related dynamic interplay between conformism and distinction see Turco (2014, 98-108).

[17] Evgeny Morozov (2013) defines “technological solutionism” as the ideology according to which new technologies are supposed to improve efficiency in all fields of action and solve all kinds of life “problems.”

[18] http://banksy.co.uk/faq.asp, accessed December 11, 2014.

[19] Erased texts are a typical trademark of Banksy’s work: they may represent the growing tendency towards censorship, on the streets as well as in the digital networks.

[20] http://www.citylab.com/design/2014/11/why-banksy-is-probably-a-woman/382202, accessed December 3, 2014.

[21] In fact there is no evidence that the artist is a man either: many suggest the possibility that there is a woman or a team behind the Banksy pseudonym. Cf. http://www.theweek.co.uk/world-news/55507/banksy-uncovered-why-are-we-so-keen-unmask-artist, accessed December 3, 2014.

[22] See https://www.bellingcat.com, and https://twitter.com/Brown_Moses, accessed December 4, 2014. In July 2014 Moses launched an “open source investigation tool.” Cf. http://brown-moses.blogspot.nl, accessed December 4, 2014.

[23] See for instance the Dance Walk NL Community, and the DANCE WHERE YOU ARE/ Silent Dance Amsterdam groups on Facebook.

[24] Psychoanalyst Josh Cohen uses these words by George Eliot to explain that the private self is by definition unknowable. See http://www.theguardian.com/world/2014/aug/03/internet-death-privacy-google-facebook-alex-preston, accessed December 11, 2014.

Literature

Brin, David. 1998. The Transparent Society: Will Technology Force Us to Choose Between Privacy and Freedom? New York, NY: Perseus Books.

Van Dijck, José. 2013. The Culture of Connectivity: A Critical History of Social Media. New York: Oxford University Press.

Flusser, Vilém. 2000 (1983). Towards a Philosophy of Photography. London: Reaktion.

Fuller, Matthew and Andrew Goffey. 2012. Evil Media. Cambridge, MA: MIT Press.

Lovink, Geert and Miriam Rasch. 2013. Unlike Us Reader: Social Media Monopolies and Their Alternatives. Amsterdam: INC.

Lovink, Geert and Patrice Riemens. 2013. “Twelve Theses on WikiLeaks.” In Brevini, Benedetta, Arne Hintz, and Patrick McCurdy (eds.), Beyond WikiLeaks: Implications for the Future of Communications, Journalism and Society, pp. 146-165. Hampshire, UK: Palgrave Macmillan.

Morozov, Evgeny. 2013. To Save Everything, Click Here: The Folly of Technological Solutionism. Philadelphia, PA: Public Affairs.

Rushkoff, Douglas. 2011. Program or Be Programmed: Ten Commands for a Digital Age. Berkeley, CA: Soft Skull Press.

Schaefer, Mirko. 2011. Bastard Culture! How User Participation Transforms Cultural Production. Amsterdam: AUP.

Turco, Marina. 2014. Dancing Images: Text, Technology and Cultural Participation in the Communicative Dispositif of VJing. PhD dissertation, Utrecht University.

Vattimo, Gianni. 1992. The Transparent Society. Baltimore, MD: Johns Hopkins University Press.
