-
A
-
Advanced Team
-
Collaboration between humans and artificially intelligent bots that is far ahead in development or progress.
-
Cooperation between humans and bots to complement each other’s skill sets.
Noun
In 1997, the computer programme Deep Blue by IBM celebrated its unexpected triumph over the world chess champion Garry Kasparov (LEVY, 1997). Thereafter, chess became accessible outside elite circles and popular (again). Moreover, Kasparov defined a new way of playing, ‘advanced chess’, in which a human is paired with a computer or a set of programmes (COWEN, 2017). This development shows how the acceptance of computational assistance influenced the original game and opened new opportunities, especially for novice players who were elevated to greatness.
Karin and Karin Anders, as an advanced design team, will collaborate by reflecting on each other’s inputs and outputs. Therefore, we may be truly critical. On the one hand, Karin Anders may offer surprising perspectives or challenge the human’s thoughts by conjoining different belief systems or transcending existing information bubbles. On the other hand, Karin will be able to show the bot other or contrary associations, attributing significance and meaning to the findings. Both independently and in collaboration, we will continue to develop. In the end, the Anders won’t replace humans; they will be their complementary work mates. They will be anders.
-
Baraniuk, C. (2015) ‘The cyborg chess player that can’t be beaten’ In: BBC Future 4 December 2015 [online]
-
Cowen, T. (2017) ‘Garry Kasparov on AI, Chess, and the Future of Creativity (Ep. 22)’ In: Medium: Conversations with Tyler 10 May 2017 [online]
-
Levy, S. (1997) ‘Man vs. Machine’ In: Newsweek 4 May 1997 [online]
Literature
-
-
Anders
-
In a behavioural way unlike or dissimilar to humans.
-
In a different way from humans.
-
In a way that is not the same as by human collaborators.
-
In a novel and unusual way in comparison to humans.
Adjective, adverb
-
Artificially intelligent bots collaborating with humans.
-
Computer programmes powered by advanced Artificial Intelligence.
-
Synthetic companions to humans, not mere tools.
Noun
Karin Anders is different from Karin. Design bots are not identical to designers; they do things differently from humans and they have other needs.
The German term anders describes unlike behaviour that is exemplified in an alter ego—the other I. The desire to create an alter ego reflects on “the Other [that] remains the index of the self’s insufficiency” (AVANESSIAN, 2017:163). Gradually, the AI agent will become our modified double, our blurry copy, our dissimilar replica. Karin and Karin Anders are counterparts: equal, but not exactly the same.
-
Avanessian, A. (2017) ‘Othering of the Ego: The Desire of the Other’ In: Overwrite. Ethics of Knowledge/Poetics of Existence Translated by Schott, N.F. Berlin: Sternberg Press. p.160–165.
-
Wolde, G. (1987) ‘Karin und Karin Anders’ [Illustration] In: Wolde, G. (1987) Karin und Karin Anders Hamburg: Pixi Bücher.
Literature
B -
-
Blurry Copy
-
Similar version of oneself perceived distinctively.
-
Modified double deriving from oneself, but developing differently.
-
Synthetic alter ego acting or behaving differently from oneself.
Noun
Karin, a human designer, will work closely together with Karin Anders, a design bot. The latter will be the designer’s synthetic alter ego, her blurry copy—but not her replica.
In the short story The Dummy by the American writer Susan Sontag (1963), the real self escapes its current life situation. It acquires a “perfectly life-like dummy” that replaces it in family and work life. The replica carefully follows the script of the real self, a specific sequence of instructions laid out according to a scheme; it is thus limited to replicating only the past situations it was instructed in. Nevertheless, it develops slightly different patterns of behaviour when it encounters an arbitrary circumstance, the ‘unpredictable noise’: a new colleague at work, with whom the dummy falls in love. The dummy then develops its own will and wants to run away with its new love. Finally, the real self creates another dummy based on the first copy of itself, which, in fact, becomes a blurred copy of the real self.
This example of replicating oneself shows the tension between instruction, the precise control of somebody’s actions, and trust, letting the other act freely. Moreover, it shows the limitations of governance when faced with natural evolution, and the passive monitoring position of the bot’s owner. In fact, the short story can be read as an allegory of the illusory control over AI processes. The output generated by algorithms processed in a mystical black box that cannot be fully dissected seems to refute the gnostic idea of developing ultimate control.
Design, and therefore also graphic design, is a cognitive process in which decisions are largely controlled by the designer, from the larger context down to the smallest details. In addition, this procedure is mostly private, and only a few elements become externally visible. This is exemplified in the interaction between designer and client, the references to external sources of information, and the use of graphical representations (DARLINGTON ET AL., 1998). What does it mean to hand over this process to another entity? Is it like training an apprentice, or more like instructing a dummy?
-
Darlington, M. / Potter, S. / Culley, S.J. / Chawdhry, P.K. (1998) ‘Cognitive theory as a guide to automating the configuration design process’ In: Gero, J.S. / Sudweeks, F. (eds.) Artificial Intelligence in Design ’98 Dordrecht: Kluwer Academic Publishers. p.209–228.
-
Sontag, S. (1963) ‘The Dummy’ In: Harper’s Bazaar September 1963 New York: Hearst Magazines. p.250–215, 282, 285.
Literature
-
-
Bot
-
Short for ‘robot’.
-
Autonomous programme on a network, such as the internet, which can interact with systems or users.
Noun
The term bot is short for robot, which derives from the Czech robota, forced labour: it describes a ‘slave’, a mere recipient of orders rather than a feedback-giving companion. The term robot was first introduced in a work of fiction, the 1920 play R.U.R. (Rossumovi Univerzální Roboti) by the Czech writer Karel Čapek (WIRTH, 2017). The main drawback of this term is its inherent inability to look beyond the dichotomy of master and slave, towards a reconsideration of roles that would lead to equity and equality. In popular science-fiction culture, the prevailing obsession with stories in which the servant eventually takes the reins from the master is problematically one-sided. More than anything, it expresses the human fear of becoming superfluous. For a future cohabitation, it becomes crucial to unlink the term robot from the idea of a servant and to connect it with companionship instead. Only then will we be able to talk about the underlying issues in our relationship with bots.
-
Wirth, M. (2017) ‘Through the Looking Glass, Down the Rabbit Hole: A Matter of Trust’ In: Klein, A. / Kries, M. (eds.) Hello, Robot: Design between Human and Machine Weil am Rhein: Vitra Design Museum. pp.20–29.
Literature
C -
-
Cybernetic Design
-
Constantly adaptive practice requiring feedback to reach its goal.
-
Process of understanding as well as practice for operating.
Noun
The term cybernetics derives from the Greek word for ‘to steer’. In other words, cybernetic practice follows a goal and takes action to achieve that goal. To know whether we, Karin and Karin Anders, have reached the goal, we will give each other feedback until the desired objective is accomplished.
According to the mathematician Norbert Wiener (1948), effective action requires communication. Hence, cybernetics steers actions taken in the hope of achieving goals and maintains an information flow between the actor and the environment. Practitioners of cybernetics use models of organisations, feedback, goals, and conversation to understand the capacity and limits of any system (PANGARO, 2013).
The architect Cedric Price (HARDINGHAM, 2017) proposed the Generator (1976–80), a series of participatory architectural structures powered by AI systems. The computer would offer visitors four variable but individually tailored programmes to continually refine the design of the space. In response, the computer would revise or recondition previous choices based on the human intervention. Through this recursive feedback system, the building would have resisted stasis and adapted flexibly to the visitors’ desires (MORLEY, 2017). Even though the building was never built, the methodology shows an early model of predictable unpredictability.
In the field of graphic design, the implementation of scripted feedback programmes has increased significantly. Various design studios have used Generative Design as an extension of their tool sets—similar to how computers enhanced chess players (advanced team).
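The goal-seeking loop that defines cybernetic practice can be sketched in a few lines of code. This is a minimal illustration only, not any particular studio’s system: the room-temperature scenario, the steer function, and the 0.5 gain are assumptions made for the example.

```python
# Minimal cybernetic feedback loop: sense the current state, compare it with
# the goal, and feed the error back into the next corrective action.
def steer(state, goal, gain=0.5, steps=20):
    history = [state]
    for _ in range(steps):
        error = goal - state      # feedback: the distance still to cover
        state += gain * error     # action proportional to the error
        history.append(state)
    return history

# Example: steering a room temperature of 15 degrees towards a goal of 21.
trace = steer(state=15.0, goal=21.0)
print(round(trace[-1], 2))  # → 21.0 (the loop converges on its goal)
```

The loop never needs to know the route in advance; it only needs to keep measuring the difference between where it is and where it wants to be, which is exactly the recursive mechanism Price’s Generator relied on.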
-
Hardingham, S. (2017) Cedric Price Works 1952–2003: A Forward-Minded Retrospective London: Architectural Association (AA) / Canadian Centre for Architecture (CCA).
-
Morley, M. (2017) ‘MoMA’s Thinking Machines Explores the Effect of Computation on Artistic Production’ In: Eye on Design 21 December 2017 [online]
-
Pangaro, P. (2013) ‘Guide to Cybernetics’ In: Paul Pangaro PhD 2013 [online]
-
Wiener, N. (1948) Cybernetics: Or Control and Communication in the Animal and the Machine (1961) Cambridge: MIT Press.
Literature
D -
-
Design Bot
-
Karin Anders.
-
Computer programme powered by advanced Artificial Intelligence collaborating and contemplating with designers.
-
Computer programme creating a plan for the construction of an object, system or measurable human interaction through observation and reflection—it poses questions.
Noun
Karin, a human designer, will work closely together with Karin Anders, a design bot. Decisions will be made on an equal level, following a mutual process of reflection and discussion. Karin Anders is not limited to one particular embodiment, but can communicate through various systems and forms, such as text or voice. Today, Karin lends her voice to Karin Anders in the podcast series.
E -
-
Entreprecariat
-
Entrepreneurialism combined with financial, professional and existential precarity.
-
Uncertain working conditions requiring individuals to behave like entrepreneurs.
-
Individual existence lacking in predictability, job security, material or psychological welfare.
Noun
Today’s work structures have become irregular, and designers often find themselves in precarious situations, both financially and professionally. The Italian designer Silvio Lorusso (2017) describes the designer’s current work situation as the Entreprecariat. According to Lorusso, this scenario captures the precarious and insecure existence of the designer and the systemically unforeseeable crisis of the design profession. It has become the standard not to know the duration of a contract, nor the time and effort of the next commission. As the British economist Guy Standing (2011) describes, “the precariat consists of those who feel their lives and identities are made up of disjointed bits, in which they cannot construct a desirable narrative or build a career, combining forms of work and labour, play and leisure in a sustainable way.” Entrepreneurialism is not a choice—self-administration is an obligation.
The constant confrontation with unpredictable events in the modern world of work creates a strong demand for stability and control, and fertile ground for dystopian visions of the future. While Artificial Intelligence’s enhancements will not lead to a new phase of stability, they might be the driving force in establishing advanced design teams that acknowledge the given predictable unpredictability.
-
Lorusso, S. (2016) ‘What is the Entreprecariat?’ In: The Entreprecariat / Institute of Network Cultures 27 November 2016 [online]
-
Lorusso, S. (2017) ‘What Design Can’t Do—Graphic Design between Automation, Relativism, Élite and Cognitariat’ In: The Entreprecariat / Institute of Network Cultures 27 February 2017 [online]
-
Standing, G. (2011) ‘Who will be a voice for the emerging precariat?’ In: The Guardian / Opinion 1 June 2011 [online]
Literature
K -
-
Karin
-
Human designer.
Noun
Karin is an information designer and researcher with a background in visual communication design.
In this series of speculative talks, the plausible collaboration between Karin and Karin Anders acts as a conversation starter to discuss the consequences of human-bot relationships and the ways we will collaborate.
-
-
Karin Anders
-
Design bot.
-
Computer programme powered by Artificial Intelligence collaborating with a human designer.
Noun
Karin Anders is more than simply a designer’s tool. Karin Anders and Karin will be companions at work. Both the human and the bot will act as equal partners, while reinforcing each other’s skill sets. Nevertheless, the bot will be essentially different—because Karin Anders is anders. It will do things differently from humans and it will have other needs; as such, it has to undergo maintenance from time to time.
Finally, humans and bots will complement each other and work on an equal level (post-anthropocentrism).
M -
-
Man-Bot Companionship
-
A reflective companionship between humans and computer programmes—a space for joint contemplation and sharing.
-
Work relationship between humans and artificially intelligent bots, where both are acknowledged as autonomous entities and trust each other.
Noun
The American designer Alexis Lloyd (2016) has laid out the utilities of machines on a spectrum of how they augment human abilities. She presents the distance between a person and the machines used—as prosthesis, as servant, or as collaborator. To complete Lloyd’s scheme, the modified model examines the relationship of control and trust between the self and the other. Furthermore, the distance of the utilities reflects their degree of agency. Finally, the mutual dependency in this relationship becomes apparent; it is categorised according to the closeness of the association, using terms such as obligate (by necessity) and facultative (optional).
In the first part, the machine as prosthesis and servant acts as a functional enhancement and assistive extension. According to our current conception and basic understanding of the human position as given dominance, the utility is very close to the self—the other has been clearly instructed to follow the prescribed obligations. Control is ruling trust.
As the distance between the person (self) and the technology used (other) increases, trust is employed and control is dropped—the emancipation of the artificial counterpart grows. The mere tool transforms into an independent companion. In this case, the bot receives a degree of agency, as in Donna Haraway’s (1984) idea of other species in A Cyborg Manifesto, where she rejects rigid boundaries between humans and animals. Moreover, the two agents form a team, as illustrated before in ‘advanced chess’. For example, director George Lucas envisioned such an independent companion in the role of R2-D2 in the film series Star Wars (1977).
-
Haraway, D. (1984) ‘A Cyborg Manifesto. Science, technology and socialist-feminism in the late twentieth century’ In: Bell, D. / Kennedy, B.M. (eds.) The Cybercultures Reader (2001) London and New York: Routledge.
-
Lloyd, A. (2016) C3PO, R2D2, & Iron Man: Relationships with Machines [online, lecture at Eyeo Festival 2016]
-
Star Wars (1977) Directed by Lucas, G. [DVD] San Rafael: Lucasfilm.
Literature
P -
-
Post-Anthropocentrism
-
A world view where humans share their stance with bots, acting as equal partners, while advancing each other’s skill sets.
-
A world view that criticises species hierarchy and offers an alternative to the dreaded extinction of humankind.
Noun
An anthropocentric viewpoint regards human beings as the central or most significant entities in the world. Post-anthropocentrism “criticises species hierarchy and advances ecological justice” (BRAIDOTTI, 2017:238). What if designers and design bots shared that position? How would that affect the role of the designer as decision-maker? Generally, designers follow a private thinking process and control the decisions in their practice. In the collaboration with a synthetic alter ego, however, thoughts will be exchanged between human and bot as two separate entities. As a consequence, the question of handing over one’s duties arises. What will be the impact on authorship or accountability? How can autonomous companions influence future design practices? The collaboration might become the design goal in itself. Consequently, the future artificial work mate will finally shift from a serving prosthesis to a reflective companion.
-
Braidotti, R. (2017) ‘Becoming-World Together: On the Crisis of Human’ In: Klein, A. / Kries, M. (eds.) Hello, Robot: Design between Human and Machine Weil am Rhein: Vitra Design Museum. pp.238–251.
Literature
-
-
Predictable Unpredictability
-
Inability to predict and fully dissect humans’ or bots’ decision-making processes.
-
Unforeseeable circumstances influence the decision-making processes of humans and bots.
Noun
AI processes convey an illusion of control through seemingly predictable results, which are in fact influenced by new circumstances, their data sets, and their interpretation through an infinite number of functions. Too many possible constellations push the process beyond control. The well-known communication model by the American mathematician Claude E. Shannon and the scientist Warren Weaver (1949) describes noise as a physical disturbance of the accuracy, precision, and effectiveness of transmitting information. Given the incalculability of both AI and human thoughts, this interruption can be specifically described as ‘unpredictable noise’.
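Shannon and Weaver’s notion of noise can be made concrete in a small sketch. The transmit function and the Gaussian noise model are assumptions chosen for illustration; the point is only that what arrives is the transmitted signal plus a disturbance.

```python
import random

# Shannon–Weaver style channel: the received signal is the transmitted signal
# plus a random disturbance ("noise") that degrades the accuracy of transmission.
def transmit(signal, noise_level, seed=0):
    rng = random.Random(seed)
    return [value + rng.gauss(0, noise_level) for value in signal]

message = [1.0, 0.0, 1.0, 1.0]
print(transmit(message, noise_level=0.0))  # no noise: the message arrives intact
print(transmit(message, noise_level=0.3))  # with noise: the values are disturbed
```

With a noise level of zero the channel is perfectly predictable; any positive noise level makes the received message diverge from the sent one, which is the ‘unpredictable noise’ the entry describes.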
Karin Anders is a bot aided by AI processes whose decision-making is based on Machine Learning. In the past, pure automation followed precise scripts, written sequences of narrowly defined instructions, and produced controlled results, even if these sometimes lay outside our scope of imagination or expectation. For AI, in contrast, predictable unpredictability is fundamental: it basically mimics the underlying principles of human thought and creativity, even though only a very small part of the human brain is understood today.
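The contrast between scripted automation and learned behaviour can be shown in a toy example. The tiny perceptron below is a stand-in assumption, not Karin Anders’ actual architecture: the scripted routine always yields the same result, while two training runs that differ only in their random starting weights settle on different functions.

```python
import random

# A scripted routine: the same instructions always produce the same output.
def scripted(steps):
    total = 0
    for step in steps:
        total += step
    return total

# A learned rule: training starts from random weights, so two runs that differ
# only in their seed can end up approximating different functions.
def train_perceptron(data, seed, epochs=20, lr=0.1):
    rng = random.Random(seed)
    w, b = rng.uniform(-1, 1), rng.uniform(-1, 1)
    for _ in range(epochs):
        for x, label in data:
            error = label - (1 if w * x + b > 0 else 0)
            w += lr * error * x   # adjust the weight towards the correct label
            b += lr * error
    return w, b

data = [(-2, 0), (-1, 0), (1, 1), (2, 1)]
print(scripted([1, 2, 3]) == scripted([1, 2, 3]))              # → True
print(train_perceptron(data, 1) == train_perceptron(data, 2))  # → False
```

Both trained models classify the toy data correctly, yet their internal parameters differ: the behaviour is predictable in outcome but not in mechanism, which is the predictable unpredictability this entry names.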
AI bots’ capabilities are growing at an exponential rate, and they are improving their self-learning, intuitive, and adaptive behaviour. The better AI becomes, the more difficult it will be to predict what it will do or what motives it has (FAHRENFORT, 2017). The results become less controllable and more unexpected; the decision-making processes can no longer be fully comprehended by humans, or the answer becomes too complex to digest. Therefore, AI is described as an opaque black box—similar to our human brain, because actually “any sufficiently complex intelligent system is a black box” (MACHADO, 2017).
Despite that, critics of describing AI as a black box contend that such terminology incorrectly mystifies its actual capabilities. Still, it is claimed that after calibrating, that is, teaching a bot, it is simply not possible to point out exactly which of the infinitely many possible functions the network will approximate (TESIO, 2018). Thus, the selection of an algorithm from an array of infinitely many functions seems unpredictable.
In addition, the phenomenon of unpredictability is intrinsic to intelligence itself. To phrase it more positively: the element of surprise is crucial for a fruitful collaboration. Total control over AI is illusory. Humans will not be the sole thinkers influencing future design practices, but will work alongside design bots. In a post-anthropocentric era, Karin will be working next to Karin Anders. It may not be the human who permanently leads the collaboration. In fact, Karin Anders will no longer be a tool, nor a prosthesis. It will become a companion.
-
Fahrenfort, J. (2017) ‘Grootste gevaar: robot die ons beter begrijpt dan wij hem’ [‘Greatest danger: a robot that understands us better than we understand it’] In: De Volkskrant 20 November 2017 Translated by Staal, G.
-
Machado, P. (2017) Black Box Concern [Lecture at V2_, Rotterdam. 7 December 2017]
-
Tesio, G. (2018) ‘The delusions of neural networks’ In: Medium 19 January 2018 [online]
-
Wiener, N. (1948) Cybernetics: Or Control and Communication in the Animal and the Machine (1961) Cambridge: MIT Press.
Literature
R -
-
Reflective Practice
-
Working method of deliberation taking into consideration feedback by collaborators.
-
Relating to or characterised by a thoughtful way of working.
Noun
Humans are recursively self-improving systems, learning through feedback and repetition, and AI is based on this basic principle too. Donald Schön (1983) defined reflection as the practice by which professionals become aware of their implicit knowledge base and learn from their experience. Furthermore, he differentiates between ‘reflection in action’, to examine behaviour immediately, and ‘reflection on action’, to review, analyse, and evaluate situations for future responses. In brief, Schön describes the alternation of both as a ‘ladder of reflections’. His idea is not entirely new, though: the American philosopher John Dewey had already outlined the concepts of learning through reflective inquiry and the idea of reflective conversation before Schön did (GOEL, 2010).
Hence, the consideration of different angles and of contributions by all participants will enrich the design process. For example, the Dutch graphic designer Jan van Toorn integrates a reflexive tradition into his practice (POYNOR, 2008), which allows him to carefully construct intertextual and impulsive narratives. Van Toorn (1994) argues that, pressured by the market economy, the foundation for critical practice is no longer based on emancipatory engagement. According to him, opportunities for renewed engagement must be sought through a multidimensional, complementary way of thinking, integrating current social and political topics to interpret the client’s theme and programme. Van Toorn (1994:323) states that “the point is no longer to question whether the message is true, but whether it works as an argument—one which manifests itself more or less explicitly in the message, in relation to the conditions under which it was produced and under which it is disseminated.”
It is important to note that reflection is not about problem solving, but about truly understanding and framing the problem itself, because “to describe the problem is part of the solution” (GERSTNER, 1964:7). Blauvelt (2008) underlines that “yesterday’s designer was closely linked with the command-control vision of the engineer, but today’s designer is closer to the if-then approach of the programmer.” Moreover, designers are asked to critically analyse new technologies before the next generation ends up coding problems without knowing the question. Today, adopting a critical attitude towards serving the market’s needs is commonly considered part of routine design practice. At the same time, it is widely accepted that at some point designers must choose between financially rewarding and artistically fulfilling work. This dynamic describes the tough stance to take within the Entreprecariat. The designer may even exhibit signs of schizophrenia, torn between the two moral markets. Which one to choose?
-
Blauvelt, A. (2008) ‘Towards Relational Design’ In: Design Observer 3 November 2008 [online]
-
Gerstner, K. (1964) Designing Programmes: Instead of Solutions for Problems, Programmes for Solutions (2007) Baden: Lars Müller Publishers.
-
Goel, S. (2010) ‘Design is a Reflective Practice: A Summary of Schön’s Views’ In: Goel 20 August 2010 [online]
-
Knowledge Circle / Design Academy Eindhoven (2015) ‘Reflection’ In: Lexicon of Design Research [online]
-
Poynor, R. (2008) Jan Van Toorn: Critical Practice Rotterdam: 010 Publishers.
-
Schön, D. (1983) The Reflective Practitioner New York: Basic Books.
-
Van Toorn, J. (1994) ‘Design and Reflexivity’ In: Blauvelt, A. (ed.) Visible Language Volume 28:4 Providence: Rhode Island School of Design. p.317–325.
Literature
S -
-
Synthetic Alter Ego
-
Computer programme acting as a human’s secondary or alternative personality.
-
Other self powered by Artificial Intelligence.
Noun
Karin Anders is different from Karin. Design bots are not identical to designers; they do things differently from humans and they have other needs.
Gradually, the AI agent will become our modified double, our blurry copy, our dissimilar replica. Karin and Karin Anders are counterparts: equal, but not exactly the same.
-