Pathways to Innovation
in Digital Culture
July 7, 1999
Centre for Research on Canadian
Cultural Industries and Institutions /
Next Century Consultants
-----
Current contact (updated October 2013):
Michael Century
Professor, Arts Department and iEAR
Studios
115 West Hall
Rensselaer Polytechnic Institute
110 8th St
Troy, NY 12180-3590
Phone: 518-276-2302; Fax: 518-276-4370
E-mail: century@rpi.edu
http://www.nextcentury.ca/
Acknowledgements
Financial support for the research and writing of this
report was generously provided by the Rockefeller
Foundation, Arts and Humanities division. Special thanks to Joan Shigekawa.
To all the many people whose conversations and advice have helped to inform
this report, I give my heartfelt thanks. In particular, I wish to
acknowledge the hospitality and stimulation provided by SPRU (Science and Technology Policy
Research) at Sussex University, and McGill University’s Graduate Program in Communication.
Copyright 1999 Michael Century. You may download content only for your
personal use for non-commercial purposes, but no modification or further
reproduction of the content is permitted. The content may not otherwise be
copied or used in any way.
Please send comments to Michael Century, michael@nextcentury.ca
1. Introduction
2. Transdisciplinary Knowledge Production and the Arts
   Studio Labs since 1960
   Innovation Types
   Sampling of Studio-Laboratory Institutions and Structures
   Summary Table
3. Discussion Themes
   Instruments and the Imagination
   Creative Users in IT Design and Diffusion
   Beyond the Access Paradigm
   Cultural Critique, Reflexivity and Innovation
   Broadening Public Awareness of Techno-Science
4. Conclusion
References
Abstract
This report presents a multi-perspective framework from which to view the
rising density of communication between the worlds of art, technology, and
science. Designating the site of this hybrid activity as the studio-laboratory,
the first section traces the development of such organizations historically,
compares their dynamics to those of "transdisciplinary" knowledge
production in science and technology, and argues that they foster incremental,
radical and systemic innovation. The second section examines this framework
through the prism of five discussion themes: Instruments of the imagination,
Creative users, Access, Reflexivity, Public awareness. A brief conclusion identifies
five issues and questions for further investigation.
1. Introduction
This report presents a framework for thinking about the artist as an actor
in the innovation process in information and communication technologies. The
framework differs from most approaches to the interactions between the creative
arts and techno-science in two ways. First, it attempts to identify and
characterize the range of innovative outcomes and the factors that shape them
along multiple dimensions -- aesthetic, technological, scientific, economic --
and time frames, both long and short. Second, the framework stresses the
importance of a new class of hybrid innovative institution, the studio
laboratory, where new media technologies are designed and developed in co-evolution
with their creative application.
The research is informed by an overview of contemporary studio-laboratories,
a historical case study tracing the build-up of a strong digital media
capability in Canada, and a review of literatures bearing on the sociology and
economics of innovation. Numerous individuals -- artists,
researchers, theoreticians and policy-makers -- have been consulted. The framework
presented widens the way contemporary artistic practices are understood by
placing them in the context of innovation studies; and in turn, it broadens the
way in which the literature on innovation has up till now addressed the
contribution of the creative artist in the digital media design and diffusion
process.
The report is organized in a series of short thematic chapters, each
treating in a different way the common thesis unifying them: that in the
emerging digitally networked society, the creative arts and cultural
institutions in general are mutating by forming a constellation of productive
relationships with the science and technology research system, industry,
humanistic and social science scholarship, and with emerging new structures of
civil society. This apparently rising density of communication suggests the
need to begin rethinking some aspects of the relationship between cultural
support policy, innovation and research policy, and the still nascent but
interconnected set of concerns about the requirements for widespread creative
participation in a "techno-sphere" increasingly shaped by fast-changing
digital media technologies. The concluding section identifies a set of possible
interventions and topics for further study, though this phase of research does
not permit the preparation of detailed designs or proposals for specific
measures.
Cultural theorists will no doubt recognize the shifts briefly alluded to as
continuous with a progressive reduction throughout the 20th century
of the so-called autonomy of the artist as an alienated or estranged figure
existing on the margins of society. Particularly among groups who have defined
their "art" more or less in terms of technological innovation, this
turn away from the Enlightenment notion of the aesthetic as the
"disinterested play of the senses" can sometimes provide the material
basis for establishing sustainable linkages with highly charged sectors of the
global economy -- the entertainment and information industries -- and their
associated scientific and technological bases. But it would be a mistake to
consider the breadth of these shifts only as a widening of the well-established
role of creators in industrial design to include such relatively new, trendy
factors as "interaction design" or "relationship
technologies". As art historians have pointed out, the movement of the machine
into the studio is a progressive one which can be variously traced to the early
20th century avant-gardes, and in particular to a marked tendency
since the 1960s to engage critically with the "technological sublime"
as both material and subject-matter.[1] This critical
orientation, at least among some of the emerging "media-art and
technology" community, is part of what makes the phenomena difficult to
describe from a singular disciplinary perspective. Works conceived to make a
conceptual or critical point by re-appropriating simple or older techniques can
be misread when only evaluated in terms of technological novelty; just as,
conversely, the point of "speculative" technological invention may at
times be missed by developers seeking only incremental innovation understandable
in terms of existing markets and users.
Similarly, the sites of innovation with which we will be concerned in this
report, "studio-laboratories", need to be understood as emergent
formations fed by, and flowing into, artistic, techno-scientific, economic and
discursive sources. This anti-reductionist approach is unavoidable, given the
complexity of interests in and about digital media today. While we aim to
characterize a wide range of linkages between art, science, technology and
society through digital media, the emphasis will be on identifying those
"pathways to innovation" with the greatest potential benefit to the
widest range of actors. Somewhat differently conceived, pathways are perhaps
better understood as configurations, since multi-finality is taken for granted
in the phenomena being discussed. As such, the approach will contrast sharply
with other current stances towards the "unity of knowledge" question
that continues to be widely debated on both sides of the postmodern divide. For
instance, the socio-biological project of E.O. Wilson proposes to bring the
arts and their interpretation safely within the purview of contemporary
neuro-science, explicitly aiming to demystify the "truth and beauty"
of the arts in terms of epigenetic regularities yet undiscovered. Notably,
Wilson's consilience, a term for transdisciplinary coherence, dismisses
the messy hybridity of today's "unpleasantly
self-conscious form[s] of scientific art or artistic science" [2, 211].
Self-conscious or not, it is precisely towards these intermediary zones -- open
to the logic of "both-and" rather than the categorical closures of
"either-or"[3] -- that we must turn to make sense of the otherwise
baffling multiplicity of today's creative practices and institutional forms.
In 1974, pioneering electronic artist Nam June Paik assumed the role of
technological forecaster and submitted a report to the Rockefeller Foundation
urging the construction of a global "broadband telecommunications
infrastructure"[4]. While critical of mandarin intellectual disdain for
mass media, Paik surprisingly did not even bother to advocate spending on the
avant-garde arts, or on the promotion of the work of his fellow video-artists.
Rather, he envisioned a two-way, high-capacity video and data network -- the "electronic
superhighway" -- that would augur a profound cultural shift. In the framework
of this now familiar wired world, artists and intellectuals would have the
opportunity to make a broader social contribution, what he called "output
capacity", beyond the convention-bound production of luxury cultural goods
for limited circulation.
This broader role was to "humanize technology", according to Paik,
a more complex social implication that follows his consideration of the artist
or intellectual in the context of then-current notions of the post-industrial
society. Paik drew on Daniel Bell for his understanding of art as information,
and John Kenneth Galbraith to underwrite an increasingly central role for the
arts as a factor in economic growth. He conceived an amalgam of media,
information, knowledge and communication, serving as "a lubricant and
impresario to facilitate the relationships and cybernetic interaction of the
society of the future".
Now, twenty-five years later, much of the infrastructure aspect of Paik's
vision seems to be in place, owing in large measure to the incredibly rapid
uptake of the internet for multimedia as well as
transactional communication. The kinds of immediate benefits Paik foresaw an
electronic superhighway providing, easily distributed educational programming
and greater connectivity for work and pleasure, are
becoming commonplace for the growing ranks of the "virtual class".
The falling costs of hardware, coupled with relatively cheap or free software,
make the barriers to entry for creators lower than they were in Paik's day, when
he was one of the earliest to adopt portable video equipment and to devise his
own techniques for electronically processing images. And today digital media
are widely understood to be facilitating, as Paik predicted, new and varied
kinds of relationships -- and not only between buyers and sellers, teachers and
learners, creators and audiences. Further, they have attracted the
participation of a significant number of the very cultural élites whose disdain
for the public television of the 1970s Paik took pains to criticize in his
report.
Yet from the vantage of the late millennium, it is no longer possible to
share Nam June Paik's optimism about the wonders of global connectivity, nor,
from an analytical standpoint, his deterministic belief in the sufficiency of
technological infrastructure for stimulating a widespread culture of active
producers of new creative expression. The internet repeats aspects of the early
history of radio broadcasting [5] with the growing consolidation of corporate
interests at the high end of broadband and advanced applications; cultural
applications of interactivity have bunched up around a relatively narrow group of
heavily promoted large-market entertainment products (even if, in some cases,
they are played online in technologically innovative multi-player
configurations); and thirty-year-old visions of new kinds of computer-enabled
literacy, extending sensory acuity and augmenting intellectual capacity, seem
to be more stalled than spurred by the current market frenzy around media
technology. Most crucially, in the 1970s, Paik was not yet in a position to
address the key issue of how to bridge the new skill-sets associated with
digital technologies with existing, often age-old capabilities grounded in
embodied, locally specific practices.
Software indeed has a dual nature, as both medium and tool; practices cannot
transcend the constraints built into software tools, unless
these are reflexively designed to permit extensible, evolving
development in the process of use. This is not just the familiar problem of
market power exerted by the dominant position of a few large software
companies, whose application packages define a de facto standard that, for
better or worse, tends to be accepted as the benchmark of digital literacy. In
the arts community, too, disquiet rises among the more
reflective, like Simon Penny, Carnegie Mellon professor of both art and
robotics [6]:
"every day we come to new
reconciliations between our artistic goals and methods and the requirements and
restrictions of the machines we work with. With a little critical distance, we
can see that we are reshaping artistic practice to suit a new set of
tools."
Yet these concerns, which have
circulated uneasily among the electronic art, music, and graphics communities
since the 1980s, are rarely considered in relation to those of the apparently
opposite end of the technological spectrum (and world) -- the digitally
disenfranchised, to whom, typically, technological capability is presented as
nothing else but the adoption of a set of pre-set, externally-defined
"solutions". Yet the same questioning can illuminate both sides of the
spectrum: how can local, contextually-relevant capacities be developed, which
at once build on but also provide the potential to transcend the existing media
ecology? Manuel Castells, addressing the culture of the network society,
insists on the need to look for and understand the "specificity of new
cultural expressions, their ideological and technological freedom to scan the
planet and the whole of humankind, and to integrate, and mix, in the supertext
any sign from anywhere" [7]. This cultural specificity, or capacity to
adapt material means to self-defined expressive uses, is by no means a given
result of technological deployment, on the one hand, nor of the transmission of
pre-existing messages through digital channels, on the other. If the image of
digital expression as a "dynamic, moldable medium" dates back to the
early years of the computer era [8], its reality is not a lot more widespread
now than it was then.
This report on Pathways to Innovation in Digital Culture will concentrate,
as Nam June Paik put it in 1974, on those configurations with the greatest
potential for "humanizing technology". But it will also take careful
heed of the various skeptical voices who over the ensuing decades have
developed a paradoxically "post-humanist" stance towards the liberating
potential of human-machine communication and expression. After Donna Haraway's
celebrated feminist "manifesto for cyborgs", or more recently
Katherine Hayles' tale of how since cybernetics "we became post-human"
[9], there is no need anymore to rehearse familiar myths of empowerment in
terms of the "liberal unified humanist subject". The vision of human
expression seamlessly articulated with intelligent machines, pleasing to few
adherents of art's proudly transcendent claims to Truth and Beauty, nonetheless
provides a basis for building fruitful understandings between the diverse
social actors with interests in the shaping of digital media -- researchers,
technology developers, artists, and theorists. Increasingly, it appears that
these meetings are taking place within innovative institutional structures --
spanning organizations, research networks, and projects. And it is to these
sites -- the "studio-laboratory" for combined art production and
technological research -- that we now turn.
2. Transdisciplinary Knowledge Production and the Arts
The concentration of scientific research in structurally distinct industrial
or institutional laboratories dates only from the later 19th
century. Current scholars describing what are now termed "systems of
innovation" have pointed out common trends, as well as national
differences, in the transition from pre-industrial to the more familiar
industrial and now post-industrial organization of research and development.
During the first of these phases, it is sometimes overlooked how strong was the
artisanal component -- mechanical skills, like spatial imagination, dexterity,
and fluency with materials -- in enabling early industrial innovation. With the
spread of advanced professional university training, as well as the formation
of scientific and engineering societies, the specialized research and
development laboratory became increasingly common in the early 20th
century, bringing disciplined scientific knowledge to bear on industrial
problems. With important national differences, the role of the state was always
crucial, particularly in steering priorities towards the military, health, and
particular industrial sectors [10].
After World War II, and the decisive impact of the mission-oriented
Manhattan Project in the U.S., the distinction between "pure"
scientific knowledge and its "applied" technological development
began to erode. Not just the close interaction of multiple branches of science
was at work here, but also the importance of new developments in technology,
and especially instrumentation, in setting the very research agendas for
science. A compelling, if somewhat stylized interpretation of these complex
shifts distinguishes between two concurrent "modes of knowledge
production".[11] Gibbons, a former director of
the Sussex University Science Policy Research Unit (SPRU), along with an
international team of social scientists, calls traditional discipline-bound
R&D "Mode 1 knowledge production". He summarizes the emergent
second mode in terms of a set of key trends:
- Transdisciplinary. Going further than inter-disciplinary work, in which
  different fields address separate problems inside a common framework,
  transdisciplinary research involves a stronger "interpenetration of
  disciplinary epistemologies". Effectively, this means new fused horizons
  become possible, beyond or transcending the paradigms existing within
  single disciplines. Consciously pursued, transdisciplinarity is an approach
  to problem-solving suited to settings where disciplinary modes prove
  inadequate.
- Multi-site. More numerous organizations become involved as partners or
  collaborators in research, making the process more socially distributed as
  well as heterogeneous. Scientific discovery becomes more collective, as
  evidenced by publication authorship, and it becomes more organizationally
  diverse: hospitals, institutes, user-groups, consortia, networks, etc.
- Applied. Gibbons et al. classify much transdisciplinary research as
  "essentially a temporary configuration and thus highly mutable. It takes
  its particular shape and generates the content of the theoretical and
  methodological core in response to problem-formulations that occur in
  highly specific and local contexts of application".
- Reflexive. Social accountability becomes more important in determining
  research agendas; furthermore, greater inter-communication between fields
  tends to foster a higher degree of self-awareness in defining and
  explaining disciplinary frameworks.
In the arts and humanities,
transdisciplinarity has had a different career since 1850. Nineteenth-century
sensibility was decisively rocked by the Wagnerian notion of the total work of
art -- the Gesamtkunstwerk -- which, in an abstract
sense, can be understood as initiating a movement towards more expansive and
deliberate synchronization of the separate disciplines of the arts into new
synthetic combinations. The legacy of this creative and conceptual innovation
was a radical way of thinking about artforms or media in terms of the
inter-relatedness of their codes or constituent parts. By the second decade of
the 20th century, and alongside the rapid growth of mass
industrialization, the conceptual scope of some artists and cultural theorists
extended still further, to embrace "art and technology [as] a new
unity". This 1922 slogan of Walter Gropius, from the Weimar Bauhaus,
underlined a strongly applied socio-technical project to shape the quality of
mass reproduced designs with all the imaginative resources of the contemporary
creative spectrum -- not excluding abstract art, modernist music, architecture,
and theatre. This synthetic ideal persisted throughout the 20th century; its technological realization, with
the diffusion after 1945 of electronic and telematic media, provides an often
neglected connecting thread between today's virtual worlds of interactivity, and
those of the early 20th century avant-gardes.
These basic shifts in culture, touched upon all too briefly here, are rarely
seen as pertinent, even conceptually, to the changes in knowledge production
previously summarized. Gibbons' treatment of the arts and humanities identified
some aspects of Mode 2 processes, like the increased role of instrumentation in
the humanities (e.g. the use of the computer to produce theoretical models) and
what is called the "re-shaping of aesthetic response"[11]. But
overall, he remains ambivalent about the way in which artists and humanists fit
into the new mode of knowledge production. They are described as:
"standing aside as quizzical
commentators who offer doom-laden prophecies or playful critiques, and as
performers who provide pastiche entertainment or heritage culture as a
diversion from threatening complexity and volatility. In other senses, they are
even more deeply implicated: through the culture industry, they fashion
powerful, even hegemonic images, and through higher education they play a
direct part in the new social stratification." (110)
This report will demonstrate a
set of closer affinities by looking at the growth of what we have designated
the "studio-laboratory", a site within or through which artists, scientists,
technologists and theorists commingle. In a study commissioned by the French
Ministry of Culture, Norman [13] has previously profiled a dozen current
European cultural laboratories and media centres where
"transdisciplinarité" contributes to the "creation of new
aesthetic forms" grounded in development of new technologies. Besides
transdisciplinarity, this study confirms a marked tendency towards multi-site
co-operation and, among several cases, a strong vocation to serve as a bridge between
social needs (often expressed as "the culture of the network
society") and the technology development process.
A 1996 conference, Art@Science, sponsored by the Japanese research consortium
ATR, produced a collection of papers which, among other things, reinforces
what Gibbons might call the interpenetration of applied ("artistic")
and theoretical ("scientific") components in the Mode 2 research
context.[14] The conceptual framework for this
contribution, at least at the editorial level, tends however to stress a
putative "convergence" between art and science, rather than the more
contingent, evolutionary models implied in Gibbons' notion of Mode 2 knowledge
production.
The rest of this chapter considers the studio-laboratory phenomenon in
relation to the wider dynamics of contemporary research. The first part
interprets the growth of studio-laboratory settings since the 1960s; the second
relates their historical emergence to a common classification of types of
innovation; and the third introduces and briefly describes a diverse
illustrative range of studio-laboratories and related structures.
Studio Labs since 1960
In recent years, scholars have begun to unpack some of the persistent habits
of thought which have tended to construe art and science as dichotomous.
Caroline Jones and Peter Galison, respectively historians of art and of
science, summarize the aim of a recent collection as moving beyond the
"focus on art and science as discrete products," to look at
"commonalties in the practices that produce them." [15] Still,
little attention has yet been given to the institutional development of the
contemporary studio-laboratory. Three overlapping phases may be distinguished.
In the first phase, dating from the 1960s and 1970s, artist
centres, networks, university-based institutes and public sector labs were
established to support open-ended exploration of new and emerging technologies
by artists. Among the most celebrated examples was Experiments in Art
and Technology (E.A.T.) founded by artist Robert Rauschenberg and Bell Labs
physicist Billy Klüver in New York in 1966. The goal of E.A.T. was to establish
"an international network of experimental services and activities designed
to catalyze the physical, economic and social conditions necessary for
cooperation between artists, engineers and scientists." The research role
of the contemporary artist was understood by E.A.T. as providing "a unique
source of experimentation and exploration for developing human environments of
the future."[16] At the same time, other Bell Labs scientists were also
engaged in collaborative research, in computer graphics and vision, music and
acoustics.[17, 18]
Also during the late 1960s, at MIT, the Hungarian artist and Bauhaus
affiliate Gyorgy Kepes founded the Centre for Advanced Visual Studies,
providing a stable location for collaboration between artists-in-residence and
university-based scientists and engineers. In the 1970s, composer Pierre Boulez
launched IRCAM (Institut de Recherche et Coordination Acoustique/Musique)
in Paris, based on a dialectical conception of research/invention as
the central activity of contemporary musical creation; not incidentally, Boulez
invoked the model of the Bauhaus as interdisciplinary inspiration for what he
considered the inevitable collaboration of musicians and scientists.[19]
The relative autonomy of these new centres -- in the case of IRCAM, established
with a fiercely guarded aesthetic independence setting it apart as a modernist
citadel -- distinguishes them from the more publicly oriented type of media
centre that began to appear in the 1980s
and 1990s. Typically incorporating festivals, exhibitions, commissions and
competitions of electronic art, this second phase saw the increased commitment
of both public administrations and private corporations towards exposing the
most radical media-based creativity to a wider public. As festivals such as Ars
Electronica or SIGGRAPH's non-commercial art exhibition became global in scope
during the 1980s, so plans were drawn up in most advanced industrial countries
to establish permanent centres able to incorporate a dual research/development
and public education mandate. To mention only a few of the most conspicuous of
these institutions, the Zentrum für Kunst und Medientechnologie (ZKM) and the NTT
InterCommunication Centre were active in commissioning and publishing
throughout the 1990s even before their physical centres were opened in 1997.
The German philosopher and critic Florian Roetzer analyzed the media centre
bandwagon of the late 1980s, when he commented sardonically that
"everywhere there are plans to inaugurate media centres, in order not to
lose the technological connectionThis new attention is supported by the diffuse
intention to get on with it now, the contents remaining rather arbitrary, so
long as art, technology and science are somehow joined in some more or less
apparent affiliation with business and commerce." [20] Roetzer was then
not alone among critical intellectuals in harboring a deep ambivalence about
these institutional developments, fearing that they would serve only to
accelerate the public acceptance of automation in everyday life, on the one
hand, and to co-opt artists "with their purported creativity" into becoming
commercial application designers, on the other.
As it has turned out, explicitly designed linkages between art, research and
innovation have developed a good deal beyond Roetzer's cynical prognostications,
and now form the basis for the third phase of the contemporary
studio-laboratory. Many observers would probably count the MIT Media Laboratory
as the main propagandist, if not initiator, of this phase, in spite of the
secondary importance of artistic practice or input in its research activities.
Xerox PARC since the early 1990s has prominently supported an in-house
artist-in-residence program (whose modest scale perhaps belies the
extensive attention it has received). In the words of its manager John Seely
Brown, the program serves as "one of the ways that PARC seeks to maintain
itself as an innovator, to keep its ground fertile and to stay relevant to the
needs of Xerox"[21]. Other Silicon Valley, Japanese and some European
private firms have followed suit, in differing flavors, though more or less in agreement
with PARC's position that the traditional model of "corporate support for
the arts" -- hands-off, patrician, and marketing-driven -- overlooks basic
potentials for core innovation. Among cultural organizations, the Banff Centre
for the Arts in Canada was early in initiating a major-scale investigation of
"virtual environments" as a partnership with university researchers
and industry sponsors.[22] Since 1995, research
networks have begun to appear with the express aim of linking multimedia art
with technological development and the social sciences. In short, the
deliberate involvement of artists as collaborative researchers in innovation
programs now takes place in a wide variety of social and economic settings,
with a corresponding diversity of approach and program design.
Figure 1 below illustrates the increasing pace of establishment of
studio-laboratory sites in the 20th century, which clearly shows a
grouping of activity in or bordering the 1960s, and again, the 1990s. This pace
has now reached a point where it is no longer conceivable to keep accurate
track, particularly with the proliferation of all manner of "new media
centres" at various degrees of sophistication and scope on university and
college campuses, within corporations, as regional industrial development
efforts, and as catalysts for public access and digital literacy efforts.
Rather than even attempting a comprehensive listing of such sites, we will
focus below on characterizing the range and styles of their approaches to
innovation.
[Figure 1: Pace of establishment of studio-laboratory sites in the 20th century]
Before turning to this, however, it will be useful to briefly consider the
widening scope of the Research and Development process in the context of recent
critiques of the so-called linear model of innovation. This critique,
undertaken since the 1960s by sociologists, historians, and economists of
science and technology, makes explicit what Gibbons' Mode 2 concept of knowledge
production accepts implicitly: the inadequacy of the simple model of a one-way
flow of ideas from basic science through applied research to development and
commercial innovation. In the place of the traditional mechanistic model,
evolutionary, interactive models emphasize the linking of inventions to
markets, with significant stress on user innovation and the role of embodied
skill, or tacit knowledge, as determinants of innovation.
Innovation Types
Economist Christopher Freeman distinguishes between four categories of
innovation and their diffusion: incremental innovations, radical innovations,
new technological systems, and changes in techno-economic paradigm.[23]
1. Incremental innovation involves small-step improvement of existing
technologies or processes; as such it covers the vast majority of patents that
are taken out in the world, as well as typical changes in product design or
styling within industry. It is worth adding, in this particular context, that
it also includes the bulk of contributions to scientific research. Indeed
Thomas Kuhn, the philosopher of science whose book on the structure of
scientific revolutions brought the concept of "paradigm change" into
common use, defined "normal science" as
puzzle solving. Whereas within the arts, "innovation is a primary value,
in science it arises only as a response to crises in established
paradigms."[24]
2. Radical innovations are discontinuous events, going beyond
variational creativity. In the oft-told explanation, no combination of
horse-drawn coaches could have produced the railway; so, for many artists
interested in working with information technologies, the aim is often to
explore or invent new media forms, as the unit of innovative work, as opposed
to working within established techno-cultural genres. It is worth noting how artists' ideas about radical innovation since the 1960s have
been in part shaped by the way in which Marshall McLuhan's widely diffused
discourses about "media as art forms" characterized experimental
artists as prophetic. Although McLuhan was himself thinking mainly about the
modernist writers and painters whose radical innovations (Eco's "open
work") actually anticipated aesthetic structures now embodied in
electronic media, the very notion of new media artworks as perceptual training
for yet-to-be-invented new media environments has now taken hold widely. This
makes it possible, today, to consider the proliferation of user interface
creations in aesthetic terms much as McLuhan spoke of the content of new media
in terms of the features of previous ones.
3. New technological systems involve constellations of interrelated
innovations, both radical and incremental; as systems, they entail economic and
social as well as technological changes. Examples include plastics and
synthetic materials in the 1930s and 40s, consumer electronics in the 1960s, and
digital networks in our time. Taking the latter case as illustration, changes
are underway in how knowledge is technically produced and distributed, in models
of education and life-long learning, in the globalization of finance, and the
rise of electronic commerce. These interrelated technologies and organizational
changes combine to produce trajectories, along which new innovations that would
have been radical become incremental as the system matures. The idea of
technological trajectory is closely associated with that of path-dependency,
the familiar effect of lock-in which takes place when new technologies and
associated human skills are widely diffused[25].
Another standpoint on the reversibility of technological trajectories, perhaps
more suited to the complex patterns of interaction between art and technology,
is provided by the French sociologists of innovation associated with the
so-called "actor-network theory". These scholars speak of
"socio-technical dispositifs" - a set-up, or dynamic apparatus
- which combine objects, both human and non-human, the conditions under which
they are used, plus the means through which new entities or agencies in networks
emerge. [26] From this anti-reductionist angle, constraints are in both things
and people, and are both limiting and generative. Technological systems grow
out of the co-evolution of actors and techniques during the conception and
adoption of innovations [27]. Crucially, for the digital dispositifs
under consideration here, it would appear that artistic conventions, craft
routines, and related embodied practices can play an
important role in the growth of new networks (or trajectories).
4. Changes in techno-economic paradigm refer to the so-called
long waves of economic and social change which,
according to some evolutionary economists, have articulated the history of the
industrialized world in 50-60 year periods since the mid-18th century. Techno-economic paradigms are
pervasive shifts, based on the arrival of new material inputs that are cheap,
widely available, and revolutionary in impact. The current Information
Technology paradigm, by this account, was in preparation since the 1940s and
50s, but only began in the 1980s with the widespread and cheap availability of micro-electronics. (The previous mass-production paradigm
began in the 1930s and 40s, organized around the cheap availability of energy
supplies including oil.) See Figure 2 for a representation of the five waves of
innovation since the 18th century, ending with the current wave
characterized by "digital networks, software, new
media".

As interpreted by social scientists such as Manuel Castells, the information
technology paradigm provides the basis for producing a vast synthesis of
current political, social, economic and cultural tendencies[28];
however, so far little attention has been given to what sectors may now be
forming in preparation for the next techno-economic paradigm. It seems apparent
from the vantage of the late 1990s that some combination of bio-technology
and cheap bandwidth will likely form the basis in coming decades of the next
techno-economic paradigm, distinct from but building on information technology.
What philosopher Vilem Flusser already identified as an emerging "ars
vivendi" in the late 1980s clearly signaled what is turning into a central
issue for creators in the arts and techno-science, as we begin to imagine what
it means to move beyond mere biological analogies to the practical construction
of post-organic life.
Sampling of Studio-Laboratory Institutions and Structures
[Figure 3: Studio-laboratory founding dates juxtaposed against the five innovation waves]
By juxtaposing the starting dates of studio labs against the five innovation
waves, it can be shown (Figure 3) that they cluster around the rising portions
of the waves. No rules or strong theories are meant to be
implied by this observation. It is surely suggestive to think of the
Bauhaus as catalytic in relation to the broader flow of innovation within the
Fordist mass-production regime. Many of the studio-labs that appeared between
1950 and 1965 dealt broadly with a range of material technologies: light,
electronics, and kinetic or cybernetic systems. However, from the standpoint of
the aesthetic paradigms which they explored and
defined, they could be understood as preparing the terrain for the new material
possibilities afforded only by very powerful networked micro-processors, which
became a reality toward the mid-1990s. As will be seen in the following
survey, the current studio-laboratories are active in all four of the
categories of innovation previously introduced. Some, a distinct minority but
noteworthy nonetheless, are oriented toward the issues and challenges
associated with what may be a new emerging bio-techno-economic paradigm. For
the most part, however, description here centers on the still far from
exhausted potential of digital media (some would say, recalling the perennial
"software crisis", barely tapped).
The studio-laboratory as a class is by no means homogeneous. Some are
privately funded by corporations, seeking to understand the properties of
radically new media technologies via aesthetic R & D programs; others are
publicly funded and linked to traditional museological mandates for public
education; others are industrially sponsored pre-competitive laboratories based
in universities; still other models are network-based and more or less
explicitly tied to long-term state or regional industrial development
objectives. The studio-laboratory can be understood as providing a site for an
ongoing and progressive series of negotiations between artist-users and
technology designers, which simultaneously shape the technology, its use, and
its users.
The survey is divided into three parts. First, stand-alone institutions,
divided into those with mainly cultural roots and funding bases; those located
in and financed by private corporations; and government agencies or institutes.
Second, network structures, of three kinds: research networks; networks linking
cultural with socio-political organisms (civil society); and art production networks.
Finally, a group of project-based initiatives is discussed. Two further
prefatory notes: first, the sampling aims not at inclusivity, but rather at
representative breadth. Second, in each case, extensive online information is
available, and the internet address is provided.
1. Institutions
R&D laboratories in publicly financed cultural organizations
L'Institut de Recherche et Coordination Acoustique/Musique (IRCAM), Paris
www.ircam.fr
As previously noted, Pierre Boulez founded IRCAM as a transdisciplinary centre
for musical research, experimentation, and cultural diffusion. Since its
founding in 1977, it has been at the forefront of
experimental artistic practices involving electronic media. It has always employed
a substantial scientific staff researching perception, material science of
instruments, and developing software systems for musical production. While
oriented in its first decade towards powerful, specialized resources only
available to composers on site, it has since the late 1980s focussed more on
diffusing its innovative software to a worldwide user
community. Several of its applications have been commercialized and are in wide
use by musicians and other interactive artists. According to Norman[13],
it has provided an invaluable template from which many of the more recent
establishments have drawn their plans. However, the challenge facing IRCAM now
is to remain current and establish relations with the many new centres/networks
set up in its pioneering wake.
The Zentrum für Kunst und Medientechnologie (ZKM), Karlsruhe, Germany
www.zkm.de
The ZKM is now the largest and widest-ranging centre for art and new media in
the world. With the first large-scale museum dedicated solely to "media art"
since 1945, the ZKM in some ways is playing a role in relation to emerging
interactive art practices similar to that played in the 1930s by the Museum of
Modern Art for photography. That is, it is establishing the field from a
museological standpoint, especially with regard to the special problems of
maintenance, education, and support for complex technological installations.
Combining in-house research and production with innovative forms of cultural
diffusion, ZKM is, from the standpoint of the density of its
connections, the richest and most complex of current studio-laboratories.
Two institutes for research and experimentation are
also located at the ZKM, one dedicated to Image and the other to Sound. The Image
institute has in particular been influential by commissioning some 70 new works
by international artists since 1990, many developed in-house and supported
technically by staff engineers and researchers. The ZKM has also been actively
associating itself with scientific expertise centres in Europe, through the
European Union's Esprit program for long-term research. As well, it has
developed similarly productive links with other culturally-oriented
media centres in Europe, such as Ars Electronica and V2. However, its very scale
raises questions of sustainability. Financially dependent on state authorities,
ZKM's global program has already raised questions about relevance to local
audiences and possibly also, business enterprise. So far, the artistic program
at ZKM has been deliberately independent of industry sponsorship; pressure may
rise for it to become more responsive to applied or sponsored research, a
deeply controversial point at the time of writing.[29]
De Waag - The Society for Old and New Media, Amsterdam
www.waag.org
The Society for Old and New Media exemplifies what
might be termed a new breed of interventionist, policy-oriented public new
media centres. The name signals its approach, which places both new and old
media within a common framework, and one of its key tactics is to seek and
amplify resonances, both historical and practical, between them. From a
mediaeval building in central Amsterdam, it inverts the typical
"high-tech" image of the research laboratory, in line with its
program of driving technical developments with a rich mix of cultural and
historical references. In its applied research programs, de Waag has so far
emphasized the application of design and technical creativity to enrich the range
of what can be termed the "public domain" of cyberspace. A clear
example is its award-winning public internet
interface, based on the 19th century Dutch reading table. Its
programs include competitions, symposia, workshops and commissions; it grew in
part out of one of the largest and most active "Digital City" internet sites in Europe, and from this inherits a strongly
defined political and social program for establishing a democratic "public
domain" in the digital sphere.
The Society has played a European leadership role in
the policy arena, advocating the growth of a broadly-based
network of cultural innovation centres across Europe. Much of this material has
been summarized in the recently published "New Media Culture in
Europe" [40].
The Banff Centre for the Arts, Alberta, Canada
www.banffcentre.ab.ca
The Banff Centre is unusual in its location in a
remote, non-metropolitan setting, which fosters an intensive, residential
structure of activities. Its interests in advanced media and technological
development grew out of a deliberately interdisciplinary arts context, spanning
music, theatre, literature and visual art. When it established a media research
initiative in the late-1980s, one of its aims was to attract support from
academic and corporate partners for in-depth investigations of emerging media
by diverse teams of artists all working with the same research and development
team. A second aim was to make space in the formative stages for dialogue
involving cultural theorists, philosophers and other humanists normally
estranged from the sort of active technological development engaged in by
artists and scientists. The difficulties encountered in that unusual effort are
further discussed below; see also [30]. Currently, the Centre operates a
multimedia institute, offering a plethora of courses and seminars, but it has
phased out the research intensive activities.
Ars Electronica Centre, Linz, Austria
www.aec.at
Founded in 1979 by Brucknerhaus and the regional television corporation of Upper
Austria (ORF), the Ars Electronica festival was, at the time, the only annual
showcase exclusively devoted to forms of electronic art. Combining the
exhibition of works, the organization of conferences and the recognition of
pioneering electronic-art producers (the "Prix Ars Electronica" was
created in 1987), Ars Electronica figures as a foundational event on the
international scene of contemporary art. Since the inauguration of the Ars
Electronica Centre in 1996, Linz operates year-round. The festival and the
centre boast an impressive roster of corporate as well as state and
institutional funders.
FAE Centre Director Gerfried Stocker defines the
centre's mandate in terms of transdisciplinarity, by which he means transfer
of knowledge between practices and disciplines. However, Jutta Schmiederer, the
FAE's Producer, also stresses Ars Electronica's role in disseminating knowledge
and use of new media by encouraging the local and international community to
engage with and transform those technologies. [13] These dual emphases reflect
Ars Electronica's dynamic as a whole. The "Lab of the Future" project
works to develop advanced 3D animation and internet
technologies, while concurrently exhibiting recent products. The coexistence of
this kind of display and simultaneous practice lends Ars Electronica an
unparalleled internal vitality.
Art-labs in private sector firms
Art + Com, Berlin
www.artcom.de
Art+Com operates as a research and development
centre for computer aided visualization and design. What distinguishes it from
purely industrially-oriented labs carrying out sponsored research is its
emphasis on research on "the new media grammar"; i.e. according to
its chief Joachim Sauter, "how to use computer as a medium, not a specific
tool". Grammar is understood as the expression that is
"inherent" to the new technology. Art+Com maintains a balance of
sponsored and internal research projects; the former include visualization
systems for firms such as Daimler-Benz. Of the latter, a good illustration is a
"grammar defining" project called Zerseher.
"The observer finds himself in a museum environment, a
framed picture hanging on a wall. Upon coming closer, the viewer notices that
exactly the spot of the picture he is looking at is changing under his
gaze."
This work makes clear the distinction between the computer as a simulated paint brush (a tool) and as an inherently interactive medium.
Art+Com's celebrated TerraVision simulator (1994) linked various satellite views
of the earth with visualization systems, giving the user a continuous zoom-in
from space.
Xerox PARC Artist-in-Residence Program, Palo Alto, CA
www.parc.xerox.com/red/members/richgold/PAIRBOOK/pair1.html
Since 1993, Xerox PARC's Artist-in-Residence Program (PAIR)
has provided Bay-Area artists with the opportunity to carry out their own projects
in the corporate lab, collaborating with like-minded scientists on common
projects. Pairings are voluntary, and the structure oriented toward process
rather than product; in no case are artists required to implement ideas of
scientists, or vice-versa. PAIR is understood to help the laboratory remain
relevant to the needs of the corporation by encouraging artists to experiment
with the future forms and paradigms of documents. As John Seely Brown writes,
"Xerox is, after all, the Document Company and what artists fundamentally
make are documents, and in particular, new forms and genres of documents.
Artists are really document researchers, discovering new kinds of documents, even
new definitions of what constitutes a document." [31] The program founder,
Rich Gold, was an avant-garde music composer before he entered the computer
industry through games design; he says:
PAIR is not based on the belief that each person must be
both an artist and a scientist, though such people exist, but rather that there
is a class of extraordinary activity that a scientist and an artist can
simultaneously engage in that is mutually beneficial to both.
Nippon Telegraph and Telephone (NTT) InterCommunication Centre (ICC), Tokyo
www.ntticc.or.jp
The ICC was opened in 1997 as part of a large-scale
Shinjuku cultural complex, through the initiative of the Japanese Public
Association for Telecommunications, and sponsored by NTT. ICC is conceived as a
prototypical "information network oriented arts and science
interface" a new kind of museum for the 21st century depicting
"a vision of life in a post-industrial society". The term
"intercommunication" signifies the inter-linking of art,
techno-science, and society. NTT sees its sponsorship of this cultural project
as contributing to "thematic communication" -- imagining new uses for
future technologies -- and it looks forward to ICC offering "exciting
feedback into the world of technology". Like the ZKM, the ICC maintains a permanent
collection of media art works on exhibition, all highly participatory,
interactive works that exemplify formal openness and multi-sensory immersion.
The centre also has a laboratory wherein artists and engineers collaborate in
the production of electronic art works.[32]
ATR Corporation, Media Integration and Communication Centre, Kyoto
www.mic.atr.co.jp/index.e.html
ATR International, a consortium of seven research
centres devoted to telecommunication, set up the MICC in 1995 for studies in
art and communication. The research laboratory is divided into four units: the
reconstruction and creation of communications environments, the foundations of
communication, the expression and transmission of mental images, and finally,
the process of human communication. Interactive Art is of central interest to
this lab, as a domain through which engineers are researching the base
technologies for representing/transmitting human emotion (kansei, or sensitivity).
Effectively, the approach is to develop models for machine
"understanding" of gestures, images, and speech. Collaboration takes
place both ways: "Artists present a new concept and engineers provide
technologies to realise it... Engineers present a whole concept, and artists
produce the art part" [33]. A group of four media artists works in a fifth
art and technology unit. The goal is that sophisticated communication and
interaction methods will be discovered that "overcome the cultural and language
gaps among people".
Interval Research Corporation, Palo Alto, CA
www.interval.com
Only a stone's throw from Xerox PARC, Interval mixes
artist-researchers into an already very broad ranging scientific and
engineering research staff. Its charter is to look five to ten years into
the future of computing and media. Rather than the open-ended, voluntary
pairings of the PAIR program at PARC, Interval includes a sprinkling of
researchers with backgrounds in such fields as interactive art, theatre, and documentary film. David Liddle, the manager of Interval,
sees them adding cognitive diversity through their unique standpoints. By
bringing in "alien methodologies", he notes, "most of these
people are the herb, not the entrée, in the particular project being baked
[but] the minor ingredients are very, very important. There is no chance of
doing good, new work in these areas in a sterile environment where there are no
herbs allowed" (quoted in [34]). The noted media artist Michael Naimark
recounts how, as a member of Interval's research staff, he and computer vision
researchers nurtured a symbiosis in which 3D stereoscopic computer models based
on panoramic landscapes he gathered for an art project provided the researchers
with valuable material. "The fact that it was not simply views of the
parking lot was gravy". (From seasonings to sauce)
Canon ArtLab, Tokyo
www.canon.co.jp/cast/
Founded in 1991, the ArtLab is a corporate lab
devoted to the integration of the arts and sciences, primarily by encouraging
new artistic practices using digital imaging technologies. The lab itself
consists of offices and a "factory"; the latter employs computer
engineers using Canon digital products in interaction with artists in residence
to produce new digital art works.
Since its launch, the studio portion of the program
has presented exhibitions of the works developed in-house. In 1995, seeking to
introduce multimedia works to the general public, the
ArtLab began its Prospect Exhibitions program, which also circulates the work
of multimedia artists and creators from a variety of new media centres.
Workshops and lectures on new communications technologies and practices,
both national and international in scope, are also organised on an ad-hoc
basis.
University/Public Sector Studio-Laboratories
German National Research Institute for Information Technology (GMD), Bonn
Institute for Media Communication
viswiz.gmd.de/fleischmann
The GMD Institut für Medienkommunikation is composed of four departments:
Visualization and Media Systems Design (VMSD), Multimedia Applications in
Telecooperation (MAT), Networks (NW), and Media Arts Research Studies (MARS).
The centre is primarily a teaching and research facility, hosting a number of
innovative projects which actively integrate technological and cultural
innovation in the development of new media forms and content.
The centre's artistic direction was set by VMSD
Director Monika Fleischmann, along with architect Wolfgang Strauss. However, a
good deal of the initiative to integrate artistic and technological innovation
can be traced to the efforts of Wolfgang Krüger. With the goal of building a
cultural perspective into technical development and technical expertise into
artistic practice, Krüger came from the Berlin centre Art + Com to join the GMD
centre in the early 1990s. Although Krüger left the institute in 1995, his legacy of
interdisciplinary practice remains nonetheless.
According to the visions set forth by both Krüger
and Fleischmann, the pure functionalism of technological
research should always be undercut by the cultural meaning or purpose of what
is being developed. In the case of the development of new media tools,
innovative products should necessarily be in the service of the expressive and
aesthetic possibilities defined by cultural producers and creators. As such,
for example, the centre's "VizWiz" (Visual Wizards) group developed
new digital tools, such as the Wall of Communication, a sort of virtual
billboard which permits multiple users to post their images and ideas during
teleconferencing sessions, or the Responsive Workbench, a table which serves as
an interactive projection site for group work, where multiple audio and visual
feeds can be cohesively integrated.
Electronic Visualization Lab, University of Illinois, Chicago campus
www.evl.uic.edu
Since its inception in 1973, the EVL has
established itself as a centre for academic excellence in the development of
computer graphics and interactive media applications through its
transdisciplinary pedagogy. It offers a rare joint graduate degree between the
visual arts and computer engineering departments.
During the 1970s, EVL hardware and software were
used to generate the animation seen in the first Star Wars movie, while
in the late 1980s the lab began focusing specifically on scientific
visualisation, providing media tools for engineers and research scientists.
More recently EVL activities have encompassed the production of virtual-reality
tools and environments, such as the CAVE (Cave Automatic Virtual Environment)
virtual-reality theatre (1992), and the ImmersaDesk virtual-reality work space. Producers associated with the EVL also showcase
their innovations at a variety of academic, industry and electronic arts
conferences.
Centre for Advanced Visual Studies / Media Laboratory, Cambridge, MA
cavs.mit.edu
www.media.mit.edu
Both of these centres date back to the turbulent
era of "art and technology" collaborations of the 1960s. Kepes, the
CAVS founder, had earlier brought the European Bauhaus tradition to Chicago; at
CAVS a dynamic program was quickly established, including an international
group of artist and critic Fellows. This was shortly followed by a graduate
degree in visual studies which counts among its alumni
pioneers of virtual reality and interactive art.
The Media Laboratory grew out of the important
research led by Negroponte on computer-aided design; his Architecture
Machine Group built on the strong scientific base at MIT in computer
graphic systems and artificial intelligence. Interaction between these two
groups continued through the 1970s, but seems to have diminished as the Media
Lab was conceived and eventually opened in the mid-1980s. The program of the
Media Lab translates many of the cultural and technological "threads"
of the IT paradigm into a coherent vision of a hyper-mediated techno-scape
premised on breakthroughs in machine intelligence. From the start, artists were
understood to play an important part in the wider Media Lab mix. As then-academic
director Stephen Benton explained during its early years, the new
meta-discipline of "Media Arts and Science has a technical, perceptual,
and aesthetic basis, but no-one here is solely an artist...
The Barry Vercoes, Tod Machovers, Muriel Coopers all are doing research with a
technical base. We are not trying to be an art school... It's a new kind of
research trying to be informed by aesthetics" [35]. The status of artists
at the lab has been controversial; this has proven to be a complex, sometimes
acrimonious dispute, closely calibrated to how well artists themselves are able
to accommodate the agendas of the Labs' mainly corporate sponsors. Stewart
Brand, author of a quasi-official book on the MediaLab, has commented
,"The Lab was not there for the artists. The artists were there for
the Lab. Their job was to supplement the scientists and engineers in three
important ways: They were to be cognitive pioneers. They were to ensure that
all demos were done with art - that is, presentational craft. And they were to
keep things culturally innovative. Having real artists around was supposed to
infect the place with quality, which it did." [36]
A new generation of researchers may be forging more
integral fusions between the aesthetic, the technical, and the perceptual than
Brand suggests. Ishii's "Tangible Media" group designs 3-D and spatial
interfaces that border on the kind of sensory environments often found in the
work of the best media artists. Importantly, cultural traditions, such as the
abacus from his own Japanese upbringing, are considered in defining the
affordances for effective human-machine communication. Maeda, a graphic artist,
may be the first of a new, younger breed of artist-engineers. Carrying forward
Cooper's Visible Language Workshop in a research program on "aesthetics
and computation", Maeda's stated aim is the "true melding of the
artistic sensibility with that of the engineer in a single person" [37].
2. Networks
Research networks
European Union's I3 - Intelligent Information Networks (Esprit long-term research)
ERENA, eSCAPE
www.i3.org
escape.lancs.ac.uk/index.html
www.nada.kth.se/erena
This program of the EU finances over a dozen
multi-national, interdisciplinary research networks, organized around three
themes: experimental school environments, inhabited information spaces, and
connected community [38]. Designers of all types --
industrial, graphic, product -- play a central role in all these networks. In
the present context, we note the programs of two networks
which set out to work closely with the electronic and media art
community as equal partners in their research.
The eSCAPE - electronic landscape - network
addresses the difficult problem of inter-communication between virtual
environments, particularly those using quick-maturing spatial and immersive
interaction techniques. Since this is a field in which artists have been
intensively active since the 1960s, the computer scientists directing the
project sought to draw on this rich fund of past work for models and
inspiration. Partners include the ZKM, GMD, and scientific partners in Sweden
and England. A third component in the mix is ethnography, at two levels so far:
first, studies of users in heavily mediated settings (traffic, ambulance
control) to derive new design principles; second, studies of users of
interactive art works (field work done at ZKM) to gain new understanding of the
complex interplay between cognitive, physical, and sensory experience.
Interlinkages between these three components
(technology development, art, social science) have so far been nascent.
According to the EC officials responsible, the coupling is justified in part
because the aim in this research is to look very far forward, to better
"understand how information and communication start making a difference
when they're embedded in a real context". Thus, it is important to
"forget about virtual environments and trying to fit people into some
artificial world... how can we help people in their everyday environment, and
integrate technology into this?" [39]
The Erena network shares much of the philosophy of
eSCAPE, specifically addressing a set of "arenas" which traditionally
have been considered cultural: performing arts, galleries and museums,
broadcasting. One project combines a telecommunications firm (BT), computer
scientists, a television producer, and performing artists to create an
interlinked live broadcast + 3D internet system
("inhabited TV"). Another pushes the limits of synthetic actors in
computer animation.
Civil society focus
European Cultural Backbone
The concept of a continent-wide "social, cultural and technical
infrastructure" of independent media centres, research facilities,
newsletters, and online forums developed through the
1990s, supported by the cultural program of the Council of Europe. As Marleen
Stikker explains the concept:
"Sustaining the public sphere is an essential factor in
fostering an innovative European media culture. This means providing
participatory public access to networks and media tools, and privileging public
content, by developing the digital equivalent of public libraries and museums,
as distinct from privately owned databases and networks.
For an effective exchange of expertise and
training, an open, online communication environment is required. Other means of
distributing information and knowledge, including publications, newsletters and
workshops, should also be developed. Such facilities must cater to the
multilingual reality of Europe through the provision of adequate software,
design and translation. To be effective, culture as much as science requires
its domains of primary research, which needs to be supported by appropriate
environments and resources (e.g., independent research laboratories for media
art)" [40].
The CD-ROM and web site Hybrid Media Lounge is a self-described "interactive
visual representation of European Network Culture". The first four menu
sections (hard data, soft data, context, and network) each provide a different
representation of the resources available, interests, and linkages between
nodes.
Art production focus
ANAT -
Australia Network for Art and Technology
www.anat.org.au
ANAT presents a clear model of a mainly virtual
structure whose role is to "advocate, support and promote the arts and
artists in the interaction between art, science and technology". Founded
in 1985, and supported by Australia Council, it offers annual "summer
schools" for working artists, support for international travel and
exposure, critical dialogues, and funding for projects and residencies. The
vision of its director is "not to build edifices to new media art practice
but rather in building mechanisms, where new media art practice is included
in exhibition, performance, literature based practices." [41] ANAT has
organized its flagship summer school around scientific topics of growing
critical concern to artists, like biotechnology and artificial life. From this
intensive, several-week session were born several artists' projects
which now continue in scientific labs. Financial support for this
subsequent phase, now underway, comes from public sources concerned with
the promotion of science awareness.
3. Projects and targeted funding schemes
Art-Science award schemes
Sci-ART - Wellcome Trust, Gulbenkian
Foundation, NESTA - National Endowment for
Science, Technology and the Arts.
London, UK
www.wellcome.ac.uk
www.nesta.org.uk
The Wellcome
Trust is one of the largest bio-medical research foundations. Hoping to widen
public understanding of science, particularly biomedical, it launched a
competitive scheme in 1997 to bring together the "often separated cultural
spheres of science and art". The aim is to match professional artists with
scientists, working on common projects that "grew out of a genuinely
reciprocal inspiration". Two rounds of awards have now been given, some six
per year, each averaging about US$25,000. The varied formulas for collaboration
in these pairings present a panorama of the dynamics of art-science
cooperation: from the artist as a medical subject for a scientific group
working on the relationship between "looking" and
"reproducing"; to the whimsical creation of a new fashion line
derived pictorially from an interpretation of the dynamics of embryonic
development. [42]
The
Gulbenkian Foundation, UK Branch, has run a granting program since 1997 called
"The Two Cultures - Arts and Science". Based on this experience, the
Foundation is preparing a major publication about Science and the Arts -- what
it calls the first 'map of the world' of this vast territory -- to appear in
the Fall of 1999. Commenting on the findings, the
foundation reports "Many people take the view initially that the creative
processes in each discipline are fundamentally the same, but that is not what
our current research reveals. Indeed, stepping from one 'planet' to the next
takes some adjusting to and sometimes the views of each on the other (artists
on science, scientists on art) are curiously out of kilter. The book should
reveal many new opportunities for artists but also explain to scientists the
value of seeing the world from the peculiar tangential viewpoint of the
artist". [43]
In 1999, a
consortium was established between the Wellcome and Gulbenkian Foundations,
plus the Arts Council of England and the newly formed National Endowment for
Science, Technology and the Arts (NESTA). A new program is planned in which
NESTA provides funding for "follow-on" stages of projects begun
through science-art collaborations. Details have not been announced, but based
on the published charter of NESTA, the program is likely to include
investment for commercialization of intellectual property, touring of
exhibitions or performances, and publication.
Hybrid Workspace - the Temporary Media Lab Model
www.medialounge.net (see CD-ROM: Hybrid Media Lounge)
Hybrid
Workspace was a summer-long project in 1997 produced by the Documenta world art
exhibition in Kassel, Germany. It was conceived as a communication experiment,
highlighting the creative process and untapped potential of digital media, more
than the display of fixed aesthetic works. This entailed setting up a temporary
media space, with equipment to produce a range of multimedia, web-broadcast,
pamphlets, television and radio programs. Its planners also pointed to the
redundancy of many current conferences and professional meetings, particularly
where proceedings are instantly available over the internet; face-to-face
meetings in such settings rarely progress to the stage of detailed, practical
exchange. The idea of
setting up a hybrid workspace was to make possible a series of topical work
sessions, each led by a different group/collective. Fifteen such groups
consisting of artists, activists, critics and their guests presented their
work, produced new concepts and started campaigns that developed and continued
long after the gathering. This CD-ROM archive documents the rich and diverse
results of the Hybrid Workspace.
The model has
been considered a useful organizational innovation, and a follow-on project is
now in preparation in Helsinki for the newly opened national museum's digital
media centre. This model presents an interesting approach toward knitting
together, in a production context, the interests of local groups, new entrants
to the field of media-production, and a diverse range of international/visiting
theorists and practitioners. We return to its potential for longer term,
systemic impact in IT capacity development in a later section.
Summary Table
This table displays the name of each institution, network or project surveyed,
against columns defined as follows:
1. date founded
2. mode of operation: S = on Site, i.e., at the main studio-lab location;
D = Distributed, i.e., multiple sites cooperating;
T = Touring, i.e., works often are co-commissioned and toured to other centres
3. typical manner of teamwork (pairings of artist/scientist, small teams,
common platform)
4. "lead" tendency: a left-pointing arrow means mainly "art-driven";
a right-pointing arrow means mainly "science-technology driven"
The table is indicative only, meant to provide an overview of the very
distinctive models presented by the cases selected.
3. Discussion Themes
Instruments and the Imagination
One fruitful way to think
historically about the kind of techno-cultural creativity manifest in the
studio-laboratories just surveyed is to recall the role that instruments have
long played on the margins between science, art, magic, entertainment, and
philosophy. Citing science historian Thomas Hankins: "To understand actual
scientific practice, we have to understand instruments, not only how they are
constructed, but also how they are used, and more important, how they are
regarded". Hankins does just this in a book about curious, mostly
forgotten instruments from the 18th and 19th centuries --
ocular harpsichords, animal automata, stereoscopes and magic lanterns -- which oscillate between demonstration, entertainment, magic,
and measurement. The crucial point that Hankins makes is that even such
"objective" devices as the telescope, microscope or air pump were the
subjects of controversy in their time, just as the photograph was in the
19th century, and just as digital processing of images today makes the
veracity of any picture questionable. "We choose", Hankins writes, "how to represent the natural world to
ourselves"[44]. Instruments are a way of "questioning nature", a
"language of inquiry"; and the historical examples retold with verve
in Hankins' book suggest a way of considering today's investigators -- artists
and scientists -- in the spirit of those "natural philosophers",
whose "instruments move easily between natural science and other human
activity".
Media technology as boundary object
A striking set of examples where
today's investigators specifically designate technology as a shared medium of
joint exploration is available from the Xerox PARC artist-scientist pairings.
In each case a medium was taken as the point of departure, and was regarded
and employed in contrasting ways by scientist and artist respectively.
The PARC commentators refer to the medium (or
"experimental document" in their corporate jargon) as a common
language, but a more apt metaphor is perhaps that of the boundary object. This
is a term introduced by sociologist of science S. Leigh Star, describing
"scientific objects which both inhabit several intersecting social worlds
and satisfy the informational requirements for both of them" [45]. Through
a radically opposed dialogue about the STM (scanning tunneling microscope), one
PARC researcher recounts, a new line of questioning grew about how the senses
are extended through instruments:
"Are there untapped sensory channels for interacting with the unseeable
which enable powerful conceptualization?"[31]
Similar conceptualizations of the
sensorium characterized the collaborations during the 1960s between AT&T
Bell Labs researchers in vision and perception, and the varied artists --
musicians and filmmakers, mainly -- who worked with them. In the words of
vision researcher Bela Julesz: "Visual perception is historically a common
area for both the artist and scientist, a common intersection where there is no
gap or artificial bridge. The same kinds of things can be artistic or
scientific; the only difference is the motivation... the
artist is searching for an artistic truth, an intimate truth he wants to
convey, and I am searching for scientific truth, which is testable and very
defined" [17]. The activities of these teams tended to focus around the
digital computer, which was constructed as a tool for understanding human
perception, and at the same time, as a potential new medium for artistic
expression. Bell researchers tended, in the main, to locate the artistic added
value in the unique ways in which artists could train themselves to perceive,
and thereby, shape, images or sounds. John Pierce, director of the
Communication Sciences Division, acknowledged that in seeking to program
computers to produce intelligible speech, "one of the most important human
faculties is that of being able to judge qualities even when we cannot measure
them. Here the ear of the trained musician may be as valuable as the digital
computer."
Today, similar cases abound; entire
labs, like the Chicago Electronic Visualization Laboratory, operate on the
basis of the heterogeneous shaping of a common medium which
can prod new disciplinary insights. In some cases, the "uncertainty"
of the object's identity has declined over time, becoming, much as Hankins
described some of the pre-scientific instruments of "natural magic",
more or less stabilized at one or another of its poles of attraction. Such, it
could be argued, is the case of scientific visualization at the EVL: to the
extent that the aesthetic shaping of the immersive simulations developed there
is confined to the usual "non-essential" parameters of color, form,
or texture, the object has settled at the scientific side of the margin.
As we have previously seen, one area
where the boundaries today are notably blurred is the field of "artificial
life", attracting artists with interests and background in biology and
computation to create evolutionary digital systems. Broadly speaking, ideas
from genetics have begun to shape the way many computational artists conceive
the inter-relationships between their formal materials. In the simplest
manner, style can be characterized in terms of traits, and as objects --
drawings, or melodies, for example -- replicate, they change form according to
programmed rules of reproduction and mutation. Artificial life extends
evolutionary metaphors even further, in the work of the team Christa Sommerer
and Laurent Mignonneau, who develop artificial-life installation works as
researchers at ATR corporation in Tokyo. They build
imaginary eco-systems which evolve and mutate as artificial virtual worlds,
but which can also react to observers' gestures.
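To make this concrete, here is a minimal sketch in Python of the
replication-and-mutation scheme just described; the melody representation,
mutation rule, and fitness function are all invented for illustration, and
are not drawn from any particular artist's system. A melody's "traits" are
its pitches, and the fitter half of each generation replicates with variation.

    import random

    def mutate(melody, rate=0.2):
        # Copy a melody, randomly transposing some notes: the "rule of
        # reproduction and mutation" under which forms change as they replicate.
        return [p + random.choice([-2, -1, 1, 2]) if random.random() < rate
                else p for p in melody]

    def evolve(population, fitness, generations=50):
        # Each generation, the fitter half survives and replicates with variation.
        for _ in range(generations):
            population.sort(key=fitness, reverse=True)
            survivors = population[:len(population) // 2]
            population = survivors + [mutate(m) for m in survivors]
        return population

    # Toy fitness function: prefer melodies hovering near middle C (MIDI 60).
    fitness = lambda melody: -sum(abs(p - 60) for p in melody)
    seed = [[random.randint(48, 72) for _ in range(8)] for _ in range(10)]
    print(evolve(seed, fitness)[0])

Replacing the toy fitness function with a measure derived from an observer's
gestures would yield the kind of reactive, open-ended eco-system Sommerer and
Mignonneau describe.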
A scientific colleague at the same
lab, computational biologist Tom Ray, illustrates well the instability of
borders between artificial-life artists and scientists, when he calls for a
"new aesthetics", based on "free evolution in the digital
medium". Interestingly, he argues this evolution need not be
"inherently visual or auditory in nature, and would not be recognized as
conventional artistic creations". He seems to be describing a kind of
computational beauty inherent to the digital medium, with "richness
comparable to what [evolution] has expressed in the organic medium".[46]
The Musical Instrument as Interface
Metaphor
There is one special case of the
projection of human imagination through skilled instrumental performance:
musical instruments have long served as metaphor and analytical model for
philosophers (think of Heraclitus or Confucius), mathematicians (Pythagoras or
Galileo), and in our own time, computer scientists and interface designers.
From the earliest years of personal
computing, a controversy has simmered about the trade-offs in designing systems
that are easy-to-use but quite general in their scope, or more challenging to
master, but with greater depth and power. Alan Kay, credited with conceiving
the personal computer as a portable "Dynabook" (and later helping
Xerox to implement one of the first "personal workstations"), was also
influential in promoting the notion of computer use as a medium for creative
thought. In their 1977 paper on "personal dynamic media", Kay and
Goldberg [47] explained their design goals as wanting to combine both
the broad, standard-model usability of inflexibly mass-produced items like cars
and TV sets, with the plastic, moldable, open-endedness of tangible media like
paper or clay. The key, Kay argued in 1977, is learning to use a high-level
programming language, inspired by Seymour Papert's artistic approach towards
teaching children to program.
In the meantime, the trajectory that
actually became locked-in once personal computing took off in the 1980s is
based not on a style of programming, but rather on a graphical means of
manipulating and selecting surface icons -- the ubiquitous "graphical user
interface". Far from Kay's subtle, even dialectical conception of fluency
within a dynamic medium, most computer use could be characterized as brittle,
fault-intolerant, and closely coupled with proprietary software
"solutions" -- packaged applications -- that offer only minimal room
for user-programmed extensions or variation.
In a forthcoming book about Douglas
Engelbart and his Palo Alto research group, Bardini sharply pinpoints the
actual losses entailed in the "lock-in" of the PC in its present
form. [48] Early researchers, like Engelbart during the 1960s, thought of the
user as acquiring progressively more powerful kinesthetic and motor skills; in
effect, operating interfaces with greater instrumental virtuosity to keep pace
with the mental scope and expressive boundaries set by the user's intellect.
The idea of learning to "play" a piano-like key-set, in order to
navigate conceptually through information space, may seem like science fiction;
but this is what Engelbart himself built and mastered, and arguably, its
originality is such that it deserves to be considered a more profound
interaction paradigm than the "mouse" with which he is actually
credited.
Alan Kay, meanwhile, who is himself a
skilled musician, has tended to be ambivalent about how literally to base human
computer interaction on a metaphor of musicianship. Younger theorists already
describe "interface" as the characteristic art form of the 21st
century, with much the same kind of historical determinism driving their
arguments that pertained during Henri Bergson's time when cinema was widely
welcomed as the 20th century's defining art form [49].
To have a glimpse today at what this prediction might look like in 10 to 20
years, it is likely more suggestive to extrapolate from the more speculative,
3D or installation-based creation of current artists and design engineers, than
to look at the incremental variations coming from software vendors. Much of
this work begins with something like a musical notion of the machine interface,
using bodily motions, breathing, movement, gesture to shape the art-work's
responses in a way that is, at least in principle, amenable to personal nuance.
Turning back towards what might be
dubbed the more "cognitive" pole of the mind-body continuum, it is
still worth recalling how Kay and Goldberg had envisaged the system design of a
"dynamic personal medium" two decades ago:
"Our design strategy, then, divides the problem.
The burden of system design and specification is transferred to the user. This
approach will only work if we do a very careful and comprehensive job of
providing a general medium of communication which will
allow ordinary users to casually and easily describe their desires for a
specific tool. We must also provide enough already-written general tools so
that a user need not start from scratch for most things she or he may wish to
do".[47]
Creative Users in IT Design and Diffusion
"User innovation" has
become a commonplace term of late, indicating the importance of the user
(customer, client) as a partner in the innovation process. Von Hippel explains
the benefits of turning users into designers as "faster and better and
cheaper learning by using" [50]. Advanced firms, he argues, are changing
the very economics of design, by investing in software-based
application-specific toolkits that "transfer a capability to design truly
novel customized products and services to users". His examples come from
manufacturing (custom-designed circuits and software), and he stresses that the
design tool-kit reduces the iterations and flow back and forth between users
and designers.
Consider these points in a
non-manufacturing case now, the software used by artists to make movies, music,
or multimedia -- all dynamic, time-based expressions which technically
challenge the computer's capacity to synchronize and co-ordinate various kinds
of audio-visual representations. Software applications have been widely
available for some 15-20 years that permit artists to create more-or-less
independently from the system programmers on whom they formerly depended if
they wanted to use computers without learning to program. As a class, software
for animation or music abstracts [51] some aspects of the craft of
movie-making or composition, mechanizing them into modules much like the
"already-written" generic tools Alan Kay thought all users would
likely call on in his SmallTalk system. But what about
support for individual expressiveness, corresponding to the distinctive traits
of an artist's style or signature? Recalling Simon Penny's present-day
concern about artists' practices being re-shaped to conform to the restrictions
of their computer-based tools, it is evident that the ability to design novel
capacities beyond the base mechanisms embedded in common applications remains
elusive.
As has been shown by the successive
diffusion of desktop publishing, image processing, music composing, and now
multimedia/animation software, the distinctive appeal of such programs lies in
the way they facilitate for new classes of users a degree of creativity that
formerly required a specialist's craft training. The issue of boosting the
general user's media fluency is of less interest to this discussion, however,
than to look in greater depth at the way in which new types of creative
possibilities get embedded in software in the first place.
To do this, we will here present a
précis of the results of part of a full case study about the emergence of the
creative user of computer animation. In the mid-1960s, when computers were
completely intractable to all but engineers, the very idea of applying digital
calculation to the intensely artisanal production of animated film was by no
means obvious. A host of contrasting, often conflicting interests existed from
the start of computer graphics, and the earliest encounters between artists,
system designers and programmers reveal a fascinating, and in some ways
instructive story about the conditions under which creative users enter into
productive relationships with designers. Another way of saying this is that
between the 1960s and mid-1980s, the computer itself was constructed as
a medium for making movies, within a wide and sometimes contested zone of
interpretive flexibility, to use the phrase of Dutch sociologist of technology
W. Bijker [52].
Artists as Lead Users of Early
Computer Animation Systems
The base technologies for interactive
computer graphics were largely developed in U.S. military research programs,
often closely aligned with key universities like MIT, and supported by the
Pentagon's aggressive funding of fundamental information processing research.
By the mid-1960s, development of civilian applications was underway as well,
notably in aviation, architecture, scientific communication.
Many of the same organizations also experimented with artists as lead users of
early mainframe animation systems. Broadly speaking, two design approaches
towards computer animation were pursued: picture-driven, and language-based.
The latter specified visual images and their continuity using traditional
textual computer programming languages; they depended on the ability to
describe visual phenomena mathematically. Picture-driven approaches aimed to
assist aspects of the hand-crafted art of animation, permitting the
non-specialist artist to draw and ink the cels serving as key-frames, using the
computer to coordinate the images and calculate the transition (in-between)
images between them [53].
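A minimal sketch of the picture-driven idea, in Python and with invented
coordinates (the production systems were of course far richer than this): the
artist supplies two key-frames as lists of corresponding points, and the
machine calculates the in-between frames by linear interpolation.

    def inbetween(key_a, key_b, n_frames):
        # Yield n_frames frames interpolated between two key-frames, each
        # given as a list of corresponding (x, y) points.
        for i in range(1, n_frames + 1):
            u = i / (n_frames + 1)  # interpolation parameter, 0 < u < 1
            yield [((1 - u) * xa + u * xb, (1 - u) * ya + u * yb)
                   for (xa, ya), (xb, yb) in zip(key_a, key_b)]

    square  = [(0.0, 0.0), (1.0, 0.0), (1.0, 1.0), (0.0, 1.0)]
    diamond = [(0.5, -0.2), (1.2, 0.5), (0.5, 1.2), (-0.2, 0.5)]
    for frame in inbetween(square, diamond, 3):
        print(frame)

The strictly straight-line trajectories such interpolation produces are
precisely the "machinic" quality discussed below in connection with the film
Hunger.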
The study looks at similarities and
differences between the way in which this field
developed in various parts of North America; in particular, close attention is
being given to the conditions of innovation which led to an unusually dense
concentration of firms, researchers, and electronic media artists in Canada.
Beginning in the mid-1960s, researchers at the National Research Council (NRC)
and the National Film Board (NFB) -- both federally-funded
agencies -- began to investigate the potential for using computers in
film-making. The approaches taken, in each case, differ markedly from those of
the American research sites. In both cases, the Canadian investigators were
scientific and technical followers, not leaders, and they had very restricted
budgets for equipment and personnel. They began their research by intensively
studying everything the Americans had done to date.
To start with, the NRC researchers
chose film-making as an application domain through
which to study the problems of the man-machine interface. Besides computer
animation, they also began an equally important program in computer-assisted
music composition. Their goal was general understanding, ultimately to better
support the use of interactive computing in science and engineering. But it was
by no means irrelevant to their choice that the NRC was already a kind of
studio-laboratory, supporting in the same Radio and Electrical Engineering
department the groundbreaking research of a physicist-cum-composer on
electronic musical instruments. By modeling the user as a creative artist, the
NRC researchers arrived at an outlook which, at the time of its formulation in
1969, was notably different from that of the U.S. corporate or university
labs [54]:
"Up to this point, it has been assumed that the
best possible way to design the computer would be to make it transparent. That
is to make it look to the user as though it were not even present, so whatever
idea occurred to him, it could be rapidly formed into a final creation. This is
not necessarily true."
Constraints, argued researcher Ken Pulfer, are
crucial to the creative process, giving examples such as conventions for
drawing in architecture, or scales and notational conventions in music. By
supporting the use of such conventions, the user is given a more meaningful
starting point than the abstract 'blank slate' of total generality.
"Most computer languages now available ...are
unsatisfactory either because they are mathematically oriented, or because they
result in cumbersome and slow programs. As a result we are usually left with
the situation where an artist-programmer team is formed, the artist uses the
system without having intimate control over the functions of the blocks he
uses, and the programmer builds blocks without fully appreciating the needs of
the artists."
Pulfer and his team chose therefore to develop a
system in which:
"at no time [was] it
necessary for the user to learn how to program the computer, or in fact even to
know how to operate it other than through making some choices from names
presented to him on the screen... he can proceed to learn the 'language' by
trial and error."
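As a rough illustration of this interaction style -- a minimal sketch in
Python with invented command names, not a reconstruction of the NRC system --
the user operates entirely by choosing from names presented on the screen, and
unrecognized choices simply do nothing, inviting risk-free trial and error.

    actions = {
        "draw":    lambda scene: scene + ["stroke"],   # add a stroke
        "erase":   lambda scene: scene[:-1],           # remove the last stroke
        "preview": lambda scene: (print("frame:", scene), scene)[1],
    }

    scene = []
    while True:
        choice = input("choose one of %s (or 'quit'): " % sorted(actions))
        if choice == "quit":
            break
        if choice in actions:               # unknown names are simply ignored,
            scene = actions[choice](scene)  # so the user can safely experiment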
Crucial to the implementation of this design was the
research on the first graphical user interface, just published in 1968 by
Douglas Engelbart [55] -- interestingly, as a system for "augmenting
the human intellect". The NRC team considered the results produced by the
U.S. "artist-programmer" teams to lack validity for their purposes;
for this reason, they chose to work only with professional filmmakers (or
composers) who could teach them something about movie-making (or music
composition).
Technical Innovation at the Canadian
National Film Board
The National Film Board of Canada,
founded in 1939 as the Government Film Office, was home to a world famous
tradition in documentary film and experimental animation. A strong technical
research and cooperation department maintained a watch on the global
development of motion picture technology, and this group too had a
well-established tradition of technical innovation. In 1951, under the
direction of the award-winning animator Norman McLaren, it had produced the
first stereoscopic animated film, presented to stunned crowds at the Festival
of Britain; during the mid-1960s, another team of filmmakers and technicians
developed a unique multi-screen projection and camera system for the Labyrinth
pavilion which was soon thereafter transformed and commercialized as IMAX
wide-screen format. An electrical engineer who had previously worked in the
telecommunications industry on the application of the computer to digital
signal switching brought a disciplined benchmarking
approach to the analysis of the computer as a tool for motion pictures. This
quickly produced an intensive learning program in which the NFB received
visits from, and in most cases pursued in-depth dialogues with, all of the key
U.S. players; it also conducted tests using borrowed equipment.
Within the strong technical
culture of the Film Board, there was stiff resistance to "solutions"
from outside experts being applied to creative problems. (Indeed, an early
proposal from AT&T Bell Labs to "solve" an animation need for
special effects was flatly refused.) This culture was strongly shaped by the
model of McLaren, whose creative vision was sharply opposed to the
assembly-line factory approach towards commercial animation typified by Disney
Studios, and who in 1948 summarized his own method as a close interpenetration
of idea and technique [56].
With this disposition, the Film Board animators of the 1960s
looked with some skepticism at the results of the art + technology experiments
coming from such well-resourced U.S. centres as MIT, Bell Labs, IBM. The
computer was imagined richly as a creative, administrative, and
mechanical-control resource, but always in terms of a very concrete set of
ongoing work practices.
Space does not here permit a
comparable outline of these American studio laboratories. Suffice it to say
that in these settings, the computer was mainly a scientific instrument, an aid
to studying perception, or a modeling tool for the production of simulations.
Links with artists tended to be far more "experimental", and it seems
that where aesthetic considerations were important, these tended to equate
artistic creation with the discovery of new forms of expression (rather than
supporting a more known range of what users might already want to create). Only
in a few cases did the scientific investigator think reflectively about what
the user brought to the computer as a potential contributor to system design.
It must be remembered that computing
in the late 1960s, was formidably expensive, and software development a
labor-intensive enterprise beyond nearly all non-technical users. After
developing an internal knowledge base about the technical as well as aesthetic
possibilities of computer animation, the NFB decided to look outside for
compatible partners with which it could enter the field through
"real" production, not just technical tests. This was arranged to
take place with the National Research Council's system, which by 1970 had
developed further by implementing a system for keyframe interpolation, the first
which allowed the artist to communicate graphically
with the computer. [57] (This accomplishment was recognized, some 25 years
later, with a Scientific and Technical Academy Award).
The NFB rigorously evaluated the NRC
system before the production period began; a series of improvements were made,
all geared towards making it conform more closely to the mental models of a
creative animator. These exchanges were documented, and a pattern of mutual
accommodation developed between the NRC researchers (a team of three) and the
NFB's French Animation studio. A set of criteria outlined what kind of
film the NFB should aim to make: it should be suited to the system's quite
limited capacities, but also chosen to push the medium enough to
yield "generalizable" results applicable beyond the single instance.
The NFB producers found a suitable
candidate in Peter Foldes, who had previously proposed a full animation
treatment of a scenario that required extensive use of metamorphosis between
shapes. The artist would spend a few weeks at a time working with the system in
Ottawa; in the intervening periods, improvements were made based on what had
been learned in production. The film that was released in 1973, Hunger,
was recognized immediately as an artistically convincing character animation;
it was nominated for an Academy Award and won numerous festival prizes.
The accomplishment of Hunger in
matching an artist's vision to the still very intractable computer of the day
can be interpreted in a number of ways. For the present purposes, it will
suffice to note that the technique of linear keyframe interpolation was still
far too primitive and mechanical to be used for what one critic has called the
"anthropomorphic" style of the big-budget feature animation studios
like Disney. While it promised to save costs by automating the intensive human
labor of the artist drawing the intermediate frames, given its technical
awkwardness, it could only be put to creative use by an artist willing to shape
his or her vision to its still rather mechanical constraints. Indeed, this
"machinic" interpolation, which in other contexts would have been a
defect, gave the film its expressive signature, and the impact of the film
proved to be far reaching. It proved that convincing artistic
films could be produced by computer, at a time when Hollywood was
only using it for title sequences or special effects. As well, it had a major
influence in the technical community, attracting, especially in Canada, young
people to the field of computer engineering precisely to further the
possibilities of artistic animation.
Summarizing the lesson of this early
episode of productive collaboration between two studio-laboratories: both were
small, under-resourced, and unable to make further progress without the
contributions of the other. None of the researchers identified strongly with
(nor necessarily even knew) the way things "ought to be done" in
computing. From the outset, both had something of a hybrid character -- the
NFB, a cultural organization with a strong technical research group, skilled at
absorbing and re-purposing new techniques; the NRC, a government research
institute with an intellectual work culture friendly to artistic practice. Many
of the individuals were cognitively open-minded and sympathetic to an
approach that regards creativity as:
"a process
involv[ing] trial and error, with the creator modifying the mental image of his
creation as it takes place. He interacts with his creative medium...in a
conversational way, learning the 'language' in which he can express himself as
he goes along" [54]
This "heuristic" approach to computing was
poles apart from the comparable, extremely influential theorization of
computer-supported creativity by Negroponte in terms of artificial intelligence.
[58]
Foldes, whom we can consider the
"lead user" of the NRC system, realized how unusual his
opportunity was when he later commented:
Disons que l'ordinateur américain a des yeux et l'ordinateur canadien une
main. Les Américains ont des
impératifs commerciaux, un souci de rentabilité. Les
Canadiens du CNRS sont beaucoup plus désintéressés et subventionnent la
recherche pure. [59]
One could say
the American computer has eyes, and the Canadian computer, a hand. The
Americans have commercial pressures, a concern for profitability. The Canadians
at the NRC are much more disinterested, and finance pure research.
Constructing Canadian Animation Culture
In fact, the long-term outcomes of
the early Canadian scenes of innovation in computer graphics and animation
proved to be economically significant. Nearly all the successful producers of
animation software, whose products are used around the world in the animation,
multimedia, and CAD industries, were descended from or assisted by the people,
ideas, and systems jointly formed at the NFB and NRC. An ongoing study traces the
diffusion of ideas, innovation, systems, and skills up to the foundation of
these companies.
Early results support an
interpretation that the Canadian innovators shared a linked set of values about
the interplay between creators and engineers, or what art historian Caroline
Jones has called the "machine in the studio". It is tempting to think
of these values in terms of what Paul Edwards, writing about computers and the
"politics of discourse" during the Cold War, has called "the
closed world" discourse. This term for Edwards signifies a:
"linked ensemble of metaphors, practices, institutions and
technologies, elaborated over time according to an internal logic and organized
around the Foucaultian support of the electronic digital computer". [60]
Canada is widely known as a communication-saturated
state, and the homeland of Marshall McLuhan. As political scientist Arthur
Kroker puts it: "Canada's principal contribution
to North American thought consists of a highly original, comprehensive, and
eloquent discourse on technology" [61]. One aspect of this discourse,
previously mentioned, was McLuhan's aphoristic, elliptical way of thinking
about new media of communication as art forms. Initially, new media are
invariably understood in terms of old (the message is the old medium);
the new medium is only "freed" from its reliance on the old through
creative -- artistic -- experimentation (the new medium is the message).
McLuhan's deterministic way of compelling media along their "destiny"
toward "maximal" realization can be maddening to some, but it should
not mask his basic insight about how communication media reveal their
possibilities through use. Can this discourse about media innovation be linked,
as Edwards does convincingly for the computer in relation to Cold-War politics,
to the "heuristic" system development approach taken by NRC and NFB
innovators?
What can be said at this stage with
certainty is that different cultural constructions of the computer as a
creative medium help to shape different development paths. Canada's
"success story" in computer animation shows how niche strengths in
high-tech industry can grow in diverse settings, and that the way user knowledge
is expressed and cultivated with and through technical communities can play a
key role in seeding and nurturing that growth.
Beyond
the Access Paradigm
The preceding section demonstrated
how creative users linked to the innovation process over a period of several
decades contributed not only to cultural enrichment in the uses of technology,
but also to the growth of an important sector of a regional information
economy. From the standpoint of the worsening inequities between the
information haves and have-nots, showing how a strong cultural informatics
capacity grew up at the figurative doorstep of Hollywood might not at first
glance seem all that pertinent. However, there is also a long tradition of
analyzing Canada as a "borderline case" -- the "hidden ground
for the big powers", as McLuhan
characteristically quipped [62] -- with elements of both "first" and
"third" world countries.
Recasting the Canadian case slightly,
it can be seen as one pathway to the building of local cultural distinctiveness
in a situated set of informational practices. "Situated", in this
context, leads us to consider the challenge of cultural diversity in the age of
globalization. Much culturalist thought on this topic is still stuck in a
"mass-media" mindset, like post-colonial theorist Edward Said who has
railed:
"The threat to independence in the late
twentieth century from the new electronics could be greater than was
colonialism. The new media have the power to penetrate more deeply into a
'receiving' culture than any previous manifestation of Western
technology." (quoted in [63])
To be sure, corporate concentration in the media and
entertainment fields continues its rampant increase. As the Economist magazine
observed tartly: "What will the digital revolution do to the entertainment
industry's emerging global oligopoly? Probably boost it" [64].
Said obviously overlooks the myriad
ways new media have been used by opposition groups, NGOs, identity-formations
of all sorts; it is striking indeed that he appears to grant no power to the
"backchannels" available through digital media. This movement goes
alongside the fusion of internet, multimedia and
computer games with "the entertainment economy", and so far, it is
anyone's guess the degree to which pessimistic Frankfurt-School type
predictions of imperialist cultural hegemony will prevail.
Cultural policy makers have not, for
the most part, helped matters much by their willingness to concede a limited
role for culture as compensation against the loss of national identity through
economic globalization. This lack of vision and advocacy often gets translated
into a heritage-based conception of identity, grounded in the irreproachable
values of restoration, preservation, and conservation. For those approaching
cultural development from a more active technological perspective, policies
emphasizing heritage priorities channel inordinate resources towards
information projects concerned with inventory management, data retrieval, and
classification standards. Unquestionably, the librarian's, curator's, or
conservator's professional skills are crucial to delivering effective access
to cultural heritage. But these objectives need not be in conflict with
broader issues of creativity and innovation in the cultural use of
digital media. As Stuart Hall has said, "identity is not in the past to be
found, but in the future to be constructed" (quoted in
[65]).
In a recent book about information
technology for sustainable development, Robin Mansell stresses the role of
information cultures in shaping "people's ideas about how they should be
concerned with media, technologies, the advantages/or not of information
access, tele-learning, telework" [66]. Drawing on the work of Ursula
Maier-Rabler, an Austrian scholar, she lists four such cultures, each followed
here by a sketch of the values implied by its label:
1. Protestant-enlightened information culture (U.S.A.):
competitiveness, transparency, ICTs a basic instrument of economic action
2. Social democratic-liberal information culture (Scandinavia):
enhanced knowledge about civil society is beneficial to individuals, and ICTs
are central to political emancipation
3. Catholic-feudal information culture:
information is hierarchically organized, and transmitted from the
"info-rich" to others; no consensus on individual information rights
4. Centralist-socialist information culture (former Eastern bloc):
precise information gathered and fed from the periphery to central
organizations
As Mansell notes, none of these is a pure form. How
they are configured is a factor in determining "whether there will be a
demand for access to information via advanced Information and Communication
Technologies".
As we have been developing in
different ways throughout this report, another important information culture
might be identified, defined less in terms of political or ideological
alignments, than its tactical grasp of the pragmatics of media. We will call
this, partly tongue-in-cheek, the "art-hacker" information culture.
This culture rejects any rigid separation of form and content; communication is
never passive reception, but invariably entails some more or less actively
expressed response. Response is not confined, furthermore, to the pre-figured
options that might shape a system. If the occasion demands it, new extensions
can always be added to make it possible to think "outside the box" or
"jam the channels". A certain parodistic reflexivity prevails in this
ethos, as the adbusters or culture jammers play with and undermine the
communication flows of their opponents.
On a more theoretical level, this
information culture has a deep suspicion of what Berkeley linguist George
Lakoff identifies as "the conduit metaphor", a deeply engrained
linguistic habit in which "ideas are taken as objects and thought is taken
as the manipulation of objects [and] that memory is storageIdeas are objects
that you can put into words, so that language is a container for ideas, and you
send ideas in words over a conduit, a channel of communication to someone else
who extracts the ideas from the words".[67] The conduit metaphor for
communication, like the "linear model" of innovation previously
critiqued, is deficient because of its inability to cope with complex systems.
The metaphor is widespread and pervasive, contributing to the common way in
which "content" or "content services" are seen to be made
of separate stuff from software and hardware, to which people are given
"access" or not, through more or less transparent or affordable interfaces
or channels.
The art-hacker culture pervades the
practices of the various studio-laboratories already discussed; here we wish to
consider the way it drives a particular approach to socio-technical
development. Two main aspects typify this approach: first, a preference for the
"open source" philosophy of development. This ethos, which stems in
part from the earliest hacker culture of the 1960s, has now acquired serious
corporate respectability as a credible alternative to proprietary, hierarchically
managed development of software and hardware systems. In place of hierarchy,
many artisans contribute components within open, standards-defined frameworks,
freely sharing improvements and benefiting jointly from the collective rising
tide. The second aspect of this culture is a style of heterogeneous teamwork,
typically assembled around temporary, socially-specific
projects or campaigns. Geert Lovink, the Dutch media theorist and co-organizer
of Hybrid Workspace at Documenta, formulates a framework for cooperative action
as:
"a radical pragmatic
coalition of intellectual and artistic forces-- forces that, so far, have been
working in different directions. It is time for dialogue and confrontation
between media activists, electronic artists, cultural studies scholars,
designers and programmers, media theorists, journalists, those who work in
fashion, pop culture, visual arts, theatre and architecture."[63]
The tactical media orientation uses all modes of
media, old and new, and in particular looks for ways of combining the virtual
world of digital media with community based media practices. Lovink and
colleagues have been closely aligned as technical and creative advisors to the
Soros Foundation, setting up internet access centres, media art research labs,
and training programs in the former Eastern bloc. They are now turning their
attention to Asia, developing links in China, India, and Indonesia.
An apparent spinoff of these
developing links between the Euro-socialist-art-hacker information culture and
the developing world is the recently announced Sarai -- the first independent
media culture centre in India. Sarai is a joint initiative of the Centre for
the Study of Developing Societies, Delhi, and the Raqs Media Collective, Delhi,
in collaboration with the Society for Old & New Media (the Waag), Amsterdam.
Sarai is conceived:
1. As a public access driven, de-centralized constellation of a variety of
research, creative practice and education initiatives in all aspects of the
new and old media landscape.
2. As an alive and integral part of the new urban culture and emerging civic
consciousness of the city of Delhi/New Delhi, and as a major player in the
shaping of the urban culture and political imagination of the city in the
future.
3. As a place where young and old people, academics, scholars, activists,
technicians and artists can interact amongst themselves and with others
through old and new media, through a variety of programs that are designed
primarily to be low-cost or no-cost. This includes terminals for free public
Internet access, ISP services, offline/dial-up connectivity for those who
cannot afford personal internet accounts, publication, outreach and education
programs, and a variety of open public events.
4. As a hub of networking amongst new/old media activists, a centre for
creating and exhibiting original work, and as a clearing house for innovative
ideas in the South Asian/Asian region.
5. As an equal partner of new media initiatives at an international level, and
as a contributor to the content of emerging/new media cultures across the
world. [68]
Sarai is still in the earliest stages of
establishment. As a model, it suggests a possible structural approach towards wider
development of active media and information capabilities. The stress on local
self-direction, combined with globally sophisticated cultural partnerships,
bodes well for its future. Some possible pitfalls can be anticipated: too heavy
a reliance, for example, on what worked well for the European partner. It is
likely, for instance, that training programmers to think about creative users,
or training artists to program, will require a completely different approach
in the Indian context than has worked in Western or Eastern Europe.
Cultural
Critique, Reflexivity and Innovation
In the main, humanists have had
considerably less to do with the kind of co-operative development of
technologies undertaken between artists, engineers and scientists. One
thoughtful commentator on the usual interests of humanists in information
technology is Phoebe Sengers, a rare case of a computer scientist with an
equal background in cultural theory [70]. Her own original contribution is a
widened conception of what she terms "cultural informatics",
"a practice of
technical development that includes a deep understanding of the relationship
between computer science research and broader culture. This means understanding
computing as a historical, cultural phenomenon, including, for example,
analysis of metaphors that shape technical approaches, discovering prejudices
in the Heideggerian sense that cause us to look at problems in one way to the
exclusion of others, finding unconsciously held philosophical difficulties that
leak their way into technical problems. These insights are used as a basis to
change underlying metaphors, prejudices, philosophy, resulting in changes in
technology. Cultural informatics integrates a broad humanist perspective with
concrete interventions in technology and technical practices."
As a term in English, "informatics" is
preferred by some scholars to designate the disciplines usually called
"computer science or engineering". The preference is not incidental.
Nor is it without adherents from the computer science community too, and for
similar reasons. Yale professor David Gelernter has called for a complete
re-thinking of the training of "computer people", though emphasizing
not cultural theory but an in-depth knowledge of the history of art, design
and aesthetics. "Software programming should be taught in studios, like
art", Gelernter writes [71]. Far less stress should be placed on
correctness, and more on elegance.
What Gelernter is pleading for is a higher standard of design in digital media, a balance of form and function that goes far beyond the usual "requirements-based" conception of user-centred design. Yet elegance, conveying that extra measure of aptness, of conviviality beyond mere usability, accounts only for what might be seen as the "surface design" elements. Taking seriously Sengers' proposal to consider computing as a humanist discipline pushes further, at the intersections between deep system-level design, philosophy, and social science. It is hardly surprising that this agenda is, so far, little understood in the academy.
At the Banff Centre's Art and Virtual
Environments project (1991-94), a deliberate plan was made to precede a period
of active technology-art development with a formative symposium organized to
critically examine the concept of virtuality. This was carried out in a 10-week residency involving not only artists and technology developers, but also philosophers, cultural theorists, and art historians. Virtuality was here understood:
"... as an expression
of social discourses that are already in place. One of the intentions of the
residency is to address the broader context of socio-cultural shifts that are
both the cause and symptom of technological changes."[72]
The goal was to develop a set of alternative
conceptions -- metaphors, scenarios, speculative
designs -- that could inform the development team through the actual
implementation phase. In fact, few linkages were made at so functional a level.
The actual experience revealed the very wide gaps separating the world-views of
critical theorists and those of engineers and programmers (much less so, most
of the artists). As noted by one of the participants, self-identified as a "theorist":
"While the majority of artists appear to have
been theoretically and practically ill-equipped to deal with this new
technology at the level of its technical organization, those involved in
developing its hardware and software are equally ill-equipped to deal with its
social and cultural dimensions as well as its political implications."
Yet, as was proved in the subsequent implementation
phase, the artist-developer teams were eminently capable of developing,
at a project level, cooperative strategies sufficient to produce what one
commentator has since termed "projects that would permanently extend the
tools we have for seeing and hearing"[73]. But what remained
under-realized in this project was precisely the kind of conscious integration
of what Sengers called "humanist perspective" in an ongoing technical
practice. The Banff technical group disbanded after the project, and the
cumulated expertise and software capability dispersed among the participating
artists and researchers.
Within the context of the European
Union I3 research networks, several ethnographers, sociologists and
anthropologists have been carrying out field studies of contemporary
technological art installations, aiming thereby to inform subsequent system and
design practice. In an ethnography of visitors to the
ZKM Media Museum, investigators chose to analyze media art works sociologically
as "breaching experiments". With a technical goal to devise protocols
for interoperability between different virtual environments, they studied
"the sense of presence experienced by museum visitors", to better
understand their "intersubjective organization".[74]
These early results do not yet indicate whether or how the findings will feed into the design phase.
Also in the past year,
interdisciplinary humanities seminars have been held on "Computing science
as a human science" at the University of Chicago, and on "Virtual
reality, past and present", at Cornell. These seminars are intended to
engage with the technical community, but do so still within the usual framework
of critique. A newly announced program sponsored by Microsoft Corporation at Carnegie Mellon University illustrates a more active model.
This pilot fellowship program will connect three
established artists and a critic-historian-curator to the robust
science-technology resources at Carnegie Mellon. The artists will:
1) engage contemporary science-technology as it provides tools,
media, and content to their work,
2) assume leadership roles in generating and implementing
complex, collaborative projects, and
3) connect the process of the projects and their results to the larger community. (www.cmu.edu/studio/)
Applied research combined with critical perspectives has been termed "critical technical practice" -- another term, like "cultural informatics", that aims to create a new space for heterogeneous activity [75]. Still, very little of this community seems to be connected to, or even aware of, the potential resources and talents of the electronic art community. This is a point we will return to in the report's conclusion.
Broadening Public Awareness of Techno-Science
In an informal evaluation of the
Wellcome Trust's Sci-Art program, Cohen noted the deep sense of urgency
expressed by many of the applicants, who felt the need to look outside the limitations built into their careers and institutions. "It may be too strong to say that they felt some kind of moral imperative -- it is rather that they appeared to feel that the boundaries of
their discipline were (and indeed are) weakening at the edges, that people from
outside were doing work similar to their own, and that by moving outside the
discipline, they may be rewarded by a new perspective and new ways of thinking
about their subject"[76].
If this type of program has indeed
struck a nerve, it would be worth considering how it might be made more
accessible beyond the U.K. While the outcomes of such collaborations can
clearly be very broad, here it is worth underlining the potential contribution
to public discourse about scientific and technological issues.
Two final points to close this
discussion: As we have seen previously, artists are increasingly attracted to
the horizons of bio-medical and evolutionary computation. The ethical quandaries arising from these fields may be as well articulated and illustrated through the kinds of expressive collaborations with scientists that are nurtured through schemes like the Wellcome Trust's Sci-Art. Second, providing a more variegated sense of the so-called "hard" professions of science and technology might influence young people to conceive of these professions in new, more nuanced ways than tends to
be the case. To close with an anecdote: one of the most gifted female computer
graphics systems programmers began her higher education at art school in Canada.
After seeing the early computer animated film "Hunger", she decided
to train in computer science, in order to create better tools for artists.
4. Conclusion
This report has attempted to present
a multi-perspective framework from which to view the rising density of
communication between the worlds of art, technology, and science. Designating
the "site" of this hybrid activity as the studio-laboratory, the
first section traced the development of such organizations historically,
compared their dynamics to that of "transdisciplinary" knowledge
production in science and technology, and argued that they foster incremental,
radical and systemic innovation. By its boundary-spanning nature, a good deal
of this activity stretches the limits of established paradigms, whether these be considered from the techno-economic, social or aesthetic
standpoint.
The survey of current studio-labs revealed a number of commonalities with Gibbons' description of "mode 2" knowledge production. The assembly of scientist-artist-engineer teams usually takes place in a specific context of application, which can range widely from art commissions to teams of more or less equal artist-scientist researchers. In many cases, the crucial collaborative communication still takes place in face-to-face encounters, as a rule in laboratory or production rather than seminar/theoretical settings. Where distant teams work on common projects, periods of intensive "residential" development are interspersed with tasks still often divided by discipline. This makes particular sense for cyclical, iterative projects, like system design and development, where learning by using can only go on so long before major overhauls are needed. The temporary media lab notion is the most lightweight version of this contingent manner of organizing the conjuncture of artists, programmers, and theorists; it contrasts with the high-overhead, large permanent staffs of centres like the ZKM or IRCAM.
With the price-to-performance ratio of commodity hardware continuing to decline, specialized equipment is becoming less critical to the studio-lab than the range of collaborative dynamics it can accommodate. Individual artists are, more and more, acquiring effective home-based studios which even five years ago were rare outside high-end labs or commercial facilities. What we have learned through our survey, however, is that much of the innovation emerging from both the older and the more recently founded structures takes place in the flesh, within particular settings, whether these be temporary special events, industrial labs, cultural centres, or universities.
How the specificities of particular
studio-labs relate to the "system of innovation" in which they
function is a rich subject for further study. As we have seen, a dialogue is
already occurring in the E.U. between the arts/cultural sector, industry, and
university researchers, and new mechanisms are being devised to turn that
dialogue to action. In North America, there are no large
scale public-oriented studio-labs operating with the kind of ongoing
government sponsorship found in Europe, or corporate sponsorship as in Japan.
But the tremendous dynamism of the U.S. information/media sectors generates a great deal of "studio-lab" activity which could not be addressed in this report; for instance, Intel's support for artists working in a variety of university labs, or Disney Corporation's now very substantial scientific research department. In the specific U.S. setting (and to a lesser degree in Canada), the difficulty seems to lie less in attracting corporations to finance educational facilities with hardware and software; the more important dilemmas arise over the strings attached to such sponsorship. For this reason, the key question in the North American context will turn on how independent media labs can be sustained, whether on campuses, through enlightened corporate programs like Xerox's, or, what has been less attempted on this
continent, building onto existing cultural infrastructures like museums or
theatres. Clearly, this particular discussion will need to be framed broadly
enough to bring industry, artist/designers, technology researchers and
social/cultural theorists around the same table.
In our look at the studio-lab
phenomenon, we have stressed that place still matters, perhaps even more now
that communication is so deceptively ubiquitous. We have also made clear that
the range of innovations coming from these sites falls into all four of the
classes described by Freeman. What is less clear, from a policy standpoint, is whether all should be equally supported, or whether greater efforts should be concentrated on a few. This question will, naturally, be answered
differently in the developing world, where the incremental integration of
digital with older, locally-specific forms of media may
be the soundest way to start building up a broadly based innovative capacity.
Also, from a policy perspective, it
is important to think of the cultural shape of future digital media in terms of
the accumulation of expressive traditions: ancient and modern, individual and
collective, purely informational and materially embodied. Support for
"projects", valuable as they will invariably be, should nonetheless
be understood in these larger terms. From this assumption, though, arises a further question: which models of studio-lab fit best into which national innovation contexts?
The third chapter examined this
framework through the prism of five discussion themes. Using the figure of Instruments
of the imagination, the cybernetic art work was
likened to previous representational dispositifs -- mediating devices or
boundary objects between the sensorium and a "natural" world ever
more saturated by artifice. Creative users extends the much-studied
user-producer relationship to consider the artist as a kind of user-to-come, a
necessary extension where the field of innovation is a fast-evolving symbolic
environment. Seeing the artist as a cognitive pioneer only, we suggest, weighs
too heavily on the side of theory; learning through using is how artists have
always fashioned their poised balance between form and content, technique and
idea.
Access, it was suggested,
has become a leaky portmanteau term -- carrying all freight but delivering
little. Besides measures based on hardware, price, and intellectual coherence,
access entails a new kind of fluency with the medium-specific traits of the
computer; the build-up of such fluency may be less an individual trait, and
more a function of networks (programmer, designer, artist, user). Reflexivity thematizes technical practice as socially situated. The distance between the worldviews of cultural and social theory and those of the designer-engineer-artist remains large, but there are promising indications that mutual insight is growing. Finally, public awareness of techno-science may be enriched through more extensive art-science collaborations. Benefits include improved conceptual articulation and a re-shaping of the public image of professional practices.
Necessarily, a report of this nature
leads more to openings than to prescriptions. More knowledge is needed about a
host of issues and questions, a partial list of which includes:
Art historian Erwin Panofsky, writing about the
Renaissance, attributed the flowering of the arts and the birth of
observation-based science to new "transmission belts" that
re-connected theory and practice, art and science, instrumentation and
sense-perception.[77] At least as much may be at
stake, five hundred years later, as we face the challenge of continually
re-humanizing our technological world.
References
1. Jones, C.A., The Machine in the
Studio. Constructing the Postwar American Artist. 1996, Chicago: University
of Chicago Press.
2. Wilson, E.O., Consilience: The
Unity of Knowledge. 1998, New York: Alfred A. Knopf.
3. Soja, E.W., Thirdspace: Journals
to Los Angeles and other real-and-imagined places. 1996, Cambridge, MA:
Blackwell.
4. Paik, N.J., Media Planning for
the Post-Industrial Society, in The Electronic Super Highway. Travels with Nam June Paik.
1976 (1997), The Carl Solway Gallery: Cincinnati, OH.
5. Douglas, S., Amateur Operators
and American Broadcasting: Shaping the Future of Broadcasting, in Imagining
Tomorrow: History, Technology, and the American Future, J. Corn, Editor.
1986, MIT Press: Cambridge.
6. Penny, S., The Virtualization
of Art Practice. Body Knowledge and the Engineering Window. Art Journal,
1997. Fall.
7. Castells, M., The rise of the
network society. Information age ; v. 1, ed. M.
Castells. 1996, Malden, Mass.: Blackwell Publishers. xvii,
556.
8. Licklider, J.R.C. and R.W. Taylor,
The Computer as a Communication Device, in Science and Technology.
1968.
9. Hayles, N.K., How we became posthuman : virtual bodies in cybernetics, literature, and
informatics. 1999, Chicago, Ill.: University of Chicago Press.
10. Ostry, S. and R.R. Nelson, Techno-Nationalism
and Techno-Globalism. Conflict and Cooperation. Integrating National Economies.
1995, Washington, D.C.: The Brookings Institution.
11. Gibbons, M., et al., The
New Production of Knowledge. The Dynamics of Science and
Research in Contemporary Societies.
1994, London: Sage.
12. Eco, U., Poetics of the Open
Work, in The Role of the Reader. 1984, Indiana University Press.
13. Norman, S.J., Transdisciplinarité
et genèse de nouvelles formes artistiques. 1997, Délégation aux arts plastiques, Ministère de la Culture de
France. http://www.culture.fr/culture/mrt/bibliotheque/norman/norman.rtf.
14. Sommerer,
C. and L. Mignonneau, Art@science. 1998, Wien ;
New York: Springer.
15. Jones, C.
and P. Galison, Picturing Science, Producing Art. 1998, New York:
Routledge.
16. E.A.T., Experiments
in Art and Technology Proceedings, No. 9. 1969, New York:
E.A.T.
17. Bell-Telephone, Art and Science: Two Worlds Merge, in Bell
Telephone Magazine. 1967.
18. Spiegel,
L., The Early Computer Arts at Bell Labs. 1999, http://www.dorsai.org/~spiegel/.
19. Boulez,
P., Le modèle du Bauhaus, in Points de repère. 1985, Éditions
Seuil: Paris.
20. Roetzer,
F., Aesthetics of the Immaterial? Reflections on the Relation between the Fine Arts and the New Technologies, in Artware.
Kunst und Electronik.
1989, CeBIT Exhibition: Hannover.
21. Brown,
J.S., Introduction, in Art and Innovation, C. Harris, Editor.
1999, MIT Press: Cambridge.
22. Moser,
M.A., D. MacLeod, and Banff Centre for the Arts., Immersed
in technology : art and virtual environments. 1996, Cambridge, Mass.: MIT
Press.
23. Freeman,
C., Innovation, changes of techno-economic paradigm and biological analogies
in economics, in The Economics of Hope. Essays on
Technical Change, Economic Growth and the Environment. 1992, Pinter: London.
24. Kuhn, T., Comment on the Relations of Science and Art, in The essential tension :
selected studies in scientific tradition and change. 1977, University of Chicago Press: Chicago.
25. David,
P., Clio and the Politics of QWERTY. American
Economic Review, 1985. 75(2).
26. Gomart,
E. and A. Hennion, A sociology of attachment: music amateurs, drug users,
in Actor Network Theory and After, J. Law and J. Hassard, Editors. 1999,
Blackwell/Sociological Review: Oxford.
27. Callon,
M., Variety and irreversibility in networks of technique conception and
adoption, in Technology and the Wealth of Nations. The
Dynamics of Constructed Advantage, D. Foray and
C. Freeman, Editors. 1993, Pinter/with OECD: London and New
York.
28. Castells,
M., The Information age (three volumes), ed. M. Castells. 1996-98,
Malden, Mass.: Blackwell Publishers.
29. Shaw, J.,
Interview, (Director, Image Institute, ZKM) December 1-2, 1998,
Karlsruhe.
30. Century,
M. and T. Bardini, Towards a Transformative Set-Up: A Case Study of the Art
and Virtual Environments Program at the Banff Centre for the Arts. Leonardo, 1999. 32(4).
31. Harris,
C., ed. Art and Innovation. 1999, MIT Press: Cambridge.
32. Itoh, T.,
ICC (InterCommunication Centre): The Matrix of Communication and Imagination,
in Art@science, C. Sommerer and L. Mignonneau, Editors. 1998, Springer: Wien ; New York. p. 330.
33. Nakatsu,
R., Image/Speech Processing Adopting an Artistic Approach - Toward
Integration of Art and Technology, in Art@science, C. Sommerer and
L. Mignonneau, Editors. 1998, Springer: Wien ; New
York. p. 330.
34. Leonard,
D., Wellsprings of knowledge : building and
sustaining the sources of innovation. 1995, Boston, Mass.: Harvard Business
School Press. xv, 334.
35. Benton,
S., Interview, (Holography researcher, director of the Media Lab's
academic program) 1988, Cambridge.
36. Brand,
S., Creating Creating, in Wired. March/April,
1993.
37. Maeda,
J., The South Face of the Mountain, in Technology Review. July/August, 1998.
38. Moed, A.,
European Union's I3 (Intelligent Information Interface) networks, in If/Then,
J. Abrams, Editor. 1998, Netherlands Design Institute: Amsterdam.
39. Wejchert,
J., Not just building highways. I3 Magazine, 1999.
04(March).
40. Stikker,
M., A European Cultural Backbone, in New Media Culture in Europe,
C. Brickwood and et. al.,
Editors. 1999, De Balie and the Virtual Platform: Amsterdam.
41. Crowley,
A., Interview, (Director, Australian Network for Art and Technology)
March 14, 1999, Amsterdam.
42. The
Wellcome Trust, Sci-Art: Partnerships in science and art. 1998, The
Wellcome Trust: London.
43.
Gulbenkian, Annual Report. 1998, Calouste
Gulbenkian Foundation (UK): London.
44. Hankins,
T. and R.J. Silverman, Instruments and the Imagination. 1995, Princeton:
Princeton University Press.
45. Star, S.L. and J.R. Griesemer, Institutional Ecology, "Translations" and Boundary Objects: Amateurs and Professionals in Berkeley's Museum of Vertebrate Zoology, 1907-39.
Social Studies of Science, 1989. 19.
46. Ray, T., Evolution
as Artist, in Art@science, C. Sommerer and L. Mignonneau, Editors.
1998, Springer: Wien ; New York. p. 330.
47. Kay, A.
and A. Goldberg, Personal Dynamic Media. IEEE Computer, 1977. 10(March
1977): p. 31-41.
48. Bardini,
T., The Personal Interface: Douglas Engelbart, The Augmentation of Human
Intellect, and the Genesis of Personal Computing. 2000, Palo Alto: Stanford
University Press.
49. Johnson,
S., Interface culture : how new technology
transforms the way we create and communicate. 1st ed. 1997, San Francisco:
HarperEdge.
50. Hippel,
E.v., Toolkits for User Innovation: The Design Side of Mass Customization.
1999, MIT Sloan School of Management: Cambridge (MA).
51.
McCullough, M., Abstracting craft : the practiced
digital hand. 1996, Cambridge, Mass.: MIT Press. xvii,
309.
52. Bijker,
W.B., Bicycles, Bakelite and Bulbs: Toward a theory of sociotechnical change.
1995, Cambridge: The MIT Press.
53. Wein, M.
and N. Burtnyk, Computer Animation, in Encyclopedia of Computer
Science and Technology. 1976, Marcel Dekker: New York.
54. Pulfer,
J.K., Man-Machine Communication in Creative Applications. International Journal of Man-Machine Studies, 1971. 3:
p. 1-11.
55.
Engelbart, D. and W. English. A research centre for
augmenting human intellect. in Fall
Joint Computer Conference. 1968.
56. McLaren, N.,
Film Animation, in Documentary Film News, Norman, Editor. 1948.
57. Burtnyk,
N. and M. Wein, Computer-generated Key-Frame Animation. Journal of SMPTE, 1971. Vol. 80(No. 3,
March).
58.
Negroponte, N., The Architecture Machine. 1970, Cambridge: MIT Press.
59. Foldes,
P., Interview, in Écran. January, 1973.
p. 56-58.
60. Edwards,
P.N., The closed world : computers and the politics
of discourse in Cold War America. 1996, Cambridge, Mass.: MIT Press.
61. Kroker,
A., Technology and the Canadian Mind. Innis/McLuhan/Grant. 1984,
Montréal: New World Perspectives.
62. McLuhan,
M., Canada: The Borderline Case, in The Canadian Imagination, D.
Staines, Editor. 1977, Harvard University Press:
Cambridge.
63. Lovink,
G., Radical Media Pragmatism: Strategies for Techno-social Movements. 1998, Nettime internet list.
64. Duncan,
E., Wheel of Fortune. Technology and Entertainment Survey, in The Economist. 1998.
65. Cubitt,
S., Digital aesthetics. 1998, London ; Thousand
Oaks, Calif.: Sage.
66. Mansell,
R. and E. Uta Wehn, eds. Knowledge Societies:
Information Technology for Sustainable Development.1998, Oxford University
Press: Oxford.
67. Lakoff,
G., Body, Brain and Communication, in Resisting the Virtual Life. The
Culture and Politics of Information, J. Brook and I.A. Boal, Editors. 1995,
City Lights: San Francisco.
68. Society
for Old and New Media., Personal communication,
April 28, 1999.
69. Sengers,
P., Computing as a Humanist Discipline. submitted,
1999.
70. Sengers,
P., Anti-Boxology: Agent Design in Cultural Context.
PhD thesis, 1998, Carnegie Mellon University:
Pittsburgh.
71.
Gelernter, D.H., Machine beauty : elegance and the
heart of technology. 1998, New York: Basic Books.
72. Richards,
C. and N. Tenhaaf, eds. Virtual
Seminar on the Bioapparatus. 1991, Banff Centre for the Arts: Banff.
73. Heim, M.,
Virtual Realism. 1998, New York: Oxford.
74. Buescher,
M., J. O'Brien, and J. Hughes, Interaction and Presence in Shared Electronic
Environments: fieldwork at ZKM. 1998, Lancaster
University.
75. Agre,
P.E., Computation and Human Experience. 1997, Cambridge: Cambridge
University Press.
76. Cohen,
C., Sci-Art: An Evaluation. 1998, Division
of Management Studies, Brunel University.
77. Panofsky,
E., Artist, Scientist, Genius: Notes on the Renaissance-Dämmerung, in The Renaissance: A Symposium. 1952, The Metropolitan Museum of Art: New York.