The symposium will discuss new music technologies as a process / practice / relationship that involves social and technoscientific transformations in view of music, science, philosophy, communities of people, non-humans and the life-world as a whole. It is no longer a myth or urban legend: advanced AI technologies do challenge the current practices of creative practitioners and offer a new perspective that redefines the relation between humans and AI. What does this say about the nature of AI and its ability to be part of mutual incorporation? What “social connections” do these AI creative agents build up in music practices, and which emerging aesthetics and meanings appear that would not have been possible otherwise?
Listed alphabetically by surname
Adnan Marquez-Borbon holds a PhD from the Sonic Arts Research Centre (SARC) at Queen's University Belfast, Northern Ireland. His areas of interest are sound art, interactive audiovisual system design, human factors in performer-computer interaction, processes and practices of improvisation, learning processes and technologically mediated pedagogy of the arts. He currently serves as an Assistant Professor at the Faculty of Arts, Autonomous University of Baja California (UABC), Campus Ensenada. His work has been published at the New Interfaces for Musical Expression (NIME) international conference, in Computer Music Journal and in Critical Studies in Improvisation / Études critiques en improvisation. In 2018 he was a recipient of the Art, Science and Technology (ACT) grant awarded by the Mexican Secretary of Culture, the National Fund for Culture and the Arts and the National Autonomous University of Mexico. He is co-founder and leader of the Arts and Technology Laboratory (LATe-UABC).
Georgina Born OBE FBA is Professor of Anthropology and Music, University College London. Earlier, she worked as a musician with avant-garde rock, jazz and improvising groups. Her work combines ethnographic and theoretical writings on music, sound, television and digital media. Her books include Rationalizing Culture: IRCAM, Boulez, and the Institutionalization of the Musical Avant-Garde (California, 1995), Western Music and Its Others (California, 2000), Music, Sound and Space (Cambridge, 2013), Interdisciplinarity (Routledge, 2013), and Improvisation and Social Aesthetics (Duke, 2017). She directed the European Research Council funded research programme ‘Music, Digitization, Mediation’ and has been a visiting professor at UC Berkeley, UC Irvine, and at McGill, Hong Kong, Oslo and Aarhus Universities.
I am a Professor at the UAL Creative Computing Institute. My students, research assistants, and I work on a variety of projects developing new technologies to enable new forms of human expression, creativity, and embodied interaction. Much of my current research combines techniques from human-computer interaction, machine learning, and signal processing to allow people to apply machine learning more effectively to new problems, such as the design of new digital musical instruments and gestural interfaces for gaming and accessibility. I am also involved in projects developing rich interactive technologies for digital humanities scholarship, and in machine learning education. I am the creator of the Wekinator tool for real-time interactive machine learning and teach the Machine Learning for Musicians and Artists course on Kadenze.
Owen Green is an improviser, composer, performer, and systems-maker. He does unspeakable things with cardboard and machine listening technologies, as well as more speakable things alongside other humans, including the groups RawGreenRust (with Jules Rawlinson and Dave Murray Rust) and Sileni (with Ali Maloney). Owen has worked as a Research Fellow in Creative Coding at the University of Huddersfield on the Fluid Corpus Manipulation project, which aims to help other people do things with machine listening.
Michael Gurevich’s highly interdisciplinary research employs diverse methodologies to explore new aesthetic and interactional possibilities that can emerge in music performance with real-time computer systems. His recent research has focused on the technological mediation of human relationships around music creation and performance, incorporating technologies including telematics, mechatronics, and motion capture to examine concepts of gesture, skill, style, and instrumentality. His creative practice explores many of the same themes, through experimental compositions involving interactive media, sound installations, and the design of new musical interfaces. His book manuscript in progress documents the cultural, technological, and aesthetic contexts for the emergence of computer music in Silicon Valley. He is Associate Professor of Performing Arts Technology at the University of Michigan’s School of Music, Theatre and Dance, where he teaches courses in physical computing, electronic music performance, and interdisciplinary collaboration. He holds a Bachelor of Music with high distinction in Computer Applications in Music from McGill University in Montréal, Canada, as well as an M.A. and Ph.D. from the Center for Computer Research in Music and Acoustics (CCRMA) at Stanford University. Professor Gurevich is an active author and editor in the New Interfaces for Musical Expression (NIME), computer music, and human-computer interaction (HCI) communities.
Laurens van der Heijden studied art history and musicology at Leiden University and Utrecht University and continued his studies with musicologist Reinhold Brinkmann at the Harvard University music department. He entered the record industry and held positions in marketing and sales of the classical, jazz and world music repertoire. He worked for Dutch national public radio and television (NPS) as editor/editor-in-chief, was jazz producer of the Metropole Orkest and travelled through the African continent producing documentaries on ethnic folklore. Subsequently he held the position of music director/general manager at a large theatre and concert hall organization. Over the last decade Laurens van der Heijden has lectured at the Academy of Music of ArtEZ University of the Arts. Besides teaching cultural philosophy and philosophy of sound in the Master’s programme ‘The Sound of Innovation’, he lectures on the history of jazz and pop music and provides introductory seminars on ethnic folklore. At present he is preparing his dissertation with philosopher of technology Peter-Paul Verbeek (University of Twente), exploring digital music technology from the perspective of postphenomenology.
Anna Xambó is a Senior Lecturer in Music and Audio Technology at De Montfort University (DMU), a member of Music, Technology and Innovation - Institute for Sonic Creativity (MTI2), and an experimental electronic music producer. Her research and practice focus on sound and music computing systems, looking at novel approaches to collaborative, participatory, and live coding experiences. She has been the Principal Investigator of the EPSRC HDI Network Plus funded project "MIRLCAuto: A Virtual Agent for Music Information Retrieval in Live Coding" and part of the Future Research Leaders Programme 2021/22 at DMU. Since 2016, she has taken proactive roles in organisations for improving the representation of women in music technology. https://annaxambo.me
Koray Tahiroğlu is a musician, Academy Research Fellow and lecturer in the Department of Art and Media, Aalto University School of ARTS. He is the founder and head of the SOPI (Sound and Physical Interaction) research group, coordinating research projects with interests including embodied approaches to sonic interaction, new interfaces for musical expression, and deep learning and artificial intelligence (AI) technologies with audio. Since 2004, he has also been teaching workshops and courses introducing artistic strategies and methodologies for creating interactive music. Tahiroğlu has performed music in collaboration as well as in solo performances in Europe, North America and Australia. His work has been presented in important venues such as Ars Electronica, AI x Music Festival, STEIM, TodaysArt and Audio Art Festival. In 2018, he was awarded a 5-year Academy of Finland Research Fellowship.
This talk will reflect critically on the aspirations and outcomes of a recent five-year musical techno-scientific project called Fluid Corpus Manipulation, and use this reflection as a springboard for thinking about the nature of the publics that Music Technology research addresses. The project's focus was on putting signal processing and data scientific tools into the hands of creative coding musician-researchers, and so the impulse is to frame thinking about it in terms of its techno-scientific productivity: what tools, artifacts, theories, and so on came out of the research. What happens if instead we think of it as an episode of cultural production, specifically musical cultural production? I'll argue that from such a framing what emerges is a renewed impression of Music Technology's incoherence as a research field; that one dimension of this incoherence lies in who the 'publics' for this research could be; and that confronting this question offers possible escapes from tendencies towards techno- and market-fatalism, as well as a richer basis through which to think about our accountability as researchers.
Live coding can be seen as an improvisational practice that uses code to express ideas. The potential of using AI in live coding is promising, but the consequences are still unclear. This talk will reflect on the possibilities of AI in live coding, as well as the potential role of live coding in the age of AI, based on the lessons learned from using MIRLCAuto, a constrained live-coding environment in development by the author. MIRLCAuto works as a customisable sampler of crowdsourced sounds empowered with machine learning. The talk will focus on discussing the relevance of adopting HCI strategies to help understand the system’s behaviour. Three salient aspects will be considered: the interactional space between human agency and machine agency; interactive machine learning connected to ownership and transparency of the system; and the live-coding practice as a research tool.
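By way of illustration only (this is not the MIRLCAuto API; the class and function names below are hypothetical), a minimal Python sketch of the kind of interaction the abstract describes: a live coder queries a crowdsourced sound collection by tag, and a simple nearest-neighbour model suggests "similar" sounds from precomputed audio features.

# Hypothetical sketch of a live-coding-style "sampler of crowdsourced sounds
# empowered with machine learning". Not the MIRLCAuto API; names are invented.
import numpy as np
from sklearn.neighbors import NearestNeighbors

# Toy in-memory "crowdsourced" catalogue: id, tags, and a feature vector
# (e.g. brightness, noisiness, pitch) that a real system would compute from audio.
CATALOGUE = {
    "rain_01":  ({"water", "rain"},  np.array([0.2, 0.8, 0.1])),
    "sea_02":   ({"water", "waves"}, np.array([0.3, 0.7, 0.2])),
    "bell_03":  ({"metal", "bell"},  np.array([0.9, 0.1, 0.8])),
    "glass_04": ({"metal", "glass"}, np.array([0.8, 0.2, 0.9])),
}

class CrowdSampler:
    """Minimal stand-in for a crowdsourced-sound sampler with ML retrieval."""
    def __init__(self, catalogue):
        self.ids = list(catalogue)
        self.tags = [catalogue[i][0] for i in self.ids]
        self.feats = np.stack([catalogue[i][1] for i in self.ids])
        self.index = NearestNeighbors(n_neighbors=2).fit(self.feats)

    def by_tag(self, tag):
        # Retrieve sounds whose metadata matches a tag (the "crowdsourced" query).
        return [i for i, t in zip(self.ids, self.tags) if tag in t]

    def similar(self, sound_id):
        # Let the model suggest "similar" sounds from the audio features.
        idx = self.ids.index(sound_id)
        _, nn = self.index.kneighbors(self.feats[idx:idx + 1])
        return [self.ids[j] for j in nn[0] if self.ids[j] != sound_id]

# A live-coding session might then be a sequence of short, evaluated lines:
s = CrowdSampler(CATALOGUE)
print(s.by_tag("water"))     # ['rain_01', 'sea_02']
print(s.similar("bell_03"))  # ['glass_04']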
Chamber musicians use a variety of bodily movements to coordinate their performance and evoke meanings for audiences, ranging from subtle expressive gestures to large-scale synchronization cues. In an ongoing practice-led project involving a series of design experiments and public workshop performances, we are attempting to efficiently sense, encode, transmit, and display relevant movement features using 3-dimensional mechatronic displays to support musicians performing telematically—in disparate geographical locations using high-quality audio streamed over the Internet. The unfolding process of developing performances with these systems has shed light on musicians’ ability to form fluid and flexible relationships with mechatronic systems, which can be considered on a continuum between passive kinetic sculptures and embodied robotic avatars. This process has prompted intriguing questions about the degrees of agency, autonomy, and anthropomorphism we ascribe to materially embodied representations of ourselves and others.
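As a purely illustrative sketch of the "sense, encode, transmit" part of such a pipeline (the addresses, packet format and feature names are assumptions, not the project's actual system), a few movement features might be streamed to a remote display like this:

# Hypothetical sketch: stream a few sensed movement features to a remote
# mechatronic display over UDP. All specifics here are illustrative assumptions.
import math
import socket
import struct
import time

REMOTE = ("127.0.0.1", 9000)  # where the remote display would listen
sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)  # UDP favours low latency

def movement_features(t):
    # Stand-in for sensed data, e.g. head sway, bow-arm height, torso lean.
    return (math.sin(t), 0.5 * math.sin(2 * t), abs(math.sin(0.5 * t)))

t0 = time.time()
for _ in range(100):  # roughly two seconds at 50 Hz
    feats = movement_features(time.time() - t0)
    packet = struct.pack("!d3f", time.time(), *feats)  # timestamp + 3 features
    try:
        sock.sendto(packet, REMOTE)
    except ConnectionRefusedError:
        pass  # no receiver running; acceptable for a sketch
    time.sleep(0.02)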
The identification of human-technology relations (re)shapes the understanding, activities and interactions we develop with digital musical instruments. In that vein, postphenomenology broadens our comprehension of more advanced configurations of digital musical instruments as agents or actors in music performances from a situated and embodied perspective. This presentation explores and questions how we can think about musical instruments - and playing them - in terms of relationalities. In line with this attitude of observation, digital musical instruments and musical instruments with artificial intelligence will be reflected upon. The fundamental tactile relationship with the instrument through the hands of the performer, and the embeddedness in AI of the relationship between instrument, performer and sound, lay bare fascinating multi-layered modes of ‘otherness’ (alterity), touching upon crucial questions of musicianship and transformative musical practices. Postphenomenology invites us to understand, beyond the dichotomy of the human and non-human, the relationship between humans, technology and the world. Thinking from the assumption that (technical) instruments - having become part of our perception - mediate how we relate to the world, a next step would be to understand musical instruments in a similar manner. Navigating between the affordances of the materiality of interfaces, manual manipulation and extended creativity, these new musical instruments invite us to reflect on creating new sounds with AI, between these tactile hands and the giving out of hand of the musical process.
The notion of intelligent entities, agency and autonomy implemented in the artificial intelligence systems used in music performances today does not imply the super-powered omnipotence of a technology but, as the title suggests, simply appears as part of a complex, social, mutual whole rather than as an object of technology removed from the flow of musical activity. In my talk I will discuss how co-creation practices in music make mutual dependence and interdependence relationships between human and non-human actors more explicit. To explore this concept of co-creation, I will take as my starting point the studio sessions that we recorded live in March 2022 with an artificial intelligence musical instrument. These studio sessions present a way of composing and performing in which musicians have been directed to explore and re-construe their own experience in the relationship with each other and with the autonomous musical instrument. This presentation will suggest that such artificially intelligent entities become present as agents or actors by means of their fundamental creative acts in music.
Machine learning algorithms are, fundamentally, tools for identifying and using patterns in data. Conventionally, data is understood as capturing something “true” about the world, and ML models become a way to understand or harness that truth, for instance to make better decisions, or to generate new content conforming to complex conventions. Consequently, bias in data is seen as an undesirable corruption of the truth, and models capable of capturing complex patterns in bigger datasets are understood to be more useful. However, I will argue that it is useful to think about “data” not as something “true”, but as a sort of interface between people and machines. Data embodies and communicates a set of decisions about what one would like a computer to do, what content or behaviours are important, what resources are worth expending, and more. I’ll describe how this recognition can shape how we build interactions with machine learning, design user interfaces for working with ML and data, and reckon with concepts such as bias, scale, and generality in ways that are meaningful and distinct for practitioners in music and the arts.
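As a purely illustrative sketch of the interactive machine learning idea this argument points towards (the feature names, parameter names and model choice are assumptions, not any particular system): the "data" below is a handful of examples chosen by a user to express what a gesture-to-sound mapping should do, and a simple model generalises from that small, deliberately curated set.

# Minimal sketch of interactive machine learning: the "data" is a small set of
# examples chosen by the user to say what the mapping should do, not a large
# "ground-truth" corpus. Feature and parameter names are illustrative only.
import numpy as np
from sklearn.neighbors import KNeighborsRegressor

# Each example: a demonstrated gesture (hand height, hand speed) paired with
# the synth settings (pitch in Hz, filter cutoff 0..1) the user wants there.
examples_in = np.array([
    [0.1, 0.0],   # low, still hand
    [0.9, 0.0],   # high, still hand
    [0.5, 1.0],   # mid-height, fast movement
])
examples_out = np.array([
    [110.0, 0.2],
    [440.0, 0.2],
    [220.0, 0.9],
])

# Training is instant on the tiny dataset; the user can add or remove examples
# and retrain in real time until the mapping "feels" right.
model = KNeighborsRegressor(n_neighbors=2, weights="distance")
model.fit(examples_in, examples_out)

# At performance time, incoming gesture features are mapped to synth parameters.
new_gesture = np.array([[0.7, 0.5]])
pitch, cutoff = model.predict(new_gesture)[0]
print(f"pitch={pitch:.1f} Hz, cutoff={cutoff:.2f}")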
In this talk, I present examples of local technology-based artistic practices emerging from different parts of Latin America that rely on the reuse and re-appropriation of materials, as well as on improvisation for their problem-solving. Specifically, I will discuss how sociocultural and economic contexts constrain such artistic activities, as exemplified by the Brazilian practice of gambiarra. I will further show how electronics recycling has not only become a necessity for sustainability, but has existed as a long-standing reality within these regions. As a result, this perspective has become ingrained within these cultures. Finally, I will suggest how mainstream music and arts technology practices can benefit from these approaches emerging from Latin America.
In this conceptual, ground-clearing paper I will take a number of perspectives on the vital question of human and material or machine agency from anthropology, philosophy, and science and technology studies and hold them up against the new challenges posed by machine learning. The aim is to clarify, by systematically questioning, whether these well-known paradigms – inter alia, Latour and Barad, Ingold and Suchman – continue to have insights or reach their limits, and in what ways, when it comes to conceptualising our relationship with ML. I do not know in advance how this exercise will fall out, as befits the emergent nature of our knowledge of ML. But the questions are ones, it seems to me, that we need to ask. Music will be a vehicle for this process of questioning.