New Technologies: Pushing the Boundaries of Art

Technology translates the progress made by science into machines, products, and industrial processes. Art, on the other hand, conveys the ideas and emotions that sustain our society and our conversations. Art and technology have always been linked.

As Marshall McLuhan said, “The medium is the message.” The technical framework we live in informs our creativity.

Yet this framework is changing, as ideas expressed through computer code are becoming increasingly common in our daily lives. The ongoing digitalization of our societies is creating a new playground for art, through virtual worlds and new methods of trading. It is also changing the rules of the game, shaking up the very notion of an artist in the age of networks.

Artistic creation and hybridization in the age of the Internet

The rise of the internet and web technologies has had a deep impact on artistic creation, which now involves far more collaboration, appropriation, and participation.

In the 1990s, the development of the internet gave rise to Net art, a term referring to works conceived “by, for, and with the Internet”. Net artists use the “network of networks” at once as a delivery medium, as an artistic production tool, and as a living space for their works. They draw inspiration from its distinctive characteristics to invent original works of art, but also to experiment with new creative processes.

The impact of the internet on art today goes far beyond the Net art community. Its successive technological evolutions are shaping new generations of artists, while the culture and values that governed its creation continue to transform modes of creation and shake up the relationships between artists, works, and the public.

Free-culture and artistic collaboration

The history of the internet is closely linked to free software culture, based on the opening up and sharing of software source code and on the voluntary contributions of researchers and programmers from a variety of backgrounds. Most of the internet's building blocks are based on free or open-source software (the two terms differ in emphasis but both refer to the notion of open code): everybody can access the source code, use it, copy it, modify it, and redistribute it.

For example, in 1993, the World Wide Web software, which had just been invented by Tim Berners-Lee and Robert Cailliau, was put in the public domain, then under free license, which facilitated its rapid and massive distribution.

For artists who use computer code as an instrument, open source represents both a means of breaking free from proprietary software, which is often costly and inflexible, and a more open and collaborative mode of creation.

As a VICE Magazine journalist explains, the open-source principles of openness, sharing, and collaboration contrast sharply with the traditional art world (in which artists are reluctant to share their working methods) and with the image of the lone genius.

Artists who work with open-source software make up a community of users who help each other to solve technical and artistic problems. They meet on discussion forums and on GitHub (a software hosting and development service that offers social network functionalities) to share their work and discuss their creative processes as well as the technical devices they use. Some of them also contribute actively to improving shared creative tools, notably Processing and openFrameworks, two development environments created for the visual arts by coder-artists.

The works produced are thus the fruit of multiple contributions, the result of a collective effort. Some of them are released under a free license (such as Creative Commons or the Free Art License). For example, the 2D animated movie “ZeMarmot”, produced entirely with free software (GIMP, Blender, Ardour, etc.), is to be distributed under a free license. Its scriptwriter, Jehan, has been one of the main contributors to the graphics software GIMP since 2012.

Mash-up: artistic appropriation in the age of the internet

Mash-up is a composite art that consists of reusing existing sounds, images, videos, or texts to obtain a new creation. Generally, the process adds very few original elements to the final work, just enough to combine the various components harmoniously. A well-known example of a mash-up is “The Grey Album” by musician Danger Mouse, which takes “a cappella” parts of “The Black Album” by rapper Jay-Z and lays them over samples (preexisting sound extracts) of the Beatles’ “White Album”. Whether they are aiming for parody, homage, or the repetition of a pattern, many “mash-uppers” manage to transcend the original material to give it new meaning and find their own creative expression.

Although mash-up originated well before the internet boom, in artistic appropriation and music sampling in particular, it truly developed as a form of artistic expression with the evolution of internet technologies. File-hosting and peer-to-peer exchange platforms provide internet users with a wealth of source material, and YouTube has become an inexhaustible source of extracts of music and films from every era and every country. The spread of new tools (production and mixing software) has, in turn, made it easier to work with this raw material.

But mash-up is not just part of a technological context; it is also part of a cultural context specific to the internet, which favors open – and often free – access to information and, by extension, to the cultural content available on the network.

Thus, musical mash-up has spread due to the combined action of the invention of the MP3 format, which enabled a major reduction in the size of audio data, thus facilitating its transfer and storage; of the fundamental idea of free and open file exchange online; and of practices that started to develop at the end of the 1990s on peer-to-peer data exchange platforms such as Napster.

Web 2.0 and participatory art

In 2013, contemporary artists Ai Weiwei and Olafur Eliasson launched their shared project Moon. Moon is a virtual collaborative space, presented as a moon divided into thousands of blank plots on which everyone was invited to leave their mark. This project, which brought together over 80,000 contributions between 2013 and 2017, and transcended borders and cultural differences, is a fine example of a participatory artistic experiment.

Just like computing and the internet, the advent of Web 2.0 and social networks, both as technologies and as a culture, has had a deep impact on artistic creation. Web 2.0 is defined by its participatory dimension – hence the name “participatory web” – and provides artists with new practical means of involving the public in the life of their works. It enables users to interact and produce content far more easily than before.

Interactive art, which is participatory by definition, already exploited new technologies to enable the public to explore and influence works. Sensors, interfaces, and algorithms play the role of intermediary between the public and the work, enabling the public to act and the work to react, all in real time.

Web 2.0 enables artists to go even further and develop higher degrees of interactivity with what French researcher Jean-Paul Fourmentraux calls “dispositifs à contribution” (contribution devices).

These devices enable internet users to take action on a virtual or physical installation by transforming it or by providing new data (in the case of Moon, these were drawings and text). They take part, sometimes according to predefined rules, sometimes not, in real time or non-real time, in the emergence of a collective piece of work of which they become co-authors.

Thanks to the web, the public can also contribute to a work of art’s life through participatory funding, or “crowdfunding”, which facilitates the linking up of project leaders – artists, institutions, etc. – with a new kind of sponsor through online platforms (general platforms like KissKissBankBank and Ulule, or specialist ones such as Proarti).

Over the last decade or so, many artistic and cultural projects have seen the light of day thanks to this new means of funding. Although film and music were the first areas to benefit, all fields of creation are now involved, from the visual arts to live entertainment, including literature.

When art inspires technological research

Technological innovations have had an influence upon creativity and artistic expression. But this is not a one-way relationship. Researchers and engineers are also turning to art to attempt to solve scientific and technological problems.

What does 19th century Russian literature have to do with probability theory? In 1913, Russian mathematician Andreï Markov, who devoted most of his work to the study of phenomena characterized by chance, decided to analyze the succession of Cyrillic letters in Alexander Pushkin’s novel in verse “Eugene Onegin”. He noticed that the appearance of a letter depends on the letter that precedes it, with a certain probability distribution.

Out of this observation arose Markov chains: random processes describing sequences of events in which the probability of each future event depends only on the present state. At the root of many algorithms, used for example in meteorology, communication network modeling, and traffic forecasting, Markov chains are just one example illustrating that the relationship between art and technology is not one-way.
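To make this memoryless property concrete, here is a minimal sketch in Python. The corpus is a toy stand-in for Pushkin's text and the function names are purely illustrative: it estimates letter-to-letter transition probabilities, then samples a new sequence from them.

```python
import random
from collections import Counter, defaultdict

def build_transition_table(text):
    """Estimate P(next letter | current letter) from a corpus."""
    counts = defaultdict(Counter)
    for current, nxt in zip(text, text[1:]):
        counts[current][nxt] += 1
    # Normalize the counts into probabilities for each current letter.
    return {
        current: {nxt: n / sum(followers.values()) for nxt, n in followers.items()}
        for current, followers in counts.items()
    }

def generate(table, start, length=40):
    """Sample a letter sequence by walking the Markov chain."""
    sequence = [start]
    for _ in range(length - 1):
        followers = table.get(sequence[-1])
        if not followers:
            break
        letters, probabilities = zip(*followers.items())
        sequence.append(random.choices(letters, weights=probabilities)[0])
    return "".join(sequence)

# Toy corpus standing in for "Eugene Onegin".
corpus = "the appearance of a letter depends on the letter that precedes it"
table = build_transition_table(corpus)
print(generate(table, start="t"))
```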

Cyberpunk and virtual reality research

Many authors have investigated the links between science fiction (sci-fi) and technology, and more specifically the influence of the former on the latter. In a reference article on the subject, English science writer Jon Turney explains that technological research always implies storytelling. “Every technology begins in the imagination, and needs a description of what it will achieve […]. Every patent tells a story. Make this device, or follow this process, and certain things will be possible – things not seen before.”

Anticipation fiction can provide this storytelling, as in the case of the novel “The World Set Free”. Written by H.G. Wells in 1913, it was sent by physicist Leó Szilárd, who conceived the nuclear chain reaction and contributed to the Manhattan Project, to potential investors to help them visualize his idea. As for Jeremy Bailenson, a psychology researcher at Stanford University, he believes that many questions raised in 1980s cyberpunk literature have become virtual reality research themes.

Cyberpunk is a genre of sci-fi that depicts a technologically advanced society, where technology has pervaded all areas of life and colonized the human body. According to Bailenson, the world of virtual reality researchers has always been intimately interwoven with that of cyberpunk authors.

There are two reasons for this: firstly, the two groups collaborate, as shown by the relationship between Jaron Lanier, one of virtual reality’s pioneers, and William Gibson, author of “Neuromancer”, an iconic novel of the genre. Secondly, cyberpunk works are treated as serious sources in classes on virtual reality and digital human interaction. As a result, these works of sci-fi shape the way in which virtual reality researchers approach certain concepts, such as the avatar, presence, or social interactions within virtual worlds.

This theory may not command full consensus, but the line between sci-fi and foresight (the discipline that aims to anticipate the future evolution of societies) is clearly a fine one at times. This is evident in the Red Team initiative, a group of sci-fi authors set up by the French Ministry of Armed Forces whose mission is to imagine the conflicts of tomorrow.

Jazz improvisation and man-machine communication

The MUSICA (MUSical Interactive Collaborative Agent) project aims to develop an artificial intelligence (AI) that can play jazz and improvise alongside human musicians. Using machine learning techniques, the system is trained on a database containing thousands of transcriptions of musical performances by big names in jazz. This enables it to analyze the transcriptions to identify musical patterns, then use this knowledge to compose and play music live.

Contrary to what one might think, MUSICA is not financed by a major label or a computer-assisted music startup, but by DARPA, the US Defense Advanced Research Projects Agency, as part of a wider program called “Communicating with Computers”.

The purpose of MUSICA is not to develop a jazz-playing robot, but to make progress in the area of natural language processing. This work suggests that computer modeling of improvised jazz music can help to improve robots’ ability to communicate and collaborate with humans.

“The clear implication is that jazz improvisation is so paradigmatically representative of more general modes of human interaction that its technological replication would have some kind of military value going beyond its intellectual or aesthetic meaning”, writes Brian A. Miller, a music researcher at Yale University.

Improvisation and conversation have many similarities. Both rely on the understanding of a multitude of signs (linguistic signs, gestures, facial expressions, etc.), on a mixture of creativity (adapting our actions to the reactions of our partners) and rules, and on experience. Virtually all conversations include a part of improvisation, and jazz improvisation is always the result of a conversation among musicians, involving both verbal and non-verbal communication.

The emerging field of robotic choreography

As mobile robots begin to appear in our everyday lives, humans and robots will increasingly share the same space. It is therefore essential that the latter be capable of navigating human environments in a safe, efficient, and socially acceptable manner. Indeed, delivery robots and other service robots will need not only to follow an itinerary and avoid collisions with obstacles and people, but also to respect a certain number of social conventions: not invading personal space, not interrupting a group conversation, moving around without bothering pedestrians, etc. This is known as “socially-aware navigation”.
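As a minimal sketch of the idea (not any particular research system), socially-aware navigation can be pictured as ordinary path planning with an extra cost for intruding on personal space. In the illustrative Python example below, the grid, positions, and weights are all made up: cells near a person cost more, so the cheapest path politely keeps its distance.

```python
import heapq

def plan_path(grid, start, goal, people, personal_space=2.0, social_weight=4.0):
    """Uniform-cost search on a grid where cells near people cost extra,
    so the planned route keeps a polite distance whenever possible."""
    rows, cols = len(grid), len(grid[0])

    def social_cost(cell):
        # Penalize cells that intrude on anyone's personal space.
        r, c = cell
        penalty = 0.0
        for pr, pc in people:
            distance = ((r - pr) ** 2 + (c - pc) ** 2) ** 0.5
            if distance < personal_space:
                penalty += social_weight * (personal_space - distance)
        return penalty

    frontier = [(0.0, start, [start])]
    visited = set()
    while frontier:
        cost, cell, path = heapq.heappop(frontier)
        if cell == goal:
            return path
        if cell in visited:
            continue
        visited.add(cell)
        r, c = cell
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == 0:
                step = 1.0 + social_cost((nr, nc))
                heapq.heappush(frontier, (cost + step, (nr, nc), path + [(nr, nc)]))
    return None

# 0 = free cell, 1 = obstacle; one person stands at (2, 3).
grid = [[0] * 6 for _ in range(5)]
grid[2][2] = 1
print(plan_path(grid, start=(0, 0), goal=(4, 5), people=[(2, 3)]))
```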

Many robotics specialists believe that choreography, the art of writing dance, can help to program robots with “social awareness”. This idea has given rise to the field of robotic choreography, or “choreorobotics”. At first glance, the two worlds seem rather far apart. Choreography consists of inventing motions and gestures with an artistic intention, usually seeking to maximize aesthetic value. In robotics, motion planning aims to calculate trajectories that enable an autonomous system to go from one place to another in an optimal manner while avoiding obstacles.

However, both fields look at the body’s movements in a given environment. “Despite the historical divide [between dance and robotics], it is perhaps not too great a stretch to consider roboticists as choreographers of a specialized sort, and to think that the integration of choreography and robotics could benefit both fields”, a “Wired” journalist writes.

In fact, collaboration between roboticists and choreographers has existed for a long time. Some researchers sit at the crossroads of both fields, like Catie Cuan, a professional dancer and choreographer and a Ph.D. candidate in mechanical engineering at Stanford University.

In an article written for “Scientific American” magazine, she explains that understanding and predicting human motion is a difficult problem in social navigation. Yet the role of choreographers is precisely to give directions to dancers, who apply them according to factors linked to space and to the other people around them. “Choreographers not only sequence motions together but place different agents’ motions in a relative context […].” In order to achieve this, they use a certain number of techniques that could inspire robotics programming software in complex environments.

Art and science are often pitted against each other. Yet art also seeks to describe the world, and science has its own creative and imaginative side. Art in all its forms injects new ideas, which inspire researchers and engineers. Film, literature, music, and other art forms show us new possibilities, both from the point of view of the technologies themselves and from that of their uses and impacts on society; they often harbor solutions to very real engineering problems.

When Science Fiction Predicts the Technology of Tomorrow

Science fiction literature and cinema have long been the stuff of dreams. The genre was pioneered by authors such as Jules Verne, who were already imagining tomorrow’s technologies and uses, inspiring scientists to make the leap from fantasy to reality. Here is an overview of technologies first imagined in the arts.

Holograms — Jules Verne

In his novel “The Carpathian Castle”, published in 1892, Jules Verne writes about an opera singer who continues to perform after her death using a projection. Many years later, in 1948, a Hungarian engineer named Dennis Gabor actually invented the hologram. Once the laser was invented in 1961, this method could be perfected and was presented by University of Michigan researchers Emmett Leith and Juris Upatnieks in 1964.

Connected Watches — Dick Tracy

In his 1990 film, director Warren Beatty played Dick Tracy, the heroic policeman from a comic strip that has run since 1931. However, long before he made his debut on screen, detective Dick Tracy wore a wrist radio that sent and received messages. This was the first time that readers had encountered a connected watch; such devices would not reach the market until much later, in the 1990s, with the Seiko Receiver, the first watch that could receive messages like a pager.

Intelligent Voice Assistants — 2001: A Space Odyssey

Stanley Kubrick’s 1968 iconic science fiction film “2001: A Space Odyssey” was particularly visionary. In addition to the Discovery spacecraft having a smart onboard computer named HAL, which foreshadowed future smart assistants (such as Siri), the feature film also has video-calling scenes just like the video calls we now make using software such as Skype.

Cell Phones — Star Trek

In the early 1970s, the Star Trek series inspired the genius creation of the cell phone. The sci-fi series sees Spock and Captain Kirk using a strange flip device — their pocket-sized communicator whose design is reminiscent of the first flip phones.

Augmented Reality — Back to the Future 2

In Robert Zemeckis’ 1989 film, a giant shark almost swallows Marty McFly at the beginning of the film. Not a real shark, but an augmented reality image superimposed at the movie theater entrance promoting the film “Jaws 19.” Of all the future technologies imagined in this film, augmented reality is the one that has become the most prevalent in our daily lives in the past few years.

Self-Driving Cars — Total Recall

Like many science fiction films of the time, Paul Verhoeven’s 1990 film features flying cars. However, Total Recall also showcases another futuristic vehicle, the self-driving car, something that technologies such as 5G are helping to make a reality today.

Touch Screens — Minority Report

Adapted from a 1956 Philip K. Dick story, “Minority Report” is a Steven Spielberg film that was released in 2002. In this futuristic film, Tom Cruise uses gesture-controlled interfaces. With the movement of a finger or a hand, the display pauses, zooms in, and so on, a little like the multi-touch screens introduced on Apple’s iPhone in 2007.

Metaverse — Ready Player One

Released in 2018, Steven Spielberg’s film follows the adventures of a young man in a virtual world that runs in parallel to the real one. Using a virtual reality headset and haptic sensors, the protagonist Wade Watts travels through the OASIS and lives a full life there as a way to escape the dystopian world imagined for 2045. This bears a strong resemblance to the metaverse promoted since 2021 by Meta, and Mark Zuckerberg in particular.

Artist-algorithms are shaking up our conception of creativity

Algorithms are now capable of creating original works of art by taking inspiration from thousands of images and inventing new artistic styles. Here follows an overview of these “artist-algorithms” that are shaking up the art world and questioning our conception of creativity.

One of the first examples of algorithmic art – art generated by algorithms – dates back to 1973, when English painter Harold Cohen wrote a computer program called AARON that could produce original drawings. American artist Jean-Pierre Hébert drew the outlines of this artistic movement twenty years later and invented the term “algorist”. An artist is an algorist when they create a work of art from an algorithm that they have designed themselves. The act of creation is in the writing of the code, which becomes an integral part of the final work.

Advances in artificial intelligence (AI) are challenging this definition and bringing about a new generation of models. Thanks to machine learning, algorithms no longer simply follow a set of rules pre-defined by the programmer-artist. Fed with large amounts of data, they assimilate the aesthetic characteristics of artistic corpora and become ever more autonomous in the production of content. Since the 2010s, many families of algorithms have been used to explore new practices and keep pushing back the boundaries of “artificial creativity”.

Composers have taken hold of Markov chains

Musicians have been forerunners where algorithmic art is concerned. Very early on, they looked into using computer programs to compose music. As early as 1957, two Americans, composer Lejaren Hiller and mathematician Leonard Isaacson, programmed the ILLIAC I computer to generate musical suites using Markov chains.

Today, Markovian models are trained via machine learning, using existing pieces of music. They analyze the musical characteristics (rhythm, tempo, melody, etc.) and the sequences of notes to determine the probability of one note following another. They can then generate new pieces of music in the same style as the original corpus. Markov chains are used in jazz improvisation for example.
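Once such transition probabilities have been learned, generating a new melody amounts to repeatedly sampling the next note given the current one. The toy Python sketch below illustrates this; the transition table is hand-written and purely illustrative, standing in for what a trained Markovian model would estimate from a real corpus.

```python
import random

# Illustrative transition table: probability of the next note given the
# current one, as a Markovian model might estimate from a corpus.
note_transitions = {
    "C4": {"E4": 0.5, "G4": 0.3, "C4": 0.2},
    "E4": {"G4": 0.6, "C4": 0.4},
    "G4": {"C5": 0.5, "E4": 0.3, "C4": 0.2},
    "C5": {"G4": 0.7, "E4": 0.3},
}

def generate_melody(start="C4", length=16):
    """Walk the chain: each next note depends only on the current note."""
    melody = [start]
    for _ in range(length - 1):
        options = note_transitions[melody[-1]]
        notes, probabilities = zip(*options.items())
        melody.append(random.choices(notes, weights=probabilities)[0])
    return melody

print(" ".join(generate_melody()))
```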

GANs become the figurehead of creative algorithms

With the highly publicized auction of the “Portrait of Edmond de Belamy” at Christie’s in 2018, generative models became the figurehead of algorithmic art. Indeed, this piece by the art collective Obvious was generated by a “Generative Adversarial Network” (GAN).

Introduced in 2014 by American machine learning researcher Ian J. Goodfellow, GANs are a class of unsupervised machine learning algorithms in which two artificial neural networks do battle: the generator and the discriminator. The system is fed with a database made up of works of art, for example thousands of images of early twentieth-century cubist paintings. The generator must produce new paintings by imitating cubism. The discriminator, for its part, must try to spot the difference between genuine works and those generated by its opponent. Depending on the result, the generator presents ever more convincing new images until the discriminator can no longer distinguish between genuine and fake.
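The following Python sketch (using PyTorch) shows this adversarial loop in its simplest form. Everything here is illustrative: the networks are tiny, and random tensors stand in for a real dataset of paintings.

```python
import torch
from torch import nn

# Toy GAN: the generator maps random noise to small grayscale "paintings",
# while the discriminator tries to tell them apart from real images.
latent_dim, img_dim = 64, 28 * 28

generator = nn.Sequential(
    nn.Linear(latent_dim, 256), nn.ReLU(),
    nn.Linear(256, img_dim), nn.Tanh(),
)
discriminator = nn.Sequential(
    nn.Linear(img_dim, 256), nn.LeakyReLU(0.2),
    nn.Linear(256, 1), nn.Sigmoid(),
)

opt_g = torch.optim.Adam(generator.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(discriminator.parameters(), lr=2e-4)
bce = nn.BCELoss()

def training_step(real_batch):
    batch_size = real_batch.size(0)
    real_labels = torch.ones(batch_size, 1)
    fake_labels = torch.zeros(batch_size, 1)

    # 1) Train the discriminator to separate real images from generated ones.
    fakes = generator(torch.randn(batch_size, latent_dim)).detach()
    loss_d = bce(discriminator(real_batch), real_labels) + bce(discriminator(fakes), fake_labels)
    opt_d.zero_grad()
    loss_d.backward()
    opt_d.step()

    # 2) Train the generator to produce images the discriminator labels "real".
    loss_g = bce(discriminator(generator(torch.randn(batch_size, latent_dim))), real_labels)
    opt_g.zero_grad()
    loss_g.backward()
    opt_g.step()
    return loss_d.item(), loss_g.item()

# Stand-in for a real dataloader: random tensors shaped like flattened images.
for _ in range(3):
    print(training_step(torch.randn(32, img_dim)))
```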

The artist plays a more or less active role in this process. If they do not build the generative algorithm themselves (the members of Obvious, for instance, borrowed their code from another programmer-artist), they select one, adapt it to obtain the desired result, and run it. The artist gathers input data (with the help of a “scraping” tool, a technique for automatically extracting data from websites), selects it (pre-curation), then sorts the content generated by the machine (post-curation). For “Fall of the House of Usher II” (2017), English artist Anna Ridler chose to create her own dataset by producing over 200 drawings.

Hence, artist and machine work together to cocreate a piece of art.

CANs invent new artistic styles

In 2017, researchers from the Art and Artificial Intelligence Laboratory at Rutgers University in the United States proposed a new method for generating original art, inventing creative GANs: Creative Adversarial Networks (CANs).

Starting from the assumption that GANs are limited in their creativity by the way they are designed (their aim being to imitate existing works of art from a specific style as closely as possible), they changed the process to make them capable of generating creative art by maximizing the system’s deviation from established styles.

CANs pursue three goals. They must generate works that are new (1), but not excessively so, i.e. without straying too far from the input data, so as to avoid provoking dislike (2). The generated work must also increase stylistic ambiguity, meaning it is difficult to classify in a particular style (3). Just like GANs, CANs are made up of two adversarial networks. The discriminator uses a large set of labelled works of art to learn the difference between styles (Renaissance, baroque, impressionism, expressionism, etc.). The generator produces the work from a random input. However, unlike in a GAN, it receives two signals. The first tells it whether the discriminator thinks the work presented is art or not, and the second whether the discriminator has been able to classify this work into an established style.

These two signals act as opposing forces: the first pushes the generator to emulate art, whereas the second penalizes it if the discriminator manages to classify its style. This pushes the generator to explore creative space and to create works that, according to the Rutgers University researchers, not only misled humans but also ranked higher than the original works.
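One way to write the generator's side of these two signals, sketched below in Python with PyTorch, is to combine a standard "is this art?" loss with a term that pushes the discriminator's style prediction towards a uniform distribution (maximum ambiguity). The shapes and number of styles are illustrative, and this is only a rough reading of the CAN idea, not the authors' exact formulation.

```python
import torch
from torch import nn

n_styles = 10  # e.g. Renaissance, baroque, impressionism, ... (illustrative)

def can_generator_loss(d_real_fake, d_style_logits):
    """Two signals sent back to the generator, following the CAN idea:
    (1) look like art, (2) resist classification into any one style."""
    bce = nn.BCELoss()
    # Signal 1: the discriminator's "is this art?" output should say yes (1).
    art_loss = bce(d_real_fake, torch.ones_like(d_real_fake))
    # Signal 2: push the style head's prediction towards a uniform
    # distribution over styles, i.e. maximum stylistic ambiguity.
    uniform = torch.full_like(d_style_logits.softmax(dim=1), 1.0 / n_styles)
    log_probs = d_style_logits.log_softmax(dim=1)
    ambiguity_loss = -(uniform * log_probs).sum(dim=1).mean()
    return art_loss + ambiguity_loss

# Example with a batch of 4 generated images scored by the discriminator.
print(can_generator_loss(torch.sigmoid(torch.randn(4, 1)), torch.randn(4, n_styles)))
```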

Evolutionary algorithms imitate creative thought

Less widely publicized, evolutionary algorithms are also used to generate credible works of art. Inspired by Charles Darwin’s theory of the evolution of species, they are based on the three fundamental principles of natural selection. According to these principles, there are differences between individuals of the same species (principle of variation). Some traits are more advantageous than others and enable the individuals that have them to survive and reproduce better than their counterparts (principle of adaptation). These traits are passed on from one generation to the next (principle of heredity).

The idea behind creative evolutionary algorithms is to reproduce the intellectual approach of the artist, who imagines, tests, and selects new ideas. This means modifying input data randomly and in a variety of ways, selecting the best-adapted variant or variants, and repeating the process until a satisfactory idea emerges. During this iterative process, the artist intervenes to choose the most aesthetic variations of a generation, but it is also possible to automate this step.
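The loop itself is simple, as the following Python sketch shows. Everything here is illustrative: the "fitness" function stands in for the artist's (or an automated) aesthetic judgment, and the toy target is just a sequence of numbers.

```python
import random

def evolve(seed, fitness, mutate, generations=50, population_size=20, survivors=5):
    """Generic evolutionary loop: vary, select, inherit, repeat."""
    population = [seed]
    for _ in range(generations):
        # Variation: survivors spawn randomly mutated offspring.
        offspring = [mutate(random.choice(population)) for _ in range(population_size)]
        # Selection: keep only the variants judged most interesting.
        population = sorted(population + offspring, key=fitness, reverse=True)[:survivors]
    return population[0]

# Toy example: evolve a list of numbers towards a Fibonacci-like target,
# standing in for an aesthetic criterion.
target = [1, 1, 2, 3, 5, 8, 13, 21]
fitness = lambda candidate: -sum(abs(a - b) for a, b in zip(candidate, target))
mutate = lambda candidate: [value + random.choice((-1, 0, 1)) for value in candidate]
print(evolve([0] * len(target), fitness, mutate))
```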

The evolution of creative algorithms may have gone in the direction of increased autonomy in the production of works of art, but has this made them more creative? Are they destined to replace artists, or will they remain tools at the service of augmented creativity? These questions are the subject of ongoing debate. One thing is certain: passing creativity, a notion intrinsically linked to human nature, on to machines is a huge challenge for machine learning!

From theory to practice: digital is democratizing art

Original museum experiences thanks to immersive technologies, art popularization via social media, the emergence of new production modes with digital creation tools… New technologies are contributing to the distribution of art and culture to an ever-wider audience.

In the early 2000s, digital arts made their way into museums, and cultural institutions started to explore the use of new technologies to rethink the showcasing of their collections. The aim: to increase attendance at museums and art galleries – places still too often perceived as being reserved for a handful of insiders – and to extend their reach beyond the hushed space of quiet exhibition rooms out onto the web.

In parallel, the democratization of digital creation tools led to the emergence of new amateur practices and to new modes of artistic production, also helping to make art accessible to a wider audience.

Reinventing cultural mediation: virtual visits and immersive technologies

Digital technology has done away with geographical distance. Museums and exhibitions can now be explored remotely, from the comfort of home. Launched in 2011 in partnership with seventeen cultural organizations, such as MoMA in New York and Tate Britain in London, Google Arts & Culture gives internet users the possibility of browsing museums and world heritage sites, and of viewing tens of thousands of works in high definition thanks to Street View technology.

Today, the Google platform brings together the largest art collection in the world, but many museums have followed its example and are offering their own online exhibitions and 360° visits.

Popularizing art also means reinventing museography and establishing a link between visitors and works of art. To attract a wider, younger audience, several cultural institutions no longer hesitate to embrace these technologies and experiment with new forms of mediation.

It is no longer enough to hang works on a wall for visitors to contemplate passively. Institutions need to offer new experiences that enable visitors to interact with these works, to experience them from the inside, and to engage with them actively.

To dive into the heart of a canvas to explore the smallest details, visit the workshop of a sculptor, wander through places that disappeared long ago, discover a living scene of life in the olden days… These new experiences are possible thanks to the combination of several interactive and immersive technologies such as video mapping, augmented reality (AR), virtual reality (VR), multisensory interfaces, or holograms. The new generations of mobile networks that multiply data speeds and reduce latency, such as 5G, make them accessible from a smartphone or tablet.

For example, in Paris, the Grand Palais offered visitors the chance to explore, in AR and VR, Pompeii’s Garden House both before the eruption of Mount Vesuvius and in the present day, thanks to 3D reconstructions and very high-resolution photographs of the site. The Imperial War Museum in London is planning a holographic exhibition that visitors will be able to discover from home, without expensive equipment, thanks to Desktop AR technology. Developed by the startup Perception, this AR system turns an ordinary computer screen into a volumetric display, making 3D objects appear in front of the screen using nothing more than a webcam and standard anaglyph glasses (which use a different color filter for each eye).

For their part, Orange and production company Amacilio are offering a virtual reality tour of Notre-Dame de Paris cathedral. This extremely immersive experience makes it possible to discover the history of this iconic monument and to wander freely around as if one were there.

Popularizing art: MOOCs and social networks

Major efforts are also being made to spread knowledge of the arts. Digital is used as a medium for sharing artistic popularization content, making art accessible to all.

The Louvre, the Centre Pompidou, the Grand Palais, and a plethora of cultural and academic institutions in France and abroad are producing more and more podcasts and MOOCs (Massive Open Online Courses). This is also the case of the Orange Foundation, which offers cultural MOOCs linked to exhibitions taking place in France.

These customizable courses, which often offer various fun educational activities, are given online on specialized platforms such as “France université numérique” (France digital university) or Coursera.

Social networks have had a deep impact on the democratization of art. On YouTube or Instagram, many creators – art students, experts, or simply enthusiasts – offer original, educational, and engaging content. They share a less academic and more personal outlook on the various artistic fields. They dust off the history of art and decode complex subjects or popular works of art, drawing parallels with current topics that interest their audience.

Well aware of this trend, cultural institutions are increasingly turning to social networks, producing content specifically for these distribution channels and working with influencers. For example, in December 2020, many of them accepted an invitation from TikTok, which organized a special operation in France to enable its (young) community to attend tours and shows live and to go behind the scenes of prestigious French museums and theatres.

Facilitating artistic practice: the example of music and DAWs

In parallel, innovations in digital creation tools have opened artistic training and production up to a broader public.

For example, the growth of computing power, combined with the multiplication of user-friendly, easy-to-use “consumer” software overflowing with ever more functionalities, has helped make music-making more accessible. From composition to mastering (which aims to perfect audio signal quality across all devices) and the distribution of works, by way of musical training and instrument playing, the whole chain of creation is affected.

Like the digital synthesizer before them, “all-in-one” music creation software packages, or DAWs (Digital Audio Workstations), have given rise to a new generation of artists by giving them access to possibilities that were previously confined to specialists. Indeed, software such as GarageBand, Ableton, or Logic Pro makes it possible to record and produce sounds, apply various effects to them, and mix them relatively easily and with limited means.

Built around these DAWs, to which physical and software equipment can be added, the home studio has lifted the barriers to music production (access to traditional recording studios, the need to sign with a label, etc.) and fostered the emergence of self-production.

Over time, the various building blocks of the home studio have become easier to use and less expensive. Today, many amateur musicians or those wishing to start a professional career, equipped with just a computer, compose and arrange their music at home then share it on audio distribution platforms, such as SoundCloud.

Broadly speaking, the arrival of artificial intelligence and machine learning on the creative-tool scene should take the democratization of artistic practices even further. The promise? To help amateurs, with no previous training in art or even knowledge of computer programming, to produce images, sounds, and texts from the analysis of huge data pools. Will these amateur artists be able to rival professional artists? Although AI makes it possible to imitate styles and artistic compositions, even complex ones, the question of originality, sensitivity, and artistic vision remains… qualities that only humans possess and that grow out of experience.

Beyond the hype, NFTs are stimulating the art market

Placed in the spotlight in 2021, NFTs (non-fungible tokens) have become highly popular with art dealers, collectors, and even museums. This blockchain-based technology is transforming the art market and opening up new perspectives for artists.

In March 2021, a collective named “Burnt Banksy” burnt an original Banksy print live on Twitter, then “reincarnated” it as a digital piece of work associated with a non-fungible token (NFT). A sacrilege for some, revolutionary for others, this action propelled the art world into a new universe, that of crypto art.

The term “crypto art” refers to works of art, most often digital, accompanied by NFTs, which can in fact be associated with any kind of digital object (virtual trading cards, video game objects, music files, etc.). An NFT is a type of unique cryptographic token, meaning it is not interchangeable (unlike a cryptocurrency), stored in a blockchain. It points to a digital file representing a work of art and containing a certain amount of information aimed at potential buyers: its title, its creation date, the author’s name, a description, or even its ownership history.

NFTs can be created – a process known as “minting”, which consists of converting a digital file into an NFT, i.e. a digital asset stored in the blockchain – and sold on specialized platforms such as OpenSea, Rarible, or SuperRare. The buyer acquires the property rights to the original work included in this NFT via a smart contract. The artist nevertheless retains their intellectual property and reproduction rights. They can also include a resale right in the contract, enabling them to receive a percentage of each resale of the NFT.
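To give a rough picture of the kind of record such a token typically points to, here is an illustrative metadata sketch in Python. Every field name, value, and URL below is made up; real schemas vary by platform and token standard.

```python
import json

# Illustrative metadata record an NFT might point to; the field names are
# indicative only, and real schemas depend on the platform and token standard.
artwork_token = {
    "title": "Untitled Composition #7",
    "creation_date": "2021-03-14",
    "author": "Example Artist",
    "description": "Generative study, edition 1 of 1.",
    "file_uri": "https://example.org/artworks/untitled-7.png",
    "ownership_history": ["Example Artist", "First Collector"],
    "resale_royalty_percent": 10,  # share returned to the artist on each resale
}

print(json.dumps(artwork_token, indent=2))
```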

The crypto art gold rush

The first artistic NFT was created in 2014 by American digital artist Kevin McCoy. However, it wasn’t until 2017 that the use of NFTs started to become popular, with the launch of CryptoPunks, a series of 10,000 unique characters generated by an algorithm. In March 2021, the sale, by Christie’s auction house for 69.3 million dollars, of the virtual collage “Everydays: the First 5000 Days” by American artist Beeple, marked the beginning of the true crypto art “gold rush” into which many artists and collectors from a wide range of backgrounds have entered.

According to Primavera De Filippi, a research fellow at the CNRS and Harvard University, crypto art is not a new form of art as NFTs are only a tool enabling the sale of digital works saved in the blockchain. “The true revolution of NFTs is not linked to the new artistic practices that they generate, but rather to their repercussion on the art market”, the researcher states.

NFTs are boosting digital art

Thanks to the properties of blockchain, NFTs act as tamper-proof digital certificates of authenticity, written and signed by the artists. These technologies bestow uniqueness upon digital files which are, by definition, infinitely reproducible.

They guarantee that buyers own both an authentic digital work and an original “print” – which was previously impossible. These two parameters, essential to collectors, give financial value to the artwork and enable it to be (re)sold like a traditional piece of art.

Consequently, digital arts – which were still struggling to find their place on the art market, in particular due to the difficulty of monetizing the work – are now attracting the attention of art dealers. Following the highly mediatized sale of Beeple’s collage, many private art galleries and auction houses have started to include crypto art works in their catalogues.

Museums too are looking into NFTs, which could be a potential source of income for them. In late 2021, the British Museum in London, in a partnership with French startup LaCollection, put up for sale over 200 digital reproductions of emblematic prints by Japanese painter Hokusai during an exhibition dedicated to the artist.

Underlying trend or passing fad?

It is too early to know whether NFTs are here to stay and whether cultural institutions are truly convinced of their worth. Still lacking technological maturity, they have flaws that are incompatible with the specific requirements of the art world’s stakeholders.

Over the past few months, several scams have emerged with the sale of stolen or counterfeit works by individuals posing as well-known artists, or who have retrieved and “tokenized” (meaning turned into NFTs) works without the authors’ permission.

What’s more, works of crypto art are stored off-chain, which makes them vulnerable to link rot, the phenomenon by which hyperlinks stop working. Only the NFT is stored in the blockchain; the work of art’s very large digital file sits on an ordinary webpage accessible via a URL.

A chance for artists?

On the face of it, the arrival of NFTs on the art market offers many advantages for artists. By enabling them to monetize their creations more easily, NFTs can provide them with the means to live better from their work.

In a world where art critics, curators, dealers, and collectors play an essential role in artistic recognition as well as in the evaluation of the aesthetic and market value of works, NFTs also appear as disruptive agents. They make it possible to bypass traditional circuits and cut out the middleman, calling into question the traditional codes of the art world. They give artists the chance to free themselves from intermediaries and distribute their work to a wider audience.

In practice, crypto art platforms enable anyone to own virtual works of art. Up until now, they have mainly encouraged new buyers (tech and finance personalities, young celebrities, long-time holders of cryptocurrencies, etc.) to enter the world of art.

However, NFTs are the subject of harsh criticism, in particular from artists, many of whom refuse to use them because of their environmental footprint or due to their belief that NFTs contribute to the financialization of art.

Some also believe that, far from protecting their creations, the crypto art market and the appetites it whets make them more vulnerable to theft and appropriation. Several works published under free license have thus been retrieved and turned into NFTs without permission from their authors. This process, which is legal when the license provides for commercial use, can violate artists’ moral rights.

Finally, since the creation and sale of NFTs involve numerous fees (minting fees, listing fees, withdrawal fees, the percentage of the sale price taken by platforms, etc.), the transaction can turn out to be disadvantageous for artists.

If NFTs are to become a long-term part of the artistic picture, the startups and platforms that are riding the crypto art wave will have to meet several challenges: guarantee the durability of NFT works, improve their environmental footprint, fight against fakes, or imagine models that are truly profitable for artists.

Technology is helping to protect masterpieces

3D laser scanners, AI, Big Data, robotics, 3D printing, etc. Digital technologies are increasing the means available to professionals involved in the preservation of cultural heritage.

Digitization, artificial intelligence, 3D-modelling, and robotics are helping to conserve and restore key works of art over time, to reveal their secrets, and to add value to heritage.

3D-Digitization of art

The digitization of original works of art makes it possible to limit their handling and to archive them, thus guaranteeing their long-term availability. Based on ever more sophisticated techniques, this has become common practice for museums and libraries, many of which partner with specialist companies to conserve their masterpieces in digital format.

In 2018, Tate Modern in London and Arius Technology signed a partnership to digitize and reproduce around ten master paintings. The Canadian company works with the museum’s curators and historians to capture 3D scans thanks to proprietary ultra-high-resolution art capture technology, which records the color and geometry of paintings with an extremely fine level of detail without touching the surface of the painting.

Ten years earlier, the European Commission (EC) had launched a project aiming to give access to the digital objects and collections of Member States. Today, the Europeana platform brings together over 50 million digital documents (books, audiovisual material, photographs, archive documents, etc.) provided by over 3,000 cultural institutions across Europe.

Travelling back in time thanks to AI and Big Data

Time Machine is another project supported by the EC. Its aim is to create a huge distributed information system, fed by the digitization of the collections of European museums and libraries, and which exploits new technologies to explore the history and vast cultural heritage of our continent.

Time Machine involves building a 4D engine, enabling spatiotemporal simulations based on the “megadata of the past”. This technology will be used as a basis for developing “local time machines”, for travelling to different sites at different times (such as the lost ports of the Belgian city of Bruges). It will also be used to create “mirror worlds”, true digital twins of our cities.

AI is to play a key role at every step of this ambitious project, from digitization resource planning to the interpretation of documents, by way of fact-checking and authorship attribution (i.e. attributing a work to a specific artist). “AI and Big Data, when paired with human expertise, opens the possibility of critically reconsidering existing historical interpretations”, states the project’s website. “For instance, last year, an AI-based document reading system, when applied to several hundred thousand art history documents, identified more than a thousand artworks with conflicting attributions.”

Larger than life facsimiles

Digitization constitutes the first step in the making of facsimiles.

Established in 2001, Factum Arte is known for its particularly realistic replicas of Egyptian tombs and famous paintings. The studio mixes traditional craftsmanship with “homemade” cutting-edge technologies, such as the Lucida 3D scanner that it developed for the digitization of paintings and bas-reliefs. This close-range, non-contact system captures high-resolution data on the surface and texture of works of art – without color – by projecting a moving red line onto the surface. The distortions of the line caused by the relief are captured by two video cameras, saved as a black-and-white video, and processed by software integrated into the scanner to produce a 3D model and a digital image of the data. Color data, retrieved through a photographic process, is then added.

Once it has been digitized, the work of art can be reproduced with the help of several techniques, particularly 3D printing.

Lasergrammetry to the rescue of architectural heritage

3D digitization techniques are also useful in architecture, for restoration in particular. They make it possible to better tackle historical monuments as a whole and in detail, as well as to recreate lost elements.

One of these techniques, lasergrammetry, represents a major technological innovation. Evolving rapidly, with ever more capable machines and processing software, it uses 3D laser scanning to measure an object as a whole, recreating its three-dimensional geometry as a point cloud.

Fast and precise, this technique can be combined with the analysis of photographs taken from different viewpoints (photogrammetry), and it makes it possible to acquire large volumes of data, which can then be used to perform architectural surveys of complex works.
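In essence, each laser measurement – a distance plus two angles – is converted into an (x, y, z) point, and millions of such points make up the cloud. Here is a minimal Python sketch of that conversion; the sample readings are made up.

```python
import math

def measurement_to_point(distance, horizontal_angle, vertical_angle):
    """Convert one laser scanner reading (range + two angles, in radians)
    into Cartesian coordinates relative to the scanner."""
    x = distance * math.cos(vertical_angle) * math.cos(horizontal_angle)
    y = distance * math.cos(vertical_angle) * math.sin(horizontal_angle)
    z = distance * math.sin(vertical_angle)
    return (x, y, z)

# A handful of made-up readings; a real survey accumulates millions of these
# into a point cloud describing the monument's geometry.
readings = [(12.4, 0.10, 0.52), (12.6, 0.11, 0.53), (12.5, 0.12, 0.51)]
point_cloud = [measurement_to_point(*r) for r in readings]
print(point_cloud)
```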

In April 2019, the public authorities called upon French company Art Graphique & Patrimoine (AGP), who specialize in architectural surveys, to meet the needs of the teams involved in securing Notre-Dame de Paris Cathedral following its fire. The mission: to make a precise 3D survey of the building post-fire in a day, using lasergrammetry and photogrammetry, so as to make a diagnosis of the damage.

AGP, who had been working on 3D digitization of Notre-Dame for several years, had access to hundreds of millimeter-precision scans of the roof and the spire, elements that were lost in the blaze. As a second step, the surveys carried out in the immediate aftermath were paired with this data to produce the technical documentation necessary for planning the reconstruction work.

3D-modeling for studying heritage

Digitization and 3D laser scanner surveys can be used as a basis for creating virtual facsimiles. A digital twin of Notre-Dame de Paris, a virtual replica of the Lascaux cave… examples abound. The idea is to use software to rebuild an edifice, or part of one, in a browsable or manipulable form, based on surveys, iconographic documents, historical studies, etc.

This process opens up many possibilities for research. It makes it possible to recreate lost parts of a building, to explore it in detail, or to place it back in its context, which helps us better understand the choices made by the architects and builders of the time.

Imaging and AI for revealing the secrets of canvases

Imaging techniques (X-ray, infrared, spectral scanning, etc.) combined with AI make it possible to analyze works in depth and to reveal elements that are undetectable to the naked eye. Indeed, many painters change their original composition by adding or removing elements (pentimento), or even cover up a painting completely. Sometimes, too, all or part of a canvas is reworked by another painter (repainting).

The study of these transformations enables experts and art historians to identify the materials and techniques used by the artist, to improve attributions, or even to discover new masterpieces. This was the case with “The Lonesome Crouching Nude” by Picasso, hidden under another of the artist’s paintings. Revealed by X-rays, it was recreated in 2021 by researchers at University College London thanks to AI and 3D printing. A deep learning algorithm was trained with Picasso paintings from his blue period so that it could learn his style and reproduce it. Once it had been recreated, the painting was printed onto canvas.

A robot for assembling archeological artefacts

Robotics is also involved! The European RePaIR (“Reconstructing the Past: Artificial Intelligence and Robotics”) project aims to develop an AI-boosted robotic system capable of piecing together shattered artefacts, such as amphorae or frescoes. The idea is to build a robot equipped with mechanical arms, which scans fragments, recognizes them, and assembles them, handling them with care thanks to advanced sensors.

The first to benefit from this new method is none other than the archeological site of Pompeii in the south of Italy. Two world-renowned frescos, thousands of pieces of which are currently in storage, are soon to be restored.