1. INTRODUCTION
Emilio Ros-Fábregas, IMF-CSIC
As part of the European project COST Action «A new ecosystem of early music studies» (EarlyMuse, CA21161), we organized a so-called Short-Term Scientific Mission (STSM) at the Institución Milá y Fontanals for Research in the Humanities (IMF-CSIC) in Barcelona that brought together four prestigious scholars.1
My experience with the creation and development, since 2013, of the digital platform Books of Hispanic Polyphony IMF-CSIC (BHP: https://hispanicpolyphony.eu/), in open access since 2017, has been very useful for maintaining direct contact with the challenges of digital technology, as well as with the opportunities such a platform offers to introduce students to the world of Digital Humanities and to develop collaborative musicological research through practicums and research stays at our institution.4
The objective of the STSM is to examine the possibility, challenge, interest and impact of integrating, within digital music edition projects, layers of music analysis annotations, as well as music performance annotations and audio. This would make music editions richer and oriented to broader audiences (musicologists, performers, music and musicology students, general public), allowing interactions between the fields of editions, historical musicology, music analysis, and performance practice. This project needs experts in the fields of (1) music publication platforms and interfaces, (2) computational music analysis, and (3) encoding and visualization of music, music analysis and audio. The ultimate goal is to imagine and take the first steps towards a prototype of an interface/framework that integrates the visualization of music editions, music analysis annotations, performance practice annotations and audio…. As a case study, [the discussions could include] a practical assessment of the shortcomings, potential, and possible technological development of any aspect of the digital project BHP (Books of Hispanic Polyphony) in combination with the printed collection Monumentos de la Música Española at the host institution.5
Our proposal was in line with the general objectives of the COST Action, as well as with the specific «Grant Agreement Period Goals», in particular GAPG 4 (related to Working Group WG3—Publications): «Imagining innovative publication models to promote the dissemination of research in all its diversity (historical, performance, digital, etc.)»; see below the four contributions. We were also able to fulfill GAPG 3, related to Working Group WG2—Sources: «Describe ways of identifying and describing endangered music sources (including in Ukraine)», since one of the grantees is from Ukraine with direct knowledge of the problem; see below Mazurenko’s contribution. It was also an unexpected discovery for the organizers of this STSM that two of the four grantees, Lartillot and Mazurenko, had a particular interest in traditional music (of Norway and Ukraine, respectively), which connected very nicely with our project Fondo de Música Tradicional IMF-CSIC (FMT: https://musicatradicional.eu/home), now undertaking a complete renovation of its website.6
The discussions about the relationship between technology and music heritage in a broad sense, encompassing early music and oral tradition, were very fruitful.7
The presentations in the following four sections are based on very advanced specific projects and on the practical experience of their authors, and constitute a rare occasion for the non-expert reader to explore, in plain language, the latest developments and challenges of digital musicology related to the edition, annotation and analysis of music heritage. I thank the authors for their generosity, and leave Mazurenko’s conclusions as the concluding remarks of this article, explicit enough about the sense of urgency concerning music heritage at risk of destruction.
2. CONSIDERATIONS FOR NEW MODELS OF DIGITAL MUSICOLOGY PUBLISHING
Kevin Page, University of Oxford
In reflecting on the state of the art for digital musicology, its publication and dissemination, one might observe that «systems» —by which is meant here assemblies of sources, human expertise, and technologies— are often most successful and sustainable when they are designed and refined towards a clear and specific use. This is particularly the case for digital music editions, where any solution must pay close attention to the nature of the sources, their form and format, the need for analysis, visualization or presentation, and the audience at which any publication —in the broadest sense— is aimed.
Since our music cultural heritage is large, rich, and complex, there are naturally a multitude of uses to consider, including many already recognized and understood, alongside others anticipated as having great potential.8
The following three recommendations, while by no means exhaustive, are offered as an initial framework through which we might begin to assess and structure such considerations.
1. New models of publication should reflect the iterative and incremental nature of music scholarship. As in any academic research, there are multiple stages, which sometimes occur sequentially (but not always), sometimes repeat, and often feed knowledge back into preceding stages. Technologies exist to support each of these stages (sometimes several); however, for the digital support for each stage to be most effective, it should be selected to match requirements specific to that stage. In other words, there must be a recognition that different stages of research have different needs, reflected in use. This is illustrated in Figure 2, which we approximate here to only three stages: sources, analysis, and publishing.
For example, during an exploration of sources a researcher will wish to search, identify, and refine items from music collections to take forward for analysis. The researcher is likely to desire the widest possible breadth of indexed content for discovery, alongside faceting by criteria they consider relevant (perhaps composer, period, or melodic patterns in incipits). Services supporting these needs are exemplified by RISM Online, DIAMM, the Internet Archive Live Music Archive and LUX; see Background section, below.
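To make this concrete, the short sketch below imitates such faceted refinement in Python; the catalogue records, field names, and incipit notation are all invented for illustration and are far simpler than what services like RISM Online or DIAMM actually expose.

```python
# Hypothetical catalogue records; real services (RISM Online, DIAMM)
# expose far richer metadata and dedicated search APIs.
sources = [
    {"id": "src-001", "composer": "Morales", "period": "16th c.", "incipit": "D-E-F-E-D"},
    {"id": "src-002", "composer": "Victoria", "period": "16th c.", "incipit": "A-C-B-A-G"},
    {"id": "src-003", "composer": "Guerrero", "period": "16th c.", "incipit": "D-E-F-G-A"},
]

def facet(records, **criteria):
    """Keep only records whose fields match every requested facet value."""
    return [r for r in records if all(r.get(k) == v for k, v in criteria.items())]

def incipit_matches(records, prefix):
    """Melodic-pattern facet: incipits beginning with a given pitch sequence."""
    return [r for r in records if r["incipit"].startswith(prefix)]

shortlist = incipit_matches(facet(sources, period="16th c."), "D-E-F")
print([r["id"] for r in shortlist])  # items taken forward for analysis
```

The point is not the code itself but the workflow it mirrors: broad indexed discovery first, then progressive narrowing by whatever criteria the researcher considers relevant.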
The data (derived from sources) and tools used in the analysis stage of research, by contrast, must be aligned with the specific scientific investigation being undertaken. Selected examples given below include the Delius Quartet Performance Annotation tool, and CALMA.
For publication (in the broadest sense), the priority is communication and dissemination to a particular audience, entailing a combination and potentially derivation of outputs from previous stages, made consistent and coherent through an editorial process. Selected examples given below include the Delius Catalogue of Works, and the Lohengrin Digital Companion.
2. New models of publication should reflect different needs when publishing «editions» versus the sharing of «data» —and potentially at intermediate points along this spectrum. By «edition» we broadly include all intellectually driven interpretations such as traditional academic formats (monographs, journals, etc.), but also curated websites. In comparison, by «data» we refer to the representations of underlying information upon which scholarship is performed.
For example, an edition designed for public engagement, such as the Lohengrin Digital Companion, has an interface organized around a strong narrative theme. Conversely, the CALMA dataset provides full access to all computational analyses of audio and tools to programmatically explore and analyze that data (but no predetermined graphical discovery interface).
These distinct needs require different technologies as appropriate solutions; where an ecosystem of technologies has been developed, it can support the transition from data to edition(s). For example, the International Image Interoperability Framework (IIIF) provides the underlying access to data publications; this in turn enables the development of common visual explorers such as Mirador; Mirador can be incorporated into general discovery interfaces such as Digital Bodleian, and also within research tools such as the BitH Annotator App. Through Quire, IIIF images are automatically transferred into digital editions.
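To illustrate how IIIF provides that underlying access, the following sketch composes and fetches a IIIF Image API request in Python. The server base URL and image identifier are placeholders, but the parameterized URL template (region/size/rotation/quality.format) is the one defined by the IIIF Image API specification.

```python
import requests  # pip install requests

# Hypothetical IIIF Image API endpoint; services such as Digital Bodleian
# publish real ones. The URL template itself is standard IIIF.
BASE = "https://iiif.example.org/image"
IDENTIFIER = "ms-mus-123_fol-4r"

def iiif_url(identifier, region="full", size="max", rotation=0,
             quality="default", fmt="jpg"):
    """Compose a IIIF Image API 3.0 request URL."""
    return f"{BASE}/{identifier}/{region}/{size}/{rotation}/{quality}.{fmt}"

# Request a detail of the page image: a 1000x800 px region at its top-left.
url = iiif_url(IDENTIFIER, region="0,0,1000,800")
response = requests.get(url, timeout=30)
response.raise_for_status()
with open("detail.jpg", "wb") as f:
    f.write(response.content)
```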
We should also recognize the operational priorities of the individuals or institutions publishing data. A library catalogue will have been primarily designed and structured to support the operational needs of that library or special collection; when it is made available as data, the Application Programming Interface (API)11
For both editions and data, citation is crucial, yet the granularity will be different and should be considered alongside the format in which each is encoded. Other important considerations are those of data protection (ethical sharing of collections; copyright; attribution and credit), urgency (heritage at risk), and sustainability of service provision (minimizing complexity and costs where the service would be unsustainable if higher).
3. New models of publication should enable increased articulation and recognition of all contributory roles in the musical cultural heritage ecosystem. There is expertise and scholarship throughout musical cultural heritage. The skills and judgement of a cataloguer in a music library are as crucial to the production of a future academic publication as the subsequent analysis of sources. In giving visibility to this value, we better incentivize collaboration. Take, as an example, a technologist who develops an algorithm to help segment ethnographic recordings into separate tunes, a necessary and precursory step to symbolic analysis by a musicologist. Granting musicological recognition to the importance of this step aids the justification of this technological research, so supporting its ongoing funding. Similarly for the music librarian. Publication is a means to articulate and recognize this value. New publication models must therefore reflect and encapsulate this value in their representations of data and editions, throughout the iterative and incremental research lifecycle. Technologies can support this within publications. Ubiquitous application of globally unique identifiers to all contributions, not only sources, can enable their tracing and recognition. Common data and access standards (e.g. MEI, IIIF, Linked Data) often already incorporate facilities to this end within their specialized remits; these can be leveraged and connected.
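As a minimal sketch of what such machine-readable credit might look like, the following Python fragment uses rdflib and the W3C PROV-O vocabulary to attribute a hypothetical catalogue record both to a cataloguer, via an ORCID iD, and to a piece of segmentation software, via a DOI; all identifiers below are placeholders.

```python
from rdflib import Graph, URIRef
from rdflib.namespace import DCTERMS, RDF, PROV

g = Graph()

# Placeholder identifiers: a catalogue record, a cataloguer's ORCID iD,
# and a software tool's DOI. None of these resolve to real resources.
record = URIRef("https://example.org/catalogue/record/42")
cataloguer = URIRef("https://orcid.org/0000-0002-1825-0097")
software = URIRef("https://doi.org/10.5281/zenodo.0000000")

g.add((record, RDF.type, PROV.Entity))
g.add((cataloguer, RDF.type, PROV.Agent))
g.add((software, RDF.type, PROV.SoftwareAgent))
g.add((record, PROV.wasAttributedTo, cataloguer))  # human expertise credited
g.add((record, PROV.wasAttributedTo, software))    # tooling credited too
g.add((record, DCTERMS.contributor, cataloguer))

print(g.serialize(format="turtle"))
```

Because every contribution carries a globally unique identifier, such statements can later be aggregated to trace and credit each role, whether human or software.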
In summary, the considerations above are the result of inspiration and elucidation arising from many interdisciplinary discussions with learned colleagues, and in particular from reflecting upon the tools, projects, and scholarship included in the background below, alongside the other rich contributions described in the further sections of this article. Through such exchanges of experience and ideas, we might hope to move towards a more coherent lens through which we can understand existing digital infrastructure for, and around, music cultural heritage, access to it, and scholarship of it. In doing so, we aim to bring shape and direction to considerations for new digital efforts: in music publication platforms and interfaces; in computational music analysis; and in the encoding and visualization of music, music analysis and audio. And, most crucially, in the interactions between these elements which can be realized in the future.
Background context follows, with a focus on the personal research of the author of this section. The tools and projects here are complemented by those introduced in the remaining sections of this article and in the Appendix 1.
The Delius Catalogue of Works (DCW) is the first thematic catalogue of the composer’s works, and the first to be digital. It is a collaboration between the University of Oxford and the British Library, and uses the MerMEId software originally written by the Royal Danish Library (for the Catalogue of Carl Nielsen’s Works). Notably, MerMEId provides a catalogue editing/authoring environment, and a separate publication platform, which create and export information according to the MEI metadata module. This illustrates the additional flexibility of MEI for applications beyond notation.
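By way of illustration only, the fragment below assembles a skeletal MEI header (meiHead) in Python; the element names follow the MEI metadata module, but the content is invented, and a real catalogue record, such as those produced by MerMEId, would be far richer and validated against the MEI schema.

```python
import xml.etree.ElementTree as ET

MEI_NS = "http://www.music-encoding.org/ns/mei"
ET.register_namespace("", MEI_NS)

def q(tag):
    """Qualified tag name in the MEI namespace."""
    return f"{{{MEI_NS}}}{tag}"

# Minimal, illustrative MEI metadata skeleton (meiHead only, no notation).
# Titles and names are invented; this is not schema-validated.
mei = ET.Element(q("mei"))
head = ET.SubElement(mei, q("meiHead"))
file_desc = ET.SubElement(head, q("fileDesc"))
title_stmt = ET.SubElement(file_desc, q("titleStmt"))
ET.SubElement(title_stmt, q("title")).text = "Example work entry"
ET.SubElement(title_stmt, q("composer")).text = "Frederick Delius"
pub_stmt = ET.SubElement(file_desc, q("pubStmt"))
ET.SubElement(pub_stmt, q("publisher")).text = "Example catalogue project"

print(ET.tostring(mei, encoding="unicode"))
```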
MELD (Music Encoding and Linked Data) is a flexible software platform suitable for creating new forms of «publication», developed at the University of Oxford e-Research Centre and since used by several academic research projects to combine digital representations of music (such as, but not limited to, audio and notation) with contextual and interpretive knowledge in the Semantic Web. Technically, it combines use of MEI XML and the Web Annotation data model (in JSON-LD), often with the SVG XML output of Verovio and IIIF compatible image sources. Combining such clearly scoped standards, each refined to a particular purpose, through a consistent and extensible framework enables the assembly of many different «apps» using some (but not necessarily all) of these technologies. The breadth of possible MELD uses can be seen in apps variously supporting: the (manual) annotation of a live string quartet performance,12
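One step of such a pipeline can be sketched concretely: rendering MEI to SVG with the Verovio toolkit, whose output preserves the MEI xml:id values that MELD-style Web Annotations can then target. The input file name below is a placeholder for any MEI-encoded score.

```python
import verovio  # pip install verovio

tk = verovio.toolkit()

# "score.mei" is a placeholder path to an MEI-encoded score.
if not tk.loadFile("score.mei"):
    raise SystemExit("could not parse the MEI input")

svg = tk.renderToSVG(1)  # engrave page 1 of the score as SVG
with open("score.svg", "w", encoding="utf-8") as f:
    f.write(svg)
```

Because the SVG elements carry the identifiers of the underlying MEI elements, annotations anchored to those identifiers can be overlaid directly on the rendered notation.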
Linked Art was introduced as a new standard under development to share collections data describing art through a common model and API, intended as a complementary technology to IIIF, and following the community model which has made the IIIF standards so successful. LUX is a compelling example of a unified discovery system, built using Linked Art, across the collections of Yale University.
CALMA (Computational Analysis of the Live Music Archive)15
3. WORKING TOWARD INTEROPERABILITY: EVALUATING RESEARCH OUTCOMES IN DIGITAL MUSICOLOGY
Mark Saccomano, University of Paderborn
In order to provide an example of how several digital standards, digital tools, and digital sources can be brought together to support a fully digital musicology research project, the EarlyMuse Short-Term Scientific Mission in Barcelona featured a detailed presentation of the Annotator App, a product of a collaborative UK-German-Swiss research project titled Beethoven in the House.16
The Annotator App was developed to access and compare remote digital resources and assist in performing routine musicological research tasks such as browsing collections, assembling work sets, and comparing and annotating sources. The app is a combination of established, standardized methods for encoding and exchanging music data, advanced technology in music encoding and music information retrieval, and specially developed, project-based tools that enable manipulation and annotation of digital materials. Research reports released by the project not only explained the project team’s development efforts, but were also intended as an illustration of the complex layers of structure at work beneath the application's interface —and how those layers contributed to the interoperability and sustainability of the project’s research output.
At the core of the application is a suite of independent tools for creating and working with digitized music sources: MEI (Music Encoding Initiative), Verovio, MELD, Cartographer, and Edirom. MEI is a widely used standard in libraries and archives for music encoding;17
To make sure that users were not limited to selecting and commenting on strictly notational elements in a score (such as notes and measures), a new and flexible model was created, one that allowed musical structures to be targeted at various levels of scale and at various levels of abstraction (such as motives and themes).22
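The flavor of such targeting can be conveyed with a plain W3C Web Annotation, here built as a Python dictionary and serialized to JSON-LD; the URIs and xml:ids are invented, and the project’s actual model is considerably more expressive than this sketch.

```python
import json

# A W3C Web Annotation sketch: one annotation whose target gathers
# several MEI elements, standing for a higher-level structure such as
# a theme. URIs and xml:ids are invented for illustration.
annotation = {
    "@context": "http://www.w3.org/ns/anno.jsonld",
    "type": "Annotation",
    "motivation": "commenting",
    "body": {
        "type": "TextualBody",
        "format": "text/plain",
        "value": "Opening theme, first violin",
    },
    # Each target points at one MEI element via its xml:id fragment;
    # together they delimit the annotated musical structure.
    "target": [
        "https://example.org/quartet.mei#note-0012",
        "https://example.org/quartet.mei#note-0013",
        "https://example.org/quartet.mei#note-0014",
    ],
}

print(json.dumps(annotation, indent=2))
```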
Thus, the Annotator App exemplifies a repeated theme of the Barcelona STSM: prioritizing the use of established standards and formats whenever possible. The benefits of using widely accepted standards are manifold. Chief among these is the possibility for sharing digital resources, including the products of research. Shareability and reusability of data are important considerations in any digital scholarly project and help to maximize efforts to generate new knowledge. Because of the wide adoption of standard formats like MEI and IIIF, there is ample support for their use, making them appealing choices for smaller scale projects with limited resources.
The project’s annotation structure also accommodated the formal inscription of the workflow and software utilized to arrive at a study's claims and conclusions.23
The integration of datasets in this manner was a topic that came up often in STSM discussions. It is easy to imagine a future where music resources can be instantly located and —through a shared vocabulary of metadata terms, instituted across all known digital collections, held by any cultural institution— displayed with links to encoded music files, digital scans of manuscripts and scores, variants in different collections and different editions, plus analyses, recordings, arrangements, secondary literature, and performance history. Linked Data is often considered a means to achieve this goal, and is thus an exciting topic for researchers interested in gaining access to obscure or physically remote sources, or simply in assembling large datasets for analysis.
This vision of a future musicology was a fundamental impetus behind the Beethoven in the House project, and an aspect of the project that might be missed at first glance. Researchers wanted not only to formulate practices that would assist in the preservation of, and access to, data and analysis, but also to contribute to the realization of an international, interoperable framework for the preservation of, and access to, research artifacts at every stage of the research cycle. These are ambitious goals, and when a fledgling initiative seeks guidance on infrastructure development and best practices from project reports, it is wise to tease out such underlying goals before adopting the solutions those reports describe. For smaller, independent endeavors, a careful consideration of the costs and benefits of implementing technologies such as Linked Data is warranted.
An alternative to implementing state of the art tools is to adhere to the principles of Minimal Computing, which recommend matching tools not only to the job at hand, but also to the resources at hand.24
Another important consideration when reviewing project reports is the use of dependent technologies, as these can seriously impede progress in any digital project. For the Beethoven in the House project —with its goal of setting up an all-digital, all Linked Data workflow— some of the dependencies and platforms used were of necessity leading-edge technologies. Reliance on new tools and frameworks, however, can lead to significant challenges for both implementation and reuse of the software, as became evident in the Beethoven in the House project. For storing and sharing annotations, the project's Annotator App relied on a protocol for a distributed storage platform called Solid, and a free community-supported service provider that hosted cloud-based storage containers called Solid Pods.27
Community software, however, is only as strong as the community that supports it, and throughout the life of Beethoven in the House and afterwards, activity in the Solid community at large steadily decreased. As a result, there was not a strong developer network that could be consulted for answers to operational questions, nor did the community seem to make much progress developing and stabilizing the platform or the protocol. This ultimately made Solid Pods less than ideal for the continued use of the project software and its associated workflow, although no one could have foreseen this at the time the project began.
Importantly, however, the pods did not distort the data they handled. The data was still expressed as RDF statements that could be exported as a JSON file, a standard format for data interchange on the web. This is perhaps the most important consideration when it comes to research data management: the use of standard (hence, sustainable) formats. No system should bind users to a particular framework in order to use its data. So, in the end, it did not «cost» the project anything to test out these pods, while their use by the developers could still serve to promote the notion of decentralized storage with customizable access controls.
The successes and difficulties experienced by Beethoven in the House are typical of any research software project, and their lessons are part of the research findings. Fortunately, the existence of professional networks among participants in the project meant that outside expertise could be drawn upon to solve some of the implementation problems. Those same networks also helped to disseminate software and methods that could be useful for other research teams. The existence of such collegial networks, while often taken for granted, is crucial for the success of any scholar or team. Not only can they contribute experience and expertise that are not available in published reports and documentation —making for more efficient progress and more reliable software— they can also help ensure the widest reach and greatest utility of the project’s research and resources.
As is clear from the foregoing, some of these tools and techniques do not easily combine with one another —Minimal Computing and Linked Data, for example. Even though the general concept is easy to grasp, creating records in Linked Data can be exceedingly difficult to implement. It requires an understanding of how a dataset can be expressed as RDF (Resource Description Framework) —the Subject-Predicate-Object model that is used in Linked Data to identify and represent relationships among elements. It also requires sufficient knowledge of semantic web initiatives to create or adapt data models for the kinds of objects being cataloged. Ideally, this should be done with technicians and stakeholders who have an overview of what models are available and which identifiers have wide purchase in the field. And because of phenomena like «data friction» —the inevitable difficulties that impede the smooth transfer of data between formats, systems, and institutions— scholars skilled in digital librarianship are often needed as well.28
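For readers unfamiliar with RDF, the following minimal rdflib sketch shows the Subject-Predicate-Object pattern and the kind of standard-format export discussed above; the resource URIs are placeholders, and in practice one would reuse identifiers that already have wide purchase in the field (e.g. RISM or VIAF URIs).

```python
from rdflib import Graph, URIRef, Literal
from rdflib.namespace import DCTERMS

g = Graph()

# One statement per Subject-Predicate-Object triple. The URIs below
# are placeholders, not real authority identifiers.
work = URIRef("https://example.org/work/fidelio")
g.add((work, DCTERMS.title, Literal("Fidelio")))
g.add((work, DCTERMS.creator, URIRef("https://example.org/person/beethoven")))

# Export as JSON-LD: the same standard, framework-independent format
# into which the project's annotations could be extracted.
print(g.serialize(format="json-ld"))
```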
This informal cooperation, in fact, has been identified as the antidote to data friction.29
4. COMPUTATIONAL SEGMENTATION, MUSIC TRANSCRIPTION AND ANALYSIS OF FOLK MUSIC
Olivier Lartillot, University of Oslo
My approach to computational music analysis encompasses both audio and score-based methods. The approach, aimed towards a «Comprehensive Modelling Framework for Computational Music Transcription and Analysis», seeks to automate music analysis across a wide range of dimensions, using either scores or audio (through automated transcription). One goal is to develop computational tools to preserve and enhance the cultural significance of music repertoires, with a present focus on Norwegian folk music. This is being done through a cooperative project with the National Library of Norway related to their collection of Norwegian folk music. This collection, founded in 1951, includes audio recordings of solo performances on various instruments (such as the normal and Hardanger fiddle, langeleik, guimbard, harmonica, clarinet and various flutes or psalmodicon), as well as vocal performances, in the form of singing and joik. By 1970, it contained 1,327 reel-to-reel tapes corresponding to 750 hours of music.
As part of the collaboration with the National Library in Oslo, the project initially focused on digitizing the tape recordings, concentrating on a specific collection of 343 tapes that are over 50 years old, due to copyright considerations. The challenge was to transform unstructured audio tapes into a systematic dataset of melodies, resulting in 20,000 pieces of music being processed. To achieve this, we developed a specific tool called AudioSegmentor.31
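AudioSegmentor itself is described in the project’s publications; purely to convey the underlying idea, here is a deliberately naive silence-based segmentation sketch using the librosa library, with an invented file name and thresholds. It is not the project’s tool.

```python
import librosa       # pip install librosa
import soundfile as sf  # pip install soundfile

# Crude stand-in for tape segmentation: split on stretches of silence.
# File name, silence threshold, and minimum duration are illustrative.
y, sr = librosa.load("tape_side_a.wav", sr=None, mono=True)
intervals = librosa.effects.split(y, top_db=40)  # non-silent regions

for i, (start, end) in enumerate(intervals):
    if (end - start) / sr < 20:   # discard fragments shorter than 20 s
        continue
    sf.write(f"tune_{i:03d}.wav", y[start:end], sr)
```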
This tape segmentation has enabled the individual tune recordings to be made available online in the form of a digital catalogue, hosted by OSF (the Open Science Framework).33
Besides initial tape transcription, the core activity as part of this collaboration involves automatically transcribing the audio recordings into scores. The focus has been on the music played on the Hardanger fiddle.34
The subsequent task is to detect the underlying beat to allow rhythmical transcription. For music with a regular beat, an AI tool such as Madmom37
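For reference, madmom’s documented beat-tracking pipeline looks like the following; the audio file name is a placeholder.

```python
from madmom.features.beats import RNNBeatProcessor, DBNBeatTrackingProcessor

# Standard madmom pipeline: a recurrent neural network produces a
# beat-activation curve, then a dynamic Bayesian network decodes beat
# times from it. "hardanger_tune.wav" is a placeholder file name.
activations = RNNBeatProcessor()("hardanger_tune.wav")
beats = DBNBeatTrackingProcessor(fps=100)(activations)  # beat times in seconds
print(beats[:10])
```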
A longer-term objective is to design tools to dissect the musical corpus, aiming to extract key features of each tune. This initiative serves as an alternative to incipits and provides novel metadata formats, increasing usability and connectivity within the collection’s content and with other databases. Another ambition is to develop advanced tools for browsing the music collection with interactive visualization and sonification. This would make the music and analyses accessible to the general public. One idea is to visualise the distribution of the entire catalogue based on intrinsic content, guided for instance by stylistic clustering, to help listeners of any expertise appreciate the richness of these catalogues.
In music analysis, the project with the National Library of Norway addresses issues of musical style, focusing on the intertextuality of music. One objective is to detect automatically the correspondences between pieces in the catalog (e.g. quotations, imitations, and other mimetic phenomena), to find patterns related to a particular genre, dance, musician, region or period. For example, in the Norwegian folk music database, the project aims to detect melodies related to the same tune families or to old religious songs. Advanced thematic catalogs could also be created, allowing visualization of the music collection and highlighting links between tunes. The idea is to enable an advanced music browser with hyperlinks that highlight possible links with other tunes and associated categories while visualizing and playing one tune. This would enable retrieval technology tailored to musicological queries, allowing musicologists to automatically find specific pieces of music based on their requests (e.g. a given theme or musical characteristic).
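To give a flavor of such correspondence detection, here is a deliberately simple Python sketch comparing transposition-invariant interval sequences with a generic sequence-matching algorithm; the melodies are invented, and the project’s actual methods are far more sophisticated.

```python
from difflib import SequenceMatcher

def intervals(pitches):
    """Reduce a melody (MIDI pitch numbers) to its interval sequence,
    making the comparison transposition-invariant."""
    return tuple(b - a for a, b in zip(pitches, pitches[1:]))

# Invented toy melodies standing in for catalogued tunes.
tune_a = [62, 64, 65, 67, 65, 64, 62]
tune_b = [67, 69, 70, 72, 70, 69, 67]  # same contour, transposed
tune_c = [60, 60, 67, 67, 69, 69, 67]

def similarity(p1, p2):
    """Rough 0..1 similarity of two melodies' interval sequences."""
    return SequenceMatcher(None, intervals(p1), intervals(p2)).ratio()

print(similarity(tune_a, tune_b))  # 1.0 -> candidate "same family"
print(similarity(tune_a, tune_c))  # much lower
```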
Over the last twenty years of work on computational music analysis, I have been developing a comprehensive framework encompassing a wide range of music dimensions and various music representation formats, such as audio recordings, scores and MIDI files; I have developed a well-known toolbox, MIRtoolbox,38
The Short-Term Scientific Mission (STSM) in Barcelona has fostered deep discussions on the pros and cons of different approaches in computational musicology. One key message I transmitted is that, while designing tools for specific use cases can be useful, general computational music analysis frameworks are also essential, particularly if they provide advanced analytical tools such as those developed in the Comprehensive Modelling Framework. Though specific music cultures, repertoires, or forms of music expression require particular analytical frameworks, my hypothesis is that these diverse musicological visions all rely on common cognitive mechanisms that can be adjusted through specific parameters. It must be acknowledged that this vision remains prospective, related to new developments in AI. The limited musicological usefulness of current AI tools and the skeptical assessment of AI’s potential were discussed, enabling a balanced opinion concerning AI’s ideals and potential in computational musicology. The technological advancement of text-based AI compared to music AI was also addressed, considering the stark contrast in research efforts and datasets between those two domains. This can also be attributed to the particular difficulty of parsing music compared to text.
One initial objective of the STSM was to integrate layers of music analysis annotations into digital music edition projects, particularly Books of Hispanic Polyphony (BHP). One suggestion is to use the MEI music encoding format, leveraging existing music display technologies, and possibly tweaking them for music analysis display purposes. More importantly, this involves imagining the general principles of possible MEI customization for music analysis to foster the development of music analysis display from MEI visualizers.
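As one possible direction, and purely as an illustrative sketch, analytical annotations could piggyback on MEI’s generic annot element, whose @plist attribute points at the annotated notation elements. The file name, xml:ids, and @type value below are invented; a real customization would define its own controlled vocabulary and be validated against the MEI schema.

```python
import xml.etree.ElementTree as ET

MEI_NS = "http://www.music-encoding.org/ns/mei"
ET.register_namespace("", MEI_NS)

# Illustrative only: attach an analytical label to existing notes via
# MEI's <annot> element. "edition.mei" and the xml:ids are placeholders.
tree = ET.parse("edition.mei")
section = tree.find(f".//{{{MEI_NS}}}section")
if section is None:
    raise SystemExit("no <section> found in the MEI input")

annot = ET.Element(f"{{{MEI_NS}}}annot",
                   {"type": "cadence", "plist": "#note-0041 #note-0042"})
annot.text = "Phrygian cadence"
section.insert(0, annot)  # placement would need schema validation

tree.write("edition_annotated.mei", encoding="UTF-8")
```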
The meeting in Barcelona has led to the establishment of ideas for collaboration in computational analysis of folk music, particularly towards: a) segmentation of field recordings of folk music of the Fondo de Música Tradicional IMF-CSIC (FMT) into songs; b) automated music transcription, encoding, and analysis of traditional music repertories (FMT and Ukrainian songs in particular); c) automated analysis of early music (BHP); and d) envisioning of technologies to visualize pieces and corpora, and to enable browsing based on music characteristics. Concerning Ukrainian songs, the automated assessment of stanza pattern types is also under consideration. One concluding idea is to join European folk music collections into a common framework.
5. URGENCY AND CHALLENGES FOR UKRAINIAN FOLK AND EARLY MUSIC COLLECTIONS: THE CORRESPONDENCE OF SYSTEMATIC APPROACHES AND NEW TECHNOLOGIES
Anastasiia Mazurenko, Slovenian Academy of Sciences and Arts, Ljubljana
5.1. Introduction
The digitization, preservation and publication of music collections in Ukraine have been an important research topic in recent decades, one which has gained urgency in the last two and a half years (since winter 2022) owing to the ongoing full-scale phase of the war and the risks to carriers and data in conflict situations.41
5.2. Current status of the collections being digitized
When processing the collections, new technologies for cataloging, preservation and publication must be used that meet the conditions of urgency and other problems due to martial law in the country (e.g. backups, protection of performers’ personal data, especially when it comes to sensitive data —ethnic identity, political views, religion— stored in the fieldwork records, etc.). Minimization and anonymization of data, as well as other methods of data protection, should be implemented in accordance with Ukrainian and international law and research ethics. Another point that increases the relevance of such publications, especially those endowed with national value, is the growing national identity of Ukrainians and the need to manifest it in performative practices, such as the revival of Ukrainian folk music and early sacred music, which requires open access to the publications.
The collections of early music (old prints, manuscripts, tablatures, etc.) are largely digitized with the support of donors —mainly foreign institutions, organizations and various initiatives for short-term projects (usually lasting two to three months), e.g. through the provision of equipment and advice from foreign experts.42
5.3. The systematic approach in ethnomusicology and principles of data analysis and categorization
The basic approach to the analysis of folk music in Ukraine is a systematic musicological approach, which manifests itself in the methods of rhythmic typology and melodic geography. The foundations of these methods were laid by 19th- and early 20th-century European researchers such as Ilmari Krohn, Béla Bartók45
The method of rhythmic typology is based on the idea that folk songs (especially ritual songs of the oral tradition in villages) are built on rhythmic and strophic patterns that do not depend on the structure of the melody, but on the structure of the verse (lyrics) in combination with melodic rhythms. After the definition of strophic patterns, the next step of this method is to track their spread across the territory (mapping, or the melodic geography method), which ultimately (using big data) reveals the boundaries of regional musical dialects and styles. The principle of retrieving rhythmic and strophic types/patterns is called modeling. Based on the statistical data of the retrieved patterns, the information is entered into the collection’s database along with other analytical data and metadata (such as genre, place and time of recording, place of origin, melodic ambitus and structure, verse incipit, type of performance, technical data of the recording, etc.). For Ukrainian ritual songs, the rhythmic and strophic type/pattern is the decisive category on which further research theories and hypotheses are built. The data is stored in spreadsheet format (mostly in Microsoft Excel, which makes it possible to add new categories and filter them at the same time), and this database is then used as a research source. There have been several attempts to switch from the spreadsheet format to a database environment, but as it was not possible to agree on a common structure for such a database, researchers continue to use Microsoft Excel, aware of all the limitations of this tool for such purposes.
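To illustrate how such a spreadsheet database is queried in practice, here is a small pandas sketch; the file name, column names, and type notation are invented stand-ins for the real analytical fields.

```python
import pandas as pd  # pip install pandas openpyxl

# Read the kind of spreadsheet database described above; file and
# column names are hypothetical, not the real analytical categories.
df = pd.read_excel("ritual_songs.xlsx")

# Filter by the decisive category, the rhythmic/strophic type, then
# inspect where those songs were recorded: the raw material of the
# melodic geography method.
pattern = df[df["rhythmic_type"] == "type-A"]
print(pattern[["verse_incipit", "genre", "place_of_recording"]])

# Simple per-place counts, of the sort used to map musical dialects.
print(pattern.groupby("place_of_recording").size().sort_values(ascending=False))
```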
5.4. The collection(s) in Ukrainian laboratories of ethnomusicology: risks, challenges and attempts at preservation and publication
In general, each ethnomusicological scientific unit (institute, department, laboratory) has its own collection of recordings and documents, regardless of whether it has the official status of an archive or not. As a rule, these include recordings made with the support of the institution, but also deposited private collections and the collections (copies) of other researchers, shared at their will. In some cases, there are inherited recordings from former employees and collections that have been moved from the occupied territories or hot spots of the country to safer locations. Initial problems in processing collections include the different origins of the media, the conditions and principles of collecting by different collectors, missing data for individual recordings, copyright issues, the lack of connection between video and audio recordings and documents (photos, field notes), and different file formats and technical parameters, among others. In the case of physical carriers, there is also the aspect of the condition of the carrier and the problem of its digitization. At the same time, given the value and uniqueness of the collections, the urgency of their processing and the risks associated with the war, it is necessary to use the latest technologies and possibilities for further publications.
5.5. Digital solutions for endangered collections
As most technologies have been developed primarily for Western art music collections, folk music collections and early music collections require special approaches. For instance, when using music encoding for early music publications, MEI, MELD and Humdrum technologies should be preferred, as they are better adapted to the needs of research. For folk music, this method may be irrelevant, as the majority of recordings are not supported by staff notation. The question of whether these technologies are suitable for this type of music is still under consideration, but they could be used in cases where research needs require it. Other possible tools for publications could also be discussed: linked data tools such as Verovio and Linked Art; IIIF for image publications; and, for the preservation and publication of catalogs and collections, Arches, the European Commission Cultural Heritage Cloud, MINIM, FRBR, and others.
The requirements for digital tools for endangered collections include the following:
- Technologies should be affordable under conditions of economic, political, and social crisis.
- They should provide urgent solutions, should be accessible to different users and should treat the data with great care.
- No-code tools can be used (e.g. Coda, Airtable, etc.), as they require less training of specialists (compared to coding tools) and allow fast and free publications.
- The approaches of scientific practice/tradition should be taken into account. For instance, Ukrainian ethnomusicology implies a systematic approach to categorization (distinguishing parameters such as genres, area of origin, pitch structure, rhythmic and strophic patterns, types of performance and many others), using the algorithms of rhythmic modeling for statistical work.
- The policies of research and collecting institutions, as public organizations, need to be more open to the public (audience, communities). This can be done through publications and the use of the most appropriate technological solutions. The establishment of research repositories with open data is one of the possible cases here. For example, the Scientific Research Center of the Slovenian Academy of Sciences and Arts (ZRC SAZU) is currently working on building repositories for digital collections (e.g. the publication of Slovenian folk songs on the Internet) and intends to implement TEI technology.48
See «Slovenske ljudske pesme na spletu», accessed October 9, 2024, https://slp.zrc-sazu.si/index.html.
- The experience of long-term projects with prolonged support for the storage of digital collections (support and maintenance of servers and web resources) should be implemented in Ukraine, drawing on the experience of foreign national and international projects in research and collecting institutions. International standards for the digitization, storage and publication of archives should be applied equally to all collections (e.g. IASA standards).49
International Association of Sound and Audiovisual Archives (IASA), Special and Technical Publications, TC 03: The Safeguarding of the Audiovisual Heritage: Ethics, Principles and Preservation Strategy (web edition), 4th ed. (2017), accessed October 9, 2024, https://www.iasa-web.org/tc03/ethics-principles-preservation-strategy.
5.6. Conclusions
The new socio-political and economic realities of countries mean that some collections of cultural heritage artifacts, such as collections of early music editions or collections of folk music recordings, are at risk of destruction on the one hand and arouse the interest of a wider audience on the other, which requires their immediate digitization and online publication. Such tasks require researchers to find and apply the latest technologies that satisfy both the needs of research and those of the community. At the same time, the publication of information also contributes to its uncontrolled dissemination. Therefore, when publishing, it is necessary to build an access system that ensures the security of personal data and the non-dissemination of sensitive information. Measures for backup and for the long-term provision and maintenance of data storage (servers) should also be considered when digitizing and publishing collections.