
    Semi-automatic Semantic Annotation Tool for Digital Music

    iUBICOM ’11: The 6th International Workshop on Ubiquitous and Collaborative Computing

    In conjunction with the 25th BCS Conference on Human Computer Interaction (HCI 2011)

    Northumbria University, Newcastle, 4 July 2011

    AUTHORS

    Fazilatur Rahman and Jawed Siddiqi

    ABSTRACT

    The World Wide Web has changed the music industry by making a huge amount of music available to both music publishers and consumers, including ordinary listeners or end users. Web 2.0 tagging of music items by artist name, album title, musical style or genre (technically termed syntactic metadata) has given rise to the generation of unstructured, free-form vocabularies. Music search based on this syntactic metadata requires the search query to contain at least one keyword from that vocabulary, and the keyword must be an exact match. The Semantic Web initiative by the W3C proposes machine-processable representation of information but does not stipulate how it can be applied to music items specifically. In this paper we present a novel approach that details a semi-automatic semantic annotation tool enabling music producers to generate music metadata through a mapping between music consumers' free-form tags and the acoustic metadata that is automatically extractable from music audio. The proposed annotation tool supports an ontology-guided annotation process and uses an MPEG-7 Audio-compliant music annotation ontology represented in the dominant Semantic Web standard, OWL 1.0.
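    The core idea of mapping free-form consumer tags onto ontology concepts can be sketched as follows. This is a minimal illustration, not the authors' implementation: the concept labels, the string-similarity heuristic, and the threshold are all assumptions made for the example; the paper's actual mapping uses an MPEG-7 Audio-compliant OWL ontology.

    ```python
    # Hypothetical sketch: propose ontology concepts for free-form tags,
    # leaving a human annotator to confirm them (the "semi-automatic" step).
    # Concept names and the similarity threshold are illustrative only.
    from difflib import SequenceMatcher

    # Simplified stand-ins for concepts an MPEG-7-style ontology might expose.
    ONTOLOGY_CONCEPTS = ["AudioTempo", "AudioKey", "Genre", "Mood"]

    def propose_mapping(tag, concepts=ONTOLOGY_CONCEPTS, threshold=0.4):
        """Return the ontology concept most similar to a free-form tag,
        or None when no concept is similar enough to suggest."""
        best, best_score = None, 0.0
        for concept in concepts:
            score = SequenceMatcher(None, tag.lower(), concept.lower()).ratio()
            if score > best_score:
                best, best_score = concept, score
        return best if best_score >= threshold else None

    if __name__ == "__main__":
        for tag in ["genre", "moody", "fast tempo"]:
            print(tag, "->", propose_mapping(tag))
    ```

    Unlike exact-keyword search over raw tag vocabularies, a proposal step like this tolerates spelling variation; the human confirmation keeps the final annotations consistent with the ontology.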
