• DocumentCode
    3382458
  • Title
    Rendering Models for Immersive Voice Communications within Distributed Virtual Environment
  • Author
    Que, Ying Peng ; Boustead, Paul ; Safaei, Farzad

  • Author_Institution
    Telecommun. & Inf. Technol. Res. Inst., Wollongong Univ., Wollongong, NSW
  • fYear
    2005
  • fDate
    21-24 Nov. 2005
  • Firstpage
    1
  • Lastpage
    6
  • Abstract
    This paper compares three possible rendering models for the provision of immersive voice communications (IVCs) in distributed virtual environments (DVEs) such as multiplayer online games. The common aim of these three rendering models is to create a personalised auditory scene for each listening avatar, consisting of a mix of the surrounding avatars' voices, each positioned according to its location in the virtual world. The first two rendering models are based on amplitude panning localisation and HRTF-based binaural localisation respectively. The computational cost of the latter is deemed too large to meet the identified processing power constraints. A computation reuse scheme was introduced in the third rendering model which, as shown in our simulation results, significantly reduces the computational cost of providing IVC using HRTF-based binaural localisation.
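    The first rendering model described in the abstract relies on amplitude panning, which positions a voice in the listener's scene by weighting its gain across output channels. A minimal sketch of the common constant-power (equal-power) stereo panning law is shown below; this is a standard formulation used for illustration, not necessarily the paper's exact method, and the function name is our own.

    ```python
    import math

    def constant_power_pan(pan: float) -> tuple[float, float]:
        """Constant-power amplitude panning for a stereo pair.

        pan: -1.0 (full left) .. +1.0 (full right).
        Returns (left_gain, right_gain) such that
        left_gain**2 + right_gain**2 == 1, so perceived
        loudness stays constant as the source moves.
        """
        # Map pan position [-1, 1] onto the quarter circle [0, pi/2].
        theta = (pan + 1.0) * math.pi / 4.0
        return math.cos(theta), math.sin(theta)

    # Example: a voice directly ahead of the listener gets equal gains.
    left, right = constant_power_pan(0.0)  # both ~0.7071
    ```

    Each surrounding avatar's voice stream would be scaled by such a gain pair (derived from its bearing relative to the listener) before being summed into the listener's personalised mix; HRTF-based binaural localisation replaces these simple gains with per-source filtering, which is where the higher computational cost arises.
    
    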
  • Keywords
    avatars; rendering (computer graphics); voice communication; HRTF-based binaural localisation; distributed virtual environment; immersive voice communications; rendering models; Avatars; Bandwidth; Computational modeling; Context modeling; Distributed computing; Internet; Layout; Power system modeling; Scalability; Virtual environment;
  • fLanguage
    English
  • Publisher
    ieee
  • Conference_Titel
    TENCON 2005 - 2005 IEEE Region 10
  • Conference_Location
    Melbourne, Qld.
  • Print_ISBN
    0-7803-9311-2
  • Electronic_ISBN
    0-7803-9312-0
  • Type
    conf
  • DOI
    10.1109/TENCON.2005.300940
  • Filename
    4085205