
Conference theme

Applications on mobile devices in the field of oral communication pose challenges that have become increasingly compelling for linguistic research. Speech-To-Speech Translation (STST) has pushed the boundaries of computational linguistic research into as yet unexplored areas that are deeply rooted in the linguistic and paralinguistic domain. In addition, there is a strongly felt need for new mathematical models capable of facing the challenges posed by spoken dialogue. The possibility of combining Machine Translation (MT) to obtain multilingual dialogues, enriched by the analysis of facial expressions (nowadays almost a reality), will in the near future allow Speech Synthesis to associate more adequate intonation and tone of voice with the linguistic forms generated in the target language. It goes without saying that the same type of multilingual spoken information processing will also be possible in the absence of human interlocutors, with intelligent answering systems and, ultimately, with robots collaborating with humans.
The main theme of this conference is Dialogue and the role played by differences at the multilingual level, including the role of facial expressions in the communication process. This theme covers both the analysis of oral production in spontaneous dialogues and, above all, generation by means of automatic tools for MT. In particular, MT will play a central role insofar as it is oriented toward dialogue, a field that still suffers from the lack of parallel corpora of adequate size; this situation is not easy to remedy, given the nature of the data to be collected and the requirements imposed by statistical modeling. For this reason, more adequate technologies call for combining mathematical models for sparse datasets with annotations that encode semantic and pragmatic information. In all these contexts, formal descriptions of spoken dialogues at the semantic and pragmatic level are needed to accompany data produced by ASR and refined by experimental phonetics. An interesting role in this context can also be played by research carried out in the field of sign language.
