A neural network approach to audio-assisted movie dialogue detection

Kotti, M., Benetos, E., Kotropoulos, C. & Pitas, I. (2007). A neural network approach to audio-assisted movie dialogue detection. Neurocomputing, 71(1-3), pp. 157-166. doi: 10.1016/j.neucom.2007.08.006


Abstract

A novel framework for audio-assisted dialogue detection based on indicator functions and neural networks is investigated. An indicator function specifies whether an actor is present at a particular time instant. The cross-correlation function of a pair of indicator functions and the magnitude of the corresponding cross-power spectral density are fed as input to neural networks for dialogue detection. Several types of artificial neural networks are tested, including multilayer perceptrons, voted perceptrons, radial basis function networks, support vector machines, and particle swarm optimization-based multilayer perceptrons. Experiments are carried out to validate the feasibility of the aforementioned approach using ground-truth indicator functions determined by human observers on six different movies. A total of 41 dialogue instances and another 20 non-dialogue instances are employed. The average detection accuracy achieved is high, ranging between 84.78%±5.499% and 91.43%±4.239%.
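As a rough sketch of the feature extraction the abstract describes, the following assumes binary per-actor indicator functions and estimates the cross-power spectral density from the cross-correlation sequence via the Wiener-Khinchin relation; the indicator sequences, window length, and estimator details here are illustrative assumptions, not the paper's exact implementation.

```python
import numpy as np

# Hypothetical binary indicator functions for two actors over 20 time
# instants: 1 = actor present at that instant, 0 = absent.
# A dialogue would show roughly alternating activity, as sketched here.
a = np.array([1, 1, 0, 0, 1, 1, 0, 0, 1, 1, 0, 0, 1, 1, 0, 0, 1, 1, 0, 0], dtype=float)
b = np.array([0, 0, 1, 1, 0, 0, 1, 1, 0, 0, 1, 1, 0, 0, 1, 1, 0, 0, 1, 1], dtype=float)

# Cross-correlation of the pair of indicator functions over the full lag
# range (mean-removed, so constant presence does not dominate).
xcorr = np.correlate(a - a.mean(), b - b.mean(), mode="full")

# Magnitude of the cross-power spectral density, estimated here as the
# modulus of the DFT of the cross-correlation sequence.
cpsd_mag = np.abs(np.fft.rfft(xcorr))

# The two quantities form the input vector for a neural network classifier
# that decides dialogue vs. non-dialogue.
features = np.concatenate([xcorr, cpsd_mag])
print(features.shape)
```

Strongly alternating indicator functions yield a pronounced negative peak in the cross-correlation at zero lag and a concentration of cross-spectral energy at the alternation frequency, which is the kind of structure the classifier can exploit.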

Item Type: Article
Uncontrolled Keywords: dialogue detection, indicator functions, cross-correlation, cross-power spectral density
Subjects: Q Science > QA Mathematics > QA76 Computer software
Divisions: School of Informatics > Department of Computing
URI: http://openaccess.city.ac.uk/id/eprint/2046
