An entropy model for artificial grammar learning

Pothos, E. M. (2010). An entropy model for artificial grammar learning. Frontiers in Psychology, 1, 1-13. doi: 10.3389/fpsyg.2010.00016


Abstract

A model is proposed to characterize the type of knowledge acquired in artificial grammar learning (AGL). In particular, Shannon entropy is employed to compute the complexity of different test items in an AGL task, relative to the training items. According to this model, the more predictable a test item is from the training items, the more likely it is that this item should be selected as compatible with the training items. The predictions of the entropy model are explored in relation to the results from several previous AGL datasets and compared to other AGL measures. This particular approach in AGL resonates well with similar models in categorization and reasoning, which also postulate that cognitive processing is geared towards the reduction of entropy.
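The abstract does not spell out the computation, but a minimal sketch can illustrate the general idea of scoring test items by their predictability given the training items. The sketch below estimates bigram transition statistics from the training strings and computes the average Shannon surprisal (in bits) of each test string's transitions; lower average surprisal corresponds to a more predictable, and hence more endorsable, item. The bigram approximation, the add-alpha smoothing, and all function names are assumptions made for illustration, not the paper's actual entropy model.

```python
import math
from collections import defaultdict

def bigram_counts(training_strings):
    """Count symbol-to-symbol transitions (with start/end markers) in the training set."""
    counts = defaultdict(lambda: defaultdict(int))
    for s in training_strings:
        symbols = ["^"] + list(s) + ["$"]
        for a, b in zip(symbols, symbols[1:]):
            counts[a][b] += 1
    return counts

def average_surprisal(test_string, counts, alpha=1.0):
    """Mean Shannon surprisal, -log2 p, of each transition in the test string,
    under add-alpha smoothed bigram probabilities estimated from the training counts."""
    alphabet = {b for followers in counts.values() for b in followers} | {"$"}
    symbols = ["^"] + list(test_string) + ["$"]
    total, n = 0.0, 0
    for a, b in zip(symbols, symbols[1:]):
        followers = counts.get(a, {})
        denom = sum(followers.values()) + alpha * len(alphabet)
        p = (followers.get(b, 0) + alpha) / denom
        total += -math.log2(p)
        n += 1
    return total / n

# Hypothetical example: items whose transitions resemble the training set
# receive lower average surprisal (higher predictability).
training = ["MTV", "MTTV", "MVT"]
counts = bigram_counts(training)
for item in ["MTV", "VVM"]:
    print(item, round(average_surprisal(item, counts), 3))
```

Under this rough reading, the model's prediction would be that endorsement rates in an AGL test phase decrease as average surprisal (relative entropy cost) of the test item increases.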

Item Type: Article
Additional Information: This document is protected by copyright and was first published by Frontiers. All rights reserved. It is reproduced with permission.
Subjects: B Philosophy. Psychology. Religion > BF Psychology
P Language and Literature > P Philology. Linguistics
Divisions: School of Social Sciences > Department of Psychology
URI: http://openaccess.city.ac.uk/id/eprint/1745
