City Research Online

Items where City Author is "Keramati, Mehdi"

Number of items: 13.

Article

Moran, R., Keramati, M. ORCID: 0000-0002-1120-5867, Dayan, P. and Dolan, R.J. (2019). Retrospective model-based inference guides model-free credit assignment. Nature Communications, 10(1), 750. doi: 10.1038/s41467-019-08662-8

Shahar, N., Hauser, T., Moutoussis, M., Moran, R., Keramati, M. ORCID: 0000-0002-1120-5867 and Dolan, R.J. (2019). Improving the reliability of model-based decision-making estimates in the two-stage decision task with reaction-times and drift-diffusion modeling. PLOS Computational Biology, 15(2), e1006803. doi: 10.1371/journal.pcbi.1006803

Sezener, C. E., Dezfouli, A. and Keramati, M. ORCID: 0000-0002-1120-5867 (2019). Optimizing the depth and the direction of prospective planning using information values. PLOS Computational Biology, 15(3), e1006827. doi: 10.1371/journal.pcbi.1006827

Afsardeir, A. and Keramati, M. (2018). Behavioural signatures of backward planning in animals. European Journal of Neuroscience, 47(5), pp. 479-487. doi: 10.1111/ejn.13851

Hertz, U., Bahrami, B. and Keramati, M. (2018). Stochastic satisficing account of confidence in uncertain value-based decisions. PLOS ONE, 13(4), e0195399. doi: 10.1371/journal.pone.0195399

Lak, A., Nomoto, K., Keramati, M., Sakagami, M. and Kepecs, A. (2017). Midbrain Dopamine Neurons Signal Belief in Choice Accuracy during a Perceptual Decision. Current Biology, 27(6), pp. 821-832. doi: 10.1016/j.cub.2017.02.026

Keramati, M., Durand, A., Girardeau, P., Gutkin, B. and Ahmed, S. H. (2017). Cocaine Addiction as a Homeostatic Reinforcement Learning Disorder. Psychological Review, 124(2), pp. 130-153. doi: 10.1037/rev0000046

Lee, J. J. and Keramati, M. (2017). Flexibility to contingency changes distinguishes habitual and goal-directed strategies in humans. PLOS Computational Biology, 13(9), e1005753. doi: 10.1371/journal.pcbi.1005753

Keramati, M., Smittenaar, P., Dolan, R. J. and Dayan, P. (2016). Adaptive integration of habits into depth-limited planning defines a habitual–goal-directed spectrum. Proceedings of the National Academy of Sciences, 113(45), pp. 12868-12873. doi: 10.1073/pnas.1609094113

Keramati, M. and Gutkin, B. (2014). Homeostatic reinforcement learning for integrating reward collection and physiological stability. eLife, 3, e04811. doi: 10.7554/eLife.04811

Keramati, M. and Gutkin, B. (2013). Imbalanced decision hierarchy in addicts emerging from drug-hijacked dopamine spiraling circuit. PLOS ONE, 8(4), e61489. doi: 10.1371/journal.pone.0061489

Keramati, M., Dezfouli, A. and Piray, P. (2011). Speed/accuracy trade-off between the habitual and the goal-directed processes. PLOS Computational Biology, 7, e1002055. doi: 10.1371/journal.pcbi.1002055

Book Section

Keramati, M. and Gutkin, B. S. (2011). A reinforcement learning theory for homeostatic regulation. In: Shawe-Taylor, J., Zemel, R. S., Bartlett, P. L., Pereira, F. and Weinberger, K. Q. (Eds.), Advances in Neural Information Processing Systems 24: 25th Annual Conference on Neural Information Processing Systems 2011. Advances in Neural Information Processing Systems (24). (pp. 82-90). Neural Information Processing Systems (NIPS). ISBN 9781618395993

This list was generated on Sat Jan 25 04:36:58 2020 UTC.