City Research Online

Items where City Author is "Keramati, Mehdi"

Number of items: 14.

Moran, R. ORCID: 0000-0002-7641-2402, Keramati, M. ORCID: 0000-0002-1120-5867 & Dolan, R. J. (2021). Model-based planners reflect on their model-free propensities. PLOS Computational Biology, 17(1), article number e1008552. doi: 10.1371/journal.pcbi.1008552

Moran, R., Keramati, M. ORCID: 0000-0002-1120-5867, Dayan, P. & Dolan, R. J. (2019). Retrospective model-based inference guides model-free credit assignment. Nature Communications, 10(1), article number 750. doi: 10.1038/s41467-019-08662-8

Shahar, N., Hauser, T., Moutoussis, M., Moran, R., Keramati, M. ORCID: 0000-0002-1120-5867 & Dolan, R. J. (2019). Improving the reliability of model-based decision-making estimates in the two-stage decision task with reaction-times and drift-diffusion modeling. PLOS Computational Biology, 15(2), article number e1006803. doi: 10.1371/journal.pcbi.1006803

Sezener, C. E., Dezfouli, A. & Keramati, M. ORCID: 0000-0002-1120-5867 (2019). Optimizing the depth and the direction of prospective planning using information values. PLOS Computational Biology, 15(3), article number e1006827. doi: 10.1371/journal.pcbi.1006827

Afsardeir, A. & Keramati, M. (2018). Behavioural signatures of backward planning in animals. European Journal of Neuroscience, 47(5), pp. 479-487. doi: 10.1111/ejn.13851

Hertz, U., Bahrami, B. & Keramati, M. (2018). Stochastic satisficing account of confidence in uncertain value-based decisions. PLOS ONE, 13(4), article number e0195399. doi: 10.1371/journal.pone.0195399

Lak, A., Nomoto, K., Keramati, M., Sakagami, M. & Kepecs, A. (2017). Midbrain Dopamine Neurons Signal Belief in Choice Accuracy during a Perceptual Decision. Current Biology, 27(6), pp. 821-832. doi: 10.1016/j.cub.2017.02.026

Keramati, M., Durand, A., Girardeau, P., Gutkin, B. & Ahmed, S. H. (2017). Cocaine Addiction as a Homeostatic Reinforcement Learning Disorder. Psychological Review, 124(2), pp. 130-153. doi: 10.1037/rev0000046

Lee, J. J. & Keramati, M. (2017). Flexibility to contingency changes distinguishes habitual and goal-directed strategies in humans. PLOS Computational Biology, 13(9), article number e1005753. doi: 10.1371/journal.pcbi.1005753

Keramati, M. ORCID: 0000-0002-1120-5867, Smittenaar, P., Dolan, R. J. & Dayan, P. (2016). Adaptive integration of habits into depth-limited planning defines a habitual–goal-directed spectrum. Proceedings of the National Academy of Sciences, 113(45), pp. 12868-12873. doi: 10.1073/pnas.1609094113

Keramati, M. & Gutkin, B. (2014). Homeostatic reinforcement learning for integrating reward collection and physiological stability. eLife, 3, article number e04811. doi: 10.7554/eLife.04811

Keramati, M. & Gutkin, B. (2013). Imbalanced decision hierarchy in addicts emerging from drug-hijacked dopamine spiraling circuit. PLOS ONE, 8(4), article number e0061489. doi: 10.1371/journal.pone.0061489

Keramati, M., Dezfouli, A. & Piray, P. (2011). Speed/accuracy trade-off between the habitual and the goal-directed processes. PLOS Computational Biology, 7(5), article number e1002055. doi: 10.1371/journal.pcbi.1002055

Keramati, M. & Gutkin, B. S. (2011). A reinforcement learning theory for homeostatic regulation. In: Shawe-Taylor, J., Zemel, R. S., Bartlett, P. L., Pereira, F. & Weinberger, K. Q. (Eds.), Advances in Neural Information Processing Systems 24: 25th Annual Conference on Neural Information Processing Systems 2011 (pp. 82-90). Neural Information Processing Systems (NIPS).

This list was generated on Thu Oct 3 02:37:08 2024 UTC.