City Research Online

Items where City Author is "Child, C."

Number of items: 21.

Article


Aden, I., Child, C. H. T. ORCID: 0000-0001-5425-2308 & Reyes-Aldasoro, C. C. ORCID: 0000-0002-9466-2018 (2024). International Classification of Diseases Prediction from MIMIC-III Clinical Text Using Pre-Trained ClinicalBERT and NLP Deep Learning Models Achieving State of the Art. Big Data and Cognitive Computing, 8(5), article number 47. doi: 10.3390/bdcc8050047

Osudin, D., Denisova, A. & Child, C. ORCID: 0000-0001-5425-2308 (2024). Non-Euclidean Video Games: Exploring Player Perceptions and Experiences inside Impossible Spaces. IEEE Transactions on Games. doi: 10.1109/tg.2024.3386816

Child, C. H. T. ORCID: 0000-0001-5425-2308, Osudin, D. & He, Y-H. ORCID: 0000-0002-0787-8380 (2019). Rendering Non-Euclidean Space in Real-Time Using Spherical and Hyperbolic Trigonometry. Computational Science – ICCS 2019, 19th International Conference, Proceedings, Part V, 11540, pp. 543-550. doi: 10.1007/978-3-030-22750-0_49

Basaru, R. R., Child, C. H. T., Alonso, E. & Slabaugh, G. G. (2018). Conditional Regressive Random Forest Stereo-based Hand Depth Recovery. 2017 IEEE International Conference on Computer Vision Workshops (ICCVW), pp. 614-622. doi: 10.1109/ICCVW.2017.78

Basaru, R. R., Slabaugh, G. G., Alonso, E. & Child, C. H. T. (2018). Data-driven Recovery of Hand Depth using Conditional Regressive Random Forest on Stereo Images. IET Computer Vision, 12(5), pp. 666-678. doi: 10.1049/iet-cvi.2017.0227

Ollero, J. & Child, C. H. T. (2018). Performance Enhancement of Deep Reinforcement Learning Networks using Feature Extraction. Lecture Notes in Computer Science, 10878, pp. 208-218. doi: 10.1007/978-3-319-92537-0_25

Child, C. H. T. & Dey, R. (2013). QL-BT: Enhancing Behaviour Tree Design and Implementation with Q-Learning. Computational Intelligence in Games (CIG), 2013 IEEE Conference on, pp. 275-282. doi: 10.1109/CIG.2013.6633623

Child, C. H. T. & Stathis, K. (2005). The Apriori Stochastic Dependency Detection (ASDD) algorithm for learning Stochastic logic rules. Lecture Notes in Computer Science: Computational Logic In Multi-Agent Systems, 3259, pp. 234-249. doi: 10.1007/978-3-540-30200-1_13

Child, C. H. T. & Stathis, K. (2005). SMART (Stochastic Model Acquisition with ReinforcemenT) learning agents: A preliminary report. Lecture Notes in Computer Science: Adaptive Agents and Multi-Agent Systems II, 3394, pp. 73-87. doi: 10.1007/978-3-540-32274-0_5

Conference or Workshop Item

Child, C. H. T. ORCID: 0000-0001-5425-2308, Koluman, C. & Weyde, T. ORCID: 0000-0001-8028-9905 (2019). Modelling Emotion Based Reward Valuation with Computational Reinforcement Learning. In: Proceedings of the 41st Annual Conference of the Cognitive Science Society. CogSci 2019, 24-27 Jul 2019, Montreal, Canada.

Basaru, R. R., Child, C. H. T., Alonso, E. & Slabaugh, G. G. (2017). Hand Pose Estimation Using Deep Stereovision and Markov-chain Monte Carlo. Paper presented at the International Conference on Computer Vision Workshop on Observing and Understanding Hands in Action, 23 Oct 2017, Venice, Italy.

Slabaugh, G. G., Child, C. H. T., Alonso, E. & Basaru, R. R. (2016). HandyDepth: Example-based Stereoscopic Hand Depth Estimation using Eigen Leaf Node Features. Paper presented at the International Conference on Systems, Signals and Image Processing, 23-25 May 2016, Bratislava, Slovakia.

Slabaugh, G. G., Basaru, R. R., Child, C. H. T. & Alonso, E. (2015). Quantized Census for Stereoscopic Image Matching. Paper presented at the Second International Conference on 3D Vision (3DV 2014), 08-11 Dec 2014, Tokyo, Japan.

Child, C. H. T. & Trusler, B. P. (2014). Implementing Racing AI using Q-Learning and Steering Behaviours. In: Dickinson, P. & Geril, P. (Eds.), 15th International Conference on Intelligent Games and Simulation. GAMEON 2014 (15th annual European Conference on Simulation and AI in Computer Games), 09-11 Sep 2014, University of Lincoln, Lincoln, UK.

Hadjiminas, N. & Child, C. H. T. (2012). Be The Controller: A Kinect Tool Kit for Video Game Control - Recognition of Human Motion Using Skeletal Relational Angles. Paper presented at the 5th Annual International Conference On Computer Games, Multimedia And Allied Technology (CGAT 2012), 2012, Bali, Indonesia.

Child, C. H. T., Parkar, S., Mohamedally, D., Haddad, M. & Doroana, R. (2010). Development of a Virtual Laparoscopic Trainer using Accelerometer Augmented Tools to Assess Performance in Surgical Training. In: 19th International Pediatric Endosurgery Group (IPEG) Congress, 8 - 12 Jun 2010, Hawaii, US.

Child, C. H. T., Stathis, K. & Garcez, A. (2007). Learning to Act with RVRL Agents. Paper presented at the 14th RCRA Workshop, Experimental Evaluation of Algorithms for Solving Problems with Combinatorial Explosion, Jul 2007, Rome, Italy.

Child, C. H. T. & Stathis, K. (2006). Rule Value Reinforcement Learning for Cognitive Agents. In: Proceedings of the fifth international joint conference on Autonomous agents and multiagent systems. Fifth International Joint Conference on Autonomous Agents and Multiagent Systems (AAMAS'06), 8 - 12 May 2006, Hakodate, Hokkaido, Japan. doi: 10.1145/1160633.1160773

Monograph


Child, C. H. T. (2012). Approximate Dynamic Programming with Parallel Stochastic Planning Operators (TR/2012/DOC/03). City University London.

Thesis


Child, C. H. T. (2011). Approximate Dynamic Programming with Parallel Stochastic Planning Operators. (Unpublished Doctoral thesis, City University London)

Working Paper

Child, C. H. T. ORCID: 0000-0001-5425-2308 & Georgeson, J. (2016). NPCs as People, Too: The Extreme AI Personality Engine. City, University of London.

This list was generated on Thu Jul 25 03:05:57 2024 UTC.