Project suggestions
Below are some brief project suggestions that can be used to inspire and inform the direction of a project proposal. This is not an exhaustive list, and you are not required to select from these projects or supervisors.
- Self-supervised neuromorphic learning
Supervised by: James Knight (Informatics), Thomas Nowotny (Informatics)
The cost of training ever-larger Artificial Neural Networks has led researchers to look to biology for better answers. Spiking Neural Networks (SNNs) are inspired by the power efficiency of biological neurons, and recent advances [1,2] make it possible to train them to competitive performance using supervised learning. However, as in ML more generally, a lack of labelled data is becoming a problem. Self-supervised approaches are an exciting and competitive alternative [3,4,5]. In this PhD you will combine SNNs and self-supervised learning to solve real-world tasks of your choice using our GPU-accelerated SNN simulation framework [6,7].
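As a purely illustrative sketch, the snippet below implements a SimCLR-style contrastive objective [3] in plain NumPy; it is not a spiking implementation and does not use mlGeNN/PyGeNN [6,7], and the embedding sizes and temperature are arbitrary assumptions. How such an objective is best realised with spiking dynamics is precisely the open question of the project.

```python
import numpy as np

def nt_xent_loss(z1, z2, temperature=0.5):
    """NT-Xent loss; z1, z2 are (N, D) embeddings of two augmented views of the same N inputs."""
    z = np.concatenate([z1, z2], axis=0)                 # (2N, D)
    z = z / np.linalg.norm(z, axis=1, keepdims=True)     # cosine similarities
    sim = z @ z.T / temperature
    np.fill_diagonal(sim, -np.inf)                       # exclude self-similarity
    n = z1.shape[0]
    positives = np.concatenate([np.arange(n, 2 * n), np.arange(n)])  # index of each row's positive
    log_prob = sim - np.log(np.exp(sim).sum(axis=1, keepdims=True))
    return -log_prob[np.arange(2 * n), positives].mean()

rng = np.random.default_rng(0)
z1 = rng.normal(size=(8, 16))
z2 = z1 + 0.1 * rng.normal(size=(8, 16))                 # stand-in for a second "view"
print(nt_xent_loss(z1, z2))
```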
References
1) Wunderlich & Pehle (2021). Event-based backpropagation can compute exact gradients for spiking neural networks. https://doi.org/10.1038/s41598-021-91786-z
2) Nowotny et al. (2024). Loss shaping enhances exact gradient learning with EventProp in Spiking Neural Networks. arXiv. http://arxiv.org/abs/2212.01232
3) Chen et al. (2020). A simple framework for contrastive learning of visual representations. http://proceedings.mlr.press/v119/chen20j.html
4) Illing et al. (2020). Local plasticity rules can learn deep representations using self-supervised contrastive predictions. http://arxiv.org/abs/2010.08262
5) Halvagal & Zenke (2023). The combination of Hebbian and predictive plasticity learns invariant object representations in deep sensory networks. Nature Neuroscience, 26(11), 1906–1915. https://doi.org/10.1038/s41593-023-01460-y
6) Knight et al. (2021). PyGeNN: A Python Library for GPU-Enhanced Neural Networks. https://doi.org/10.3389/fninf.2021.659005
7) Knight, J. C., & Nowotny, T. (2023). Easy and efficient spike-based Machine Learning with mlGeNN. Neuro-Inspired Computational Elements Conference, 115–120. https://doi.org/10.1145/3584954.3585001
Keywords: Neuromorphic, spiking neural networks, self-supervised learning
- Robotic partner for robot-aided surgery
Supervised by: Dr Yanpei Huang, Department of Engineering; Dr Carlo Tiseo, Department of Engineering
Traditional surgical robots rely on direct manual teleoperation, which can lead to errors from surgeon fatigue or limited experience. Shared control between human and robot can reduce workload and enhance performance by assisting with routine operations while the surgeon retains control of complex tasks. We will develop AI- and model-based shared control modes for human-robot collaborative surgery. The robot will learn from expert humans using reinforcement learning approaches. A human-robot shared control framework will be developed in which control authority dynamically shifts between the surgeon and the robot, allowing the robot to autonomously manage certain tools using real-time sensory data.
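As a minimal, hypothetical sketch of the shared-control idea (not the proposed framework), control authority can be blended between the surgeon and a learned robot policy according to a confidence signal. The 2-D velocity commands and confidence value below are invented for illustration only.

```python
import numpy as np

def shared_command(u_human, u_robot, robot_confidence):
    """Blend commands, giving authority alpha in [0, 1] to the robot."""
    alpha = np.clip(robot_confidence, 0.0, 1.0)
    return alpha * u_robot + (1.0 - alpha) * u_human

# Toy example: the robot is confident near a rehearsed routine motion,
# so authority shifts towards it while the surgeon retains 20% influence.
u_human = np.array([0.02, -0.01])   # surgeon's commanded tool velocity (m/s)
u_robot = np.array([0.015, 0.0])    # suggestion from a learned (e.g., RL) policy
print(shared_command(u_human, u_robot, robot_confidence=0.8))
```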
References
[1] Z. J. Hu, Z. Wang, Y. Huang, A. Sena, F. Rodriguez y Baena and E. Burdet, "Towards Human-Robot Collaborative Surgery: Trajectory and Strategy Learning in Bimanual Peg Transfer," IEEE Robotics and Automation Letters, vol. 8, no. 8, pp. 4553-4560, Aug. 2023.
[2] H. Zheng, Z. J. Hu, Y. Huang, X. Cheng, Z. Wang and E. Burdet, "A User-Centered Shared Control Scheme with Learning from Demonstration for Robotic Surgery," IEEE International Conference on Robotics and Automation (ICRA), 2024, pp. 15195-15201.
[3] C. Tiseo, Q. Rouxel, M. Asenov, K. Kouhkiloui Babarahmati, S. Ramamoorthy, Z. Li and M. Mistry, "Achieving Dexterous Bidirectional Interaction in Uncertain Conditions for Medical Robotics," IEEE Transactions on Medical Robotics and Bionics [accepted] (preprint: https://arxiv.org/abs/2206.09906).
[4] R. Wen, Q. Rouxel, M. Mistry, Z. Li and C. Tiseo, "Collaborative Bimanual Manipulation Using Optimal Motion Adaptation and Interaction Control: Retargeting Human Commands to Feasible Robot Control References," IEEE Robotics & Automation Magazine.
Keywords: Robotic surgery, shared control, teleoperation, reinforcement learning, human-robot collaboration, autonomous agents
- AI Driven Robotic Skin Cancer Early Diagnostics
Supervised by: Dr Carlo Tiseo, Department of Engineering; Dr Yanpei Huang, Department of Engineering; Dr Claudia Degiovanni (Dermatologist), University Hospitals Sussex NHS Foundation Trust
Specialist clinical consultations are becoming more challenging to deliver as demand grows with the ageing population of developed countries. Dermatoscopy is the main tool for mapping and monitoring skin lesions (e.g., moles) to identify malignant tumours (e.g., melanomas) early. This project will investigate machine learning methods to develop semi-autonomous diagnostic tools capable of mapping skin lesions independently and performing a preliminary assessment that is shared directly with a dermatologist for diagnostic confirmation.
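Purely to illustrate the intended workflow, in which the system's preliminary assessment is always routed to a dermatologist, a hypothetical triage step might look like the sketch below; the logistic scoring, thresholds and feature vector are placeholders rather than clinically validated choices.

```python
import numpy as np

def triage(lesion_features, weights, bias, urgent_threshold=0.5, uncertain_band=0.15):
    """Score a lesion and flag how it should be presented to the dermatologist."""
    p_malignant = 1.0 / (1.0 + np.exp(-(lesion_features @ weights + bias)))
    if abs(p_malignant - urgent_threshold) < uncertain_band:
        flag = "uncertain - prioritise dermatologist review"
    elif p_malignant >= urgent_threshold:
        flag = "suspected malignant - urgent referral"
    else:
        flag = "likely benign - routine confirmation"
    return p_malignant, flag

rng = np.random.default_rng(1)
print(triage(rng.normal(size=8), rng.normal(size=8) * 0.3, bias=-0.2))
```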
References
[1] C. Tiseo, Q. Rouxel, M. Asenov, K. Kouhkiloui Babarahmati, S. Ramamoorthy, Z. Li and M. Mistry, "Achieving Dexterous Bidirectional Interaction in Uncertain Conditions for Medical Robotics," IEEE Transactions on Medical Robotics and Bionics [accepted]. Presented at the 2024 IEEE RAS/EMBS 10th International Conference on Biomedical Robotics and Biomechatronics (preprint: https://arxiv.org/abs/2206.09906).
[2] R. Wen, Q. Rouxel, M. Mistry, Z. Li and C. Tiseo, "Collaborative Bimanual Manipulation Using Optimal Motion Adaptation and Interaction Control: Retargeting Human Commands to Feasible Robot Control References," IEEE Robotics & Automation Magazine, doi: 10.1109/MRA.2023.3270222.
[3] Z. J. Hu, Z. Wang, Y. Huang, A. Sena, F. Rodriguez y Baena and E. Burdet, "Towards Human-Robot Collaborative Surgery: Trajectory and Strategy Learning in Bimanual Peg Transfer," IEEE Robotics and Automation Letters, vol. 8, no. 8, pp. 4553-4560, Aug. 2023.
[4] H. Zheng, Z. J. Hu, Y. Huang, X. Cheng, Z. Wang and E. Burdet, "A User-Centered Shared Control Scheme with Learning from Demonstration for Robotic Surgery," IEEE International Conference on Robotics and Automation (ICRA), 2024, pp. 15195-15201.
Keywords: Robotic Diagnostics, Skin Cancer, Early Diagnosis, Reinforcement Learning, human-robot collaboration, autonomous agents
- Event-based machine learning
Supervised by: Thomas Nowotny (Informatics), James Knight (Informatics)
Event-based Spiking Neural Networks (SNNs) are inspired by the power and efficiency of biological neurons and, with recent advances [1,2], we can train them using supervised learning. However, there are technical difficulties [2] and the methods have so far only been applied to a handful of benchmark problems. Using our GPU-accelerated SNN simulation framework [3,4], the proposed PhD could take two possible directions: 1) continuing to improve the methods for gradient descent in SNNs, or 2) extending the application of these methods to real-world problems.
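For readers new to SNNs, the sketch below simulates a single leaky integrate-and-fire neuron in plain NumPy, purely to show the event-based dynamics being trained; it is not EventProp [1,2] and does not use GeNN [3,4], and the time constants and threshold are arbitrary.

```python
import numpy as np

def simulate_lif(input_current, dt=1.0, tau=20.0, v_thresh=1.0, v_reset=0.0):
    """Return the membrane trace and spike times (in steps) for one neuron."""
    v, spikes, trace = 0.0, [], []
    for t, i_t in enumerate(input_current):
        v += dt / tau * (-v + i_t)    # leaky integration of the input current
        if v >= v_thresh:             # a threshold crossing is a spike event
            spikes.append(t)
            v = v_reset
        trace.append(v)
    return np.array(trace), spikes

rng = np.random.default_rng(2)
trace, spikes = simulate_lif(rng.uniform(0.0, 2.5, size=200))
print(f"{len(spikes)} spikes, first few at steps {spikes[:5]}")
```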
References
1. Wunderlich & Pehle (2021). Event-based backpropagation can compute exact gradients for spiking neural networks. https://doi.org/10.1038/s41598-021-91786-z
2. Nowotny et al. (2024). Loss shaping enhances exact gradient learning with EventProp in Spiking Neural Networks. arXiv. http://arxiv.org/abs/2212.01232
3. Knight et al. (2021). PyGeNN: A Python Library for GPU-Enhanced Neural Networks. https://doi.org/10.3389/fninf.2021.659005
4. Knight, J. C., & Nowotny, T. (2023). Easy and efficient spike-based Machine Learning with mlGeNN. Neuro-Inspired Computational Elements Conference, 115–120. https://doi.org/10.1145/3584954.3585001
Keywords: spiking neural networks; EventProp; event-based; machine learning; neuromorphic computing
- Understanding and addressing spurious features and variations in affective movement recognition
Supervised by: Temitayo Olugbade, Peter Wijeratne
The goal in machine learning is to build a model that learns features which map to labels and generalize to unseen data. Real-world confounding variables challenge generalizability when they are correlated with, or appear as predictive of the label as, the 'true' features. This problem is particularly significant in affective movement recognition (AMR), where data is limited, expression and inference depend on unknown or latent context, and so-called ground truth is fuzzy at best. This PhD will formalize the problem of spurious features and variations in AMR, contribute methods that decompose observed data into coincidental or weakly predictive versus more generalizable features, and rigorously evaluate these methods and theories.
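The toy construction below (synthetic data, not an AMR dataset) illustrates the failure mode: a spurious feature that agrees with the label during training but not at test time, which an ordinary least-squares classifier over-relies on; all dimensions, noise levels and correlation strengths are invented.

```python
import numpy as np

def make_split(n, spurious_agreement, rng):
    y = rng.choice([-1.0, 1.0], size=n)
    true_feat = y + 1.0 * rng.normal(size=n)                  # informative but noisy 'true' feature
    agree = rng.random(n) < spurious_agreement
    spurious_feat = np.where(agree, y, -y) + 0.1 * rng.normal(size=n)   # clean-looking confound
    return np.stack([true_feat, spurious_feat], axis=1), y

rng = np.random.default_rng(3)
X_tr, y_tr = make_split(2000, spurious_agreement=0.95, rng=rng)   # confound is predictive in training
X_te, y_te = make_split(2000, spurious_agreement=0.50, rng=rng)   # ...but uninformative at test time
w, *_ = np.linalg.lstsq(X_tr, y_tr, rcond=None)
accuracy = lambda X, y: np.mean(np.sign(X @ w) == y)
print(f"train-domain accuracy {accuracy(X_tr, y_tr):.2f}, shifted-test accuracy {accuracy(X_te, y_te):.2f}")
```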
References
Olugbade, T., de C Williams, A. C., Gold, N., & Bianchi-Berthouze, N. (2023). Movement representation learning for pain level classification. IEEE Transactions on Affective Computing.
Olugbade, T., Buono, R. A., Potapov, K., et al. (2024). The EmoPain@Home Dataset: Capturing Pain Level and Activity Recognition for People with Chronic Pain in Their Homes. IEEE Transactions on Affective Computing.
Wang, Y., & Jordan, M. I. (2024). Desiderata for Representation Learning: A Causal Perspective. Journal of Machine Learning Research, 25(275), 1-65.
Izmailov, P., Kirichenko, P., Gruver, N., & Wilson, A. G. (2022). On feature learning in the presence of spurious correlations. Advances in Neural Information Processing Systems, 35, 38516-38532.
Veitch, V., D'Amour, A., Yadlowsky, S., & Eisenstein, J. (2021). Counterfactual invariance to spurious correlations in text classification. Advances in Neural Information Processing Systems, 34, 16196-16208.
Keywords: Machine learning, Affective computing, Body movement modelling, Distribution shift, Spurious variations
- Control certificate enhanced AI for robotic manipulation
Supervised by: Dr Yanan Li, Department of Engineering; Prof Andrew Philippides, Department of Informatics
Traditional robotic manipulation faces a fundamental challenge in dealing with highly uncertain tasks. Recent AI advances have shown promise in tackling this challenge [1], but have also exposed a limitation: extensive data are required to train a model, yet data acquisition is costly and difficult, since every manipulation task is different and even the same task with a slight environmental change (e.g., friction) may require different control strategies and/or parameters. In this project, we aim to address this limitation by developing algorithms that learn control certificates [2] instead of robot actions (as current AI does). These algorithms will be tested in various manipulation tasks.
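To make the notion of a control certificate concrete, the sketch below verifies a hand-written quadratic Lyapunov certificate for a stabilised double integrator by sampling the decrease condition; the project would instead learn such certificates [2] for uncertain manipulation dynamics, and the matrices and gains here are illustrative assumptions.

```python
import numpy as np

A = np.array([[0.0, 1.0], [0.0, 0.0]])   # double integrator: x' = Ax + Bu
B = np.array([[0.0], [1.0]])
K = np.array([[2.0, 3.0]])               # stabilising state feedback u = -Kx
P = np.array([[3.0, 1.0], [1.0, 1.0]])   # candidate certificate V(x) = x'Px

def lyapunov_decrease(x):
    """dV/dt = x'((A-BK)'P + P(A-BK))x along the closed loop; must be < 0."""
    Acl = A - B @ K
    return float(x @ ((Acl.T @ P + P @ Acl) @ x))

rng = np.random.default_rng(4)
samples = [lyapunov_decrease(rng.normal(size=2)) for _ in range(1000)]
print("decrease condition satisfied on all samples:", max(samples) < 0.0)
```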
References
[1] Zhao et al. (2024), Conference on Robot Learning (CoRL). URL: https://openreview.net/forum?id=gvdXE7ikHI
[2] Dawson et al. (2023), IEEE Transactions on Robotics, 39(3): 1749-1767.
Keywords: Control certificate; Robotic manipulation
- Bayesian Brain Principles for Next-Generation Machine Learning
Supervised by: Christopher L Buckley (Department of Informatics), Anil K Seth (Department of Informatics)
The Bayesian brain hypothesis posits that the brain operates as a probabilistic inference engine. The Free Energy Principle (FEP) and Active Inference provide a unifying framework to explain perception, action, and learning as processes rooted in Bayesian principles. However, while these ideas hold significant implications for understanding natural intelligence, their application in artificial intelligence (AI) remains largely unexplored. This PhD project seeks to address this gap by translating the Bayesian brain hypothesis into scalable computational models for machine learning, combining neuroscience-inspired theories with modern algorithmic innovations.
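As a minimal numerical illustration, and standard variational inference rather than anything novel, the sketch below computes and minimises variational free energy for a one-dimensional Gaussian generative model, treating 'perception' as gradient descent on F; every parameter value is an arbitrary assumption.

```python
import numpy as np

def free_energy(mu_q, x, mu_p=0.0, var_p=1.0, var_lik=0.5, var_q=0.1):
    """F = KL(q(z) || p(z)) - E_q[log p(x|z)] for Gaussians (dropping constants)."""
    kl = 0.5 * (np.log(var_p / var_q) + (var_q + (mu_q - mu_p) ** 2) / var_p - 1.0)
    expected_nll = 0.5 * ((x - mu_q) ** 2 + var_q) / var_lik
    return kl + expected_nll

x, mu_q, lr = 2.0, 0.0, 0.05
for _ in range(200):
    grad = (mu_q - 0.0) / 1.0 + (mu_q - x) / 0.5   # dF/dmu_q for the default parameters
    mu_q -= lr * grad                              # perception as free-energy descent
print(f"posterior mean {mu_q:.3f}, free energy {free_energy(mu_q, x):.3f}")
```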
References
Buckley, C. L., Kim, C. S., McGregor, S., & Seth, A. K. (2017). The free energy principle for action and perception: A mathematical review. Journal of Mathematical Psychology, 81, 55-79.
Singh, R., & Buckley, C. L. (2023). Attention as implicit structural inference. Advances in Neural Information Processing Systems, 36, 24929-24946.
Salvatori, T., Mali, A., Buckley, C. L., Lukasiewicz, T., Rao, R. P. N., & Friston, K. (2023). Brain-inspired computational intelligence via predictive coding. arXiv preprint arXiv:2308.07870.
Millidge, B., Tschantz, A., & Buckley, C. L. (2022). Predictive coding approximates backprop along arbitrary computation graphs. Neural Computation, 34(6), 1329-1368.
Keywords: Bayesian, Machine Learning, Free Energy, Active Inference
- AI solutions to guide treatment strategies for cancer
Supervised by: Frances Pearl (Biochemistry & Biomedicine, Life Sciences), Nick Hay (Informatics)
The Pearl Bioinformatics Group have a strong track record of developing artificial intelligence (AI) algorithms that use cancer genomic data sets to predict therapeutic vulnerabilities in cancer cells. The aim of this project is to develop AI methods that aid diagnosis and inform treatment strategies, improving outcomes for cancer patients. Using multi-omic data from large public cancer sample repositories as input, deep learning methods will be trialled and optimised to identify precision therapies from which a patient may benefit.
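A skeletal late-fusion network (forward pass only, random weights) indicating what a mapping from multi-omic input to a therapy-response score might look like; the feature dimensions, branches and single hidden layer are placeholders rather than the architectures the project would actually train.

```python
import numpy as np

rng = np.random.default_rng(5)

def layer(in_dim, out_dim):
    return rng.normal(scale=0.1, size=(in_dim, out_dim)), np.zeros(out_dim)

W_expr, b_expr = layer(1000, 32)   # gene-expression branch
W_mut, b_mut = layer(300, 32)      # mutation / copy-number branch
W_out, b_out = layer(64, 1)        # fused head -> predicted probability of response

def predict(expression, mutations):
    h = np.concatenate([
        np.maximum(expression @ W_expr + b_expr, 0.0),   # ReLU branch encoders
        np.maximum(mutations @ W_mut + b_mut, 0.0),
    ])
    return 1.0 / (1.0 + np.exp(-(h @ W_out + b_out)))

print(predict(rng.normal(size=1000), rng.binomial(1, 0.05, size=300)))
```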
References
Benstead-Hume, G., et al. (2019). Predicting synthetic lethal interactions using conserved patterns in protein interaction networks. PLoS Comput Biol, 15(4): e1006888.
Wooller, S.K., Pearl, L.H., and Pearl, F.M.G. (2024). Identifying actionable synthetically lethal cancer gene pairs using mutual exclusivity. FEBS Lett, 598(16): 2028-2039.
Baeissa, H.M., et al. (2016). Mutational patterns in oncogenes and tumour suppressors. Biochem Soc Trans, 44(3): 925-31.
Benstead-Hume, G., et al. (2022). Biological network topology features predict gene dependencies in cancer cell-lines. Bioinform Adv, 2(1): vbac084.
Keywords: Cancer, drug response, deep learning
- Computational neurophenomenology of visual illusions using predictive coding networks
Supervised by: Anil Seth (Informatics), Chris Buckley (Informatics), Ishan Singhal (Informatics)
Visual illusions suggest that perceptual experience is the result of the brain inferring, in a Bayesian framework, the most likely causes of sensory data. Excitingly, recent advances in predictive coding (PC) networks instantiate this process in a machine learning setting in an efficient and biologically plausible manner. This project will integrate machine learning and computational neuroscience to pursue a ‘computational neurophenomenology’ of perceptual experience. Specifically, we will develop and test PC networks that model visual illusions under the constraints of neurophysiology and phenomenology, exploring the conditions under which these networks ‘perceive’ the world in ways similar to how we do.
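The sketch below runs a two-layer linear predictive coding inference loop in NumPy, included only to make the mechanism concrete: latent estimates are updated to reduce prediction errors on the input. The random generative weights, linearity and weak prior are simplifying assumptions, and no illusion stimulus is modelled here.

```python
import numpy as np

rng = np.random.default_rng(6)
W = rng.normal(scale=0.5, size=(16, 4))        # generative mapping: latents -> predicted input
x = W @ np.array([1.0, -0.5, 0.0, 2.0]) + 0.05 * rng.normal(size=16)   # noisy observation

z = np.zeros(4)                                 # inferred latent causes
for _ in range(200):
    eps = x - W @ z                             # prediction error at the input layer
    z += 0.05 * (W.T @ eps - 0.01 * z)          # error-driven update with a weak prior
print("inferred latents:", np.round(z, 2))
```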
References
(1) Tschantz, A., Millidge, B., Seth, A.K., & Buckley, C.L. (2023). Hybrid predictive coding: Inferring, fast and slow. PLoS Computational Biology, 19(8): e1011280.
(2) Suzuki, K., Seth, A.K., and Schwartzman, D. (2024). Modelling phenomenological differences in aetiologically distinct visual hallucinations using deep neural networks. Frontiers in Human Neuroscience. doi: 10.3389/fnhum.2023.1159821
Keywords: consciousness, perception, illusions, predictive processing, active inference, predictive coding, computational neuroscience
- Multimodal Machine Learning for Ecological Monitoring
Supervised by: Ivor Simpson (Informatics), Alice Eldridge (Music), Chris Sandom (Ecology and Evolution)
Ecosystems involve a complex web of interactions across a vast range of temporal and spatial scales, and any single sensor offers only a limited view of the underlying ecological processes. This complicates gathering evidence for nature recovery programmes, and there is a real need for more comprehensive metrics of ecological health and integrity. This project will develop multimodal machine learning models that unify data from disparate measurements, including audio, camera traps, and vegetation and soil surveys, to build a holistic picture of an ecosystem and describe spatiotemporal change. It builds on ongoing research within the supervisory team, which currently focuses on ML for ecoacoustics.
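As a minimal data-unification sketch (not the proposed model), observations from two hypothetical modalities, an hourly acoustic index and irregular camera-trap detections, are aggregated onto a shared monthly grid so that a single spatiotemporal model can consume them; the variables and sampling rates are invented.

```python
import numpy as np

rng = np.random.default_rng(7)
acoustic_hourly = rng.normal(loc=0.6, scale=0.1, size=12 * 30 * 24)   # e.g. an acoustic complexity index
camera_events = rng.integers(0, 12 * 30, size=400)                    # detection days across a 360-day year

acoustic_monthly = acoustic_hourly.reshape(12, -1).mean(axis=1)       # average index per month
camera_monthly = np.bincount(camera_events // 30, minlength=12)       # detections per month

X = np.column_stack([acoustic_monthly, camera_monthly])               # one fused feature row per month
print(X.shape)
print(X[:3].round(2))
```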
References
Gibb, K.A., Eldridge, A., Sandom, C.J. and Simpson, I.J., 2024. Towards interpretable learned representations for Ecoacoustics using variational auto-encoding. Ecological Informatics, 80, p.102449. https://doi.org/10.1016/j.ecoinf.2023.102449
Balfour, N.J., Durrant, R., Ely, A., Sandom, C.J., 2021. People, nature and large herbivores in a shared landscape: A mixed-method study of the ecological and social outcomes from agriculture and conservation. People and Nature, 3, 418–430. https://doi.org/10.1002/pan3.10182
Bradfer-Lawrence, T., Desjonqueres, C., Eldridge, A., Johnston, A. and Metcalf, O., 2023. Using acoustic indices in ecology: Guidance on study design, analyses and interpretation. Methods in Ecology and Evolution, 14(9), pp.2192-2204. https://doi.org/10.1111/2041-210X.14194
Keywords: Machine learning, Ecological monitoring, Ecoacoustics
- Multi-modal machine learning for disease progression modelling
Supervised by: Dr Peter Wijeratne (Informatics), Dr Ivor Simpson (Informatics)
Dr Wijeratne works on AI for healthcare and neuroscience, focusing on data- and computationally-efficient probabilistic modelling of disease progression with a strong theoretical foundation. Along with Dr Simpson, who has expertise in developing ML approaches for inverse problems with spatiotemporal data, we are looking to supervise PhD projects developing innovative and statistically principled ML approaches for incorporating multi-modal medical imaging data into models of disease progression. This will build on our recent work developing an optimal transport (OT) model of disease progression and utilising structured data likelihoods to robustly incorporate different sources of evidence.
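To indicate the OT machinery involved, the sketch below runs a bare-bones Sinkhorn iteration for entropy-regularised optimal transport between two small histograms; it is not the disease-progression model cited below, and the cost matrix, marginals and regularisation strength are arbitrary.

```python
import numpy as np

def sinkhorn(a, b, cost, reg=0.1, n_iter=200):
    """Return an approximate optimal transport plan between histograms a and b."""
    K = np.exp(-cost / reg)
    u = np.ones_like(a)
    for _ in range(n_iter):
        v = b / (K.T @ u)
        u = a / (K @ v)
    return u[:, None] * K * v[None, :]

a = np.array([0.5, 0.3, 0.2])      # e.g. a cohort's distribution over three stages at baseline
b = np.array([0.2, 0.3, 0.5])      # ...and at follow-up
cost = np.abs(np.arange(3)[:, None] - np.arange(3)[None, :]).astype(float)
plan = sinkhorn(a, b, cost)
print(plan.round(3), plan.sum().round(3))
```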
References
Wijeratne, P.A., Alexander, D.C. (2024). Unscrambling disease progression at scale: fast inference of event permutations with optimal transport. Advances in Neural Information Processing Systems (NeurIPS). doi: 10.48550/arXiv.2410.14388
Simpson, I.J.A., Vicente, S., Campbell, N.D.F. (2022). Learning structured Gaussians to approximate deep ensembles. Computer Vision and Pattern Recognition (CVPR). doi: 10.48550/arXiv.2203.15485
Keywords: Machine learning, medical imaging, disease progression modelling, generative models
- Biomarker inference in quantitative MRI
Supervised by: Ivor Simpson (Informatics), Nicholas Dowell (CISC, BSMS), Itamar Ronen (CISC, BSMS)
Magnetic resonance (MR) scanning offers a powerful and flexible method to understand the human brain, and how it can be perturbed by disease. A vast range of quantitative measurements of structure, microstructure and tissue composition is possible, which enables the development of sensitive biomarkers. Biomarkers are derived by fitting statistical models to data, which in many cases presents as an ill-posed inverse problem. In this project, we would like to develop principled uncertainty-aware analysis methodologies to infer sensitive latent biomarkers from data collected using cutting-edge brain MR techniques. This project could use data collected at either 3T or ultra-low-field (50 mT).
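A deliberately simple illustration of uncertainty-aware inference: a single-voxel mono-exponential decay S = S0·exp(-TE/T2) is fitted by evaluating the posterior over T2 on a grid. The echo times, noise level and flat prior are invented, and the project targets much richer signal models with amortised or variational inference rather than grid search.

```python
import numpy as np

rng = np.random.default_rng(8)
TE = np.array([10.0, 30.0, 50.0, 70.0, 90.0])          # echo times (ms), hypothetical protocol
true_T2, S0, sigma = 60.0, 1.0, 0.02
signal = S0 * np.exp(-TE / true_T2) + sigma * rng.normal(size=TE.size)

T2_grid = np.linspace(20.0, 150.0, 1000)
pred = S0 * np.exp(-TE[None, :] / T2_grid[:, None])     # model prediction for every grid value
log_post = -0.5 * ((signal - pred) ** 2).sum(axis=1) / sigma**2   # Gaussian likelihood, flat prior
post = np.exp(log_post - log_post.max())
post /= post.sum()

mean_T2 = (post * T2_grid).sum()
sd_T2 = np.sqrt((post * (T2_grid - mean_T2) ** 2).sum())
print(f"T2 = {mean_T2:.1f} +/- {sd_T2:.1f} ms (true value {true_T2})")
```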
References
Simpson, I.J., McManamon, A., Örzsik, B., Stone, A.J., Blockley, N.P., Asllani, I., Colasanti, A. and Cercignani, M., 2022. Flexible Amortized Variational Inference in qBOLD MRI. arXiv preprint arXiv:2203.05845.
Duff, M.A., Simpson, I.J., Ehrhardt, M.J. and Campbell, N.D., 2023. VAEs with structured image covariance applied to compressed sensing MRI. Physics in Medicine & Biology, 68(16), p.165008. doi: 10.1088/1361-6560/ace49a
De Marco, R., Ronen, I., Branzoli, F., Amato, M.L., Asllani, I., Colasanti, A., Harrison, N.A. and Cercignani, M., 2022. Diffusion-weighted MR spectroscopy (DW-MRS) is sensitive to LPS-induced changes in human glial morphometry: a preliminary study. Brain, Behavior, and Immunity, 99, pp.256-265. https://doi.org/10.1016/j.bbi.2021.10.005
Alruwais, N.M., Rusted, J.M., Tabet, N. and Dowell, N.G., 2022. Evidence of emerging BBB changes in mid-age apolipoprotein E epsilon-4 carriers. Brain and Behavior, 12(12), p.e2806. doi: 10.1002/brb3.2806
Keywords: Machine learning, biomarkers, magnetic resonance imaging
- Advancing mechanistic understanding of atrial fibrillation through physics-based ML
Supervised by: Luc Berthouze (Informatics) and John Silberbauer (University Hospitals Sussex)
Atrial fibrillation (AF), the most common heart rhythm disorder, is complex, involving interactions between focal triggers and re-entrant rotors across the heart's inner and outer layers (1). Most studies, however, focus on a single layer. This project seeks to address this issue by closing the analytical-experimental loop between physics-based machine learning (2-6) and a new dual-layer data acquisition protocol (7) developed at University Hospitals Sussex to analyse dissociation (8) between layers and predict how layer interactions generate and perpetuate AF. A successful outcome of this research would inform the future development of closed-loop approaches to treatments such as ablation.
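To show what the 'physics' term in physics-based ML can look like, the sketch below evaluates the finite-difference residual of a simplified excitable-media reaction-diffusion equation, du/dt = D d2u/dx2 + u(1-u)(u-a), on a candidate activation field; in a PINN-style approach (2) this residual, summed with a data-fit term on recorded electrograms, forms the training loss. The 1-D cable, grid sizes and parameters are simplifying assumptions rather than a dual-layer atrial model.

```python
import numpy as np

def physics_residual(u, dt, dx, D=0.1, a=0.15):
    """u: (T, X) candidate activation field; returns the PDE residual on interior points."""
    du_dt = (u[1:, 1:-1] - u[:-1, 1:-1]) / dt
    d2u_dx2 = (u[:-1, 2:] - 2 * u[:-1, 1:-1] + u[:-1, :-2]) / dx**2
    reaction = u[:-1, 1:-1] * (1 - u[:-1, 1:-1]) * (u[:-1, 1:-1] - a)
    return du_dt - D * d2u_dx2 - reaction

rng = np.random.default_rng(9)
u_candidate = rng.uniform(0.0, 1.0, size=(50, 100))   # stand-in for a model's predicted field
loss_physics = np.mean(physics_residual(u_candidate, dt=0.1, dx=0.5) ** 2)
print(f"physics loss of a random (unphysical) field: {loss_physics:.3f}")
```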
References
(1) Xu et al. Rotor mechanism and its mapping in atrial fibrillation. EP Europace. 2023 Mar 30;25(3):783–92.
(2) Herrero Martín et al. EP-PINNs: cardiac electrophysiology characterisation using physics-informed neural networks. Frontiers in Cardiovascular Medicine. 2021;8.
(3) Trayanova et al. Machine Learning in Arrhythmia and Electrophysiology. Circulation Research. 2021;128(4):544–66.
(4) McGillivray et al. Machine learning methods for locating re-entrant drivers from electrograms in a model of atrial fibrillation. Royal Society Open Science. 2017;5.
(5) Cantwell et al. Rethinking multiscale cardiac electrophysiology with machine learning and predictive modelling. Computers in Biology and Medicine. 2018;104:339–51.
(6) Sánchez et al. Using Machine Learning to Characterize Atrial Fibrotic Substrate From Intracardiac Signals With a Hybrid in silico and in vivo Dataset. Frontiers in Physiology. 2021;12.
(7) Juliá et al. Assessment of pericardial adhesions by means of the EpiCO2 technique: the Brighton adhesion classification. Heart Rhythm. 2024 May 9:S1547-5271(24)02551-7.
(8) Gharaviri et al. Mutual influence between dyssynchrony and transmural conduction maintains atrial fibrillation. In: 2012 Computing in Cardiology. p. 897–900.
Keywords: physics-based machine learning; cardiac electrophysiology; wave dynamics