My name is Marc de Kamps. This is my website. It contains blog posts on scientific topics that interest me.

Interests

Computational/Cognitive Neuroscience

After a PhD in physics I started a postdoc in a psychology department, and I have done research in computational and cognitive neuroscience ever since. From 2015 to 2020 I was a PI in the theory part of the Human Brain Project. In 2006, Frank van der Velde and I published a proposal for a cognitive architecture: the neural blackboard architecture. Frank has made considerable progress recently, and I intend to discuss his new insights here.

I will also discuss the work of my former PhD student, Hugh Osborne, who has pushed population density methods where I thought they couldn’t go. Hugh has also modelled data from Samit Chakrabarty, who investigates muscle activation in human subjects. Hugh’s model interprets the influence of proprioceptive feedback on so-called synergies: co-activation patterns of muscles that we all use to organise our movements, but which may differ from one individual to another.
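
To make the notion of a synergy concrete: a common way to extract such co-activation patterns from electromyography (EMG) recordings is non-negative matrix factorisation. The sketch below is purely illustrative and is not Hugh’s model; the synthetic data, the array shapes and the use of scikit-learn are my own assumptions.

```python
# Illustrative only: extracting muscle synergies from EMG envelopes with
# non-negative matrix factorisation (a standard technique, not Hugh's model).
import numpy as np
from sklearn.decomposition import NMF

rng = np.random.default_rng(0)

# Fake EMG envelopes: 8 muscles recorded over 1000 time samples, generated
# here as a non-negative mixture of 3 underlying activation patterns.
n_muscles, n_samples, n_synergies = 8, 1000, 3
W_true = rng.random((n_muscles, n_synergies))   # muscle weights
H_true = rng.random((n_synergies, n_samples))   # activation time courses
emg = W_true @ H_true + 0.01 * rng.random((n_muscles, n_samples))

# Factorise emg ~ W @ H: columns of W are synergies (co-activation
# patterns), rows of H their activation over time.
model = NMF(n_components=n_synergies, init="nndsvda", max_iter=500)
W = model.fit_transform(emg)   # (n_muscles, n_synergies)
H = model.components_          # (n_synergies, n_samples)
print(W.shape, H.shape)
```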

Causal Inference

I was introduced to causal inference by Mark Gilthorpe. We both feel that machine learning and causal inference are moving ever closer to each other.

Computer Graphics

I’ve taught computer graphics and got hooked on projective geometry and other mathematical aspects of rendering. In my opinion, computer graphics textbooks somewhat underplay the projective side, which makes some of the mathematics harder than it needs to be. I think I can do better and will publish my attempts here.
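
As a small taste of what the projective viewpoint buys you (a sketch of standard material; the focal length and the point are made up): in homogeneous coordinates, perspective projection becomes a single matrix multiplication followed by one divide.

```python
import numpy as np

# Perspective projection in homogeneous coordinates: the projection is a
# linear map on 4-vectors, and the only non-linearity is a single divide
# by the last coordinate. The focal length f is an arbitrary choice.
f = 2.0
P = np.array([
    [f, 0, 0, 0],
    [0, f, 0, 0],
    [0, 0, 1, 0],   # keep z, so the divide below performs the perspective
])

point = np.array([1.0, 2.0, 4.0, 1.0])   # (x, y, z, w=1)
u, v, w = P @ point
print(u / w, v / w)   # image-plane coordinates (f*x/z, f*y/z) = (0.5, 1.0)
```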

Machine Learning

I teach machine learning at Leeds University and am involved in the analysis of histopathology images using convolutional neural networks and variational autoencoders. The images come from a remarkable Leeds initiative, NPIC, run by Darren Treanor. He and Alex Wright co-supervise PhD students Andrew Broad and Jason Keighley together with me.
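
For readers unfamiliar with variational autoencoders, the heart of the method is its loss function. The sketch below is a generic illustration, not the NPIC pipeline; the use of PyTorch and a diagonal Gaussian encoder are my own assumptions.

```python
# Illustrative sketch only: the loss of a variational autoencoder, combining
# reconstruction error with a KL penalty that keeps the approximate
# posterior close to a standard normal prior.
import torch

def vae_loss(x, x_recon, mu, log_var):
    # Reconstruction term: how well the decoder reproduces the image.
    recon = torch.nn.functional.mse_loss(x_recon, x, reduction="sum")
    # KL(q(z|x) || N(0, I)) in closed form for a diagonal Gaussian encoder.
    kl = -0.5 * torch.sum(1 + log_var - mu.pow(2) - log_var.exp())
    return recon + kl

# Example with dummy tensors standing in for a batch of image patches.
x = torch.rand(4, 3, 64, 64)           # 4 RGB patches
x_recon = torch.rand(4, 3, 64, 64)     # decoder output (dummy here)
mu, log_var = torch.zeros(4, 32), torch.zeros(4, 32)  # encoder outputs
print(vae_loss(x, x_recon, mu, log_var))
```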

Serge Sharoff is a specialist in neural machine translation. I’m co-supervisor of his PhD student Yuqian Dai, who investigates the use of BERT, a transformer model, in neural machine translation. For me this is an introduction to a new world: artificial neural networks applied to machine translation.
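
As a flavour of what that looks like in practice, the sketch below obtains BERT’s contextual encodings of a source sentence, the kind of representation a translation model could condition on. This is my illustration only; the model name and the use of the HuggingFace transformers library are assumptions, not necessarily Yuqian’s setup.

```python
# Illustrative only: encoding a source sentence with a pretrained BERT model.
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("bert-base-multilingual-cased")
model = AutoModel.from_pretrained("bert-base-multilingual-cased")

inputs = tokenizer("The cat sat on the mat.", return_tensors="pt")
outputs = model(**inputs)

# One contextual vector per (sub)word token; a translation decoder
# could attend over these when generating the target sentence.
print(outputs.last_hidden_state.shape)  # (1, num_tokens, 768)
```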

Recently, I’ve become quite interested in variational inference and will write about it.
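
For the record, the identity at the heart of variational inference (standard material, stated in my own notation): for a model p(x, z) and any approximating distribution q(z),

```latex
\log p(x)
  = \underbrace{\mathbb{E}_{q(z)}\!\left[\log \frac{p(x,z)}{q(z)}\right]}_{\text{ELBO}}
  + \,\mathrm{KL}\!\left(q(z) \,\|\, p(z \mid x)\right).
```

Since the KL term is non-negative, the ELBO is a lower bound on the log evidence; maximising it over q simultaneously tightens the bound and pulls q towards the true posterior.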

Mathematics

Always interesting. My latest exploration is Visual Differential Geometry and Forms by Tristan Needham, a book that has shocked me a number of times, for example with a proof of the Spectral Theorem (a real symmetric matrix has real eigenvalues, and eigenvectors belonging to distinct eigenvalues are orthogonal) that is so simple it should be taught in every linear algebra course.
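
I won’t reproduce Needham’s geometric argument here, but for comparison, even the standard algebraic proof is short (this is the textbook argument, not Needham’s): for a real symmetric matrix A,

```latex
% Real eigenvalues: let A v = \lambda v with v \neq 0, possibly complex.
% Using the symmetry of A and that A is real:
\lambda \, \bar{v}^{\top} v
  = \bar{v}^{\top} (A v)
  = (A \bar{v})^{\top} v
  = \overline{(A v)}^{\top} v
  = \bar{\lambda} \, \bar{v}^{\top} v .
% Since \bar{v}^{\top} v = \sum_i |v_i|^2 > 0, it follows that \lambda = \bar{\lambda},
% i.e. \lambda is real.

% Orthogonality: if A v_1 = \lambda_1 v_1 and A v_2 = \lambda_2 v_2
% with \lambda_1 \neq \lambda_2 (both real), then
\lambda_1 \, v_1^{\top} v_2
  = (A v_1)^{\top} v_2
  = v_1^{\top} (A v_2)
  = \lambda_2 \, v_1^{\top} v_2
  \;\Longrightarrow\; v_1^{\top} v_2 = 0 .
```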

I will write blog posts about these topics, and at times organise them into more coherent pieces of text.

Posts

subscribe via RSS