Research Projects

Neurodynamical modelling of visual cortex


Figures: visual stimulus (White effect pattern) and computational simulation of the dynamical firing-rate activity map of a network of neurons with high spatial frequency sensitivity.
One of the core objectives of our group is the development of biologically plausible computational neurodynamical models of the visual cortex. These models (both firing-rate and spiking models) are nonlinear dynamical systems that describe the temporal evolution of the activity of a network of interconnected neurons. Connections between neurons are modelled by reproducing the architecture described in anatomical and neurophysiological findings. We apply our models to problems such as colour perception, visual saliency, visual aesthetics and machine learning.
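To give a flavour of what such a firing-rate model looks like, here is a minimal sketch that integrates a generic rate equation, dr/dt = (-r + f(Wr + h))/τ, with forward Euler. Everything in it (the number of units, the time constant, the random connectivity W and the constant input h) is a placeholder assumption for illustration, not the architecture of our cortical models.

```python
# Minimal firing-rate network sketch (illustrative placeholder, not our cortical model).
# Integrates dr/dt = (-r + f(W r + h)) / tau with forward Euler.
import numpy as np

rng = np.random.default_rng(0)

N = 128                                               # number of units (placeholder)
tau = 0.010                                           # time constant, seconds (assumed)
dt = 0.001                                            # integration step, seconds
W = rng.standard_normal((N, N)) * 0.5 / np.sqrt(N)    # weak random recurrent weights (placeholder)
h = rng.uniform(0.0, 1.0, N)                          # constant external input (placeholder)

def f(x):
    """Rectified-linear transfer function."""
    return np.maximum(x, 0.0)

r = np.zeros(N)                                       # firing rates
for _ in range(500):                                  # simulate 0.5 s
    r += dt / tau * (-r + f(W @ r + h))

print("mean steady-state rate:", round(float(r.mean()), 3))
```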

Spiking Neural Networks for Machine Learning


We are working on the development of Spiking Neural Networks (SNN) that use Spike-Timing-Dependent Plasticity (STDP) to perform learning tasks. Our architectures are exclusively based on biologically plausible mechanisms and properties such as receptive fields, excitatory and inhibitory cells, multilayer architecture, lateral connections, etc. The objective is to show that a biologically plausible SNN computational architecture can obtain results comparable to those of classical deep networks in several learning tasks.
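The central ingredient is the pair-based STDP rule, in which the sign and size of a weight change depend on the relative timing of pre- and postsynaptic spikes. The sketch below shows its standard exponential form; the amplitudes and time constants are generic textbook values, not the parameters used in our networks.

```python
# Pair-based STDP weight update (generic textbook form; parameters are assumed values).
# Pre-before-post spike pairs potentiate the synapse, post-before-pre pairs depress it.
import numpy as np

A_plus, A_minus = 0.010, 0.012     # potentiation / depression amplitudes (assumed)
tau_plus, tau_minus = 20.0, 20.0   # decay time constants in ms (assumed)

def stdp_dw(delta_t):
    """Weight change for a spike-time difference delta_t = t_post - t_pre (ms)."""
    if delta_t > 0:                                  # pre before post -> potentiation
        return A_plus * np.exp(-delta_t / tau_plus)
    return -A_minus * np.exp(delta_t / tau_minus)    # post before pre -> depression

for dt_ms in (-40, -10, -1, 1, 10, 40):
    print(f"t_post - t_pre = {dt_ms:+4d} ms  ->  dw = {stdp_dw(dt_ms):+.5f}")
```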

Chromatic and Brightness Induction


Chromatic (and brightness) induction refers to how a colour (or brightness) is perceived in the presence of a surrounding stimulus. Two types of induction effect have been described: contrast and assimilation. Contrast refers to a shift of a colour away from that of its surroundings, and assimilation refers to the opposite (a shift towards the colour of its surroundings). We are interested in low-level computational models that predict these effects. Our first approach unified these two effects under a single process in our low-level multi-resolution model of brightness induction (BIWaM). The next step was to reproduce chromatic induction effects, and the resulting model was termed CIWaM.
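As a heavily simplified illustration of the multi-resolution idea (this is not the published BIWaM or CIWaM algorithm), the sketch below splits an image into band-pass channels and re-weights each channel by the activity of its local surround, so that content embedded in a strong surround contributes less to the recombined image. The decomposition, the weighting function and all parameters are assumptions chosen only for brevity.

```python
# Toy multi-resolution induction sketch (NOT the published BIWaM/CIWaM algorithms).
# Each band-pass channel is re-weighted by a centre/surround activity ratio.
import numpy as np
from scipy.ndimage import gaussian_filter

def bandpass_pyramid(img, n_levels=4):
    """Split an image into difference-of-Gaussian bands plus a low-pass residual."""
    bands, low = [], img.astype(float)
    for _ in range(n_levels):
        blurred = gaussian_filter(low, sigma=2.0)
        bands.append(low - blurred)            # band-pass content at this scale
        low = blurred
    return bands, low

def induced_image(img, n_levels=4, surround_sigma=8.0):
    """Recombine the bands with a toy surround-dependent weight (assumed form)."""
    bands, low = bandpass_pyramid(img, n_levels)
    out = low.copy()
    for band in bands:
        centre = np.abs(band)
        surround = gaussian_filter(centre, sigma=surround_sigma) + 1e-6
        out += band * centre / (centre + surround)
    return out

rng = np.random.default_rng(1)
stimulus = rng.uniform(0.0, 1.0, (64, 64))     # random test "stimulus"
percept = induced_image(stimulus)
print(percept.shape, round(float(percept.mean()), 3))
```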

Here you can find some of the publications and code we have developed.


Psychophysical Experimentation


The Computer Vision Centre houses a Psychophysical Laboratory, which allows us to base our research on results obtained through psychophysical experimentation. The laboratory equipment includes ViSaGe stimulus generators, SMI gaze and eye-tracking systems, colorimeters, tele-spectroradiometers, calibrated visual displays, calibrated digital cameras, colour vision tests, and illumination-controlled experimental environments. Here you can find the Laboratory’s webpage.


Neural Mechanisms of Migraine and Visual Discomfort


Visual discomfort refers to the sensation an observer has in front of visual stimuli with particular characteristics, which may lead, in highly sensitive subjects, to illusory motion, headaches and sickness. Two examples of this pathology are migraine and Meares-Irlen syndrome. Recent neurological and psychophysical findings suggest that the neural mechanisms underlying this effect are related to unusual inhibitory activity in the visual cortex that produces an “overload” of the visual system. We are using our neurodynamical computational models to study these neural processes.

Visual discomfort examples (McCollough test patterns)

Look at these images for more than 10 seconds. The annoying visual sensation you feel is called visual discomfort.
Left image credit Nicholas Wade

Visual Saliency


Visual saliency refers to the parts of an image that attract an observer’s gaze. A modified version of CIWaM predicts the eye movements of subjects in several well-known databases and obtains results comparable to the state of the art.

Binocular disparity


Binocular disparity refers to the perception of depth derived from the differences between the visual stimuli received by the two eyes. We are working on modelling binocular cells in the primary visual cortex and V2 mechanisms for relative disparity coding.
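A standard textbook starting point for disparity-tuned V1 cells is the binocular energy model, in which quadrature Gabor filters from the two eyes are combined and an interocular phase (or position) shift sets the preferred disparity. The sketch below implements that generic construction; it illustrates the approach rather than our actual model, and the filter parameters are arbitrary assumptions.

```python
# Generic binocular energy-model sketch (textbook construction, parameters assumed).
# An interocular phase shift between the left- and right-eye Gabor filters makes
# the unit respond preferentially to one stimulus disparity.
import numpy as np

x = np.arange(-32, 33, dtype=float)     # 1-D receptive-field grid (pixels)
dphi = np.pi / 2                        # interocular phase shift (assumed)

def gabor(phase, sigma=6.0, freq=0.1):
    """1-D Gabor receptive-field profile on the grid x."""
    return np.exp(-x**2 / (2 * sigma**2)) * np.cos(2 * np.pi * freq * x + phase)

def energy_response(left, right):
    """Disparity energy: squared binocular simple-cell responses in quadrature."""
    s_even = left @ gabor(0.0)       + right @ gabor(dphi)
    s_odd  = left @ gabor(np.pi / 2) + right @ gabor(np.pi / 2 + dphi)
    return s_even**2 + s_odd**2

# Probe the unit with a narrow bar shown to the two eyes at different disparities.
left_bar = np.exp(-x**2 / (2 * 2.0**2))
for d in (-8, -4, 0, 4, 8):
    right_bar = np.exp(-(x - d)**2 / (2 * 2.0**2))
    print(f"disparity {d:+d} px -> response {energy_response(left_bar, right_bar):8.2f}")
```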

Visual Aesthetics


The manipulation of objects and images to appeal to our sense of beauty (and that of others) is perhaps the most intrinsically human activity and one of the oldest human trades. We propose to build a computational model of the visual cortex that exploits visual scene attributes in the same way as our evolution-shaped brain does. Since the higher and more complex levels of cerebral activity involved in this task are largely unknown, we rely on machine learning algorithms to gain knowledge from the computational operators (neurons) of the model, using well-known psychophysical methods to build a database from which to learn.


The Barcelona Calibrated Image Database


Although there are several good calibrated image datasets around, none seems to fit all the particular needs of every given project. This database aims to fill that gap with images optimally calibrated for vision science (in device-independent colour spaces), together with detailed explanations of the methods used to produce it and their limitations.
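As a generic illustration of what a device-independent colour space means in practice (this says nothing about the database’s own file formats or calibration pipeline), the snippet below converts sRGB values to CIE XYZ with the standard IEC 61966-2-1 matrix for a D65 white point.

```python
# Generic sRGB -> CIE XYZ (D65) conversion, shown only to illustrate the idea of a
# device-independent colour space; unrelated to the database's calibration pipeline.
import numpy as np

# Standard sRGB -> XYZ matrix (IEC 61966-2-1, D65 white point).
M = np.array([[0.4124, 0.3576, 0.1805],
              [0.2126, 0.7152, 0.0722],
              [0.0193, 0.1192, 0.9505]])

def srgb_to_xyz(rgb):
    """Convert sRGB values in [0, 1] to CIE XYZ: linearise, then apply the matrix."""
    rgb = np.asarray(rgb, dtype=float)
    linear = np.where(rgb <= 0.04045, rgb / 12.92, ((rgb + 0.055) / 1.055) ** 2.4)
    return M @ linear

print(srgb_to_xyz([1.0, 1.0, 1.0]))   # approximately the D65 white point (0.95, 1.00, 1.09)
```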

Here you can find some of the publications and code we have developed.
