Projects

From Hande Celikkanat

Development of Hierarchical Concepts in Humanoid Robots
Mar. 2014 - present
Funded by The Scientific and Technological Research Council of Turkey (TUBITAK) through project 111E287

In this project, we are interested in how we "conceptualize". This is a very basic question, but it underlies almost all of our cognitive abilities: without the ability to form concepts, we could not understand the world, since we would not know how to discretize the immense stream of continuous sensory signals we receive at every moment. We could not "categorize", since categories are manifestations of concepts. We certainly could not talk, since language is nothing but the labeling of concepts within a commonly agreed-upon syntax. Conceptualization, therefore, lies at the very core of how we make sense of the world.

A specific aspect we are interested in is how concepts are discovered and represented structurally. Our stance is that concepts are meaningful only in relation to each other. A human never learns a concept in isolation: the Sun is hot and yellow and in the sky and far away and very, very bright; hot is good if it is far away or if we are cold; hot things are also bright if they are very hot, as in the case of a lamp; and a lamp is safely enclosed, so it does not have to be very far away to be good. Infants learn new concepts every day, and face the non-trivial task of connecting what they learn to what they already know (one possible cause of the perennial question, "Why?"). But after all that hard work, as grown-ups we depend immensely on this vast network of knowledge: every concept, when invoked, brings to mind countless others, from which we select the relevant ones according to context.

We therefore propose a structural representation, a densely connected probabilistic web of concepts, as an answer to how these concepts might be represented. On a Markov Random Field, we show that different concepts can be related probabilistically and can invoke each other as necessary. We propose that there are many different kinds of concepts, reflected as different "categories" in language: nouns, adjectives, verbs, even prepositions, all connected to each other in the same way.
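To make the idea concrete, here is a minimal sketch of such a concept web as a pairwise Markov Random Field over binary concept variables (1 = concept is "invoked"). The concepts, potentials, and numbers below are purely illustrative stand-ins, not the project's actual model:

    # A toy concept web: pairwise MRF over binary concept variables.
    from itertools import product

    concepts = ["ball", "round", "red", "rollable"]

    # Pairwise compatibility potentials phi(x_i, x_j): higher value means
    # the two states co-occur more readily ("ball" strongly implies "round").
    potentials = {
        ("ball", "round"):     {(0, 0): 2.0, (0, 1): 1.0, (1, 0): 0.2, (1, 1): 3.0},
        ("round", "rollable"): {(0, 0): 2.0, (0, 1): 0.5, (1, 0): 0.5, (1, 1): 3.0},
        ("ball", "red"):       {(0, 0): 1.0, (0, 1): 1.0, (1, 0): 1.0, (1, 1): 1.5},
    }

    def joint_score(assignment):
        """Unnormalized probability of a full 0/1 assignment over all concepts."""
        score = 1.0
        for (a, b), table in potentials.items():
            score *= table[(assignment[a], assignment[b])]
        return score

    def conditional(query, evidence):
        """P(query=1 | evidence) by brute-force enumeration (fine for tiny webs)."""
        num = den = 0.0
        free = [c for c in concepts if c not in evidence]
        for values in product([0, 1], repeat=len(free)):
            assignment = dict(zip(free, values), **evidence)
            s = joint_score(assignment)
            den += s
            if assignment[query] == 1:
                num += s
        return num / den

    # Perceiving a ball raises the belief in "round" and "rollable"
    # without observing them directly:
    print(conditional("rollable", {"ball": 1}))  # noticeably above the prior
    print(conditional("rollable", {}))           # the prior, for comparison

This is exactly the "invoking each other" behavior described above: clamping one concept as evidence propagates belief through the web to its related concepts.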

Then we look at "context", the bane of Artificial Intelligence. Humans have to use context; without it, we would be helpless in this world. We have such a large repertoire of actions, for instance, that there would be no way to act sensibly if we considered every single one of them each time we needed to make a move. We simply select the contextually relevant ones. (And even those are enough to baffle us sometimes...)

Within the scope of this project, I have modeled this probabilistic concept web by extending the Markov Random Field model to a hybrid version, which turned out to have significant benefits in reasoning and object recognition compared to considering concepts individually. This makes sense, because the world is composed of logically "connected" concepts (a ball is generally round), and it is definitely advantageous to exploit this statistical regularity.
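The "hybrid" aspect can be illustrated with one small piece: a discrete concept node receiving soft evidence from a continuous percept through a Gaussian potential, rather than a hard 0/1 observation. The parameters below are invented for the sketch and are not the actual model:

    # Continuous-to-discrete evidence: how well a raw hue measurement
    # supports each state of the discrete concept "red".
    import math

    def gaussian(x, mu, sigma):
        return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

    def red_evidence(hue):
        return {1: gaussian(hue, mu=0.02, sigma=0.05),   # red hues cluster near 0
                0: gaussian(hue, mu=0.50, sigma=0.30)}   # everything else is diffuse

    hue = 0.04                       # a continuous camera measurement
    ev = red_evidence(hue)
    p_red = ev[1] / (ev[0] + ev[1])  # soft belief, to be fused into the web
    print(f"P(red | hue={hue}) = {p_red:.2f}")

Such soft beliefs can then be folded into the web's potentials in place of hard evidence, letting continuous sensory signals and discrete concepts live in one model.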

I have then modeled contextual information on top of this concept web, by proposing an incremental and online variant of Latent Dirichlet Allocation. This contextual information, too, turned out to be hugely beneficial in a variety of scenarios, including object recognition, selecting appropriately safe actions, and computationally efficient planning.
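As a rough stand-in for this idea, one can treat each scene as a bag of concept tokens and learn latent "contexts" as LDA topics, updating them incrementally as new scenes arrive. The sketch below uses scikit-learn's online variational LDA (partial_fit), not the project's own incremental variant, and the scene tokens are invented:

    # Online "context" discovery: scenes as bags of concept tokens.
    from sklearn.feature_extraction.text import CountVectorizer
    from sklearn.decomposition import LatentDirichletAllocation

    scenes = [
        "ball round rollable red push",      # a "play" context, say
        "cup cylindrical graspable drink",   # a "kitchen" context
        "ball rollable kick push",
    ]

    vectorizer = CountVectorizer()
    X = vectorizer.fit_transform(scenes)

    lda = LatentDirichletAllocation(n_components=2, random_state=0)
    lda.partial_fit(X)                     # initial mini-batch

    # Later, a new scene arrives; update the context model incrementally:
    X_new = vectorizer.transform(["ball red rollable"])
    lda.partial_fit(X_new)
    print(lda.transform(X_new))            # context (topic) distribution of the scene

The inferred context distribution is what then gates the concept web: only the concepts and actions relevant to the current context need to be considered.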

Here is a video summarizing what we have done in this project so far.


ROSSI: Emergence of Communication in Robots through Sensorimotor and Social Interaction
Mar. 2010 - Jan. 2011
Funded by the European Union through Framework Programme 7

This was a very comprehensive project, aiming to acquire a better understanding of the canonical and mirror neurons in the premotor area, and to exploit their mechanisms in robots to duplicate a similar development of conceptualization and language. For detailed information, please see here. Within the project, I implemented a neural model (originally due to Grossberg et al.) for a pose-independent, head-centered visual representation of the environment on the iCub robot. The module combined information from the retinal images with proprioceptive sensing of the current head-eye configuration, in order to compensate for head-eye movements.
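The coordinate transform at the heart of such a module can be schematized as follows: combining a target's retinal position with eye-in-head proprioception yields a head-centered direction that stays stable under eye movements. The simple small-angle additive combination below is only an illustration of the principle, not the Grossberg-style neural implementation itself:

    # Head-centered direction of a target: retinal offset plus gaze angles
    # (small-angle approximation; all angles in radians).
    import numpy as np

    def head_centered_direction(retinal_azimuth, retinal_elevation,
                                eye_pan, eye_tilt):
        return np.array([retinal_azimuth + eye_pan,
                         retinal_elevation + eye_tilt])

    # The same object, foveated vs. seen in the periphery, maps to one location:
    print(head_centered_direction(0.00, 0.0, 0.20, 0.0))  # foveated after a saccade
    print(head_centered_direction(0.20, 0.0, 0.00, 0.0))  # peripheral, eyes forward

Both calls return the same head-centered direction, which is precisely the invariance the module provides: the representation of the world does not shift every time the eyes move.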


Controllable Robotic Swarm
Jun. 2006 - Sept. 2009
Funded by The Scientific and Technological Research Council of Turkey (TUBITAK) through project 104E066

In this project, our aim was to investigate mechanisms of control in self-organized swarms. It is easy to control robots externally, that is, by forcing commands on them, but it is much less clear how to do so without disturbing their self-organization. Within this project, we designed and implemented the Kobot robot, and then proposed and implemented the first truly self-organized flocking behavior in a physical swarm of these robots.

Within the project, I developed the self-organized flocking behavior, and then, as my M.Sc. thesis work, investigated the necessary and sufficient conditions for successfully controlling the direction of this self-organized flock. Consistent with Ian Couzin's analyses of biological swarms, it turned out that we can steer the swarm with only a small minority of individuals, spread around the swarm, who know which direction they need to go. They did not need to advertise their knowledge; the other (naive) individuals did not even need to know that some members of the flock were aware of a goal direction. The informed minority could subtly guide the flock by simply incorporating the goal direction into their action choices, while taking care to stay within the flock.
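A minimal simulation in the spirit of Couzin et al. shows the effect: all agents align with their neighbors, while a small informed subset additionally blends in a goal direction without signaling it. The parameters (counts, radii, weights) are arbitrary choices for the sketch, not the values used with Kobot:

    # Minority-informed flocking: 10% of agents quietly bias the group heading.
    import numpy as np

    rng = np.random.default_rng(0)
    N, N_INFORMED, RADIUS, W_GOAL, SPEED = 50, 5, 1.5, 0.4, 0.05
    GOAL = np.array([1.0, 0.0])                 # desired direction (unit vector)

    pos = rng.uniform(0, 3, size=(N, 2))
    theta = rng.uniform(0, 2 * np.pi, N)
    vel = np.c_[np.cos(theta), np.sin(theta)]
    informed = np.zeros(N, dtype=bool)
    informed[:N_INFORMED] = True

    for step in range(500):
        # Each agent aligns with the average heading of neighbors (incl. self):
        dist = np.linalg.norm(pos[:, None] - pos[None, :], axis=-1)
        neigh = dist < RADIUS
        desired = np.array([vel[n].mean(axis=0) for n in neigh])
        # Informed agents mix in the goal direction, without advertising it:
        desired[informed] = (1 - W_GOAL) * desired[informed] + W_GOAL * GOAL
        vel = desired / np.linalg.norm(desired, axis=1, keepdims=True)
        pos += SPEED * vel

    print("mean heading:", vel.mean(axis=0))  # drifts toward GOAL despite 90% naive

The key design point is that the informed agents obey exactly the same alignment rule as everyone else; the goal enters only as a small bias on their own headings, so the flock's self-organization is never overridden.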

(Many studies in swarm intelligence have shown that when humans are in large groups, for instance walking in a crowded mall, they tend to follow extremely simple rules, nearly identical to those of animal swarms, and even of physical particles. Does that mean we, too, are so easy to control as groups? Possibly so :) )

As part of my work in this project, I also took part in the development of an OpenGL/ODE-based physical simulator for Kobot, and developed a Linux kernel module, running on Kobot's onboard processor, for accessing the camera.
