Projects

====[http://kovan.ceng.metu.edu.tr/index.php/MultilevelConceptualization Development of Hierarchical Concepts in Humanoid Robots]====
Mar. 2014 - present
Funded by [http://www.tubitak.gov.tr/ The Scientific and Technological Research Council of Turkey (TUBITAK)] through project 111E287

In this project, we are interested in how we "conceptualize". This is a very basic question, but it underlies almost all of our cognitive abilities: without the ability to form concepts, we could not understand the world, since we would not know how to discretize the immense stream of continuous sensory signals we receive at every moment. We could not form "categories", since categories are manifestations of concepts. We certainly could not talk, since language is nothing but the labeling of concepts within a commonly agreed-upon syntax. Conceptualization, therefore, lies at the very core of how we make sense of the world.

A specific aspect we are interested in is how concepts are discovered and structurally represented. Our stance is that concepts are only meaningful in relation to each other. A human never learns a concept in isolation: the Sun is hot and yellow and in the sky and far away and very, very bright; hot is good if it is far away, or if we are cold; hot is also bright if it is very hot, as in the case of a lamp, but a lamp is safely enclosed, so it does not have to be very far away. Infants learn new concepts every day, and they face the non-trivial task of connecting what they learn to what they already know (one possible cause of the perennial question, "Why?"). But after all this hard work, when we grow up, we depend immensely on this vast network of knowledge: every concept, when invoked, brings to mind countless others, from which we select the relevant ones according to the context.

We therefore propose a structural representation as an answer to how concepts might be represented: a densely connected, probabilistic web of concepts. Using a Markov Random Field, we show how different concepts can be related probabilistically and invoke each other as necessary. We propose that there are many different kinds of concepts, corresponding to the different "categories" in language (nouns, adjectives, verbs, even prepositions), all connected to each other in the same manner.

Then we look at "context": the bane of Artificial Intelligence. Humans have to use context; without it we would be useless in this world. We have such a large repertoire of actions, for instance, that there would be no way to act sensibly if we considered each and every one of them every time we needed to make a move. We simply select the contextually relevant ones. (And even those are enough to baffle us sometimes...)

Within the scope of this project, I have modeled this probabilistic concept web by extending the Markov Random Field model to a hybrid version, which turned out to have significant benefits in reasoning and object recognition compared to considering concepts individually. This makes sense: the world is composed of logically "connected" concepts (a ball is generally round), and it is clearly advantageous to exploit this statistical regularity.
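To make the idea concrete, here is a minimal sketch of a pairwise Markov Random Field over binary concept nodes, with brute-force inference; the concept names and potential values are illustrative, not those of our actual hybrid model. It shows how evidence for one concept ("round") raises the belief in a connected one ("ball"):

<syntaxhighlight lang="python">
import itertools

# Binary concept nodes of a toy concept web (1 = concept is active).
nodes = ["ball", "round", "rollable"]

# Pairwise compatibilities: psi[(a, b)][(xa, xb)] -> potential value.
# Higher values for co-activation encode "logically connected" concepts.
psi = {
    ("ball", "round"):     {(1, 1): 4.0, (1, 0): 0.5, (0, 1): 1.0, (0, 0): 1.0},
    ("ball", "rollable"):  {(1, 1): 3.0, (1, 0): 0.5, (0, 1): 1.0, (0, 0): 1.0},
    ("round", "rollable"): {(1, 1): 2.0, (1, 0): 1.0, (0, 1): 1.0, (0, 0): 1.0},
}

def joint(assignment):
    """Unnormalized joint probability: product of pairwise potentials."""
    p = 1.0
    for (a, b), table in psi.items():
        p *= table[(assignment[a], assignment[b])]
    return p

def marginal(query, evidence=None):
    """P(query=1 | evidence) by enumeration (fine for a toy web)."""
    evidence = evidence or {}
    num = den = 0.0
    free = [n for n in nodes if n not in evidence]
    for values in itertools.product([0, 1], repeat=len(free)):
        assignment = dict(zip(free, values), **evidence)
        p = joint(assignment)
        den += p
        if assignment[query] == 1:
            num += p
    return num / den

# Observing "round" raises the belief that the object is a ball.
print(marginal("ball"))                # prior belief
print(marginal("ball", {"round": 1}))  # belief given round evidence
</syntaxhighlight>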

I have then modeled contextual information on top of this concept web, proposing an incremental, online variant of Latent Dirichlet Allocation. This contextual information also turned out to be hugely beneficial in a variety of scenarios, including object recognition, the selection of appropriately safe actions, and computationally efficient planning.
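As a sketch of how such contextual modeling can work, here is online topic modeling with gensim's LdaModel (online variational Bayes), used as a stand-in for our own incremental variant; the scene "documents" below are invented for illustration:

<syntaxhighlight lang="python">
from gensim import corpora, models

# Each "document" is the bag of concepts active in one scene;
# the latent topics then play the role of contexts.
scenes = [
    ["ball", "round", "rollable", "play"],
    ["cup", "container", "pour", "drink"],
    ["ball", "play", "throw"],
    ["cup", "drink", "hot"],
]
dictionary = corpora.Dictionary(scenes)
corpus = [dictionary.doc2bow(s) for s in scenes]

# Online variational LDA: the model can be updated incrementally.
lda = models.LdaModel(corpus, id2word=dictionary, num_topics=2)

# A new scene arrives: update the model online, then read off the
# context (topic) distribution for the current observations.
new_scene = [dictionary.doc2bow(["ball", "rollable", "throw"])]
lda.update(new_scene)
print(lda.get_document_topics(new_scene[0]))
</syntaxhighlight>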

A video combining what we have done in this project so far can be found here.

====ROSSI: Emergence of Communication in Robots through Sensorimotor and Social Interaction====
Mar. 2010 - Jan. 2011
Funded by the European Union through Framework Programme 7

  • Implemented a head-centered visual representation module (proposed by Grossberg et al.). The module combines retinal images with the current head-eye configuration, read from the robot's proprioceptive sensors, to extract a pose-independent representation of environmental objects (see the first sketch after this list).
  • Implemented the Vector Integration to Endpoint (VITE) model for human-like movements on the iCub humanoid robot (see the second sketch after this list).
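The head-centered transform can be illustrated geometrically; this is a minimal sketch, not Grossberg et al.'s neural model, and the angle and frame conventions are assumptions. Composing the retinal target direction with the eye-in-head rotation from proprioception yields a direction that no longer depends on where the eye happens to be looking:

<syntaxhighlight lang="python">
import numpy as np

def rot(axis, angle):
    """Rotation matrix about a principal axis ('x', 'y' or 'z'), radians."""
    c, s = np.cos(angle), np.sin(angle)
    if axis == "x":
        return np.array([[1, 0, 0], [0, c, -s], [0, s, c]])
    if axis == "y":
        return np.array([[c, 0, s], [0, 1, 0], [-s, 0, c]])
    return np.array([[c, -s, 0], [s, c, 0], [0, 0, 1]])

def head_centered_direction(retinal_azimuth, retinal_elevation,
                            eye_pan, eye_tilt):
    """Map a retinal target direction to a head-centered unit vector.

    The retinal offset (where the target falls on the retina) is composed
    with the eye-in-head rotation read from proprioception, so the result
    does not depend on the current eye pose.
    """
    # Unit vector of the target in eye-centered coordinates.
    v_eye = rot("z", retinal_azimuth) @ rot("y", retinal_elevation) @ np.array([1.0, 0.0, 0.0])
    # Rotate by the current eye pose to express it in the head frame.
    return rot("z", eye_pan) @ rot("y", eye_tilt) @ v_eye

# Same object, two different eye poses -> same head-centered direction:
# panning the eye 0.2 rad toward the target shrinks the retinal azimuth by 0.2.
print(head_centered_direction(0.3, 0.0, 0.0, 0.0))
print(head_centered_direction(0.1, 0.0, 0.2, 0.0))
</syntaxhighlight>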
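And here is a minimal discrete-time sketch of the VITE dynamics (Bullock and Grossberg): a difference vector between target and present position is integrated and gated by a growing "GO" signal, producing the single-peaked, roughly bell-shaped speed profile typical of human reaching. The gains and time step are illustrative, not those of our iCub implementation:

<syntaxhighlight lang="python">
import numpy as np

def vite_reach(target, position=0.0, gamma=30.0, dt=0.001, t_end=1.5):
    """1-D VITE reach: the difference vector V relaxes toward (T - P)
    and the outflow to position P is gated by a ramping GO signal G(t)."""
    V, P = 0.0, position
    trajectory = []
    for i in range(int(t_end / dt)):
        G = 5.0 * (i * dt)               # GO signal ramps up over the movement
        V += dt * gamma * (-V + (target - P))
        P += dt * G * max(V, 0.0)        # outflow is gated and rectified
        trajectory.append(P)
    return np.array(trajectory)

traj = vite_reach(target=0.5)
vel = np.diff(traj) / 0.001
print(traj[-1])                 # position converges toward the target
print(vel.argmax(), len(vel))   # peak speed partway through the movement
</syntaxhighlight>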

====Controllable Robotic Swarm====
Jun. 2006 - Sept. 2009
Funded by [http://www.tubitak.gov.tr/ The Scientific and Technological Research Council of Turkey (TUBITAK)] through project 104E066

  • Developed self-organized flocking behavior for the Kobot robot (designed and built by the KOVAN Research Lab for swarm robotics studies); a toy sketch of the idea follows this list
  • Investigated the necessary and sufficient conditions for successful directional control of a self-organized flock by informed individuals spread throughout the flock, whose identities are not known to the other, naive individuals
  • Took part in the development of an OpenGL/ODE-based physical simulator for the Kobot robot platform
  • Developed a kernel module running on an onboard Linux processor for camera access on the Kobot robot platform
  • Developed an embedded omnidirectional vision algorithm for the Kobot robot platform
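Here is a toy 2-D sketch of self-organized flocking steered by a few informed individuals: each robot aligns with its neighbors' headings and keeps spacing, while the informed robots add a small bias toward a goal direction only they know. This is a generic alignment-plus-spacing rule with illustrative parameters, not the actual Kobot controller:

<syntaxhighlight lang="python">
import numpy as np

rng = np.random.default_rng(0)
N, INFORMED = 20, 3
GOAL = np.array([1.0, 0.0])          # goal direction of the informed robots

pos = rng.uniform(0, 5, size=(N, 2))
heading = rng.uniform(-np.pi, np.pi, size=N)

def unit(v):
    n = np.linalg.norm(v)
    return v / n if n > 1e-9 else v

for step in range(500):
    new_heading = np.empty(N)
    for i in range(N):
        # Heading alignment: average own and neighbors' headings (range 2.0).
        d = np.linalg.norm(pos - pos[i], axis=1)
        nbrs = (d < 2.0) & (d > 0)
        align = np.array([np.cos(heading[nbrs]).sum() + np.cos(heading[i]),
                          np.sin(heading[nbrs]).sum() + np.sin(heading[i])])
        # Spacing: repel from too-close neighbors, attract to distant ones.
        prox = np.zeros(2)
        for j in np.flatnonzero(nbrs):
            prox += unit(pos[j] - pos[i]) * (d[j] - 1.0)  # desired spacing ~1.0
        desired = unit(align) + 0.5 * prox
        # Informed individuals add a quiet preference for the goal direction;
        # the naive individuals cannot tell them apart.
        if i < INFORMED:
            desired += 0.3 * GOAL
        new_heading[i] = np.arctan2(desired[1], desired[0])
    heading = new_heading
    pos += 0.02 * np.column_stack([np.cos(heading), np.sin(heading)])

# Over time, the whole flock drifts along the informed direction.
print(pos.mean(axis=0))
</syntaxhighlight>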