Projects


Start date: March 2018
End date: September 2018


Sponsors: Tüpraş


Start date: October 2017
End date: October 2021


Sponsors: European Cooperation in Science and Technology (COST) Programme


Start date: March 2017
End date: March 2021


Sponsors: European Cooperation in Science and Technology (COST) Programme


Start date: March 2014
End date: March 2018

Abstract:

Combining computer vision and language processing for advanced search, retrieval, annotation, and description of visual data.


Sponsors: European Union under European Cooperation in Science and Technology (COST) Programme

Partner(s): http://www.cost.eu/domains_actions/ict/Actions/IC1303?management

Start date: November 2013
End date: November 2017

Abstract:

Ambient Assisted Living (AAL) is an area of research that draws on Information and Communication Technologies (ICT), medical research, and sociological research. AAL is based on the notion that technology and science can improve the quality of life of people in their homes and reduce the financial burden on the budgets of European healthcare providers. Enhanced Living Environments (ELE) refers to the part of AAL most closely tied to ICT. Designing, planning, deploying, and operating an AAL system typically requires the integration of several scientific areas. The Architectures, Algorithms and Platforms for Enhanced Living Environments (AAPELE) COST Action addresses the definition of software, hardware, and service architectures for AAL; the study and creation of more efficient AAL algorithms, particularly those for processing large amounts of data and biosignals in lossy environments; and research on AAL protocols, in particular communication and data transmission protocols. The Action aims to promote interdisciplinary research on AAL by creating a research and development community of scientists and entrepreneurs focused on AAL algorithms, architectures, and platforms, with a view to advancing science in this area and developing new, innovative solutions.


Sponsors: European Union under European Cooperation in Science and Technology (COST) Programme

Partner(s): Istanbul Technical University, SiMiT Lab

Start date: June 2013
End date: May 2017

Abstract:

The objective of this project is to derive and exploit multiple modes of information from face images to facilitate their practical use in several domains, such as assistive systems, entertainment, and multimedia content analysis.


Sponsors: European Union under the FP7 Marie Skłodowska-Curie Actions
Partner(s): http://www.cost.eu/domains_actions/ict/Actions/IC1303?management

Start date: March 2013
End date: March 2017


Sponsors: European Union under European Cooperation in Science and Technology (COST) Programme

Partner(s): Istanbul Technical University, SiMiT Lab, Karlsruhe Institute of Technology (Germany)

Start date: January 2014
End date: December 2016

Abstract:

According to the World Health Organization, about 284 million people in the world are visually impaired. This population is at a disadvantage when it comes to communicating, since its members are unable to interpret most nonverbal messages. Humans rely on visual cues when interacting in a social context, and for those who cannot see, this interferes with the quality of social interactions. The majority of the visually impaired community consists of older adults with age-related vision impairments. These people are forced to adapt their behavior to their new capabilities, and social adjustment to vision loss has been shown to involve considerable difficulties in social functioning, changes in social support, and loneliness. This loss of social activity and social support is known to exacerbate emotional problems such as depression, which is twice as common among elderly people with visual impairments. Healthy social activity is thus desirable for visually impaired people to help them avoid depression and emotional distress. This social disadvantage can best be addressed through an application of technology. We propose computer-vision-based assistive technology that provides real-time feedback on visual cues (such as facial identity, expression, and gaze) in social interaction scenarios for the visually impaired. While a few such systems exist, they suffer from a number of disadvantages that hinder their practical use. The main limitations are adapting computer vision algorithms for blind users, integrating the various visual-cue outputs of the system into a coherent, informative framework, and communicating this information to the user in an effective manner. In this study, we aim to address these limitations, with the help of the target community, towards building a more practical system. With our expertise in the related computer vision technologies and our experience in using technology to help blind people, we propose to investigate new methods of adapting existing computer vision algorithms to help blind and partially sighted people, integrating the various system outputs into a common framework, and finding more useful ways of communicating this information to the visually impaired user in real time.
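
As a rough illustration of the kind of real-time feedback loop described above (not the project's actual system), the following Python sketch uses OpenCV's stock Haar-cascade detector to report the number and rough position of faces seen by a camera; a real assistive system would convey this through audio and add identity, expression, and gaze cues.

# Illustrative sketch only: report face count and rough position per frame.
import cv2

cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def describe_faces(frame):
    """Return a short spoken-style description of the faces in a frame."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    if len(faces) == 0:
        return "no one in view"
    width = frame.shape[1]
    parts = []
    for (x, y, w, h) in faces:
        center = x + w / 2.0
        if center < width / 3.0:
            side = "left"
        elif center > 2.0 * width / 3.0:
            side = "right"
        else:
            side = "center"
        parts.append("person at " + side)
    return ", ".join(parts)

cap = cv2.VideoCapture(0)
while True:
    ok, frame = cap.read()
    if not ok:
        break
    print(describe_faces(frame))  # a real system would render this as audio
    cv2.imshow("view", frame)
    if cv2.waitKey(500) & 0xFF == ord("q"):
        break
cap.release()
cv2.destroyAllWindows()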


Sponsors: The Scientific and Technological Research Council of Turkey (TUBITAK) & German Federal Ministry of Education and Research (BMBF) under the Intensified Cooperation Programme (IntenC)
Partner(s): Istanbul Technical University, SiMiT Lab

Start date: December 2015
End date: May 2016

Sponsors: Türk Telekom

Partner(s): Istanbul Technical University, SiMiT Lab

Start date: January 2016
End date: June 2016

Sponsors: The Scientific and Technological Research Council of Turkey (TUBITAK)

Partner(s): Istanbul Technical University, SiMiT Lab, LIMSI-CNRS, LIG-CNRS, IMMI-CNRS (France), UPC (Spain), CRP-Lippmann (Luxembourg)

Start date: October 2012
End date: September 2015

Abstract:

Human activity constantly generates large volumes of heterogeneous data, in particular via the Web. These data can be collected and explored to gain new insights in the social sciences, linguistics, economics, and behavioural studies, as well as in artificial intelligence and computer science. In this regard, 3M (multimodal, multimedia, multilingual) data can be seen as a paradigm for sharing an object of study, human data, among many scientific domains. To be truly useful, however, these data should be annotated and available in very large amounts. Annotated data are useful for computer science, which processes human data with statistics-based machine learning methods, but also for the social sciences, which increasingly use the large corpora available to support new insights in ways that were unimaginable a few years ago. However, annotating data is costly, as it involves a large amount of manual work, and 3M data, for which different modalities must be annotated at different levels of abstraction, are especially costly in this regard. Current annotation frameworks involve local manual annotation, sometimes aided by automatic tools (mainly pre-segmentation). This proposal aims to develop a first prototype of a collaborative annotation framework for 3M data, in which manual annotation is done remotely at many sites, while the final annotation is consolidated at the main site. Following the same principle, systems for the automatic processing of the modalities (speech, vision) present in the multimedia data will aid transcription by producing automatic annotations. These automatic annotations are produced remotely at each point of expertise and then combined locally to provide meaningful help to the annotators. To develop this new annotation concept, we will test it on a practical case study: person annotation (who is speaking? who is seen?) in video, which requires the collaboration of high-level automatic systems dealing with different media (video, speech, audio tracks, OCR, ...). The quality of the annotated data will be evaluated through the task of person retrieval. This new way of envisioning the annotation process should lead to methodologies, tools, instruments, and data that are useful to the whole scientific community with an interest in 3M annotated data; to support this goal, all the work will be supervised by a committee containing representatives of the main international organizations dealing with language resources and evaluation.
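
The combination step can be illustrated with a toy Python sketch (the names, weights, and scores below are invented, and the project's actual combination methods are not specified here): each modality-specific system returns per-person confidence scores for a video segment, and a simple weighted late fusion ranks the hypotheses offered to the human annotators.

# Hedged sketch of weighted late fusion over per-modality person scores.
from collections import defaultdict

def fuse(modality_scores, weights):
    """modality_scores: {modality: {person: confidence in [0, 1]}}."""
    fused = defaultdict(float)
    for modality, scores in modality_scores.items():
        w = weights.get(modality, 1.0)
        for person, conf in scores.items():
            fused[person] += w * conf
    total = sum(weights.get(m, 1.0) for m in modality_scores)
    return sorted(((p, s / total) for p, s in fused.items()),
                  key=lambda item: item[1], reverse=True)

segment = {
    "face":    {"Alice": 0.80, "Bob": 0.30},   # who is seen?
    "speaker": {"Alice": 0.60},                # who is speaking?
    "ocr":     {"Alice": 0.95},                # name overlay on screen
}
print(fuse(segment, weights={"face": 1.0, "speaker": 1.0, "ocr": 2.0}))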


Sponsors: FP7 ERA-NET CHIST-ERA Grant
Partner(s): Istanbul Technical University, SiMiT Lab

Start date: October 2013
End date: September 2015

Abstract:

The focus of this project is to develop a unified framework for face analysis: a system that receives a face image or video and generates several types of information by processing the face through a shared, optimized structure, enabling real-time operation.
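
The shared-structure idea can be sketched as follows (a toy illustration with random weights, not the project's actual models): expensive face features are computed once per image, and several cheap task heads reuse them, which is what makes real-time multi-output analysis feasible.

# Toy sketch: one shared feature stage feeding several cheap task heads.
import numpy as np

rng = np.random.default_rng(0)

def shared_features(face_image):
    # Stand-in for the heavy, shared feature extraction stage.
    return face_image.reshape(-1)[:128].astype(float)

# Each head is a cheap linear map over the shared features
# (random weights here purely for illustration).
heads = {
    "identity":   rng.normal(size=(10, 128)),   # 10 enrolled identities
    "expression": rng.normal(size=(7, 128)),    # 7 basic expressions
    "age":        rng.normal(size=(1, 128)),
}

def analyze(face_image):
    feats = shared_features(face_image)          # computed once
    return {task: W @ feats for task, W in heads.items()}

face = rng.random((16, 16))                      # dummy 16x16 "face crop"
for task, scores in analyze(face).items():
    print(task, scores.shape)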


Sponsors: The Scientific and Technological Research Council of Turkey (TUBITAK) under the Career Development Programme
Partner(s): Istanbul Technical University, SiMiT Lab

Start date: September 2013
End date: August 2014

Abstract:

Environmental sustainability is an essential requirement that the modern world has to meet. An important step towards a sustainable environment is the development of efficient monitoring methods. In this project, we focus on this task and propose an approach that exploits smartphones and crowdsourcing for environmental monitoring. The project comprises a data collection application that enables smartphone users to take pictures of plants and send them to a central server, and advanced computer vision methods that automatically identify plant species. Building on these tools and technologies, the long-term objective is to build a smartphone application that serves as an electronic field guide for plants and to devise statistical techniques to model and monitor the distribution of plant species.
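
As a toy illustration of the server-side recognition step only (the species names and data below are placeholders, and the project's actual methods are far more advanced), a submitted photo can be reduced to a simple color histogram and matched against labeled examples.

# Toy sketch: nearest-neighbor species matching on color histograms.
import numpy as np

def color_histogram(image, bins=8):
    """Per-channel histogram, concatenated and normalized."""
    hists = [np.histogram(image[..., c], bins=bins, range=(0, 256))[0]
             for c in range(3)]
    h = np.concatenate(hists).astype(float)
    return h / h.sum()

def identify(photo, gallery):
    """gallery: list of (species_name, histogram) pairs; L1 distance."""
    query = color_histogram(photo)
    return min(gallery, key=lambda item: np.abs(item[1] - query).sum())[0]

rng = np.random.default_rng(1)
gallery = [(name, color_histogram(rng.integers(0, 256, (64, 64, 3))))
           for name in ("species A", "species B", "species C")]
photo = rng.integers(0, 256, (64, 64, 3))        # dummy submitted photo
print(identify(photo, gallery))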


Sponsors: Turk Telekom Argela Collaborative Research Award
Partner(s): Istanbul Technical University, SiMiT Lab

Start date: January 2014
End date: December 2014

Abstract:

In the past few years, the digital entertainment world has focused on human-machine interaction across many different platforms. This interaction has grown in importance in a market where mobile devices and mobile games have taken the largest share and augmented reality applications have attracted massive numbers of users. Given how much computers and mobile devices dominate our social lives, the popularity of these technologies is unsurprising. With proper use of these resources, it may also be possible to counteract the associated loss of social contact. Inspired by this idea, the project plans to develop an augmented-reality-aided mobile game that encourages players to socialize. Since the game aims to promote socializing, multiplayer interaction is one of its main focuses, alongside human-machine interaction. To this end, the game includes features that let players engage physically with other players using a camera, all of which suit a mobile game application. In a nutshell, this project aims to build a mobile game that encourages players to socialize through augmented reality and multiplayer features. The application will use image processing and face detection techniques to realize its unique gameplay elements.
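
The core mechanic, face detection driving an augmented overlay, can be sketched in a few lines (an illustration only, not the project's implementation; game logic and multiplayer features are omitted).

# Minimal sketch: anchor a placeholder virtual item to each detected face.
import cv2

cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

cap = cv2.VideoCapture(0)
while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    for (x, y, w, h) in cascade.detectMultiScale(gray, 1.1, 5):
        # A game would render a textured sprite here instead of a box.
        cv2.rectangle(frame, (x, max(0, y - h // 3)), (x + w, y),
                      (0, 255, 255), thickness=-1)
    cv2.imshow("ar-demo", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break
cap.release()
cv2.destroyAllWindows()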


Sponsors: Avea Labs Research Grant
Partner(s): Istanbul Technical University, SiMiT Lab

Start date: January 2013
End date: March 2014

Abstract:

One of the most important aspects of next-generation user interfaces is that they are human-centered. In human-centered user interfaces, machines are expected to adapt themselves to the user in order to support natural, efficient interaction. The most important source of information that enables machines to adapt to the user is the face. In this project, we focus on this topic: processing faces efficiently and extracting information about the user to provide natural interaction.


Sponsors: Istanbul Technical University under the ITU Research Fund
Partner(s): Istanbul Technical University, SiMiT Lab, Queen Mary University of London, Avea Labs

Start date: April 2012
End date: December 2014

Abstract:

Affective and behavioural computing aims to equip computing devices (personal computers, smartphones) with the means to interpret, understand, and respond to human communicative behaviour, emotions, moods, and, possibly, intentions in a naturalistic way, similar to the way humans rely on their senses to assess each other's communicative and affective state. In the last 15 years, researchers in affective computing have invested increasing effort into creating perceptive and intelligent systems with elaborate emotional and social skills. These efforts have been bearing fruit: the field has undergone rapid growth and become a highly active area of research and practice. Despite such notable developments, the academic and teaching side of affective computing has largely been neglected. Academic efforts to offer modules and courses in these areas are relatively new, and existing teaching and research efforts have never been brought together to shed light on the issues academics face in designing and teaching such courses and in sharing material, resources, and experiences. This project therefore aims to take the initial but crucial step of bringing together researchers and academics from relevant yet diverse research fields to discuss the issues and challenges pertinent to all of these fields, explore possible solutions, set key standards, and define future module and course design directions and teaching strategies, so that affective computing becomes tangible to a greater number of university students as well as to industry partners, in particular companies in the mobile and telecommunications industry, that stand to benefit from employing these students.


Sponsors: British Council under the UK-Turkey Knowledge Partnership Programme