At SAIL LABS, all key technologies are developed with a vision of continuous innovation at the leading edge of linguistic and IT research.

With its focus on Security Research, SAIL LABS currently maintains active profiles
on both the Austrian and the European Security Research Map.



QuOIMA – Quelloffene Integrierte Multimedia Analyse (Open-Source Integrated Multimedia Analysis)

Exploiting open-source information, in particular social media,
to derive information that supports a more realistic risk assessment in crisis and disaster management.

Today, crises and disasters are accompanied by the media around the clock. Hardly a minute passes that is not documented in some medium; the information carried by today's media, and by social media such as Facebook and Twitter in particular, is therefore an often untapped resource. By combining sources, different, mutually complementary pieces of information can be opened up for integrated use. This wealth of information, however, brings with it the problem of sifting, channelling, and exploiting an enormous and inhomogeneous volume of data.

Automatic monitoring of traditional and, above all, of the new social media makes it possible to derive risk indicators and risk factors for crisis and disaster events at an early stage and to quickly recognise structures and trends for crisis and disaster management. The resulting comprehensive situational information enables earlier and faster reactions to potential crisis situations and to interdependencies between the actors involved.

The technical and electronic infrastructures currently available in national and international crisis and disaster management are not capable of performing comprehensive, automated analyses of all media channels, in particular of social media. The continuous evolution of the multimedia and social media landscape and the growing dynamics of information demand the development of appropriate methods. From the text, image, and video documents available in traditional media (TV, radio, web) as well as in social media, relevant event manifestations are to be identified automatically, and the analysed multimedia documents are to be made available to crisis and disaster assessment experts in the form of clusters. Researching and developing the algorithms and methods needed to achieve these goals is the focus of this project; the automated analysis of such content across multimedia and social media channels constitutes a genuine innovation.
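
To make the clustering step concrete, the following minimal sketch groups short crisis-related posts by textual similarity, so that posts describing the same event end up in the same cluster. It is a generic illustration using scikit-learn with invented sample posts, not the QuOIMA pipeline itself:

    # Generic sketch of grouping crisis-related posts into event clusters
    # using scikit-learn; the sample posts are invented and this is not
    # the QuOIMA pipeline itself.
    from sklearn.cluster import KMeans
    from sklearn.feature_extraction.text import TfidfVectorizer

    posts = [
        "Flooding reported near the main station, streets closed",
        "River levels rising fast, evacuation underway in the old town",
        "Power outage in the northern district after the storm",
        "Storm damage: power lines down north of the city",
    ]

    # Represent each post as a TF-IDF vector, then group similar posts.
    vectors = TfidfVectorizer(stop_words="english").fit_transform(posts)
    labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(vectors)

    for label, post in sorted(zip(labels, posts)):
        print(label, post)  # posts sharing a label form one event cluster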

Both from a technical-scientific perspective and from the perspective of the humanities and social sciences, the insights pursued in QuOIMA represent a fundamental contribution of a new, previously unattained quality to Austrian security research.

For end users, integrating these insights into situational-awareness reporting yields a more realistic risk assessment and, as a result, a wider range of options for action. The end users extend their expertise, which in turn increases the capacity of the organisation as a whole to act.

Audio feature about QuOIMA on Oe1

For more information please visit
http://www.kiras.at/gefoerderte-projekte/detail/projekt/quoima-quelloffene-integrierte-multimedia-analyse

Funded by:

iTalk2Learn

Talk, Tutor, Explore, Learn:
Intelligent Tutoring and Exploration for Robust Learning

Over a three-year working period, a platform will be developed that teaches mathematics to young learners (aged 5-11). The underlying technology is able to recognise which exercise suits a particular student best, and it supports more intuitive interfaces based on speech interaction. Using natural language instead of keyboard and mouse is particularly helpful for young children, who are not yet familiar with the written form.
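
As a toy illustration of such adaptive exercise selection, the sketch below picks the exercise whose observed success rate lies closest to a target difficulty, with occasional random exploration. The exercise names, counts, and target value are invented; this is not iTalk2Learn's actual recommender:

    # A toy sketch of adaptive exercise selection, not iTalk2Learn's actual
    # recommender: pick the exercise whose observed success rate is closest
    # to a target of ~70%, so the student is challenged but not overwhelmed.
    import random

    history = {  # hypothetical per-exercise (successes, attempts) for one student
        "add_fractions": (9, 10),
        "compare_fractions": (4, 8),
        "equivalent_fractions": (1, 6),
    }

    TARGET = 0.7  # desired success probability

    def next_exercise(history, explore=0.1):
        if random.random() < explore:  # occasionally try something new
            return random.choice(list(history))
        def gap(name):
            successes, attempts = history[name]
            return abs(successes / attempts - TARGET)
        return min(history, key=gap)  # exploit: closest to target difficulty

    print(next_exercise(history))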

Objectives

  • Provide an open source platform for Intelligent Support Systems,
    in particular Intelligent Tutoring Systems, integrated with structured
    practice and exploratory conceptually oriented learning.
  • Provide state-of-the-art and highly innovative reference implementations of plugins
    for the platform that could be used in a wide range of application domains, such as

    • Recommender systems for collecting student actions,
      building student profiles and adaptive control of single-student tutoring
    • Analysis and reasoning components for supporting
      effective interaction with exploratory activities
    • Speech recognition for a conversational channel via voice
  • Advance our understanding of the roles that the different modalities of speech,
    direct manipulation, and multiple and alternative representations play
    in learning elementary mathematics through digital technologies
  • Provide a summative evaluation of activities and support features
    generated by our intelligent learning support platform

For more information please visit http://www.italk2learn.eu

iTalk2Learn is a collaborative research project funded by the
European Commission 7th Framework Programme Call 8 (FP7-ICT-2011-8).

VIRTUOSO

Versatile Information Toolkit for End-Users Oriented Open Sources Exploitation

VIRTUOSO is an EU FP7 co-funded project that will provide a technical framework for the integration of tools for the collection, processing, analysis, and communication of open source information.
This middleware framework will enable "plug and play" functionality that improves the ability of border control, security, and law enforcement professionals to use data from across the source/format spectrum in support of the decision-making process. As a proof of concept and to highlight the efficiency of this open-source framework, a prototype will be built and demonstrated using operational scenarios. The project will comply with legal requirements and enforce the principles of privacy and data protection to safeguard the interests of citizens within the European Union.
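
To illustrate the "plug and play" idea at the code level, the following sketch registers processing tools in a shared registry and chains whichever ones are installed. All names and interfaces are invented for illustration; the actual VIRTUOSO middleware is far more elaborate:

    # Minimal sketch of a "plug and play" tool registry; every name and
    # interface here is invented for illustration, not VIRTUOSO's API.
    from typing import Callable, Dict

    REGISTRY: Dict[str, Callable[[str], str]] = {}

    def plugin(name: str):
        """Register a processing tool under a name."""
        def register(func: Callable[[str], str]) -> Callable[[str], str]:
            REGISTRY[name] = func
            return func
        return register

    @plugin("normalize")
    def normalize(text: str) -> str:
        return " ".join(text.split()).lower()

    @plugin("redact_numbers")
    def redact_numbers(text: str) -> str:
        return "".join("#" if c.isdigit() else c for c in text)

    def run_pipeline(text: str, steps) -> str:
        for step in steps:  # chain whichever tools are installed
            text = REGISTRY[step](text)
        return text

    print(run_pipeline("Flight  AB123 delayed", ["normalize", "redact_numbers"]))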

For more information please visit http://www.virtuoso.eu/


Funded by:

Sail Labs in the Media: EU-funded project to prompt intelligence-sharing

M-Eco: Medical EcoSystem

Personalized Event-based Surveillance

The health of a society’s individuals is not isolated from the natural environment. Symbiotically we influence and are influenced by the ecosystem.

Public health officials are faced with new challenges for outbreak alert and response due to the continuous emergence of infectious diseases and contributing factors such as demographic change or globalization.

Early reaction is necessary, but communication and information flow through traditional channels is often slow.

SAIL LABS Technology is part of the ambitious M-Eco consortium, which aims to develop an early warning system for disease detection. The project proposes a complementary route to the early detection of public health threats by drawing on additional sources of information: today, online media, weblogs, scientific and non-scientific discussion forums, and direct electronic communication complement traditional reporting mechanisms.

M-Eco will address limitations of current systems for Epidemic Intelligence by

  • More sophisticated event-detection technologies (unsupervised and supervised methods)
  • Additional resources (Web 2.0 data, Multimedia)
  • Personalization and filtering
  • Integration with existing systems
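
As a minimal illustration of the unsupervised end of this event-detection spectrum, the sketch below flags a day on which mentions of a symptom term spike well above their recent average. The counts and threshold are invented, and M-Eco's actual methods are considerably more sophisticated:

    # Generic sketch of unsupervised signal detection over media data:
    # flag a day whose mention count spikes far above the recent average.
    # The counts and threshold are invented, not M-Eco's algorithms.
    from statistics import mean, stdev

    daily_mentions = [3, 4, 2, 5, 3, 4, 21]  # e.g. daily counts of "fever" posts

    def is_signal(counts, threshold=3.0):
        *history, today = counts
        mu, sigma = mean(history), stdev(history)
        return sigma > 0 and (today - mu) / sigma > threshold

    print(is_signal(daily_mentions))  # True: the last day is a strong outlier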


The M-Eco consortium is composed of seven complementary organisations which are based in five different European countries, thereby securing the dissemination and penetration of the project results across Europe.

User institutions participating in the project through the M-Eco Advisory Board include representatives of the World Health Organization (WHO), the European Centre for Disease Prevention and Control (ECDC), the Health Protection Agency (HPA), the Institut de Veille Sanitaire (InVS), and the Mekong Basin Disease Surveillance (MBDS).

For more information please visit http://www.meco-project.eu/home

The research leading to these results has received funding from the European Community's Seventh Framework Programme.

Softnet Austria Competence Network

Softnet Austria is a private research association that cooperates with business and university partners to conduct and promote applied research in software engineering. As a competence network, it deliberately operates as an intermediary between research institutions and companies, targeting the development and deployment of novel software engineering techniques and providing institutionalized access to Austria's software engineering scene.


Softnet Austria is a private non-profit association within the so-called K-net program of the Austrian Federal Ministry of Economics and Labor. Research and innovation work within the network is organized in working groups under academic supervision. The working groups are in turn assigned to the competence fields of engineering methodologies and models (SOFT-T&M) and web engineering and user interfaces (SOFT-WEB), which represent specific R&D issues in next-generation software engineering.

For more information please visit http://www.soft-net.at/index.html


Funded by:



KIRAS (MDL - Multimedia Documentation Lab)

The KIRAS framework aims at supporting Austrian governmental organizations as well as companies in their efforts to improve security standards. Situation reports and trend analyses form the core elements of the project, and content analysis plays a strategic role in both areas. Whereas in the past the focus was placed on textual content, the globally ever-increasing proliferation of multimedia content now requires a broadening of scope. National as well as international audio, video, and multimedia content has to be integrated into situation reports and, alongside text documents, has to form the basis for long-term trend analyses.

Over the past years, the users in charge of processing open-source information have been confronted with the growing importance of multimedia content. The proportion of multimedia data has increased dramatically in both quantity and quality, and this diversity and improved quality lend themselves to wider fields of use. Consequently, the integration of multimedia content has become an issue of the highest importance. The industrial partner's offer to alleviate the existing deficits with an existing product forms only part of the target solution. This is why, together with partners covering the required skill profile, a scientifically sound solution will be developed in cooperation with the user.

Devising an optimal scheme for the representation of knowledge and the organization of multimedia content poses a special challenge for the project. These schemes will be developed according to current scientific and technical methodologies. The architecture of the demonstrator (prototype) will be modular, so as to maximize synergies for other security-relevant users, governmental or private. The modules will comprise technologies for processing multimedia content of varying quality and quantity. In a subsequent step, the data will be analyzed and made searchable according to the latest scientifically founded methodologies; new terminology will be generated along the way.

The core module will provide analysis and visualization functionalities according to a wide array of criteria. Content will be made searchable using the created technologies, so multimedia content can be linked into an analysis or situation-report system via a push or pull mechanism. Within the scope of the project, special attention will be paid to the human aspect, in order to avoid information overload and to best fulfill ergonomic requirements.

The intended use of the system is to allow experts to efficiently generate more realistic, high-quality situation reports in critical situations. These can then be used to communicate with the Austrian population and to increase its security and sense of security, the declared goals of the KIRAS framework. Civilian crisis scenarios as well as economic and other security-relevant scenarios can be addressed (e.g., large-scale events or supply bottlenecks).

http://www.kiras.at/gefoerderte-projekte/programmlinie-3/mdl/

Funded by:

COAST

Competence Network for Advanced Speech Technologies
The competence network COAST is concerned with research and development in the field of speech recognition and speech interpretation with large lexicons for professional applications.
The main research areas are:

  • Development, improvement, and refinement of speech recognition algorithms from the fields of statistics, acoustics, and signal processing, and their application-specific parametrization.

  • Application of new techniques of semantic interpretation based on artificial intelligence, in combination with speech recognition, to improve recognition results and their usability.

  • Application-specific improvement and optimization of speech recognition, i.e., analysis of how speech recognition can optimally support concrete applications. The main focus is on professional transcription of documents, messages, and meetings, as well as on media mining. The network is, however, open to further applications.

  • Exploration of possibilities for applying speech recognition in new applications.
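
As a small illustration of the signal-processing front end that speech recognisers typically share, the sketch below converts a waveform into mel-frequency cepstral coefficients (MFCCs), a standard acoustic feature. It uses the open-source librosa library and a synthetic sine wave as a stand-in for recorded speech; it is not COAST's implementation:

    # Sketch of the acoustic front end shared by most speech recognisers:
    # turning a waveform into MFCC feature vectors. Generic illustration
    # using librosa; a pure sine wave stands in for recorded speech.
    import numpy as np
    import librosa

    sr = 16000  # sample rate in Hz
    t = np.linspace(0, 1.0, sr, endpoint=False)
    signal = 0.5 * np.sin(2 * np.pi * 440 * t)  # 1 second of a 440 Hz tone

    # 13 mel-frequency cepstral coefficients per analysis frame
    mfcc = librosa.feature.mfcc(y=signal.astype(np.float32), sr=sr, n_mfcc=13)
    print(mfcc.shape)  # (13, number_of_frames)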

For more information please visit http://www.coast.at/jsite/index.php

Funded by:

PHAROS

Platform for Searching of Audiovisual Resources across Online Spaces
The project PHAROS is an Integrated Project co-financed by the European Union under the Information Society Technologies Programme (6th Framework Programme), Strategic Objective ‘Search Engines for Audiovisual Content’.

The PHAROS mission is to advance audiovisual search from a point-solution search engine paradigm to an integrated search platform paradigm. This platform will be built on an innovative, open, and distributed architecture that enables consumers, businesses and organisations to unlock the values found in audiovisual content.

The PHAROS search platform will create a new infrastructure for managing and enabling access to information sources of all types, supporting advanced audiovisual processing, content handling, and management that will enhance control, creation, and sharing of multimedia for all users in the value chain. For the audiovisual industry in particular, the impact will be to strengthen and extend product and service offerings, integrate outstanding technologies, and achieve a competitive advantage through solutions that address the full content management processing chain.

http://www.pharos-audiovisual-search.eu/

http://deimos3.apple.com/WebObjects/Core.woa/Browse/itunes.open.ac.uk.2692919961?i=1845616339

http://projects.kmi.open.ac.uk/itunesu/

Funded by:

DIVAS

Direct Video and Audio Content Search Engine
DIVAS is an IST project of the EU that targets the design, implementation, and demonstration of a multimedia search engine based on advanced direct video and audio search algorithms, applied to encoded, compact, and standards-compliant representation formats of the content inside search databases.

The driving force is to decouple content search from the availability of laboriously annotated metadata databases. The algorithms to be developed will thus provide an alternative, complementary path to metadata-based audio/video content search. The proposed approach advocates the automatic extraction of content features directly from the compressed content ("fingerprints" or "thumbnails"), greatly accelerating the search process. Further, through an associated classification, search databases will be rendered compact and suitable for binary search techniques, providing fast searches over huge content databases.
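
The fingerprint idea can be illustrated with a toy example: reduce each piece of content to a compact binary code and compare codes by Hamming distance, so near-duplicates are found without any manual metadata. The fingerprints below are invented; DIVAS's actual algorithms operate on real compressed-domain features:

    # Toy illustration of binary fingerprint matching: compare compact
    # codes by Hamming distance instead of consulting manual metadata.
    # All fingerprints are invented, not DIVAS's compressed-domain features.
    def hamming(a: int, b: int) -> int:
        return bin(a ^ b).count("1")

    database = {
        "clip_001": 0b1011010011001010,  # hypothetical 16-bit fingerprints
        "clip_002": 0b0100101100110101,
    }

    query = 0b1011010011001000  # fingerprint of an unknown clip

    best = min(database, key=lambda name: hamming(database[name], query))
    print(best, hamming(database[best], query))  # clip_001 1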

The search engine will be applicable to a number of different use cases, ranging from "similarity" searches for video and audio web content to information harvesting and data mining. Moreover, the DIVAS approach enables "stream searchers" suitable for DRM resolution, advertisement time tracking, etc.

DIVAS implements algorithms for compressed video and audio characterisation, fingerprint extraction, and segmentation applied to compressed content, all essential for efficient search and result correlation in audio/video search engines. This opens the way for the seamless integration of audiovisual searching into ANY web search engine and for the location of video content ANYWHERE, irrespective of transformations and annotations, thus adding true direct multimedia search capability to ambient intelligence.

http://www.ist-divas.eu/portal/

Funded by:



MISTRAL

Measurable Intelligent and Reliable Semantic Extraction and Retrieval of Multimedia Data
Multimedia data has a rich and complex structure in terms of inter- and intra-document references and can be an extremely valuable source of information. However, this potential is severely limited until and unless effective methods for semantic extraction and semantic-based cross-media exploration and retrieval can be devised.

MISTRAL will extract a large variety of semantically relevant metadata from one media type and integrate it closely with semantic concepts derived from other media types. Eventually, the results of this cross-media semantic integration will also be fed back into the semantic extraction processes of the individual media types so as to enhance the quality of their results. MISTRAL will focus on highly innovative, semantic-based cross-media exploration and retrieval techniques employing concepts at different semantic levels.

MISTRAL addresses the specifics of multimedia data in the global, networked context, employing semantic web technologies. The MISTRAL results for semantic-based multimedia retrieval will contribute to a significant improvement of today’s human-computer interaction in multimedia retrieval and exploration applications. New types of functionality include, but are not limited to:

  • cross-media-based automatic detection of objects in multimedia data: For example, if a video contains an audio stream with barking together with a particular constellation of video features, the system can automatically label those video features as an object “dog”.

  • semantic-enriched cross-media queries: A sample query could be “find all videos with a barking dog in the background and playing children in the foreground”.

  • cross-media synchronisation: The idea is to synchronise independent types of media according to the extracted semantic concepts. For example, if users see somebody walking in a video, they should also hear footsteps in the audio.
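
A semantic-enriched cross-media query of the kind listed above can be sketched as follows: each video carries concept labels extracted separately from its audio and visual streams, and a query intersects them. The labels and videos are invented; this is purely illustrative, not the MISTRAL system:

    # Toy sketch of a semantic-enriched cross-media query: intersect concept
    # labels extracted from the audio and visual streams of each video.
    # All labels are invented; this is not the MISTRAL system.
    videos = {
        "v1": {"audio": {"barking", "wind"}, "visual": {"dog", "park"}},
        "v2": {"audio": {"music"}, "visual": {"children", "beach"}},
        "v3": {"audio": {"barking"}, "visual": {"dog", "children"}},
    }

    def query(videos, audio_terms, visual_terms):
        return [vid for vid, concepts in videos.items()
                if audio_terms <= concepts["audio"]
                and visual_terms <= concepts["visual"]]

    # "videos with a barking dog and children in view"
    print(query(videos, {"barking"}, {"dog", "children"}))  # ['v3']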


SAIL LABS has been chosen as a research partner for its area of expertise, speech recognition.

For more information please visit mistral-project.tugraz.at.

Funded by:

REVEAL THIS

Retrieval of Video and Language for the home User in an Information Society
REVEAL THIS addresses a basic need underlying content organisation, filtering, consumption, and enjoyment: it develops content programming systems that help European citizens keep up with the explosion of digital content scattered over different platforms (radio, TV, the World Wide Web, etc.), different media (speech, text, image, video), and different languages. People should spend most of their leisure time enjoying content, not searching for it.

REVEAL THIS aims at developing content programming technology able to capture, semantically index, categorise and cross-link multiplatform, multimedia and multilingual digital content, as well as provide the system user with semantic search, retrieval, summarisation and translation functionalities.

For more information please visit www.reveal-this.org.

Funded by:



Combined Image and Word Spotting: CIMWOS

This project aims to facilitate common procedures for archiving and retrieving audio-visual material. The participating organisations come from various countries: France, Belgium, Austria, Switzerland, and Greece. The objective of the project is to develop and integrate a robust unrestricted keyword-spotting algorithm and an efficient image-spotting algorithm specially designed for digital audio-visual content, leading to the implementation and demonstration of a practical system for efficient retrieval from multimedia databases. Specifically, a system will be developed to automatically retrieve images, video, and speech frames from an audio-visual database based on keywords entered by the user via keyboard or speech.
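
The keyword-spotting side of such a system can be sketched in a few lines: if each recognised word carries a timestamp, a keyword query returns the positions in the recording where the word occurs. The transcript below is invented, and the real CIMWOS algorithms are, of course, far richer:

    # Minimal sketch of keyword spotting over a speech transcript: each
    # recognised word carries a timestamp, so a query returns the points
    # in the recording to jump to. The transcript is invented.
    transcript = [  # hypothetical ASR output: (start_time_seconds, word)
        (0.0, "the"), (0.4, "flood"), (0.9, "reached"),
        (1.5, "the"), (1.8, "station"), (2.5, "flood"),
    ]

    def spot(transcript, keyword):
        return [t for t, word in transcript if word == keyword.lower()]

    print(spot(transcript, "flood"))  # [0.4, 2.5]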

Today, a vast amount of information is accumulated in the form of video, pictures, and audio, which does not lend itself to automated searching. To improve the usability of these invaluable resources, indexing techniques are required; today, indexing is a very expensive and time-consuming task carried out mainly by hand by experts. In view of the expansion of digital television and of video-based communications and related applications, an editor-like tool that allows the user to see or hear, select or modify, and search audio-visual databases has become indispensable. Although some European projects address automated indexing of audio-visual material based on subtitles and speech recognition, the problem of locating important video clips based on their image content has not been addressed. CIMWOS will use a dual audio and visual approach to locate important clips within multimedia material, employing state-of-the-art algorithms for both image and speech recognition.

The CIMWOS system will be a powerful tool for media and television, video, news broadcasting, show business, advertising, and any organisation that produces, markets, and/or broadcasts video and audio programmes. It will facilitate common procedures for retrieving audio-visual material during research, the production of a documentary, and similar tasks. By utilizing the vast amounts of information accumulated in audio and video, the CIMWOS system will become an invaluable assistant in promoting the re-use of existing resources and cutting the budgets of new productions.

To find out more about the CIMWOS project go to www.xanthi.ilsp.gr/cimwos/.

Funded by:



V-Man

V-Man: The Virtual Man Project
SAIL LABS Technology is part of the ambitious V-Man consortium that aims to develop an intuitive system allowing non-computer specialists to create, animate, control and interact with a new generation of 3D virtual characters: the V-Men. These autonomous characters are intended for use in interactive media such as games and virtual reality as well as for special effects in film and television.

The project will bring together state-of-the-art video game, research and industrial 3D technologies allowing realistic simulation of body and clothes appearance, facial expressions, and real-time physics. Thus, the V-Man will be able to adapt its behavior to its environment, interact with its environment, and understand intuitive high-level user commands.

The V-Man product will be available as a stand-alone virtual-reality application that exports animation in standard formats, as a plug-in for computer graphics applications, and as a C++ toolkit allowing developers to populate their visual simulations or video games with realistic characters. The V-Man system features realistic simulation of body and clothing appearance, facial expressions, and real-time physics. A V-Man is able to walk on any kind of terrain, go upstairs and downstairs, calculate paths to avoid obstacles, and adapt his movements and actions to his environment. Transitions between movements are accomplished with an innovative combination of motion-blending algorithms, animation-sampling methods, and real-time physical simulation of the body. Physical character animation, in which dynamics and animation are blended continuously, allows V-Men to synthesise motion at runtime depending on their environment, their task, and their physical parameters.
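
As a toy illustration of what motion blending means at its simplest, the sketch below linearly interpolates the joint angles of two poses to create a transition. Production systems such as V-Man use far more sophisticated techniques (quaternion interpolation, physics-based correction), so this only conveys the basic idea; all pose values are invented:

    # Toy illustration of motion blending: linearly interpolate the joint
    # angles of two poses. Real systems use quaternion interpolation and
    # physics; all pose values here are invented.
    def blend(pose_a, pose_b, alpha):
        """alpha=0 returns pose_a, alpha=1 returns pose_b."""
        return {joint: (1 - alpha) * pose_a[joint] + alpha * pose_b[joint]
                for joint in pose_a}

    walk = {"hip": 10.0, "knee": 35.0, "ankle": -5.0}  # joint angles in degrees
    climb = {"hip": 40.0, "knee": 70.0, "ankle": 10.0}

    for step in range(5):  # five-frame transition from walking to climbing
        print(blend(walk, climb, step / 4))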

The V-Man consortium is composed of six complementary organisations among the top-ranking ones in their respective fields of activity and based in five different European countries, thereby securing the dissemination and penetration of the project results across Europe.

End-users, coming from complementary fields of activity (simulation, television), will provide a perfect industrial setting for specifying and evaluating different V-Man versions on various real-world problems: each end-user partner will validate the software on a pilot application while providing valuable information for the elaboration of best practice and marketing materials.

With regard to the V-Man project, SAIL LABS Technology will provide components and solutions for next-generation language technology products with an emphasis on natural language applications. This will involve next-generation speech processing, multilingual document processing, multimedia content processing, multi-modal interfaces, and dialogue systems.

Funded by:

