• COCOBOTS: Conversational Programming for Cobots (BMBF funded; cooperation w/ Synergeticon GmbH (Germany), Linagora (France), ANITI / University of Toulouse (France))
  • SFB 1287 “Limits of Variability in Language”, project B06 (w/ Manfred Stede), “Limits of Variability in Neural Language Models”
  • Visually-Grounded Interaction / Meetup
  • Explainable Concept Induction
  • Exploring Neural Language Generation
  • ICSPACE
  • CSRA
COCOBOTS: Natural Language Programming for Conversational Cobots (2022) A joint French/German AI project funded by respective federal grants (Maike Paetzel-Prüsmann, David Schlangen)
Linking and combining perceptual and distributional aspects of word meaning (2017--2018) We investigated models of word meaning that link visual to lexical information, and explored several paths for combining them. (Sina Zarrieß, David Schlangen)
Conversational Referring Expression Generation (2016--2017) Research on generating referring expressions (REG) has so far mostly focussed on "one-shot reference", where the aim is to generate a single, written, discriminating expression. In interactive settings, however, it is not uncommon for reference to be established in "installments", where referring information is offered piecewise until success has been confirmed. (Sina Zarrieß, David Schlangen)
Buying Time: Bridging Conversational Pauses (2014--2018) Soledad Lopez' PhD project (funded by CITEC). What should a system do when it has the turn, but has nothing to say yet? (Soledad Lopez, Sina Zarrieß, David Schlangen)
Towards Visual Dialogue: Lexical Knowledge for Situated Interaction (2015--) We explored representing the perceptual aspects of word meaning as classifiers of visual input. (Casey Kennington, Sina Zarrieß, David Schlangen)
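The words-as-classifiers idea can be sketched roughly as follows: each word is paired with its own binary classifier over visual feature vectors, and reference resolution scores candidate objects by combining the per-word applicability judgments. This is a toy illustration only, not the project's actual code; the features, data, and names are invented for the sketch.

```python
import numpy as np

# Toy sketch of "words-as-classifiers": each word gets its own binary
# classifier over visual features. Features, data, and names are invented.

class WordClassifier:
    """Logistic regression for a single word, trained by gradient descent."""

    def __init__(self, dim, lr=0.5, epochs=500):
        self.w = np.zeros(dim)
        self.b = 0.0
        self.lr = lr
        self.epochs = epochs

    def fit(self, X, y):
        for _ in range(self.epochs):
            p = self.predict(X)
            grad = p - y
            self.w -= self.lr * (X.T @ grad) / len(y)
            self.b -= self.lr * grad.mean()

    def predict(self, X):
        return 1.0 / (1.0 + np.exp(-(X @ self.w + self.b)))


def resolve(words, classifiers, objects):
    """Rank candidate objects by the product of per-word applicabilities."""
    scores = np.ones(len(objects))
    for word in words:
        scores *= classifiers[word].predict(objects)
    return int(np.argmax(scores))


# Invented "visual" features: [redness, size].
X = np.array([[0.9, 0.2], [0.8, 0.7], [0.1, 0.5], [0.2, 0.9]])
y = np.array([1.0, 1.0, 0.0, 0.0])  # is the word "red" applicable?

red = WordClassifier(dim=2)
red.fit(X, y)

scene = np.array([[0.95, 0.3], [0.05, 0.8]])  # a red and a blue object
best = resolve(["red"], {"red": red}, scene)
```

Composing such per-word judgments (here simply by multiplication) is what lets the same word models serve reference resolution for arbitrary multi-word expressions.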
Generating Referring Expressions with Symbolic and Iconic Elements (2016--2018) In this project, Ting Han (together with Sina Zarrieß) explored multimodal ensembles consisting of utterances (language) and sketches, as a means to refer to concrete objects. We augmented an existing sketch dataset with verbal descriptions, and trained neural generation models on that. (Ting Han, Sina Zarrieß, David Schlangen)
Multimodal Spatial Descriptions (2013--2018) This PhD project (Ting Han, funded by the China Scholarship Council) focussed on the incremental interpretation of multimodal descriptions combining deictic and iconic gestures with speech. (Ting Han, David Schlangen)
Reading Times for NLG Evaluation (2015) Typically, human evaluation of NLG output is based on user ratings. We explored "mouse-contingent reading" as a metric for evaluation. (Sina Zarrieß, David Schlangen)
DUEL: Disfluencies, Exclamations, and Laughter in Conversation (2014--2017) In this cooperation with Jonathan Ginzburg (Paris Diderot), we looked at markers of conversation that typically do not receive much attention: disfluencies and laughter. Both phenomena are characterised by constraints on their occurrence, which overlay syntactic considerations. Dedicated Site (Julian Hough, David Schlangen)
The Speech Assistant as In-Car Passenger (2013--2017) In this project, we put our incremental dialogue system to work as a passenger in a car, adapting its behaviour to the wider, extra-conversational situation. (Spyros Kousidis, Casey Kennington, David Schlangen)
Incremental Reference Resolution (2011--2015) In his PhD project (funded by the Cluster of Excellence “Cognitive Interaction Technology” at Bielefeld), Casey Kennington explored statistical models for understanding spoken utterances incrementally, also taking into account multimodal input such as gaze information and gesture. (Casey Kennington, David Schlangen)
Multimodal Interaction Lab, Bielefeld (2011--2018) Funded by SFB 673. In Bielefeld, I set up a lab that made it possible to record interactions multimodally, with motion capture, eye tracking, and high-quality audio and video recording. At the time, that required fairly specialised hardware with proprietary software, and so we spent quite a bit of effort making these systems talk to each other, to enable synchronised recordings and multi-channel analyses. Dedicated website here, although that is mostly of historical interest. (Spyros Kousidis, David Schlangen)
InPro: Incrementality and Projection in Dialogue Management (2007--2013) Funded by DFG in the Emmy Noether Programme. In this larger scale project (2 PhD students, 2 Post-Docs), we investigated the role that incremental processing (that is, interpretation of the speech signal while the utterance is ongoing, or generation of a speech signal while planning is still ongoing) can play in practical dialogue systems. As part of this project, we devised an influential abstract processing model (described in Schlangen and Skantze (2009), nominated for the “test of time award” at NAACL 2018) and an influential implementation of a dialogue middleware (InProTK). See more at the dedicated website.
  1. David Schlangen, and Gabriel Skantze A General, Abstract Model of Incremental Dialogue Processing Proceedings of the 12th Conference of the European Chapter of the Association for Computational Linguistics (EACL 2009) 2009 [PDF]
    @inproceedings{Schlangen-2009-1,
      author = {Schlangen, David and Skantze, Gabriel},
      booktitle = {Proceedings of the 12th Conference of the European Chapter of the Association for Computational Linguistics (EACL 2009)},
      pages = {710--718},
      title = {{A General, Abstract Model of Incremental Dialogue Processing}},
      year = {2009}
    }
(Timo Baumann, Okko Buß, Gabriel Skantze, David Schlangen)
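The abstract model's core notions — modules that exchange minimal "incremental units" (IUs) via input and output buffers, where hypotheses can later be revoked or committed and outputs record which inputs they are grounded in — can be sketched as follows. This is a toy illustration only; the class and method names are invented here and are not the InProTK API.

```python
from dataclasses import dataclass, field

# Toy sketch of the incremental-unit (IU) idea: modules pass around
# minimal information chunks that can be revoked or committed as more
# input arrives. All names are illustrative, not the InProTK API.

@dataclass
class IU:
    payload: str
    committed: bool = False
    revoked: bool = False
    grounded_in: list = field(default_factory=list)  # IUs this one depends on


class Module:
    """A processing module with a left (input) and right (output) buffer."""

    def __init__(self):
        self.left_buffer = []
        self.right_buffer = []

    def add(self, iu):
        self.left_buffer.append(iu)
        out = IU(payload=iu.payload.upper(), grounded_in=[iu])  # toy processing
        self.right_buffer.append(out)
        return out

    def revoke(self, iu):
        iu.revoked = True
        # Revoking an input invalidates every output grounded in it.
        for out in self.right_buffer:
            if iu in out.grounded_in:
                out.revoked = True

    def commit(self, iu):
        iu.committed = True


asr = Module()
h1 = asr.add(IU("four"))        # first, tentative hypothesis
asr.revoke(asr.left_buffer[0])  # the recogniser revises: not "four"...
h2 = asr.add(IU("forty"))       # ...but "forty"
asr.commit(h2)                  # later input confirms the hypothesis
```

The point of the grounded-in links is that a revision early in the pipeline (here, at the recogniser) can propagate, invalidating exactly those downstream hypotheses that depended on the revised input.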