The Picto Project: Conclusions and Lessons Learned
Leen Sevens, Vincent Vandeghinste, Ineke Schuurman and Frank Van Eynde


Over the past six years, we have developed language technologies that automatically translate natural language text into pictographs and vice versa for people with an intellectual disability (ID), allowing them to read and write emails and chat messages in online environments. During the development and improvement of the translation engines, we made use of various existing tools for natural language processing, and we developed the following tools and methods: (a) a stand-alone spelling corrector tailored to the characteristics of text written by people with ID; (b) a syntactic simplification module that automatically simplifies natural language text as a pre-processing step for pictograph translation; (c) a temporal analysis module that analyses the temporal characteristics of the input sentence; (d) a method to incorporate word sense disambiguation into the Text-to-Pictograph translation pipeline; (e) a static pictograph hierarchy, designed according to the principles of user-centred design, that allows people with ID to construct pictograph-based messages; (f) a pictograph prediction tool that suggests relevant pictographs to the user; and (g) language modelling-based and data-driven machine translation-based methods for Pictograph-to-Text translation.

Given the fragmented nature of this research, general conclusions are not easy to draw. We therefore discuss our contributions to the different subtasks and briefly summarise our experiments and their results. We then present our personal observations and recommendations with respect to the development of language technologies for people with ID, as well as several possibilities for future work.
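To give a flavour of how a pictograph prediction component such as (f) might operate, the sketch below implements a minimal bigram-based predictor. This is a hypothetical illustration under simplifying assumptions, not the project's actual implementation: the class name `PictographPredictor`, the toy corpus, and the use of plain bigram counts are all our own choices for exposition.

```python
from collections import Counter, defaultdict

class PictographPredictor:
    """Toy bigram model: suggests likely next pictographs (illustrative only)."""

    def __init__(self):
        # For each pictograph, count which pictographs tend to follow it.
        self.bigrams = defaultdict(Counter)

    def train(self, sequences):
        # sequences: iterable of pictograph-ID sequences from a training corpus.
        for seq in sequences:
            for prev, nxt in zip(seq, seq[1:]):
                self.bigrams[prev][nxt] += 1

    def suggest(self, last_pictograph, k=3):
        # Return up to k most frequent successors of the last pictograph.
        return [p for p, _ in self.bigrams[last_pictograph].most_common(k)]

# Hypothetical toy corpus of pictograph-ID sequences.
corpus = [
    ["I", "want", "drink"],
    ["I", "want", "eat"],
    ["I", "want", "drink"],
]
model = PictographPredictor()
model.train(corpus)
print(model.suggest("want"))  # successors of "want", most frequent first
```

A production system would of course use a larger language model and richer context than a single preceding pictograph, but the interaction pattern is the same: the user selects a pictograph, and the model ranks candidate continuations.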