My research investigates the articulatory and acoustic properties of speech that ensure successful communication across differences in context and individual speaker characteristics. In the first part of my career, I worked in speech technology, creating phonetic modules for multilingual speech recognizers and synthesizers. My research focused on the effects of coarticulation, phonetic context, and prosody on speech perception. During these years, I observed great variability in the acoustic realization of speech, due to individual speaker characteristics and, in part, to the prosodic characteristics and context of the utterance. I therefore decided to complete a PhD in Speech and Hearing Science to investigate whether some form of invariance might reside in the dynamic properties of articulatory movements rather than in the acoustic properties of speech. I carried out my PhD dissertation, "Invariant patterns in articulatory movements," under the supervision of Prof. Osamu Fujimura at The Ohio State University, to learn more about possible regularities in articulatory patterns that might carry invariant perceptual information. Results showed that, in the observed corpus of ad hoc dialogues (recorded with an X-ray microbeam system), the analyzed dynamic parameters (the speed of tongue tip and lower lip movements in the production of dental and labial consonants such as [t, f]) remained relatively constant only when produced in words under focus in phrase-final position, but not in other linguistic and prosodic contexts.

I further investigated this topic using data collected with the electromagnetic articulograph (EMA) available in my lab at Case Western Reserve University, testing differences in speech production between speakers with dentures and normally dentate individuals. The results showed significant differences between the two groups, both in the dynamic parameters of individual movements and in the timing coordination of vocalic and consonantal gestures, and of gesture targets with their corresponding acoustic cues. I will pursue this line of research at Hofstra in collaboration with Haskins Laboratories, New Haven, CT.

A second line of research aims to assess the effectiveness of robots as a therapy tool for autism. At Monmouth University, in collaboration with the Department of Computer Science, I built a robot that asks questions (prerecorded with the voice of a child) to engage children with autism in guided dialogues. The research aims to determine whether children with autism interact verbally with the robot differently than they do with real peers. Both linguistic-pragmatic features (e.g., turn taking, topic maintenance) and prosodic parameters (e.g., intonation, voice quality) are measured. Data from Italian and Tamil speakers have been collected to determine whether the same patterns of communication with the robot hold across both languages and cultures.