History of machine translation
Machine translation is a sub-field of computational linguistics that investigates the use of software to translate text or speech from one natural language to another.
Machine translation became a research reality in the 1950s, although references to the subject can be found as early as the 17th century[citation needed]. One of the earliest recorded projects was the Georgetown experiment of 1954, which involved the successful fully automatic translation of more than sixty Russian sentences into English. The researchers behind the experiment asserted that machine translation would be a solved problem within three to five years,[1] and similar experiments were performed in the Soviet Union shortly after.[2] The experiment's success ushered in an era of significant funding for machine translation research in the United States. Progress, however, was much slower than expected: in 1966, the ALPAC report found that ten years of research had not fulfilled the expectations raised by the Georgetown experiment, and funding was dramatically reduced as a result[citation needed].
In the 1980s, as available computational power increased and became less expensive, interest grew in statistical models for machine translation.
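As a brief sketch of the underlying idea (the notation here is illustrative, not from the original article): statistical machine translation is commonly framed as a noisy-channel problem, in which a foreign sentence f is translated by searching for the target-language sentence e that maximizes

\hat{e} = \operatorname*{arg\,max}_{e} P(e \mid f) = \operatorname*{arg\,max}_{e} P(f \mid e)\, P(e),

where P(f \mid e) is a translation model estimated from parallel corpora and P(e) is a language model of the target language.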
Although no autonomous system is capable of “fully automatic high-quality translation of unrestricted text,”[3][4][5] there are many programs now available that can provide useful output within strict constraints. Several of these programs are available online, such as Google Translate and the SYSTRAN system that powers AltaVista’s BabelFish (now Yahoo’s Babelfish as of 9 May 2008).
Source: Wikipedia