It is the study of language as expressed in samples (corpora) of “real world” text. This method represents an empirical approach to deriving the set of abstract rules by which a natural language is governed, or by which it relates to another language. Originally compiled by hand, corpora are now largely derived by an automated process whose output is then corrected. At the core of a corpus is the derivation of a set of part-of-speech tags, representing a formal overview of the various types of words and word relationships in a given language. Computational methods were once viewed as a holy grail of linguistic research that would ultimately yield a ruleset enabling natural language processing and machine translation at a high level. This has not been the case, and since the cognitive revolution, cognitive linguistics has been largely critical of many claimed practical uses for corpora. However, as computational capacity and speed have increased, the use of corpora to study language and term relationships has gained some respectability. The corpus approach runs counter to Noam Chomsky’s view that real language is riddled with performance-related errors, and therefore requires careful analysis of small speech samples obtained in a highly controlled laboratory setting. Corpus linguistics does away with Chomsky’s competence/performance split: its adherents believe that reliable language analysis is best performed on field-collected samples, in natural contexts and with minimal experimental interference.
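The part-of-speech tagging described above can be sketched in miniature. The sentence, tags, and tag names below are illustrative only (they follow the common Penn Treebank convention), not drawn from any real corpus:

```python
from collections import Counter

# A minimal sketch of corpus annotation: each token in a sample is paired
# with a part-of-speech tag. The sentence and its tags are hypothetical
# illustrations; tag names follow the widely used Penn Treebank convention
# (DT = determiner, NN = noun, VBD = past-tense verb).
tagged_sample = [
    ("The", "DT"), ("court", "NN"), ("adjourned", "VBD"),
    ("the", "DT"), ("hearing", "NN"), (".", "."),
]

# From annotated tokens, a formal overview of the word types in a sample
# can be derived, e.g. the frequency of each tag.
tag_frequencies = Counter(tag for _, tag in tagged_sample)
print(tag_frequencies.most_common())
```

In a real corpus the tagging is produced automatically and then hand-corrected, as the paragraph above describes.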
In linguistics, a corpus (plural corpora) or text corpus is a large and structured set of texts (now usually electronically stored and processed). Corpora are used for statistical analysis, checking occurrences, or validating linguistic rules within a specific language domain. They are the main knowledge base in corpus linguistics. The analysis and processing of various types of corpora are also the subject of much work in computational linguistics, speech recognition and machine translation, where they are often used to create hidden Markov models for POS-tagging and other purposes. A corpus may contain texts in a single language (monolingual corpus) or text data in multiple languages (multilingual corpus). Multilingual corpora that have been specially formatted for side-by-side comparison are called aligned parallel corpora. In order to make corpora more useful for linguistic research, they are often subjected to a process known as annotation.
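The "checking occurrences" kind of statistical analysis mentioned above can be sketched as follows. The toy corpus and the `word_frequencies` helper are hypothetical illustrations, assuming the corpus is simply a list of plain-text documents (real corpora are far larger and usually annotated):

```python
import re
from collections import Counter

# A toy monolingual corpus: three short documents (illustrative only).
corpus = [
    "Corpora are used to do statistical analysis.",
    "A corpus may contain texts in a single language.",
    "Multilingual corpora hold text data in multiple languages.",
]

def word_frequencies(documents):
    """Count case-folded word occurrences across all documents."""
    tokens = []
    for doc in documents:
        tokens.extend(re.findall(r"[a-z]+", doc.lower()))
    return Counter(tokens)

# Check how often each word form occurs across the whole corpus.
freqs = word_frequencies(corpus)
print(freqs["corpora"], freqs["corpus"], freqs["language"])
```

The same frequency data, gathered at scale, is what drives the statistical models (such as hidden Markov models for POS-tagging) mentioned above.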
- http://view.byu.edu/ : Variation in English Words and Phrases.
- http://ask.metafilter.com/mefi/37340 : Querying the hive mind.
- http://www-nlp.stanford.edu/links/statnlp.html : Statistical natural language processing and corpus-based computational linguistics: An annotated list of resources.
“Information extraction (IE)”
It is a type of information retrieval whose goal is to automatically extract structured or semi-structured information from unstructured machine-readable documents. It is a sub-discipline of language engineering, a branch of computer science. It aims to apply methods and technologies from practical computer science, such as compiler construction and artificial intelligence, to the problem of processing unstructured textual data automatically, with the objective of extracting structured knowledge in some domain. A typical example is the extraction of information on corporate merger events, in which instances of the merger relation are extracted from online news (“Yesterday, New-York based Foo Inc. announced their acquisition of Bar Corp.”). The significance of information extraction is determined by the growing amount of information available in unstructured (i.e. without metadata) form, for instance on the Internet. This knowledge can be made more accessible by means of transformation into relational form.
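The merger-event example above can be sketched with a hand-written extraction rule. The pattern below is an illustration of the rule-based end of IE, not a production system; the relation fields (`acquirer`, `acquired`) are hypothetical names for the structured output:

```python
import re

# The unstructured input sentence from the merger-event example.
sentence = ("Yesterday, New-York based Foo Inc. announced their "
            "acquisition of Bar Corp.")

# A hand-written rule for one surface form of the merger relation:
# a capitalized company name, the trigger phrase, then a second name.
pattern = re.compile(
    r"(?P<acquirer>[A-Z][\w-]*(?:\s[A-Z][\w.]*)*)\s+announced\s+"
    r"(?:its|their)\s+acquisition\s+of\s+"
    r"(?P<acquired>[A-Z][\w-]*(?:\s[A-Z][\w.]*)*)"
)

# Fill the structured relation from the unstructured text.
match = pattern.search(sentence)
event = {"acquirer": match.group("acquirer"),
         "acquired": match.group("acquired")} if match else None
print(event)
```

The output is the "relational form" the paragraph refers to: a record with named fields rather than free text. Real IE systems generalize this idea with many patterns or with statistical models.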
“The Text Encoding Initiative (TEI)”
It is a consortium of institutions and research projects which collectively maintains and develops a standard for the representation of texts in digital form. Originally sponsored by three scholarly societies, the TEI is now an independent membership consortium, hosted by academic institutions in the US and in Europe. Its major deliverable is a set of Guidelines, which specify encoding methods for machine-readable texts, chiefly in the humanities, social sciences and linguistics. Since 1994, these Guidelines have been a widely used standard for text materials in online research and teaching. The scholarly societies originally sponsoring the TEI are the Association for Computers and the Humanities, the Association for Computational Linguistics, and the Association for Literary and Linguistic Computing. These three groups first organized the TEI in 1987 as a research effort funded by substantial grants from several agencies. Today, the TEI Consortium is a member-funded non-profit corporation hosted by: the Research Technologies Service at the University of Oxford; the Scholarly Technology Group at Brown University; a francophone group comprising ATILF, INIST, and LORIA, co-ordinated at Nancy; and the Electronic Text Center and the Institute for Advanced Technology in the Humanities at the University of Virginia.
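To make the Guidelines concrete, the sketch below embeds a minimal TEI-style document and reads its title back out. It assumes the TEI P5 namespace and the basic `teiHeader`/`text` structure; a real encoding would carry much richer metadata (publication and source descriptions, for instance), and the title text itself is a made-up example:

```python
import xml.etree.ElementTree as ET

# A minimal, illustrative TEI-flavored document: metadata in <teiHeader>,
# the encoded text itself in <text>. Not a complete, valid TEI file.
tei_document = """\
<TEI xmlns="http://www.tei-c.org/ns/1.0">
  <teiHeader>
    <fileDesc>
      <titleStmt>
        <title>A Sample Encoded Text</title>
      </titleStmt>
    </fileDesc>
  </teiHeader>
  <text>
    <body>
      <p>The body of the machine-readable text goes here.</p>
    </body>
  </text>
</TEI>
"""

# Because TEI elements live in a namespace, queries must qualify tag names.
NS = {"tei": "http://www.tei-c.org/ns/1.0"}
root = ET.fromstring(tei_document)
title = root.find(".//tei:title", NS).text
print(title)
```

Separating the header (metadata) from the text proper is the basic design of the Guidelines: it lets the same encoded file serve both cataloguing and textual research.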