G22.2590 – Natural Language Processing – Spring 2010

The final exam

The questions will be taken from the following list of question types. Most of these correspond directly to questions asked for homework. I may also ask one or two short (1-blue-book-page) essay questions about the issues we have discussed in the lectures.

  1. Java patterns: Write a Java pattern for a simple construct (e.g., an NYU course number) (lecture/homework #2; sample pattern below).
  2. English sentence structure: Label the constituents (NP, VP, PP, etc.) of an English sentence based on the grammar given in Chapter 12 (and summarized in the handout for homework #3). If the sentence is ambiguous, show its multiple parses. If the sentence violates some grammatical constraint, describe the constraint (homework #3; bracketing example below).
  3. Context-free grammar: Extend the context-free grammar to cover an additional construct, or to capture a grammatical constraint (homework #3; illustrative fragment below).
  4. Parsing: Given a very small context-free grammar, step through the operation of, or count the number of operations performed by, a top-down backtracking parser, a bottom-up parser, or a chart parser (homework #4).
  5. POS tagging: Tag a sentence using the Penn POS tags (homework #4; example below).
  6. HMMs and the Viterbi decoder: Describe how POS tagging can be performed using a probabilistic model (J&M sec. 5.5 and chap 6; lecture 5 notes). Create an HMM from some POS-tagged training data. Trace the operation of a Viterbi decoder. Compute the likelihood of a given tag sequence and the likelihood of generating a given sentence from an HMM (homework #5; decoder sketch below).
  7. Chunkers and name taggers: Explain how BIO tags can be used to reduce chunking or name identification to a token-tagging task (example below). Explain how chunking can be evaluated (lecture #6). Explain how a maximum-entropy model can be used for tagging or chunking (lecture #7 and homework #8).
  8. Probabilistic CFG: Train a probabilistic CFG from some parses; apply this PCFG to disambiguate a sentence. Explain how this PCFG can be extended to capture lexical information. Compute lexically-conditioned probabilities (homework #9; worked example below).
  9. Logical form: Write the logical form of an English sentence, with or without event reification (lecture #11; example below).
  10. Jet: Be able to extend, or trace the operation of, one of the Jet pattern sets we have distributed and discussed (for noun and verb groups, and for appointment events). Analyze and correct a shortcoming in the appointment patterns (homework #10).
  11. Reference resolution: Analyze a reference resolution problem -- identify the constraints and preferences that would lead a system to select the correct antecedent (lecture #10; example below).
  12. Word sense disambiguation: Given a word with two senses and a small training set of contexts for each of the two senses, apply the naive Bayes procedure to resolve the sense of the word in a test case (J&M 20.2.2, lecture #12; worked example below).
  13. Bag-of-words methods: Compute the similarity of two bags of words using the normalized dot product (eqn. 23.7 in the text; sketch below). Explain the tf and idf factors (sec. 23.1.2). Describe the role of the similarity metric in question answering and summarization (lecture #13).
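
Illustrative sketches and worked examples for several of the question types above follow. All of them are invented for review purposes; they are not taken from the homework solutions or the Jet distribution.

For question type 1, a minimal java.util.regex sketch, assuming NYU course numbers of the form "G22.2590" (a letter, two digits, a period, four digits); the exact format you are asked about may differ:

    import java.util.regex.*;

    public class CourseNumberPattern {
        public static void main(String[] args) {
            // A letter, two digits, a period, four digits, e.g. "G22.2590"
            Pattern p = Pattern.compile("[A-Z]\\d{2}\\.\\d{4}");
            Matcher m = p.matcher("Welcome to G22.2590, Natural Language Processing.");
            if (m.find())
                System.out.println("Found: " + m.group());   // prints: Found: G22.2590
        }
    }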
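
For question type 2, constituent labeling is usually written with labeled brackets, e.g.:

    [S [NP The cat] [VP sat [PP on [NP the mat]]]]

For ambiguity, a classic case is "I saw the man with the telescope": the PP "with the telescope" can attach to the VP (the seeing was done with the telescope) or to the NP "the man" (the man has the telescope), giving two distinct parses.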
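
For question type 3, a grammatical constraint is typically captured by splitting categories. An illustrative fragment (not the exact notation of the homework grammar) enforcing subject-verb number agreement:

    S     -> NP-SG VP-SG
    S     -> NP-PL VP-PL
    NP-SG -> DT N-SG
    NP-PL -> DT N-PL
    VP-SG -> V-SG
    VP-PL -> V-PL

This admits "the cat sleeps" and "the cats sleep" but blocks "the cats sleeps", since N-PL can only combine with V-PL.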
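
For question type 5, a sentence tagged with Penn Treebank tags:

    The/DT dog/NN barked/VBD at/IN the/DT mailman/NN ./.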
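
For question type 6, a minimal Viterbi decoder sketch in Java. All names here are illustrative (they are not from Jet or the homework code); the HMM is assumed to be given as initial probabilities init[i], tag-to-tag transition probabilities trans[i][j], and emission probabilities emit[j][w] over word indices:

    public class Viterbi {
        // Returns the most likely tag sequence (as tag indices) for the observed words.
        static int[] decode(double[] init, double[][] trans,
                            double[][] emit, int[] words) {
            int T = words.length, N = init.length;
            double[][] v = new double[T][N];  // v[t][j]: probability of the best path ending in tag j at time t
            int[][] back = new int[T][N];     // backpointer to the previous tag on that path
            for (int j = 0; j < N; j++)
                v[0][j] = init[j] * emit[j][words[0]];
            for (int t = 1; t < T; t++)
                for (int j = 0; j < N; j++)
                    for (int i = 0; i < N; i++) {
                        double p = v[t-1][i] * trans[i][j] * emit[j][words[t]];
                        if (p > v[t][j]) { v[t][j] = p; back[t][j] = i; }
                    }
            int best = 0;                     // pick the best final tag
            for (int j = 1; j < N; j++)
                if (v[T-1][j] > v[T-1][best]) best = j;
            int[] tags = new int[T];
            tags[T-1] = best;
            for (int t = T-1; t > 0; t--)     // follow the backpointers
                tags[t-1] = back[t][tags[t]];
            return tags;
        }
    }

The likelihood of a given tag sequence t1..tn for words w1..wn is the product init[t1] * emit[t1][w1] * trans[t1][t2] * emit[t2][w2] * ...; the Viterbi table v maximizes this product over all tag sequences.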
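
For question type 7, BIO tags turn chunking into one decision per token: B marks the first token of a chunk, I a continuation, and O a token outside any chunk. For noun groups:

    The/B-NP big/I-NP dog/I-NP chased/O a/B-NP cat/I-NP ./O

Chunking is evaluated over complete chunks, not individual tags: a chunk counts as correct only if its boundaries (and its type) match the answer key, and precision, recall, and F-measure are computed from those counts.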
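
For question type 8, PCFG rule probabilities are estimated by relative frequency over the training parses: P(A -> beta) = count(A -> beta) / count(A). For example, if VP occurs 100 times in the training parses, 60 times expanded as VP -> V NP and 40 times as VP -> V, then P(VP -> V NP) = 0.6 and P(VP -> V) = 0.4. The probability of a parse is the product of the probabilities of all the rules it uses, so for an ambiguous sentence we choose the parse with the highest product.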
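
For question type 9, the same sentence written both ways (the predicate names are illustrative):

    "John gave Mary a book."
    without reification:  give(John, Mary, book1)
    with reification:     exists e . give(e) & agent(e, John) & recipient(e, Mary) & theme(e, book1)

Reifying the event makes it easy to attach modifiers, e.g. adding time(e, yesterday) for "John gave Mary a book yesterday."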
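
For question type 11, a small invented example:

    "Mary handed John his coat."

For the pronoun "his", the candidate antecedents are Mary and John; the gender-agreement constraint eliminates Mary, leaving John. When constraints leave more than one candidate, preferences such as recency and syntactic salience (e.g., subjects preferred over objects) decide among them.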
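
For question type 12, naive Bayes chooses the sense s maximizing P(s) times the product, over the context words w, of P(w | s), with the probabilities estimated (possibly with smoothing) from the training contexts. A tiny invented example for "bank":

    P(s1) = P(s2) = 0.5  (equal numbers of training contexts for each sense)
    P(money | s1) = 0.2    P(river | s1) = 0.01
    P(money | s2) = 0.01   P(river | s2) = 0.3

    test context contains "money":
      score(s1) = 0.5 * 0.2  = 0.1
      score(s2) = 0.5 * 0.01 = 0.005   -> choose s1 (the financial sense)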
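
For question type 13, the normalized dot product (eqn. 23.7) as a minimal Java sketch, assuming the two bags of words have already been mapped to counts (or tf-idf weights) over a shared vocabulary:

    static double cosine(double[] a, double[] b) {
        double dot = 0, normA = 0, normB = 0;
        for (int i = 0; i < a.length; i++) {
            dot   += a[i] * b[i];     // dot product of the two term vectors
            normA += a[i] * a[i];
            normB += b[i] * b[i];
        }
        return dot / (Math.sqrt(normA) * Math.sqrt(normB));
    }

The tf factor weights a term by how often it occurs in the document; the idf factor discounts terms that occur in many documents, since they are poor discriminators.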