Hidden Memories of RNNs

Understanding RNNs: lack of interpretable models, hidden states, and the role of memory.

  • Performance-based: alter model components and observe how accuracy changes.
  • Interpretability extension: visualization (and comparative clustering), use adjacency metrics.
    • Juxtaposition: detail / sentence / overview level
    • Superposition:
    • Explicit encoding
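The performance-based approach above can be sketched as a simple ablation study: zero out one hidden unit at a time and measure the accuracy drop. The setup below is a minimal stand-in, with random "hidden states" and a linear readout in place of a trained RNN; all names and data here are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for a trained RNN: final hidden states (samples x units)
# and a linear readout classifier.
H = rng.normal(size=(200, 8))
W = rng.normal(size=(8, 2))
y = (H @ W).argmax(axis=1)  # labels the intact model predicts perfectly

def accuracy(H, W, y, ablate=None):
    """Readout accuracy, optionally with one hidden unit zeroed out."""
    Ha = H.copy()
    if ablate is not None:
        Ha[:, ablate] = 0.0
    return ((Ha @ W).argmax(axis=1) == y).mean()

baseline = accuracy(H, W, y)
# Ablate each unit in turn; a large drop marks a unit the model relies on.
drops = {u: baseline - accuracy(H, W, y, ablate=u) for u in range(H.shape[1])}
```

With a real model, the same loop would zero units (or whole layers) inside the network before rerunning evaluation.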

Requirements

  • Interpret the information captured by hidden states / layers

  • Show how information is distributed across hidden states

  • Explore hidden states at sentence level

  • Examine statistics of the hidden states

  • See learning outcomes

  • See distribution of hidden states

  • See the model's expected response when a cell's state is updated

  • C = cell state; maintains long-term memory

  • h = hidden state; computed directly from the cell state and used for output

  • Dc = distribution of cell-state changes
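The notation above maps directly onto the standard LSTM update: C is carried forward through forget/input gating, h is read off C through the output gate, and Dc can be collected as the per-step change C_t - C_{t-1}. Below is a minimal numpy sketch; the weights and inputs are random placeholders, not a trained model.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_step(x, h_prev, C_prev, W, b):
    """One LSTM step. W stacks the four gate weight matrices, b the biases."""
    z = np.concatenate([h_prev, x])
    n = h_prev.size
    gates = W @ z + b
    f = sigmoid(gates[0:n])           # forget gate: what to keep in C
    i = sigmoid(gates[n:2 * n])       # input gate: what to write to C
    o = sigmoid(gates[2 * n:3 * n])   # output gate
    C_tilde = np.tanh(gates[3 * n:])  # candidate cell update
    C = f * C_prev + i * C_tilde      # C: maintained long-term memory
    h = o * np.tanh(C)                # h: computed directly from C, used for output
    return h, C

rng = np.random.default_rng(1)
n_units, n_in, T = 4, 3, 20
W = rng.normal(scale=0.5, size=(4 * n_units, n_units + n_in))
b = np.zeros(4 * n_units)

h = np.zeros(n_units)
C = np.zeros(n_units)
dC = []                               # Dc: per-step cell-state changes
for t in range(T):
    C_prev = C
    h, C = lstm_step(rng.normal(size=n_in), h, C, W, b)
    dC.append(C - C_prev)
dC = np.array(dC)                     # shape (T, n_units): distribution over time
```

Plotting a histogram of `dC` per unit gives the Dc distribution; units whose cell state changes rarely but sharply are candidates for interpretable memory cells.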

Co-cluster hidden states and words to see how they relate. Also run sequence analysis to see how the hidden states change over time.
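A minimal sketch of the co-clustering idea: build a word x hidden-unit matrix of mean activations, then cluster its rows (words with similar activation profiles) and its columns (units that co-activate on the same words). The activation matrix here is random stand-in data, and the tiny hand-rolled k-means is only illustrative; a real pipeline would use activations from a trained RNN and a proper biclustering method.

```python
import numpy as np

def kmeans(X, k, iters=20, seed=0):
    """Tiny k-means: returns one cluster label per row of X."""
    r = np.random.default_rng(seed)
    centers = X[r.choice(len(X), size=k, replace=False)]
    for _ in range(iters):
        # Assign each row to its nearest center, then recompute centers.
        labels = np.argmin(((X[:, None] - centers[None]) ** 2).sum(-1), axis=1)
        for j in range(k):
            if (labels == j).any():
                centers[j] = X[labels == j].mean(axis=0)
    return labels

rng = np.random.default_rng(2)
# A: mean hidden-state activation per word (random stand-in for real data).
words = ["the", "cat", "sat", "on", "mat", "ran"]
A = rng.normal(size=(len(words), 8))

word_clusters = kmeans(A, k=2)    # words grouped by activation profile
unit_clusters = kmeans(A.T, k=2)  # hidden units grouped by word response
```

Cross-tabulating `word_clusters` against `unit_clusters` then shows which groups of hidden units respond to which groups of words.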