A research team from the University of Southern California and Google proposes TOME, a “mention memory” approach to incorporating factual knowledge into transformers for NLU tasks. TOME is a transformer model that attends over a semi-parametric representation of the entire Wikipedia text corpus; it can extract and integrate this information without direct supervision and achieves strong performance on multiple open-domain question answering benchmarks.
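
Below is a minimal sketch (not the authors' code) of the core idea described above: a transformer layer whose token representations attend over a fixed, precomputed “mention memory” table rather than only over the input sequence. The names, shapes, and the use of plain softmax attention over a small toy memory are illustrative assumptions; the actual system uses a far larger memory built from Wikipedia and approximate top-k retrieval.

```python
import numpy as np

rng = np.random.default_rng(0)

NUM_MENTIONS = 1000   # toy memory size (the real memory covers all of Wikipedia)
MEM_DIM = 128         # dimensionality of each stored mention encoding
MODEL_DIM = 256       # hidden size of the reading transformer

# Precomputed, frozen mention encodings: the semi-parametric memory.
mention_memory = rng.normal(size=(NUM_MENTIONS, MEM_DIM)).astype(np.float32)

# Projections that let token representations query and read from the memory.
W_query = rng.normal(size=(MODEL_DIM, MEM_DIM)).astype(np.float32) * 0.02
W_value = rng.normal(size=(MEM_DIM, MODEL_DIM)).astype(np.float32) * 0.02

def attend_over_memory(hidden_states: np.ndarray, top_k: int = 32) -> np.ndarray:
    """Attend from token hidden states over the mention memory.

    hidden_states: (seq_len, MODEL_DIM) token representations.
    Returns hidden states augmented with information read from memory.
    """
    queries = hidden_states @ W_query                    # (seq_len, MEM_DIM)
    scores = queries @ mention_memory.T                  # (seq_len, NUM_MENTIONS)

    # Cheap stand-in for approximate nearest-neighbour search: keep only the
    # top_k highest-scoring memory entries per token and mask out the rest.
    kth = np.partition(scores, -top_k, axis=-1)[:, -top_k][:, None]
    scores = np.where(scores >= kth, scores, -np.inf)

    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)       # softmax over memory
    read = weights @ mention_memory                      # (seq_len, MEM_DIM)
    return hidden_states + read @ W_value                # residual update

# Usage: a toy "sentence" of 16 token states reads from the memory.
tokens = rng.normal(size=(16, MODEL_DIM)).astype(np.float32)
print(attend_over_memory(tokens).shape)                  # (16, 256)
```

Because the memory is precomputed and kept outside the model's parameters, it can be refreshed or expanded without retraining the transformer that reads from it.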

Here is a quick read: Mention Memory: Incorporating Factual Knowledge From Various Sources Into Transformers Without Supervision.

The paper Mention Memory: Incorporating Textual Knowledge into Transformers Through Entity Mention Attention is on arXiv.


