BERT - huggingface.co
https://huggingface.co/docs/transformers
BERT was trained with the masked language modeling (MLM) and next sentence prediction (NSP) objectives. It is effective at predicting masked tokens and at natural language understanding (NLU) in general, but it is not optimal for text generation. This model was contributed by thomwolf. The original code can be found in the google-research/bert repository.
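Since the snippet highlights masked-token prediction, a short example may help. The following is a minimal sketch using the transformers fill-mask pipeline; the checkpoint name bert-base-uncased and the example sentence are assumptions chosen for illustration.

```python
# A minimal sketch of BERT's MLM objective in action, using the Hugging Face
# transformers fill-mask pipeline. bert-base-uncased is one publicly
# available BERT checkpoint, used here only for illustration.
from transformers import pipeline

unmasker = pipeline("fill-mask", model="bert-base-uncased")

# BERT fills in the [MASK] token from bidirectional context and returns
# the top candidate tokens with their scores.
for prediction in unmasker("Paris is the [MASK] of France."):
    print(f"{prediction['token_str']}: {prediction['score']:.3f}")
```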
bert-score · PyPI
https://pypi.org/project/bert-score
bert-score -r example/refs.txt example/refs2.txt -c example/hyps.txt --lang en
The -r argument supports an arbitrary number of reference files. Each reference file must have the same number of lines as the candidate/hypothesis file: the i-th line of each reference file corresponds to the i-th line of the candidate file.
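The same scoring is also available from Python. Below is a minimal sketch using the bert_score package's score function, mirroring the multi-reference CLI call above; the multi-reference call shape (a list of reference lists, one per candidate) and the example sentences are assumptions based on the package's documented API.

```python
# A minimal sketch of multi-reference scoring with the bert_score Python
# API; the example sentences are made up for illustration.
from bert_score import score

cands = ["The cat sat on the mat."]
# One list of references per candidate (assumed multi-reference form,
# analogous to passing several reference files to -r on the CLI).
refs = [["A cat was sitting on the mat.", "The cat is on the mat."]]

# P, R, F1 are tensors with one entry per candidate; with several
# references, bert-score reports the best-matching reference's score.
P, R, F1 = score(cands, refs, lang="en")
print(f"F1: {F1[0].item():.4f}")
```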