From 864b6b3a43c821ed5eb5678169d73fadb091eb67 Mon Sep 17 00:00:00 2001
From: tanel <alumae@gmail.com>
Date: Thu, 29 Jan 2015 16:34:00 +0200
Subject: [PATCH] Implemented optional rescoring with a (large) 'constant ARPA'
 LM

---
 README.md | 16 +++++++++++-----
 1 file changed, 11 insertions(+), 5 deletions(-)

diff --git a/README.md b/README.md
index b8cb8c9..5e564b0 100644
--- a/README.md
+++ b/README.md
@@ -4,12 +4,18 @@
 GStreamer plugin that wraps Kaldi's SingleUtteranceNnet2Decoder. It requires iVector-adapted
 DNN acoustic models. The iVectors are adapted to the current audio stream automatically.
 
-~~The iVectors are reset after the decoding session (stream) ends.
-Currently, it's not possible to save the adaptation state and recall it later
-for a particular speaker, to make the adaptation persistent over multiple decoding
-sessions.~~
 
-Update: the plugin saves the adaptation state between silence-segmented utterances and between
+# CHANGELOG
+
+2015-01-09: Added language model rescoring functionality. In order to use it,
+you have to specify two properties: `lm-fst` and `big-lm-const-arpa`. The `lm-fst`
+property gives the location of the *original* LM (the one that was used for
+compiling the HCLG.fst used during decoding). The `big-lm-const-arpa` property
+gives the location of the big LM that is used to rescore the final lattices.
+The big LM must be in the 'ConstArpaLm' format; use Kaldi's
+`utils/build_const_arpa_lm.sh` script to produce it from the ARPA format.
+
+2014-11-11: The plugin saves the adaptation state between silence-segmented utterances and between
 multiple decoding sessions of the same plugin instance. 
 That is, if you start decoding a new stream, the adaptation state of the
 previous stream is used (unless it's the first stream, in which case a global mean is used).
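+
+As an illustrative sketch only (the element name `kaldinnet2onlinedecoder` and
+the `model`, `fst`, and `word-syms` property names below are assumptions --
+check `gst-inspect-1.0 kaldinnet2onlinedecoder` for the actual interface), a
+decoding pipeline with big-LM rescoring enabled might look like:
+
+    gst-launch-1.0 filesrc location=audio.wav ! decodebin ! audioconvert ! \
+        audioresample ! kaldinnet2onlinedecoder model=final.mdl fst=HCLG.fst \
+        word-syms=words.txt lm-fst=G.fst big-lm-const-arpa=G.carpa \
+        ! filesink location=transcript.txt
+
+Here `G.fst` would be the original LM used to build HCLG.fst, and `G.carpa`
+the output of `utils/build_const_arpa_lm.sh` for the big LM.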
-- 
GitLab