<pre style='margin:0'>
Ryan Schmidt (ryandesign) pushed a commit to branch master
in repository macports-ports.

</pre>
<p><a href="https://github.com/macports/macports-ports/commit/2ec5038982ebaa5e7740d34cba65c56d7ce03591">https://github.com/macports/macports-ports/commit/2ec5038982ebaa5e7740d34cba65c56d7ce03591</a></p>
<pre style="white-space: pre; background: #F8F8F8"><span style='display:block; white-space:pre;color:#808000;'>commit 2ec5038982ebaa5e7740d34cba65c56d7ce03591
</span>Author: Steven Thomas Smith &lt;s.t.smith@ieee.org&gt;
AuthorDate: Thu Oct 22 15:45:43 2020 -0400

<span style='display:block; white-space:pre;color:#404040;'>    sentencepiece: Update to version 0.1.93
</span>---
 textproc/sentencepiece/Portfile | 28 ++++++++++++++--------------
 1 file changed, 14 insertions(+), 14 deletions(-)

<span style='display:block; white-space:pre;color:#808080;'>diff --git a/textproc/sentencepiece/Portfile b/textproc/sentencepiece/Portfile
</span><span style='display:block; white-space:pre;color:#808080;'>index 8a994fce065..8ff46d0e456 100644
</span><span style='display:block; white-space:pre;background:#e0e0ff;'>--- a/textproc/sentencepiece/Portfile
</span><span style='display:block; white-space:pre;background:#e0e0ff;'>+++ b/textproc/sentencepiece/Portfile
</span><span style='display:block; white-space:pre;background:#e0e0e0;'>@@ -4,7 +4,7 @@ PortSystem          1.0
</span> PortGroup           cmake 1.1
 PortGroup           github 1.0
 
<span style='display:block; white-space:pre;background:#ffe0e0;'>-github.setup        google sentencepiece 0.1.84 v
</span><span style='display:block; white-space:pre;background:#e0ffe0;'>+github.setup        google sentencepiece 0.1.93 v
</span> revision            0
 categories          textproc
 
<span style='display:block; white-space:pre;background:#e0e0e0;'>@@ -15,21 +15,21 @@ platforms           darwin
</span> description         Unsupervised text tokenizer for Neural Network-based\
                     text generation.
 
<span style='display:block; white-space:pre;background:#ffe0e0;'>-long_description    SentencePiece is an unsupervised text tokenizer\
</span><span style='display:block; white-space:pre;background:#ffe0e0;'>-                    and detokenizer mainly for Neural Network-based\
</span><span style='display:block; white-space:pre;background:#ffe0e0;'>-                    text generation systems where the vocabulary size\
</span><span style='display:block; white-space:pre;background:#ffe0e0;'>-                    is predetermined prior to the neural model\
</span><span style='display:block; white-space:pre;background:#ffe0e0;'>-                    training. SentencePiece implements subword units\
</span><span style='display:block; white-space:pre;background:#ffe0e0;'>-                    (e.g., byte-pair-encoding (BPE) \[Sennrich et al.\])\
</span><span style='display:block; white-space:pre;background:#ffe0e0;'>-                    and unigram language model \[Kudo.\]) with the\
</span><span style='display:block; white-space:pre;background:#ffe0e0;'>-                    extension of direct training from raw\
</span><span style='display:block; white-space:pre;background:#ffe0e0;'>-                    sentences. SentencePiece allows us to make a\
</span><span style='display:block; white-space:pre;background:#ffe0e0;'>-                    purely end-to-end system that does not depend on\
</span><span style='display:block; white-space:pre;background:#e0ffe0;'>+long_description    SentencePiece is an unsupervised text tokenizer \
</span><span style='display:block; white-space:pre;background:#e0ffe0;'>+                    and detokenizer mainly for Neural Network-based \
</span><span style='display:block; white-space:pre;background:#e0ffe0;'>+                    text generation systems where the vocabulary size \
</span><span style='display:block; white-space:pre;background:#e0ffe0;'>+                    is predetermined prior to the neural model \
</span><span style='display:block; white-space:pre;background:#e0ffe0;'>+                    training. SentencePiece implements subword units \
</span><span style='display:block; white-space:pre;background:#e0ffe0;'>+                    (e.g., byte-pair-encoding (BPE) \[Sennrich et al.\]) \
</span><span style='display:block; white-space:pre;background:#e0ffe0;'>+                    and unigram language model \[Kudo.\]) with the \
</span><span style='display:block; white-space:pre;background:#e0ffe0;'>+                    extension of direct training from raw \
</span><span style='display:block; white-space:pre;background:#e0ffe0;'>+                    sentences. SentencePiece allows us to make a \
</span><span style='display:block; white-space:pre;background:#e0ffe0;'>+                    purely end-to-end system that does not depend on \
</span>                     language-specific pre/postprocessing.
 
<span style='display:block; white-space:pre;background:#ffe0e0;'>-checksums           rmd160  91bc4a0a23d9a4daf98abbb9ec83aed2241fdc42 \
</span><span style='display:block; white-space:pre;background:#ffe0e0;'>-                    sha256  dacab5fe63875b94538e06f4995ffb8cf35400708f3c333263f7b72e2e5b43de \
</span><span style='display:block; white-space:pre;background:#ffe0e0;'>-                    size    11828855
</span><span style='display:block; white-space:pre;background:#e0ffe0;'>+checksums           rmd160  76b1e1ae6e5b297ed07c749d7ff35f12e4852322 \
</span><span style='display:block; white-space:pre;background:#e0ffe0;'>+                    sha256  9a146e83dc6a403e431ae444dcc0c28203be81436566acc628edae42fc8901ec \
</span><span style='display:block; white-space:pre;background:#e0ffe0;'>+                    size    11684466
</span> 
 compiler.cxx_standard 2011
 compiler.thread_local_storage yes
</pre>
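<p>For context on the long_description in the hunk above: a SentencePiece model is used by loading a trained model and encoding raw text into subword pieces (and decoding pieces back into text). The snippet below is a minimal sketch, not part of this commit; it assumes the project's Python bindings (installed separately, for example with pip, rather than by this port) and a model file previously produced with spm_train, whose name here is a placeholder.</p>
<pre style="white-space: pre; background: #F8F8F8">
# Minimal sketch of SentencePiece encode/decode. Assumes the Python
# bindings ("pip install sentencepiece") and a model previously trained
# with spm_train; "m.model" is a placeholder path.
import sentencepiece as spm

sp = spm.SentencePieceProcessor(model_file="m.model")

# Raw sentence into subword pieces (BPE or unigram LM, depending on how
# the model was trained).
pieces = sp.encode("This is a test.", out_type=str)
print(pieces)

# Subword pieces back to the original text (detokenization).
print(sp.decode(pieces))
</pre>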
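<p>The checksums block in the hunk above records the rmd160, sha256, and size of the new 0.1.93 distfile. As a reference sketch only (not how this commit was produced), equivalent values could be recomputed from a locally downloaded tarball as shown below; the filename is an assumption, and a maintainer would more typically run <code>sudo port -v checksum sentencepiece</code> against the updated Portfile and copy the reported values.</p>
<pre style="white-space: pre; background: #F8F8F8">
# Sketch: recompute the values recorded in the Portfile checksums block
# from a locally downloaded distfile; the filename is an assumption.
import hashlib

distfile = "sentencepiece-0.1.93.tar.gz"

with open(distfile, "rb") as f:
    data = f.read()

print("size   ", len(data))
print("sha256 ", hashlib.sha256(data).hexdigest())

# ripemd160 is only present when the underlying OpenSSL build provides it.
try:
    print("rmd160 ", hashlib.new("ripemd160", data).hexdigest())
except ValueError:
    print("rmd160  unavailable in this Python build")
</pre>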