Zhenfu Shi (i0ntempest) pushed a commit to branch master
in repository macports-ports.
https://github.com/macports/macports-ports/commit/6770d67424fee809a70b93821894ea79625b280e
The following commit(s) were added to refs/heads/master by this push:
     new 6770d67424f llama.cpp: 4267
6770d67424f is described below

commit 6770d67424fee809a70b93821894ea79625b280e
Author: i0ntempest <i0ntempest@i0ntempest.com>
AuthorDate: Thu Dec 5 10:50:41 2024 +0800

    llama.cpp: 4267
---
 sysutils/llama.cpp/Portfile | 10 +++++-----
 1 file changed, 5 insertions(+), 5 deletions(-)

diff --git a/sysutils/llama.cpp/Portfile b/sysutils/llama.cpp/Portfile
index 630ad8cd7d8..db6b5dc688b 100644
--- a/sysutils/llama.cpp/Portfile
+++ b/sysutils/llama.cpp/Portfile
@@ -5,9 +5,9 @@ PortGroup github 1.0
 PortGroup cmake 1.1
 PortGroup legacysupport 1.1

-github.setup ggerganov llama.cpp 4240 b
+github.setup ggerganov llama.cpp 4267 b
 github.tarball_from archive
-set git-commit 642330a
+set git-commit f112d19
 # This line is for displaying commit in CLI only
 revision 0
 categories sysutils
@@ -19,9 +19,9 @@ long_description The main goal of llama.cpp is to enable LLM inference wi
 setup and state-of-the-art performance on a wide variety of hardware\
 - locally and in the cloud.

-checksums rmd160 1fc545de2c73469a7da0bbdd62e31489b546f982 \
- sha256 6616c648bc47efe72beec358e7cf50efb8096842b9c93a99ba3d5d6fb4da430b \
- size 19580582
+checksums rmd160 0a0d75e0d118079cd2df04e9bf9c761ec8cdf2d0 \
+ sha256 b54c0602fd22282ee49095740b4b1e9177d5fb4f9f71411179ec7f3f4f8926e9 \
+ size 19400342

 # clock_gettime
 legacysupport.newest_darwin_requires_legacy \
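For reference, the rmd160/sha256/size triplet recorded in the checksums block above can be cross-checked locally before committing a version bump. The sketch below is an illustration and not part of the commit: it assumes the GitHub archive tarball for the new tag has already been downloaded to a hypothetical local filename, and it relies on Python's hashlib, where ripemd160 availability depends on the local OpenSSL build.

    import hashlib
    import os

    # Hypothetical local path to the downloaded GitHub archive tarball for the
    # b4267 tag; the actual distfile name fetched by MacPorts may differ.
    path = "llama.cpp-4267.tar.gz"

    sha256 = hashlib.sha256()
    # ripemd160 is provided via OpenSSL; on OpenSSL 3.x it may require the
    # legacy provider to be enabled.
    rmd160 = hashlib.new("ripemd160")

    # Hash the tarball in 1 MiB chunks to keep memory use flat.
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            sha256.update(chunk)
            rmd160.update(chunk)

    # These values should match the +checksums lines in the diff above.
    print("rmd160", rmd160.hexdigest())
    print("sha256", sha256.hexdigest())
    print("size  ", os.path.getsize(path))

In practice, running the standard `port checksum` target against the updated Portfile reports any mismatching values, but recomputing the digests directly is a useful independent check.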