<pre style='margin:0'>
Zhenfu Shi (i0ntempest) pushed a commit to branch master
in repository macports-ports.

</pre>
<p><a href="https://github.com/macports/macports-ports/commit/7c013e3d69005d0693c14882ea5cce5ad42980d4">https://github.com/macports/macports-ports/commit/7c013e3d69005d0693c14882ea5cce5ad42980d4</a></p>
<pre style="white-space: pre; background: #F8F8F8">The following commit(s) were added to refs/heads/master by this push:
<span style='display:block; white-space:pre;color:#404040;'>     new 7c013e3d690 llama.cpp: 4291
</span>7c013e3d690 is described below

<span style='display:block; white-space:pre;color:#808000;'>commit 7c013e3d69005d0693c14882ea5cce5ad42980d4
</span>Author: i0ntempest <i0ntempest@i0ntempest.com>
AuthorDate: Mon Dec 9 12:52:12 2024 +0800

<span style='display:block; white-space:pre;color:#404040;'>    llama.cpp: 4291
</span>---
 sysutils/llama.cpp/Portfile | 8 ++++----
 1 file changed, 4 insertions(+), 4 deletions(-)

<span style='display:block; white-space:pre;color:#808080;'>diff --git a/sysutils/llama.cpp/Portfile b/sysutils/llama.cpp/Portfile
</span><span style='display:block; white-space:pre;color:#808080;'>index db6b5dc688b..724a3cf919e 100644
</span><span style='display:block; white-space:pre;background:#e0e0ff;'>--- a/sysutils/llama.cpp/Portfile
</span><span style='display:block; white-space:pre;background:#e0e0ff;'>+++ b/sysutils/llama.cpp/Portfile
</span><span style='display:block; white-space:pre;background:#e0e0e0;'>@@ -5,7 +5,7 @@ PortGroup               github 1.0
</span> PortGroup               cmake 1.1
 PortGroup               legacysupport 1.1
 
<span style='display:block; white-space:pre;background:#ffe0e0;'>-github.setup            ggerganov llama.cpp 4267 b
</span><span style='display:block; white-space:pre;background:#e0ffe0;'>+github.setup            ggerganov llama.cpp 4291 b
</span> github.tarball_from     archive
 set git-commit          f112d19
 # This line is for displaying commit in CLI only
<span style='display:block; white-space:pre;background:#e0e0e0;'>@@ -19,9 +19,9 @@ long_description        The main goal of llama.cpp is to enable LLM inference wi
</span>                         setup and state-of-the-art performance on a wide variety of hardware\
                          - locally and in the cloud.
 
<span style='display:block; white-space:pre;background:#ffe0e0;'>-checksums               rmd160  0a0d75e0d118079cd2df04e9bf9c761ec8cdf2d0 \
</span><span style='display:block; white-space:pre;background:#ffe0e0;'>-                        sha256  b54c0602fd22282ee49095740b4b1e9177d5fb4f9f71411179ec7f3f4f8926e9 \
</span><span style='display:block; white-space:pre;background:#ffe0e0;'>-                        size    19400342
</span><span style='display:block; white-space:pre;background:#e0ffe0;'>+checksums               rmd160  e86096629478215706b19946f5ef1235bc3bbd22 \
</span><span style='display:block; white-space:pre;background:#e0ffe0;'>+                        sha256  1a5150b3d637633b136b8f79bbdfa7584cf25b1b9bcad11657bafa3bb9c6e3c3 \
</span><span style='display:block; white-space:pre;background:#e0ffe0;'>+                        size    19420650
</span> 
 # clock_gettime
 legacysupport.newest_darwin_requires_legacy \
</pre>
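<p>The diff above bumps llama.cpp to a newer GitHub tag and replaces the recorded checksums. As a rough sketch (not necessarily the committer's actual workflow), the rmd160/sha256/size triple that a MacPorts <code>checksums</code> block records can be reproduced as below; the file name is a stand-in, and in practice running <code>port -v checksum llama.cpp</code> after bumping the version prints the expected values on a mismatch:</p>

```shell
# Illustrative only: compute the rmd160/sha256/size values that a
# MacPorts "checksums" stanza records, against a stand-in distfile.
printf 'example distfile contents\n' > distfile.tar.gz

# rmd160 (RIPEMD-160 may need OpenSSL's legacy provider on some OpenSSL 3 builds)
openssl dgst -rmd160 distfile.tar.gz || echo "rmd160 unavailable in this OpenSSL build"

# sha256
shasum -a 256 distfile.tar.gz

# size in bytes
wc -c < distfile.tar.gz
```

<p>For a real update these commands would be run against the release tarball fetched per <code>github.tarball_from archive</code>, i.e. the archive for the new <code>b4291</code> tag.</p>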