<pre style='margin:0'>
Zhenfu Shi (i0ntempest) pushed a commit to branch master
in repository macports-ports.

</pre>
<p><a href="https://github.com/macports/macports-ports/commit/9fe8b133a3a4e66722d15ca5ecee90a727136b66">https://github.com/macports/macports-ports/commit/9fe8b133a3a4e66722d15ca5ecee90a727136b66</a></p>
<pre style="white-space: pre; background: #F8F8F8">The following commit(s) were added to refs/heads/master by this push:
<span style='display:block; white-space:pre;color:#404040;'>     new 9fe8b133a3a llama.cpp: 4453
</span>9fe8b133a3a is described below

<span style='display:block; white-space:pre;color:#808000;'>commit 9fe8b133a3a4e66722d15ca5ecee90a727136b66
</span>Author: i0ntempest <i0ntempest@i0ntempest.com>
AuthorDate: Thu Jan 9 20:19:34 2025 +0800

<span style='display:block; white-space:pre;color:#404040;'>    llama.cpp: 4453
</span>---
 sysutils/llama.cpp/Portfile | 8 ++++----
 1 file changed, 4 insertions(+), 4 deletions(-)

<span style='display:block; white-space:pre;color:#808080;'>diff --git a/sysutils/llama.cpp/Portfile b/sysutils/llama.cpp/Portfile
</span><span style='display:block; white-space:pre;color:#808080;'>index 06c0fd6ece2..06cc686aaea 100644
</span><span style='display:block; white-space:pre;background:#e0e0ff;'>--- a/sysutils/llama.cpp/Portfile
</span><span style='display:block; white-space:pre;background:#e0e0ff;'>+++ b/sysutils/llama.cpp/Portfile
</span><span style='display:block; white-space:pre;background:#e0e0e0;'>@@ -5,7 +5,7 @@ PortGroup               github 1.0
</span> PortGroup               cmake 1.1
 PortGroup               legacysupport 1.1
 
<span style='display:block; white-space:pre;background:#ffe0e0;'>-github.setup            ggerganov llama.cpp 4434 b
</span><span style='display:block; white-space:pre;background:#e0ffe0;'>+github.setup            ggerganov llama.cpp 4453 b
</span> github.tarball_from     archive
 set git-commit          a3d50bc
 # This line is for displaying commit in CLI only
<span style='display:block; white-space:pre;background:#e0e0e0;'>@@ -19,9 +19,9 @@ long_description        The main goal of llama.cpp is to enable LLM inference wi
</span>                         setup and state-of-the-art performance on a wide variety of hardware\
                          - locally and in the cloud.
 
<span style='display:block; white-space:pre;background:#ffe0e0;'>-checksums               rmd160  1c4339cb457e2081ebca3184856d5241ed9d0dec \
</span><span style='display:block; white-space:pre;background:#ffe0e0;'>-                        sha256  859c4d46faadc43d11cd21a52e5dea58c21fcd84ce56fbe5dfe5ba5cd3bc6e18 \
</span><span style='display:block; white-space:pre;background:#ffe0e0;'>-                        size    20607874
</span><span style='display:block; white-space:pre;background:#e0ffe0;'>+checksums               rmd160  21c2eccb40f117a7cf0da2ca9861c8164299a0a5 \
</span><span style='display:block; white-space:pre;background:#e0ffe0;'>+                        sha256  ab0241f66187940adc1ea16d129678d259ea4ef3a83edc8506a74ec1b29dc98f \
</span><span style='display:block; white-space:pre;background:#e0ffe0;'>+                        size    20421626
</span> 
 # error: 'filesystem' file not found on 10.14
 legacysupport.newest_darwin_requires_legacy \
</pre>
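Version bumps like the one above always carry a matching `checksums` update, since the new tag produces a new distfile. A minimal sketch of how the three values in that stanza can be regenerated from a downloaded tarball (the filename below is a hypothetical example; in practice `port -v checksum llama.cpp` reports the expected values after a failed fetch verification):

```shell
#!/bin/sh
# Sketch: recompute the three values a MacPorts `checksums` stanza needs
# for an already-downloaded distfile. The filename is a placeholder.
tarball="llama.cpp-b4453.tar.gz"

checksum_lines() {
    f="$1"
    # Note: rmd160 may need OpenSSL's legacy provider on OpenSSL 3.x.
    printf 'rmd160  %s\n' "$(openssl dgst -rmd160 -r "$f" | cut -d' ' -f1)"
    printf 'sha256  %s\n' "$(openssl dgst -sha256 -r "$f" | cut -d' ' -f1)"
    printf 'size    %s\n'  "$(wc -c < "$f" | tr -d ' ')"
}

if [ -f "$tarball" ]; then
    checksum_lines "$tarball"
fi
```

The `-r` flag gives `openssl dgst` a stable one-line output format, so the hash is always the first whitespace-separated field.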