Zhenfu Shi (i0ntempest) pushed a commit to branch master
in repository macports-ports.

https://github.com/macports/macports-ports/commit/e42ad8897646b15b5b2537d7680d5064a4b5d705
The following commit(s) were added to refs/heads/master by this push:
     new e42ad889764 llama.cpp: 4240
e42ad889764 is described below

commit e42ad8897646b15b5b2537d7680d5064a4b5d705
Author: i0ntempest <i0ntempest@i0ntempest.com>
AuthorDate: Tue Dec 3 12:18:09 2024 +0800

    llama.cpp: 4240
---
 sysutils/llama.cpp/Portfile | 10 +++++-----
 1 file changed, 5 insertions(+), 5 deletions(-)

diff --git a/sysutils/llama.cpp/Portfile b/sysutils/llama.cpp/Portfile
index 89e25ad0afd..630ad8cd7d8 100644
--- a/sysutils/llama.cpp/Portfile
+++ b/sysutils/llama.cpp/Portfile
@@ -5,9 +5,9 @@ PortGroup           github 1.0
 PortGroup           cmake 1.1
 PortGroup           legacysupport 1.1
 
-github.setup        ggerganov llama.cpp 4231 b
+github.setup        ggerganov llama.cpp 4240 b
 github.tarball_from archive
-set git-commit      ae5b2cf
+set git-commit      642330a
 # This line is for displaying commit in CLI only
 revision            0
 categories          sysutils
@@ -19,9 +19,9 @@ long_description    The main goal of llama.cpp is to enable LLM inference wi
                     setup and state-of-the-art performance on a wide variety of hardware\
                     - locally and in the cloud.
-checksums           rmd160  d39e75a0214063d4e65ad17c74c5c051dc964aae \
-                    sha256  275d529f5d531010b5668ee6d135b15f7e7810345810b8a59548ad97c89618f3 \
-                    size    19574414
+checksums           rmd160  1fc545de2c73469a7da0bbdd62e31489b546f982 \
+                    sha256  6616c648bc47efe72beec358e7cf50efb8096842b9c93a99ba3d5d6fb4da430b \
+                    size    19580582
 
 # clock_gettime
 legacysupport.newest_darwin_requires_legacy \
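Aside (not part of the commit): the updated checksums line carries the rmd160/sha256/size triplet for the new 4240 distfile. The usual maintainer workflow is to let MacPorts report the values (for example via `port -v checksum llama.cpp` after bumping the version), but the sketch below shows one way such a triplet could be recomputed by hand for an already-downloaded archive. The script name and the example file name are hypothetical.

#!/usr/bin/env python3
# Hypothetical helper (not MacPorts tooling): recompute the checksum triplet
# that a Portfile's "checksums" option lists -- rmd160, sha256 and size --
# for a distfile that has already been downloaded manually.
import hashlib
import os
import sys

def macports_checksums(path):
    """Return {"rmd160": ..., "sha256": ..., "size": ...} for the given file."""
    sha256 = hashlib.sha256()
    # ripemd160 comes from OpenSSL; it can be missing on builds without the
    # legacy provider, hence the guard.
    try:
        rmd160 = hashlib.new("ripemd160")
    except ValueError:
        rmd160 = None
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            sha256.update(chunk)
            if rmd160 is not None:
                rmd160.update(chunk)
    return {
        "rmd160": rmd160.hexdigest() if rmd160 else "(ripemd160 unavailable)",
        "sha256": sha256.hexdigest(),
        "size": os.path.getsize(path),
    }

if __name__ == "__main__":
    # Example: python3 checksums.py llama.cpp-4240.tar.gz
    for field, value in macports_checksums(sys.argv[1]).items():
        print(f"{field:8}{value}")

Run against the same GitHub archive tarball the port fetches, the output should match the rmd160/sha256/size values committed above.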