<pre style='margin:0'>
Zhenfu Shi (i0ntempest) pushed a commit to branch master
in repository macports-ports.
</pre>
<p><a href="https://github.com/macports/macports-ports/commit/d3ab2e7a25c40b5dbc1daa10aef0808f52013e2a">https://github.com/macports/macports-ports/commit/d3ab2e7a25c40b5dbc1daa10aef0808f52013e2a</a></p>
<pre style="white-space: pre; background: #F8F8F8">The following commit(s) were added to refs/heads/master by this push:
<span style='display:block; white-space:pre;color:#404040;'> new d3ab2e7a25c llama.cpp 4382
</span>d3ab2e7a25c is described below
<span style='display:block; white-space:pre;color:#808000;'>commit d3ab2e7a25c40b5dbc1daa10aef0808f52013e2a
</span>Author: i0ntempest <i0ntempest@i0ntempest.com>
AuthorDate: Mon Dec 23 17:26:34 2024 +0800
<span style='display:block; white-space:pre;color:#404040;'> llama.cpp 4382
</span>---
sysutils/llama.cpp/Portfile | 10 +++++-----
1 file changed, 5 insertions(+), 5 deletions(-)
<span style='display:block; white-space:pre;color:#808080;'>diff --git a/sysutils/llama.cpp/Portfile b/sysutils/llama.cpp/Portfile
</span><span style='display:block; white-space:pre;color:#808080;'>index d598983eca1..345ae1259c7 100644
</span><span style='display:block; white-space:pre;background:#e0e0ff;'>--- a/sysutils/llama.cpp/Portfile
</span><span style='display:block; white-space:pre;background:#e0e0ff;'>+++ b/sysutils/llama.cpp/Portfile
</span><span style='display:block; white-space:pre;background:#e0e0e0;'>@@ -5,9 +5,9 @@ PortGroup github 1.0
</span> PortGroup           cmake 1.1
 PortGroup           legacysupport 1.1
<span style='display:block; white-space:pre;background:#ffe0e0;'>-github.setup        ggerganov llama.cpp 4371 b
</span><span style='display:block; white-space:pre;background:#e0ffe0;'>+github.setup        ggerganov llama.cpp 4382 b
</span> github.tarball_from archive
<span style='display:block; white-space:pre;background:#ffe0e0;'>-set git-commit      eb5c3dc
</span><span style='display:block; white-space:pre;background:#e0ffe0;'>+set git-commit      86bf31c
</span> # This line is for displaying commit in CLI only
 revision            0
 categories          sysutils
<span style='display:block; white-space:pre;background:#e0e0e0;'>@@ -19,9 +19,9 @@ long_description The main goal of llama.cpp is to enable LLM inference wi
</span>                     setup and state-of-the-art performance on a wide variety of hardware\
                     - locally and in the cloud.
<span style='display:block; white-space:pre;background:#ffe0e0;'>-checksums           rmd160  32be10f56f106007213dd4d66fc3622195dc34ec \
</span><span style='display:block; white-space:pre;background:#ffe0e0;'>-                    sha256  211f9dcca3fe286c694382d92692c410cec2f96912cba7e2fe76e1fd9a9c446d \
</span><span style='display:block; white-space:pre;background:#ffe0e0;'>-                    size    20595240
</span><span style='display:block; white-space:pre;background:#e0ffe0;'>+checksums           rmd160  f0d53074e7a9b90a69b4bec95483a294d7ca8ac1 \
</span><span style='display:block; white-space:pre;background:#e0ffe0;'>+                    sha256  74e6f8998d5f675a433f44985519404c9cd041e324bc8f697dade72ff31c2d0f \
</span><span style='display:block; white-space:pre;background:#e0ffe0;'>+                    size    20595131
</span>
# error: 'filesystem' file not found on 10.14
legacysupport.newest_darwin_requires_legacy \
</pre><pre style='margin:0'>
</pre>
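<p>For reference, below is a minimal verification sketch (not part of this commit, and not MacPorts tooling): assuming the GitHub archive for the 4382 tag has already been downloaded under a hypothetical local filename, it re-checks the sha256 and size values introduced in the Portfile above. The rmd160 field is skipped because hashlib support for ripemd160 depends on the local OpenSSL build; in practice, running the port checksum phase (port checksum llama.cpp) performs this verification.</p>
<pre style="white-space: pre; background: #F8F8F8">
import hashlib
import os

# Hypothetical local copy of the GitHub archive for the 4382 tag.
path = "llama.cpp-4382.tar.gz"

# Values taken from the updated Portfile checksums block.
expected_sha256 = "74e6f8998d5f675a433f44985519404c9cd041e324bc8f697dade72ff31c2d0f"
expected_size = 20595131  # bytes

digest = hashlib.sha256()
with open(path, "rb") as f:
    for chunk in iter(lambda: f.read(1024 * 1024), b""):
        digest.update(chunk)

print("sha256 matches:", digest.hexdigest() == expected_sha256)
print("size matches:  ", os.path.getsize(path) == expected_size)
</pre>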