<pre style='margin:0'>
Christopher Nielsen (mascguy) pushed a commit to branch master
in repository macports-ports.
</pre>
<p><a href="https://github.com/macports/macports-ports/commit/1304a7d9aec264b9dd3331f27f7301428c0217b5">https://github.com/macports/macports-ports/commit/1304a7d9aec264b9dd3331f27f7301428c0217b5</a></p>
<pre style="white-space: pre; background: #F8F8F8">The following commit(s) were added to refs/heads/master by this push:
<span style='display:block; white-space:pre;color:#404040;'> new 1304a7d9aec llama.cpp: update to 4508; fix builds on 10.15 and earlier
</span>1304a7d9aec is described below
<span style='display:block; white-space:pre;color:#808000;'>commit 1304a7d9aec264b9dd3331f27f7301428c0217b5
</span>Author: Christopher Nielsen <mascguy@github.com>
AuthorDate: Fri Jan 17 13:03:57 2025 -0500
<span style='display:block; white-space:pre;color:#404040;'> llama.cpp: update to 4508; fix builds on 10.15 and earlier
</span><span style='display:block; white-space:pre;color:#404040;'>
</span><span style='display:block; white-space:pre;color:#404040;'> - Explicitly disable OpenMP
</span><span style='display:block; white-space:pre;color:#404040;'> Fixes: https://trac.macports.org/ticket/71870
</span><span style='display:block; white-space:pre;color:#404040;'>
</span><span style='display:block; white-space:pre;color:#404040;'> - Upstream patch for errno-related build failure on 10.15:
</span><span style='display:block; white-space:pre;color:#404040;'> Fixes: https://trac.macports.org/ticket/71880
</span>---
llm/llama.cpp/Portfile | 18 +++++++++++++-----
llm/llama.cpp/files/patch-llama-mmap-errno.diff | 16 ++++++++++++++++
2 files changed, 29 insertions(+), 5 deletions(-)
<span style='display:block; white-space:pre;color:#808080;'>diff --git a/llm/llama.cpp/Portfile b/llm/llama.cpp/Portfile
</span><span style='display:block; white-space:pre;color:#808080;'>index af91bd1a2ac..19bf807516e 100644
</span><span style='display:block; white-space:pre;background:#e0e0ff;'>--- a/llm/llama.cpp/Portfile
</span><span style='display:block; white-space:pre;background:#e0e0ff;'>+++ b/llm/llama.cpp/Portfile
</span><span style='display:block; white-space:pre;background:#e0e0e0;'>@@ -5,9 +5,9 @@ PortGroup github 1.0
</span> PortGroup cmake 1.1
PortGroup legacysupport 1.1
<span style='display:block; white-space:pre;background:#ffe0e0;'>-github.setup ggerganov llama.cpp 4493 b
</span><span style='display:block; white-space:pre;background:#e0ffe0;'>+github.setup ggerganov llama.cpp 4508 b
</span> github.tarball_from archive
<span style='display:block; white-space:pre;background:#ffe0e0;'>-set git-commit 9c8dcef
</span><span style='display:block; white-space:pre;background:#e0ffe0;'>+set git-commit a1649cc
</span> # This line is for displaying commit in CLI only
revision 0
categories llm
<span style='display:block; white-space:pre;background:#e0e0e0;'>@@ -19,9 +19,9 @@ long_description The main goal of ${name} is to enable LLM inference with
</span> setup and state-of-the-art performance on a wide variety of hardware\
- locally and in the cloud.
<span style='display:block; white-space:pre;background:#ffe0e0;'>-checksums rmd160 166a6c335b56b1695d9e1b8a8909061bc03bb56b \
</span><span style='display:block; white-space:pre;background:#ffe0e0;'>- sha256 2c5c75a80976a84e73fa7571df57e46878b5f459cf30b1748af0628dbcec3ff1 \
</span><span style='display:block; white-space:pre;background:#ffe0e0;'>- size 20442823
</span><span style='display:block; white-space:pre;background:#e0ffe0;'>+checksums rmd160 b6700c8d3311bfa382ef9db1881846e10329c990 \
</span><span style='display:block; white-space:pre;background:#e0ffe0;'>+ sha256 3fcc3ae6b2605dfcfaf871cfa4281ac7cf215eb738e876e193890e7837d256f2 \
</span><span style='display:block; white-space:pre;background:#e0ffe0;'>+ size 20472113
</span>
# error: 'filesystem' file not found on 10.14
legacysupport.newest_darwin_requires_legacy \
<span style='display:block; white-space:pre;background:#e0e0e0;'>@@ -29,8 +29,15 @@ legacysupport.newest_darwin_requires_legacy \
</span> legacysupport.use_mp_libcxx \
yes
<span style='display:block; white-space:pre;background:#e0ffe0;'>+depends_build-append path:bin/pkg-config:pkgconfig
</span><span style='display:block; white-space:pre;background:#e0ffe0;'>+
</span> depends_lib-append port:curl
<span style='display:block; white-space:pre;background:#e0ffe0;'>+# Upstream patch for errno-related build failure; will be included in future release
</span><span style='display:block; white-space:pre;background:#e0ffe0;'>+# See: https://trac.macports.org/ticket/71880
</span><span style='display:block; white-space:pre;background:#e0ffe0;'>+# See: https://github.com/ggerganov/llama.cpp/issues/11295
</span><span style='display:block; white-space:pre;background:#e0ffe0;'>+patchfiles-append patch-llama-mmap-errno.diff
</span><span style='display:block; white-space:pre;background:#e0ffe0;'>+
</span> # cmake relies on git for version info. We need to set them manually.
post-patch {
reinplace "s|@BUILD_NUMBER@|${version}|" ${worksrcpath}/common/build-info.cpp.in
<span style='display:block; white-space:pre;background:#e0e0e0;'>@@ -41,6 +48,7 @@ compiler.cxx_standard 2017
</span>
configure.args-append -DGGML_LTO=ON \
-DGGML_CCACHE=OFF \
<span style='display:block; white-space:pre;background:#e0ffe0;'>+ -DGGML_OPENMP=OFF \
</span> -DLLAMA_CURL=ON
# error: use of undeclared identifier 'MTLGPUFamilyApple7' on 10.14
<span style='display:block; white-space:pre;color:#808080;'>diff --git a/llm/llama.cpp/files/patch-llama-mmap-errno.diff b/llm/llama.cpp/files/patch-llama-mmap-errno.diff
</span>new file mode 100644
<span style='display:block; white-space:pre;color:#808080;'>index 00000000000..a295810f396
</span><span style='display:block; white-space:pre;background:#ffe0e0;'>--- /dev/null
</span><span style='display:block; white-space:pre;background:#e0e0ff;'>+++ b/llm/llama.cpp/files/patch-llama-mmap-errno.diff
</span><span style='display:block; white-space:pre;background:#e0e0e0;'>@@ -0,0 +1,16 @@
</span><span style='display:block; white-space:pre;background:#e0ffe0;'>+#==================================================================================================
</span><span style='display:block; white-space:pre;background:#e0ffe0;'>+# Upstream patch to fix errno-related build failure on some systems
</span><span style='display:block; white-space:pre;background:#e0ffe0;'>+#
</span><span style='display:block; white-space:pre;background:#e0ffe0;'>+# Source: https://github.com/ggerganov/llama.cpp/pull/11296
</span><span style='display:block; white-space:pre;background:#e0ffe0;'>+# Issue: https://github.com/ggerganov/llama.cpp/issues/11295
</span><span style='display:block; white-space:pre;background:#e0ffe0;'>+#==================================================================================================
</span><span style='display:block; white-space:pre;background:#e0ffe0;'>+--- src/llama-mmap.cpp.orig 2025-01-18 15:49:52.000000000 -0500
</span><span style='display:block; white-space:pre;background:#e0ffe0;'>++++ src/llama-mmap.cpp 2025-01-18 15:50:19.000000000 -0500
</span><span style='display:block; white-space:pre;background:#e0ffe0;'>+@@ -7,6 +7,7 @@
</span><span style='display:block; white-space:pre;background:#e0ffe0;'>+ #include <cstring>
</span><span style='display:block; white-space:pre;background:#e0ffe0;'>+ #include <climits>
</span><span style='display:block; white-space:pre;background:#e0ffe0;'>+ #include <stdexcept>
</span><span style='display:block; white-space:pre;background:#e0ffe0;'>++#include <cerrno>
</span><span style='display:block; white-space:pre;background:#e0ffe0;'>+
</span><span style='display:block; white-space:pre;background:#e0ffe0;'>+ #ifdef __has_include
</span><span style='display:block; white-space:pre;background:#e0ffe0;'>+ #if __has_include(<unistd.h>)
</span></pre><pre style='margin:0'>
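<p>The errno fix itself is a single added include: src/llama-mmap.cpp uses errno but previously relied on it being pulled in transitively, which evidently does not happen with the toolchains used on macOS 10.15 and earlier, so the upstream patch adds an explicit cerrno include. A minimal, self-contained illustration of the same failure mode (assumed example code, not taken from llama.cpp):</p>
<pre style="white-space: pre; background: #F8F8F8">
// Illustrative only. Without the explicit <cerrno> include below, a toolchain that
// does not provide errno transitively fails with "use of undeclared identifier 'errno'".
#include <cerrno>   // the header the upstream patch adds to src/llama-mmap.cpp
#include <cstring>  // std::strerror
#include <cstdio>   // std::fopen, std::printf

int main() {
    std::FILE * f = std::fopen("/nonexistent/path", "rb");
    if (f == nullptr) {
        std::printf("open failed: %s (errno=%d)\n", std::strerror(errno), errno);
        return 1;
    }
    std::fclose(f);
    return 0;
}
</pre>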