<pre style='margin:0'>
Clemens Lang (neverpanic) pushed a commit to branch master
in repository macports-ports.
</pre>
<p><a href="https://github.com/macports/macports-ports/commit/5a91f9d60f6a53b9f350d4a1465d0783c847c250">https://github.com/macports/macports-ports/commit/5a91f9d60f6a53b9f350d4a1465d0783c847c250</a></p>
<pre style="white-space: pre; background: #F8F8F8">The following commit(s) were added to refs/heads/master by this push:
<span style='display:block; white-space:pre;color:#404040;'> new 5a91f9d60f6 ramalama: new port
</span>5a91f9d60f6 is described below
<span style='display:block; white-space:pre;color:#808000;'>commit 5a91f9d60f6a53b9f350d4a1465d0783c847c250
</span>Author: Clemens Lang <cal@macports.org>
AuthorDate: Tue May 13 13:00:58 2025 +0200
<span style='display:block; white-space:pre;color:#404040;'> ramalama: new port
</span>---
llm/ramalama/Portfile | 39 +++++++++++++++++++++++++++++++++++++++
1 file changed, 39 insertions(+)
<span style='display:block; white-space:pre;color:#808080;'>diff --git a/llm/ramalama/Portfile b/llm/ramalama/Portfile
</span>new file mode 100644
<span style='display:block; white-space:pre;color:#808080;'>index 00000000000..726477e6a1c
</span><span style='display:block; white-space:pre;background:#ffe0e0;'>--- /dev/null
</span><span style='display:block; white-space:pre;background:#e0e0ff;'>+++ b/llm/ramalama/Portfile
</span><span style='display:block; white-space:pre;background:#e0e0e0;'>@@ -0,0 +1,39 @@
</span><span style='display:block; white-space:pre;background:#e0ffe0;'>+# -*- coding: utf-8; mode: tcl; tab-width: 4; indent-tabs-mode: nil; c-basic-offset: 4 -*- vim:fenc=utf-8:ft=tcl:et:sw=4:ts=4:sts=4
</span><span style='display:block; white-space:pre;background:#e0ffe0;'>+
</span><span style='display:block; white-space:pre;background:#e0ffe0;'>+PortSystem 1.0
</span><span style='display:block; white-space:pre;background:#e0ffe0;'>+
</span><span style='display:block; white-space:pre;background:#e0ffe0;'>+PortGroup github 1.0
</span><span style='display:block; white-space:pre;background:#e0ffe0;'>+PortGroup python 1.0
</span><span style='display:block; white-space:pre;background:#e0ffe0;'>+
</span><span style='display:block; white-space:pre;background:#e0ffe0;'>+github.setup containers ramalama 0.8.3 v
</span><span style='display:block; white-space:pre;background:#e0ffe0;'>+github.tarball_from archive
</span><span style='display:block; white-space:pre;background:#e0ffe0;'>+checksums rmd160 2bbffb2b0133ddbe6d168d1b8c59bd18393b81f2 \
</span><span style='display:block; white-space:pre;background:#e0ffe0;'>+ sha256 c15c1d2999badc10c18b53add1e37af6398b877aedfdb7317e0e517711481324 \
</span><span style='display:block; white-space:pre;background:#e0ffe0;'>+ size 504468
</span><span style='display:block; white-space:pre;background:#e0ffe0;'>+
</span><span style='display:block; white-space:pre;background:#e0ffe0;'>+homepage https://ramalama.ai/
</span><span style='display:block; white-space:pre;background:#e0ffe0;'>+license MIT
</span><span style='display:block; white-space:pre;background:#e0ffe0;'>+description A tool to simplify the use of local AI models
</span><span style='display:block; white-space:pre;background:#e0ffe0;'>+long_description \
</span><span style='display:block; white-space:pre;background:#e0ffe0;'>+ Ramalama is an open-source developer tool that simplifies the local serving \
</span><span style='display:block; white-space:pre;background:#e0ffe0;'>+ of AI models from any source and facilitates their use for inference in \
</span><span style='display:block; white-space:pre;background:#e0ffe0;'>+ production, all through the familiar language of containers.
</span><span style='display:block; white-space:pre;background:#e0ffe0;'>+
</span><span style='display:block; white-space:pre;background:#e0ffe0;'>+maintainers {cal @neverpanic} openmaintainer
</span><span style='display:block; white-space:pre;background:#e0ffe0;'>+categories llm science
</span><span style='display:block; white-space:pre;background:#e0ffe0;'>+supported_archs noarch
</span><span style='display:block; white-space:pre;background:#e0ffe0;'>+
</span><span style='display:block; white-space:pre;background:#e0ffe0;'>+python.default_version 313
</span><span style='display:block; white-space:pre;background:#e0ffe0;'>+
</span><span style='display:block; white-space:pre;background:#e0ffe0;'>+depends_run-append \
</span><span style='display:block; white-space:pre;background:#e0ffe0;'>+ port:krunkit \
</span><span style='display:block; white-space:pre;background:#e0ffe0;'>+ port:podman
</span><span style='display:block; white-space:pre;background:#e0ffe0;'>+
</span><span style='display:block; white-space:pre;background:#e0ffe0;'>+notes \
</span><span style='display:block; white-space:pre;background:#e0ffe0;'>+ "${name} defaults to running AI models in podman containers in a podman\
</span><span style='display:block; white-space:pre;background:#e0ffe0;'>+ machine (i.e., VM) started by libkrun. This is not the podman default, so\
</span><span style='display:block; white-space:pre;background:#e0ffe0;'>+ you will have to change it, either by exporting the\
</span><span style='display:block; white-space:pre;background:#e0ffe0;'>+ CONTAINERS_MACHINE_PROVIDER=libkrun environment variable, or by adding\
</span><span style='display:block; white-space:pre;background:#e0ffe0;'>+ 'provider = \"libkrun\"' to the '\[machine]' section of\
</span><span style='display:block; white-space:pre;background:#e0ffe0;'>+ '\$HOME/.config/containers/containers.conf'. See man 7 ramalama-macos for\
</span><span style='display:block; white-space:pre;background:#e0ffe0;'>+ more information."
</span></pre>
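<p>The provider switch described in the port's notes above can be sketched as follows. This is an editorial illustration, not part of the commit: the <code>CONTAINERS_MACHINE_PROVIDER</code> variable and the <code>[machine]</code> section come straight from the notes, while the scratch <code>CONF</code> path is an assumption so the example stays self-contained (the real file, per the notes, is <code>$HOME/.config/containers/containers.conf</code>).</p>

```shell
# Option 1: per-session selection via environment variable (from the port's notes)
export CONTAINERS_MACHINE_PROVIDER=libkrun

# Option 2: persist the setting in containers.conf.
# CONF points at a scratch copy for illustration only; the real path, per
# the notes, is $HOME/.config/containers/containers.conf.
CONF="${TMPDIR:-/tmp}/containers.conf.example"
printf '[machine]\nprovider = "libkrun"\n' > "$CONF"

# Show the resulting section
cat "$CONF"
```

<p>After installing the port (<code>sudo port install ramalama</code>) and choosing a provider by one of these two routes, <code>man 7 ramalama-macos</code> covers the remaining macOS-specific setup, as the notes state.</p>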