solobsd
@solobsd@snac.solobsd.org
#OpenBSD -current now has a port of llama.cpp (misc/llama.cpp), an LLM inference library. I don't know what any of that means.
CPU backend only* for now, but Vulkan may be enabled in the future.
Please consult a doctor if your computer starts talking to you.
* This kills the planet.
https://marc.info/?l=openbsd-ports-cvs&m=173827745810949&w=2
The Vulkan backend for llama.cpp was enabled a few days later, so GPU support should work, though on AMD (and Intel) GPUs only*.
https://marc.info/?l=openbsd-ports-cvs&m=173869164510571&w=2
* If you don't have a supported GPU, you likely want to revert to the "this kills the planet" mode (CPU backend) with some combination of the '--device none' and '-ngl 0' options of llama-cli.
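For example, a CPU-only run might look like this (a sketch; the model path and prompt are placeholders, and '-ngl 0' keeps all layers off the GPU while '--device none' disables GPU devices entirely):

```shell
# Hypothetical invocation: force the CPU backend even if Vulkan is built in.
# model.gguf is a placeholder for whatever GGUF model you have downloaded.
llama-cli -m model.gguf --device none -ngl 0 -p "Hello from OpenBSD"
```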
Installing OpenBSD to a Vultr VPS
https://www.graslander.online/index.php/installing-openbsd-to-a-vultr-vps/
@solobsd CloudInit timeout?