The Llama.cpp AI connection allows the INETAPP to use llama.cpp, an LLM inference engine written in C/C++. Because it is easy to set up, llama.cpp can quickly be hosted locally. The following settings can be made here:
llama.cpp can be installed either by downloading it and following the build instructions on its GitHub page, or by using a prepared llamafile, which bundles the engine and a model into a single executable. Please follow the most recent instructions on the respective project page.
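As a rough sketch of the first route, the steps below build llama.cpp from source and start its local HTTP server; the model filename and port are placeholders, and the exact commands may differ between versions, so the project page remains authoritative:

```shell
# Fetch and build llama.cpp (requires git, cmake, and a C/C++ toolchain)
git clone https://github.com/ggerganov/llama.cpp
cd llama.cpp
cmake -B build
cmake --build build --config Release

# Start the local inference server with a GGUF model of your choice
# (path and port are examples only; adjust to your setup)
./build/bin/llama-server -m models/your-model.gguf --port 8080
```

Once the server is running, the connection in the INETAPP can be pointed at the chosen host and port (here, `localhost:8080`).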