Local LLM not working?
-
RAMarianHaertel last edited by
I need some help to get it running.
I installed two local LLMs, Gemma and Mistral. Both downloaded, but whatever I enter, there is no reply (it is empty) from the local LLM. What could be the reason?
Also, I am trying to find a data-protection-relevant use case where the local LLM analyzes and summarizes a document. Is this already doable with this solution?
-
JoostD last edited by
@ramarianhaertel I'm having the same issue. I inspected the Aria sidebar and see this error in my console from aria.js:
http://localhost:11435/api/chat 404 (Not Found)
I suspect there is an issue initializing the models on certain machines and operating systems. Perhaps Apple isn't supported at this time (I'm running on an M2 Mac).
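If it helps anyone debugging, here is a quick check I put together (a sketch, assuming the sidebar talks to an Ollama-compatible server on the port from the error above; the model name "gemma" is just a placeholder for whatever tag you downloaded). A connection error means no server is running at all; a 404 means the server is up but the route or model isn't available, which matches what I'm seeing:

```python
# Sketch: probe the endpoint Aria is calling, assuming an Ollama-style /api/chat route.
import json
import urllib.request
import urllib.error

URL = "http://localhost:11435/api/chat"  # port taken from the console error above

payload = json.dumps({
    "model": "gemma",  # hypothetical model tag; replace with the one you actually installed
    "messages": [{"role": "user", "content": "Say hello"}],
    "stream": False,
}).encode("utf-8")

req = urllib.request.Request(URL, data=payload,
                             headers={"Content-Type": "application/json"})
try:
    with urllib.request.urlopen(req, timeout=30) as resp:
        print(resp.status, resp.read().decode("utf-8")[:200])
except urllib.error.HTTPError as e:
    print("Server reachable but returned", e.code, e.reason)
except urllib.error.URLError as e:
    print("No server listening:", e.reason)
```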
-
rikazkhan7 Banned last edited by
@ramarianhaertel Troubleshooting Gemma and Mistral
- No Reply from Local LLMs
If you're not receiving any response from Gemma and Mistral after entering input, there could be a few reasons for this:
- Installation Issues: Double-check that the installations were successful and that there were no errors during the process. Sometimes installation issues can prevent the models from running correctly (see the sketch below for one way to confirm the models are actually present).
- Environment Setup: Ensure that your environment is set up properly to run the local LLMs. This includes having the necessary dependencies installed and configured correctly.
- Input Format: Make sure you're providing input in the correct format expected by the models. Depending on how Gemma and Mistral are designed, they might require input in a specific format or structure.
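For the installation check specifically, here is a minimal sketch (assuming the local backend exposes an Ollama-style /api/tags route on the same host and port as the chat endpoint quoted above) that lists the models the server actually has, so you can confirm Gemma and Mistral finished downloading and see the exact names to address them by:

```python
# Sketch: list installed models, assuming an Ollama-style /api/tags route.
import json
import urllib.request

TAGS_URL = "http://localhost:11435/api/tags"  # same host/port as the chat endpoint above

with urllib.request.urlopen(TAGS_URL, timeout=10) as resp:
    data = json.load(resp)

models = data.get("models", [])
if not models:
    print("Server is running, but no models are installed.")
for m in models:
    print(m.get("name"), "-", m.get("size", "unknown"), "bytes")
```

If the list comes back empty, the download never completed; if the request fails with a connection error, the server itself isn't running, which would also explain the empty replies in the sidebar.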