Again, this doesn’t work with GPT4All because GPT4All is running locally on my computer and only allows access from localhost - the website (on a SiteGround web server) has no way to reach it. So unless I run everything, including the web server, on the same computer as GPT4All, it won’t even see it.
As for LM Studio (far more versatile than GPT4All), I can connect to its port remotely from the website’s hosting server. Without a model loaded, I get the expected error:
OpenAI Error: {"error":{"message":"No models loaded. Please load a model in the developer page or use the lms load command.","type":"invalid_request_error","param":"model","code":"model_not_found"}}
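For anyone else debugging this, a quick way to confirm the server is reachable and see whether any model is loaded is LM Studio’s OpenAI-compatible /v1/models endpoint. A minimal sketch - the host address here is a placeholder for wherever LM Studio is running, and 1234 is its default server port:

```python
# Connectivity check against LM Studio's OpenAI-compatible server.
# Assumptions: the server is enabled on LM Studio's default port 1234,
# and HOST is a placeholder for the machine's reachable address.
import json
import urllib.request

HOST = "192.168.1.50"  # hypothetical LAN address of the LM Studio machine

with urllib.request.urlopen(f"http://{HOST}:1234/v1/models") as resp:
    models = json.load(resp)

# An empty "data" list here lines up with the "No models loaded" error above.
for m in models.get("data", []):
    print(m["id"])
```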
Then I load the model manually (since it won’t show up in the Provider settings pulldown), but I still get that error from before:
OpenAI Error: {"error":"'messages' field is required"}
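For comparison, here’s a minimal sketch of a request that does include the required "messages" field, which suggests the error above is about whatever request body the plugin is sending rather than LM Studio itself. The host address and model name below are placeholders, not my actual setup:

```python
# Minimal well-formed chat completion request to LM Studio's
# OpenAI-compatible endpoint. Assumptions: default port 1234, HOST is a
# placeholder address, and MODEL matches the model loaded via `lms load`.
import json
import urllib.request

HOST = "192.168.1.50"            # hypothetical address of the LM Studio machine
MODEL = "llama-3.2-1b-instruct"  # hypothetical model identifier

payload = {
    "model": MODEL,
    # The "messages" field the error complains about must be present:
    "messages": [{"role": "user", "content": "Hello"}],
}

req = urllib.request.Request(
    f"http://{HOST}:1234/v1/chat/completions",
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json"},
)

with urllib.request.urlopen(req) as resp:
    reply = json.load(resp)

print(reply["choices"][0]["message"]["content"])
```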
The overall problem is that GPT4All is only accessible from localhost, and unless my website runs on the same computer as GPT4All, it will never be able to see it - that’s just how GPT4All is built. I’d love for you to add support for other local LLM options that can be accessed from a web server, because this isn’t working at all, and I don’t know how you tested it unless you had everything - GPT4All and the web server - running on the same machine.