CSAI Integration with GPT4All

So I have both GPT4All and LM Studio installed locally on my computer, but I can't get CSAI to access either of them from my website (even though I have a port forwarded specifically for that API call to the computer running the LLMs locally).

GPT4All doesn't seem to accept any "outside connections" at all (apparently that's a known limitation of GPT4All, so I don't see how anyone can use it with CSAI unless their whole website is running locally). I can connect to LM Studio, however I'm getting this error:

OpenAI Error: {"error":"'messages' field is required"}
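For reference, this is roughly the shape of request that endpoint seems to expect; a minimal sketch assuming LM Studio's OpenAI-compatible server on its default port (1234), with the model name as just a placeholder:

```python
# Minimal sketch of a valid chat request to LM Studio's OpenAI-compatible server.
# Assumes the default local port (1234) and a model already loaded; the model
# name below is only a placeholder.
import requests

payload = {
    "model": "local-model",  # whatever model LM Studio has loaded
    "messages": [            # the field the error says is missing
        {"role": "user", "content": "Hello from my website"},
    ],
}

resp = requests.post(
    "http://localhost:1234/v1/chat/completions",
    json=payload,
    timeout=60,
)
print(resp.status_code, resp.json())
```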

Any ideas on this? I would prefer to use a local LLM rather than pay for OpenAI's API usage.

thanks!

Trying it now with Ollama and getting this error:

OpenAI Error: {"error":{"message":"model is required","type":"api_error","param":null,"code":null}}
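From what I can tell, Ollama's OpenAI-compatible endpoint refuses the request unless a model name is included, something like this (a sketch assuming Ollama's default port 11434; "llama3" is just an example of a model you'd already have pulled):

```python
# Sketch of a request Ollama's OpenAI-compatible endpoint will accept.
# Assumes Ollama's default port (11434); "llama3" is only an example tag
# for a model fetched with `ollama pull`.
import requests

payload = {
    "model": "llama3",  # required: must name a locally available model
    "messages": [{"role": "user", "content": "test"}],
}

resp = requests.post(
    "http://localhost:11434/v1/chat/completions",
    json=payload,
    timeout=60,
)
print(resp.json())
```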

Again, it would be great if there were a real way to access a local LLM from a website on an external web server… otherwise there's no point to GPT4All, since it won't even open up to the LAN; it ONLY runs on your local machine.

In the provider settings, you'll need to pick a model from the dropdown. Seeing that list at all is also a good way to test whether the connection is working. Internally we'll see if we can add a fallback that selects the first available model when none is provided (a rough sketch of that idea is below the screenshot). Let us know if this helps. Have a great day.

[Screenshot: Provider settings button]
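Roughly, the fallback would look something like this; a hypothetical sketch using the standard OpenAI-compatible /v1/models listing, where the endpoint URL and function are illustrative, not CSAI's actual code:

```python
# Hypothetical sketch of "use the first available model if none is selected",
# based on the standard OpenAI-compatible /v1/models listing. Endpoint URL
# and function name are illustrative, not CSAI's actual code.
import requests

BASE_URL = "http://localhost:1234/v1"  # example endpoint setting

def resolve_model(configured_model: str | None) -> str:
    if configured_model:
        return configured_model
    data = requests.get(f"{BASE_URL}/models", timeout=10).json().get("data", [])
    if not data:
        raise RuntimeError("No models loaded on the local server")
    return data[0]["id"]  # fall back to the first model the server reports
```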

Again, this doesn't work with GPT4All, because GPT4All is running locally on my computer and doesn't allow any external access from the website (which is on a SiteGround web server). So unless I run everything, including the web server, on the same computer as GPT4All, CSAI will never see it.

As for LM Studio (far more versatile than GPT4All), I can connect remotely to its port from the web hosting server. Without a model loaded, I get the expected error:

OpenAI Error: {"error":{"message":"No models loaded. Please load a model in the developer page or use the lms load command.","type":"invalid_request_error","param":"model","code":"model_not_found"}}

Then I load the model manually (since nothing shows up in the provider settings dropdown), and I still get the same error as before:

OpenAI Error: {"error":"'messages' field is required"}
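To rule out the server itself, this is roughly how I'd test the forwarded endpoint directly, outside CSAI, with both fields filled in (host, port, and model name are placeholders for my setup):

```python
# Sketch of testing the port-forwarded LM Studio endpoint directly,
# outside CSAI, with a complete request (model + messages).
# Host, port, and model name are placeholders.
import requests

resp = requests.post(
    "http://my-home-ip:1234/v1/chat/completions",  # the forwarded endpoint
    json={
        "model": "local-model",
        "messages": [{"role": "user", "content": "ping"}],
    },
    timeout=60,
)
print(resp.status_code, resp.text)
```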

The overall problem is that GPT4All is only accessible from 'localhost', so unless my website is running locally on the same computer as GPT4All, it will never be able to reach it; that's just how GPT4All is built. I would love to see support for other local LLM options that can be accessed from a web server, because this isn't working at all, and I don't see how you tested it unless you had everything, GPT4All and the web server, running on the same machine.

I gotcha. So to host it live you'll need to port forward your local computer to a live server; there is a guide below using SSH. The only problem here is that your AI will be exposed for anyone to use. On our end we could look into adding HTTP Auth for this scenario, but right now it just uses the endpoint setting to connect.
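Just to illustrate, the HTTP Auth idea would roughly mean sending something like this from the calling side; purely a sketch, where the credentials would be enforced by a reverse proxy in front of the local server, and none of this exists in CSAI today:

```python
# Sketch only: the same chat request, but with Basic Auth credentials attached.
# The auth would be enforced by a reverse proxy in front of the local LLM server;
# CSAI does not support this today, and all names/values are illustrative.
import requests

resp = requests.post(
    "http://my-home-ip:1234/v1/chat/completions",
    json={
        "model": "local-model",
        "messages": [{"role": "user", "content": "hello"}],
    },
    auth=("csai", "a-long-secret"),  # HTTP Basic Auth credentials
    timeout=60,
)
print(resp.status_code)
```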

As for LM Studio, it was a solution that popped up on my radar, but it required a newer CPU (or an Intel CPU) than I had at the time I was looking at it. I'll try to test it again on my own, and if need be I'll buy a different CPU so I can add this integration to our provider list.

Yeah, I've done the port forwarding thing and it works with LM Studio… but not with GPT4All for some reason, at least on my network, even though I've given it access through the firewall, etc.
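For what it's worth, this is roughly how I check whether the server is reachable from another machine on the LAN (I believe 4891 is GPT4All's usual local API port; the IP is a placeholder). If the server only binds to 127.0.0.1, this fails even with the firewall open:

```python
# Rough reachability check from another machine on the LAN.
# 4891 is, I believe, GPT4All's default local API port; the IP is a placeholder.
# If the server only binds to 127.0.0.1, this times out or is refused
# even with the firewall open.
import requests

try:
    r = requests.get("http://192.168.1.50:4891/v1/models", timeout=5)
    print("reachable:", r.status_code, r.json())
except requests.exceptions.RequestException as err:
    print("not reachable:", err)
```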

So yes, if you're able to check out LM Studio, that'd be great to have as an option… since it also allows a LOT more LLMs to be used.

https://lmstudio.ai/


This topic was automatically closed 10 days after the last reply. New replies are no longer allowed.