You want information on the different AI models? I think I can catch you up. In terms of setup I use oobabooga/text-generation-webui (API) and Cohee1207/SillyTavern (WebUI) in conjunction; there are lots of nice tutorials that cover how to set that stuff up, but it's pretty straightforward. The model I use the most, because it's the most coherent uncensored one, is anon8231489123/gpt4-x-alpaca-13b-native-4bit-128g, but if you don't care about censorship in your models, anon8231489123/vicuna-13b-GPTQ-4bit-128g is also a great option. Only use 13B-parameter models if you have more than 10 GB of VRAM; if you have only 8 GB of VRAM, I'd recommend a 7B-parameter model like usamakenway/WizardLM-7B-uncensored-GPTQ-4bit-128g or pygmalion-7b-4bit-128g-cuda. Gpt4-x-alpaca is a good general uncensored model, and Vicuna is a good general censored model. Pygmalion is only good at NSFW roleplay, Shygmalion is EXTREMELY NSFW, and WizardLM is another good general chat model. Despite saying some of these models are "only good at" specific things, they all pass the Turing test for sure during general conversation. I always try to use GPTQ models because they tend to use less VRAM and less disk space. If you want more information, I actually have a pretty long list; I only explained the first few that came to mind. Hopefully this helps. If you need help setting stuff up, we can chat more, I don't mind.

So the web UI has an API I can use? That will be very helpful. When I played around with dalai I could only get vanilla Alpaca 7B to work for some reason, but I suspect the install had some issues. Thanks man, this is some good stuff! Gonna check out those web UIs and gpt4-x-alpaca-13b. I have a 4090, so it should handle it fine as long as I can get it installed right.
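The VRAM guidance above (13B models need more than 10 GB, 7B fits in 8 GB) follows from simple arithmetic: a 4-bit GPTQ model stores roughly half a byte per parameter, plus some headroom for the context cache and activations. A minimal sketch of that rule of thumb; the function name and the overhead figure are my own assumptions, not from any library:

```python
def est_vram_gb(n_params_billions: float, bits: int = 4,
                overhead_gb: float = 1.5) -> float:
    """Rough VRAM estimate for a quantized model.

    Weights take n_params * bits/8 bytes; overhead_gb is a guessed
    allowance for KV cache and activations, which grows with context
    length in practice.
    """
    weight_bytes = n_params_billions * 1e9 * (bits / 8)
    return weight_bytes / (1024 ** 3) + overhead_gb

# 13B at 4-bit: ~6 GB of weights plus overhead -> needs a >10 GB card
# once you add a real context window; 7B at 4-bit fits an 8 GB card.
print(f"13B/4-bit: ~{est_vram_gb(13):.1f} GB")
print(f" 7B/4-bit: ~{est_vram_gb(7):.1f} GB")
```

The same arithmetic explains why GPTQ models save disk space: a 4-bit file is about a quarter the size of the fp16 original.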