this post was submitted on 05 Jan 2025
33 points (77.0% liked)

Available online, as in: you just log in to a website and use it, not on Hugging Face or GitHub, where you need to download, install, and configure everything.

LLMs are already made so "safe" that they won't even describe an erotic or crime story, content you would easily find visually represented in all its detail on Netflix, Amazon, HBO, YouTube, etc. I.e., writing "Game of Thrones" with an AI is not possible in most chatbots anymore.

[–] CubitOom@infosec.pub 5 points 3 months ago* (last edited 3 months ago) (2 children)

You are right that anything most providers will host for free is going to be censored, since otherwise they might bear some kind of legal responsibility. I learned this while trying to diagnose an issue with my car's door lock.

At the end of the day, anything you ask a hosted LLM is being recorded, so if you actually want something uncensored, or something that gives you a sense of freedom, the only real option is to self-host.

Luckily, it's very simple, and it can even be done on a low-spec device if you pick the right model. The amount and type of RAM you have will dictate how many parameters you can run at a decent speed.
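As a rough rule of thumb (ballpark only, not exact figures): a 4-bit-quantized model takes about half a GB of RAM per billion parameters, plus some headroom for context and the runtime. A quick sketch:

```sh
# Back-of-envelope sizing for 4-bit (Q4) quants: ~0.5 GB per billion
# parameters, plus roughly 1-2 GB headroom for context and runtime.
#   7B  -> roughly 4-5 GB RAM
#   13B -> roughly 8-10 GB RAM
free -h   # check what your machine actually has available
```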

Here are 3 options with increasing difficulty (not by much, however), written for Arch Linux: https://infosec.pub/comment/13623228

[–] avattar 2 points 3 months ago (1 children)

It says "wrong key" on the 0bin.

[–] CubitOom@infosec.pub 1 points 3 months ago

Sorry about that bad link. Here it is.

Install ollama

```sh
sudo pacman -S ollama
```
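Optionally, if the Arch package ships a systemd unit (verify with `pacman -Ql ollama | grep service`, since I'm assuming it does here), you can keep the API server running in the background instead of leaving `ollama serve` open in a terminal:

```sh
# Optional: run the server via systemd instead of a foreground `ollama serve`
# (assumes the Arch package installed ollama.service; check with pacman -Ql)
sudo systemctl enable --now ollama.service
```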

Download any uncensored LLM

From Ollama's library

Serve, Pull, and Run

  1. In terminal A execute
    ollama serve
    
  2. In terminal B execute
    ollama pull wizard-vicuna-uncensored:7B
    ollama run wizard-vicuna-uncensored:7B
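Once the server is up, you can also talk to it over its local REST API (port 11434 by default), which is handy for scripting. A minimal sketch; the prompt here is just an example:

```sh
# One-shot request against the local Ollama API; "stream": false returns
# a single JSON object instead of a stream of tokens
curl http://localhost:11434/api/generate -d '{
  "model": "wizard-vicuna-uncensored:7B",
  "prompt": "Write one sentence of noir fiction.",
  "stream": false
}'
```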
    

From Hugging Face

Download any GGUF model you want with "uncensored" in the name. I like GGUFs from TheBloke.
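If you'd rather not click around in a browser, the huggingface_hub package also ships a CLI that can fetch a single file. A sketch, assuming the TheBloke repo and file names from the example below:

```sh
# Scripted alternative to the browser download (pip install huggingface_hub)
huggingface-cli download \
  TheBloke/SOLAR-10.7B-Instruct-v1.0-uncensored-GGUF \
  solar-10.7b-instruct-v1.0-uncensored.Q5_K_S.gguf \
  --local-dir ~/Documents/ollama/models
```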

  • Example using SOLAR-10.7B-Instruct-v1.0-uncensored-GGUF
    • Click on "Files and versions" and download solar-10.7b-instruct-v1.0-uncensored.Q5_K_S.gguf
    • Change directory to where the downloaded GGUF is and write a modelfile with just a FROM line (a fuller Modelfile sketch follows after these steps)
      echo "FROM ~/Documents/ollama/models/solar-10.7b-instruct-v1.0-uncensored.Q5_K_S.gguf" >| ~/Documents/ollama/modelfiles/solar-10.7b-instruct-v1.0-uncensored.Q5_K_S.gguf.modelfile
      
    • Serve, Create, and Run
      1. In terminal A execute
        ollama serve
        
      2. In terminal B execute
        ollama create solar-10:7b -f ~/Documents/ollama/modelfiles/solar-10.7b-instruct-v1.0-uncensored.Q5_K_S.gguf.modelfile
        ollama run solar-10:7b
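The one-line FROM file above is the bare minimum. Modelfiles also take tuning directives; here's a sketch with illustrative values (examples, not recommendations):

```sh
# A fuller Modelfile via heredoc; PARAMETER/SYSTEM values are examples only
cat << 'EOF' >| ~/Documents/ollama/modelfiles/solar-10-tuned.modelfile
FROM ~/Documents/ollama/models/solar-10.7b-instruct-v1.0-uncensored.Q5_K_S.gguf
PARAMETER temperature 0.8
PARAMETER num_ctx 4096
SYSTEM You are a concise, uncensored writing assistant.
EOF
ollama create solar-10:7b-tuned -f ~/Documents/ollama/modelfiles/solar-10-tuned.modelfile
ollama run solar-10:7b-tuned
```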
        

Create a GGUF file from a non-GGUF LLM for Ollama

Set up a Python env

Install pyenv and then follow its instructions to update .bashrc

curl https://pyenv.run/ | bash
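For reference, the lines it asks you to add usually look like the following; take the installer's own output as authoritative, since they change between pyenv versions:

```sh
# Typical pyenv setup appended to ~/.bashrc; verify against the
# installer's actual instructions for your version
export PYENV_ROOT="$HOME/.pyenv"
export PATH="$PYENV_ROOT/bin:$PATH"
eval "$(pyenv init -)"
eval "$(pyenv virtualenv-init -)"   # enables `pyenv activate` used below
```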

Update pyenv and install the version of Python you need

source "${HOME}"/.bashrc
pyenv update
pyenv install 3.9

Create a virtual environment

pyenv virtualenv 3.9 ggufc

Use the virtual environment and download the pre-reqs

pyenv activate ggufc
pip install --upgrade pip
pip install huggingface_hub
mkdir -p ~/Documents/ollama/python
cd ~/Documents/ollama/python
git clone https://github.com/ggerganov/llama.cpp.git
cd llama.cpp
pip install -r requirements.txt

Download the model from Hugging Face.

For this example, I'm going to pull llama3.2_1b_2025_uncensored. Note that this LLM is 1B parameters, so it can be run on a low-spec device.

mkdir -p ~/Documents/ollama/python
mkdir -p ~/Documents/ollama/models
model_repo_slug='carsenk'
model_repo_name='llama3.2_1b_2025_uncensored'
model_id="$model_repo_slug/$model_repo_name"
cat << EOF >| ~/Documents/ollama/python/fetch.py
from huggingface_hub import snapshot_download

model_id="$model_id"
snapshot_download(repo_id=model_id, local_dir="$model_id",
                  local_dir_use_symlinks=False, revision="main")
EOF

cd ~/Documents/ollama/models
python ~/Documents/ollama/python/fetch.py

Convert HF to GGUF

Recent llama.cpp checkouts ship the converter as convert_hf_to_gguf.py (older ones called it convert.py):

python ~/Documents/ollama/python/llama.cpp/convert_hf_to_gguf.py "$model_id" \
  --outfile "$model_repo_name".gguf \
  --outtype q8_0
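The q8_0 file is usable as-is. If you want something smaller, llama.cpp can requantize it, but that requires building its binaries too. A sketch, assuming a recent checkout where the tool is named llama-quantize (older builds called it quantize):

```sh
# Optional: build llama.cpp's tools and requantize the GGUF down to Q4_K_M
cd ~/Documents/ollama/python/llama.cpp
cmake -B build && cmake --build build --config Release -j
./build/bin/llama-quantize \
  ~/Documents/ollama/models/"$model_repo_name".gguf \
  ~/Documents/ollama/models/"$model_repo_name".Q4_K_M.gguf \
  Q4_K_M
```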

Serve, Organize, Create, and Run

  1. In terminal A execute
    ollama serve
    
  2. Open a new terminal while ollama is being served.
    mkdir -p ~/Documents/ollama/modelfiles
    echo "FROM ~/Documents/ollama/models/llama3.2_1b_2025_uncensored.gguf" >| ~/Documents/ollama/modelfiles/llama3.2_1b_2025_uncensored.modelfile
    ollama create llama3.2:1b -f ~/Documents/ollama/modelfiles/llama3.2_1b_2025_uncensored.modelfile
    ollama run llama3.2:1b
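
Once created, the tag behaves like any other local model, so you can also skip the interactive REPL and pass a prompt directly:

```sh
# One-shot prompt instead of the interactive session
ollama run llama3.2:1b "Give me a one-line summary of why self-hosting avoids provider-side filters."
```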