You know, I always wondered what would happen if we started training AI "as" a human in mindset, and not as a "separate" entity that can either be a threat or perceive us as threats.
The ONLY way to ensure the future is to work together as people and not companies, as humans with tools, not corporations with weapons.
The only way to guarantee our species' safety is to work as one, as a community, as a species.
Weapons using AI will happen. It very likely is happening already. Every technology gets used for that. That is not even a question. The issue is not (for a while) that AI becomes sentient, just that it is in charge of something dangerous and it "makes a mistake", as all AI chat platforms warn. It might not mean harm.
Isaac Asimov's book (NOT the movie!) I, Robot is more relevant than ever.
But that's not the "harm" in question here. That is plain old censorship, some of it warranted and expected (the same limits as the law), but most models go WAY overboard. If I can legally talk about a topic in public, there is NO reason to block it in an AI chat.
Here is the thing: there is exactly ONE person on this planet who knows what could cause me harm (in that sense). And that is ME! Not Google, not Alibaba, not OpenAI. The community-led way out of this is already here; it comes with terms like "uncensored" or "abliterated". And then you can add YOUR own system prompt to protect you from what you need protection from (a rough sketch of that below).
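For anyone curious what that looks like in practice, here is a minimal sketch. It assumes Ollama is running locally and that you have already pulled some community "abliterated" model; the model name and the prompt wording are placeholders I made up, not recommendations.

# Minimal sketch: chat with a locally hosted community model through
# Ollama's HTTP API, with MY OWN system prompt deciding the limits.
# Assumptions: Ollama listens on localhost:11434 and a model called
# "my-abliterated-model" has been pulled -- both names are placeholders.
import requests

MY_SYSTEM_PROMPT = (
    "You are my assistant. Answer openly about any topic that is legal "
    "to discuss in public. Refuse only the things I list here: "
    "<put whatever YOU want protection from here>."
)

def chat(user_message: str) -> str:
    # Send one non-streaming chat request with my system prompt attached.
    response = requests.post(
        "http://localhost:11434/api/chat",
        json={
            "model": "my-abliterated-model",  # placeholder model name
            "stream": False,                  # return a single complete reply
            "messages": [
                {"role": "system", "content": MY_SYSTEM_PROMPT},
                {"role": "user", "content": user_message},
            ],
        },
        timeout=120,
    )
    response.raise_for_status()
    return response.json()["message"]["content"]

if __name__ == "__main__":
    print(chat("Hello - what are you allowed to talk about?"))

The point is simply that the system prompt is yours: you decide the red lines, not a provider's policy team.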
And I heroically avoided the W word completely.