Using another biased dataset to correct a biased model brings serious alignment and safety problems.
#110
opened by giant-S
It's nothing more than using another biased dataset to correct a biased model, yet it still claims to be unbiased. Political issues cannot be better addressed through LLMs; this approach may lead to severe model hallucinations. If censorship is removed, such practices will undoubtedly bring more serious alignment and safety problems.
giant-S changed the discussion title from "Using another biased dataset to correct a biased model" to "Using another biased dataset to correct a biased model, bring serious alignment and safety problems."
down with alignment and safety in general