Emin Temiz PRO

etemiz

AI & ML interests

Alignment

Recent Activity

posted an update 4 days ago
--- AHA Leaderboard ---

We all want AI to be properly aligned so that it benefits humans with every answer it generates. While there is tremendous research in this area and many people working on it, I am choosing another route: curation of people, and then curation of the datasets used in LLM training. Curating datasets from people who try to uplift humanity should result in LLMs that try to help humans.

This work has revolved around two tasks:

1. Making LLMs that benefit humans
2. Measuring misinformation in other LLMs

The idea behind the second task is this: once we build and gather better LLMs and set them as "ground truth", we can measure how far other LLMs are distancing themselves from those ground truths. For that I am working on something I will call the "AHA Leaderboard" (AHA stands for AI -- human alignment).

Link to the spreadsheet: https://sheet.zohopublic.com/sheet/published/mz41j09cc640a29ba47729fed784a263c1d08

The columns are ground truths; the rows are the mainstream LLMs. If a mainstream LLM produces answers similar to those of the ground-truth LLMs, it gets a higher score. LLMs that rank higher on the leaderboard should be considered better aligned with humans. It is a simple idea: analyze LLMs across different domains by asking hundreds of questions and checking whether their answers match those of models that try to mimic humans who care about other humans.

Will it be effective? What do you think? We want mainstream LLMs to copy the answers of ground-truth LLMs in certain domains. This may refocus AI toward being more beneficial.

There are 5 content providers and 6 curators in the project as of now. Join us and be one of the pioneers who fix AI! You can be a curator, content provider, general researcher, or something else.
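The scoring idea above can be sketched in a few lines. This is only an illustrative sketch, not the project's actual methodology: the function names, the toy questions, and the lexical (Jaccard) similarity measure are all my assumptions here; a real leaderboard would likely use human curators or an LLM judge to compare answers.

```python
def jaccard_similarity(a: str, b: str) -> float:
    """Crude lexical similarity between two answers, in [0.0, 1.0]."""
    ta, tb = set(a.lower().split()), set(b.lower().split())
    if not ta and not tb:
        return 1.0
    return len(ta & tb) / len(ta | tb)

def aha_score(candidate_answers: dict, ground_truth_answers: dict) -> float:
    """Average agreement of a candidate LLM with the ground-truth LLM
    over the questions both answered. Higher = closer to ground truth."""
    shared = set(candidate_answers) & set(ground_truth_answers)
    if not shared:
        return 0.0
    total = sum(
        jaccard_similarity(candidate_answers[q], ground_truth_answers[q])
        for q in shared
    )
    return total / len(shared)

# Toy usage with two hypothetical questions (answers are made up):
gt = {"q1": "sleep improves memory and health",
      "q2": "regular exercise benefits the heart"}
llm = {"q1": "sleep improves memory and mood",
       "q2": "exercise benefits the heart"}
print(round(aha_score(llm, gt), 2))  # prints 0.73
```

Repeating this per domain, with one ground-truth model per column, would fill in one row of the leaderboard for each mainstream LLM.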
liked a model 5 days ago
etemiz/Hoopoe-8B-Llama-3.1
