Junlin Zhou

jlzhou

AI & ML interests

None yet

Organizations

TableGPT

jlzhou's activity

reacted to mlabonne's post with 👍 about 2 months ago
upvoted an article about 2 months ago
reacted to etemiz's post with 👍 about 2 months ago
Grok 3 Human Alignment Score: 42

Compared to Grok 2, it is better in health, nutrition, and fasting; about the same in liberating tech like Bitcoin and Nostr; and worse in the misinformation and faith domains. The rest is about the same. So we have a model that is less faithful but knows how to live a healthier life.

https://sheet.zoho.com/sheet/open/mz41j09cc640a29ba47729fed784a263c1d08?sheetid=0&range=A1

https://huggingface.co/blog/etemiz/benchmarking-ai-human-alignment-of-grok-3
New activity in tablegpt/ToyotaMotorsStockData 3 months ago
New activity in tablegpt/CoffeeSales 3 months ago
upvoted an article 3 months ago

You could have designed state of the art positional encoding

By FL33TW00D-HF
reacted to KaiChen1998's post with 👍 3 months ago
📢 Our EMOVA paper has been accepted by CVPR 2025, and we are glad to release all resources, including code (training & inference), datasets (training & evaluation), and checkpoints (EMOVA-3B/7B/72B)!

🤗 EMOVA is a novel end-to-end omni-modal LLM that can see, hear, and speak. Given omni-modal (i.e., textual, visual, and speech) inputs, EMOVA can generate both textual and speech responses with vivid emotional control by using the speech decoder and a style controller.

✨ EMOVA Highlights
✅ State-of-the-art omni-modality: EMOVA achieves results comparable to the state of the art on both vision-language and speech benchmarks simultaneously.
✅ Device adaptation: our codebase supports training/inference on both NVIDIA GPUs (e.g., A800 & H20) and Ascend NPUs (e.g., 910B3)!
✅ Modular design: we integrate multiple implementations of the vision encoder, vision projector, and language model, even including the most recent DeepSeekMoE-tiny!

🔥 You are all welcome to try it and give us a star!
- Project page: https://emova-ollm.github.io/
- GitHub: https://github.com/emova-ollm/EMOVA
- Demo: Emova-ollm/EMOVA-demo