Jason Corkill

jasoncorkill

AI & ML interests

Human data annotation

Organizations

Blog-explorers · Hugging Face Discord Community · mlo-data-collab · Rapidata

jasoncorkill's activity

reacted to their post with 🔥 about 13 hours ago
posted an update about 13 hours ago
🔥 It's out! We published the dataset for our evaluation of @OpenAI's new 4o image generation model.

Rapidata/OpenAI-4o_t2i_human_preference

Yesterday we published the first large evaluation of the new model, showing that it absolutely leaves the competition in the dust. We have now made the results and data available here! Please check it out and ❤️!
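
For anyone who wants to explore the data programmatically, here is a minimal sketch using the Hugging Face `datasets` library. The dataset ID comes from the post; the split name is an assumption, so check the dataset card for the actual schema.

```python
# Minimal sketch: pull the human-preference dataset from the Hub.
# Assumes a "train" split exists; see the dataset card for the real schema.
from datasets import load_dataset

ds = load_dataset("Rapidata/OpenAI-4o_t2i_human_preference", split="train")
print(ds)      # row count and column names
print(ds[0])   # one pairwise-preference record
```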
replied to their post 1 day ago

The dataset will soon be published. In the meantime, check out the detailed benchmark here: 👉 rapidata.ai/leaderboard/image-models

reacted to their post with 👍🚀🔥 1 day ago
posted an update 1 day ago
🚀 First Benchmark of @OpenAI's 4o Image Generation Model!

We've just completed the first-ever (to our knowledge) benchmarking of the new OpenAI 4o image generation model, and the results are impressive!

In our tests, OpenAI 4o image generation absolutely crushed leading competitors, including @black-forest-labs, @google, @xai-org, Ideogram, Recraft, and @deepseek-ai, in prompt alignment and coherence! It holds a lead of more than 20% over the nearest competitor in Bradley-Terry score, the largest margin we have seen since the benchmark began! (A minimal sketch of how Bradley-Terry scores are fit follows this post.)

The benchmarks are based on 200k human responses collected through our API. However, the most challenging part wasn't the benchmarking itself, but generating and downloading the images:

- 5 hours to generate 1000 images (no API available yet)
- Just 10 minutes to set up and launch the benchmark
- Over 200,000 responses rapidly collected

While generating the images, we hit some hurdles that forced us to leave out certain parts of our prompt set. In particular, we observed that the OpenAI 4o model proactively refused to generate certain images:

🚫 Styles of living artists: completely blocked
🚫 Copyrighted characters (e.g., Darth Vader, Pokémon): initially generated but subsequently blocked

Overall, OpenAI 4o stands out significantly in alignment and coherence, especially excelling on unusual prompts that have historically caused issues, such as 'A chair on a cat.' See the images for more examples!
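
For readers unfamiliar with the metric: Bradley-Terry scores are strength parameters fit from pairwise win counts. Below is a minimal, self-contained Python sketch of the standard MM fitting procedure; the toy win matrix is invented for illustration and is not the benchmark data, and the leaderboard's exact fitting details may differ.

```python
import numpy as np

# Minimal Bradley-Terry fit via the classic MM (minorize-maximize) update.
# wins[i, j] = number of times model i beat model j in pairwise votes.
# Toy numbers for illustration only.
models = ["model_a", "model_b", "model_c"]
wins = np.array([
    [0, 60, 70],
    [40, 0, 55],
    [30, 45, 0],
], dtype=float)

p = np.ones(len(models))    # initial strength parameters
n = wins + wins.T           # total games between each pair
for _ in range(1000):
    # MM update: p_i <- W_i / sum_j n_ij / (p_i + p_j)
    denom = (n / (p[:, None] + p[None, :])).sum(axis=1)
    p = wins.sum(axis=1) / denom
    p /= p.sum()            # normalize for identifiability

for name, score in sorted(zip(models, p), key=lambda t: -t[1]):
    print(f"{name}: {score:.3f}")
```

The MM update converges whenever the comparison graph is connected; the per-iteration normalization merely fixes the overall scale, which the Bradley-Terry model leaves free.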
reacted to their post with 🧠👍🔥😎👀 11 days ago
posted an update 11 days ago
At Rapidata, we compared DeepL with LLMs like DeepSeek-R1, Llama, and Mixtral on translation quality, using feedback from over 51,000 native speakers. Despite its cost, DeepL's performance makes it a valuable investment, especially in critical applications where translation quality is paramount. Now we can say that Europe is about more than imposing regulations.

Our dataset, based on these comparisons, is now available on Hugging Face. This might be useful for anyone working on AI translation or language model evaluation.

Rapidata/Translation-deepseek-llama-mixtral-v-deepl
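
A quick way to sanity-check a comparison dataset like this is to aggregate the raw pairwise votes into per-model win rates. The sketch below is illustrative only: the column names `model_a`, `model_b`, and `winner` are hypothetical placeholders, so adapt them to whatever schema the dataset card actually documents.

```python
from collections import Counter
from datasets import load_dataset

# Hypothetical schema: each row names two systems and the annotators'
# preferred one. Check the dataset card for the real column names.
ds = load_dataset(
    "Rapidata/Translation-deepseek-llama-mixtral-v-deepl", split="train"
)

wins, games = Counter(), Counter()
for row in ds:
    a, b, winner = row["model_a"], row["model_b"], row["winner"]
    games[a] += 1
    games[b] += 1
    wins[winner] += 1

for model in sorted(games, key=lambda m: wins[m] / games[m], reverse=True):
    print(f"{model}: {wins[model] / games[model]:.1%} win rate")
```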
reacted to their post with 👀 17 days ago
posted an update 17 days ago
Benchmarking Google's Veo2: How Does It Compare?

The results did not meet expectations. Veo2 struggled with style consistency and temporal coherence, falling behind competitors like Runway, Pika, Tencent, and even Alibaba. While the model shows promise, its alignment and quality are not yet on par with the leaders.

Google recently launched Veo2, its latest text-to-video model, through select partners like fal.ai. As part of our ongoing evaluation of state-of-the-art generative video models, we rigorously benchmarked Veo2 against industry leaders.

We generated a large set of Veo2 videos, spending hundreds of dollars in the process, and systematically evaluated them using our Python-based API for human and automated labeling (see the sketch after this post).

Check out the ranking here: https://www.rapidata.ai/leaderboard/video-models

Rapidata/text-2-video-human-preferences-veo2
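
To give a flavor of what launching such a labeling run looks like, here is a purely illustrative sketch. `RapidataClient`, `create_compare_order`, and every parameter shown are hypothetical placeholder names, not the documented SDK surface; consult the actual Python API docs at rapidata.ai before writing real code.

```python
# Purely illustrative pseudocode for a pairwise video-comparison order.
# All names below (RapidataClient, create_compare_order, run, ...) are
# hypothetical placeholders, NOT the real SDK -- see the official docs.
from rapidata import RapidataClient  # hypothetical import path

client = RapidataClient()
order = client.create_compare_order(
    name="veo2-vs-competitors",
    criterion="Which video better matches the prompt?",
    pairs=[("veo2/clip_001.mp4", "runway/clip_001.mp4")],
    responses_per_pair=25,
)
order.run()                 # dispatch to human annotators
print(order.get_results())  # aggregated pairwise preferences
```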
posted an update 30 days ago
Has OpenGVLab Lumina Outperformed OpenAI's Model?

We've just released the results from a large-scale human evaluation (400k annotations) of OpenGVLab's newest text-to-image model, Lumina. Surprisingly, Lumina outperforms OpenAI's DALL-E 3 in terms of alignment, although it ranks #6 in our overall human preference benchmark.

To support further development in text-to-image models, we're making our entire human-annotated dataset publicly available. If you're working on model improvements and need high-quality data, feel free to explore.

We welcome your feedback and look forward to any insights you might share!

Rapidata/OpenGVLab_Lumina_t2i_human_preference
reacted to their post with 🔥 about 1 month ago
posted an update about 1 month ago
The Sora Video Generation Aligned Words dataset contains a collection of word segments for text-to-video or other multimodal research. It is intended to help researchers and engineers explore fine-grained prompts, including those where certain words are not aligned with the video.

We hope this dataset will support your work in prompt understanding and advance progress in multimodal projects.

If you have specific questions, feel free to reach out.

Rapidata/sora-video-generation-aligned-words
reacted to their post with 👀 about 1 month ago
posted an update about 1 month ago
Integrating human feedback is vital for evolving AI models. Boost quality, scalability, and cost-effectiveness with our crowdsourcing tool!

Or run A/B tests and gather thousands of responses in minutes. Upload two images, ask a question, and watch the insights roll in!

Check it out here and let us know your feedback: https://app.rapidata.ai/compare