Martin Viewegger

Viewegger

AI & ML interests

None yet

Recent Activity

liked a model 13 days ago
hexgrad/Kokoro-82M

Organizations

None yet

Viewegger's activity

reacted to alibabasglab's post with 👍 5 days ago
🎉 ClearerVoice-Studio New Feature: Speech Super-Resolution with MossFormer2! 🚀
We're excited to announce that ClearerVoice-Studio now supports speech super-resolution, powered by our latest MossFormer2-based model!
What's New?

🔊 Convert Low-Resolution to High-Resolution Audio:
Transform low-resolution audio (effective sampling rate ≥ 16 kHz) into crystal-clear, high-resolution audio at 48 kHz.

🤖 Cutting-Edge Technology:
Leverages the MossFormer2 model plus HiFi-GAN, optimised for generating high-quality audio with enhanced perceptual clarity.

🎧 Enhanced Listening Experience:
Perfect for speech enhancement, content restoration, and high-fidelity audio applications.

🌟 Try It Out!
Upgrade to the latest version of ClearerVoice-Studio (https://github.com/modelscope/ClearerVoice-Studio) to experience this powerful feature. Check out the updated documentation and examples in our repository.

Let us know your thoughts, feedback, or feature requests in the Issues section.
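For reference, here is a minimal usage sketch modeled on the ClearVoice interface the repository README uses for its other tasks; the task string and the MossFormer2_SR_48K model name are assumptions following the README's naming pattern, so verify them against the updated docs.

```python
# Hypothetical sketch based on the ClearVoice interface from the
# ClearerVoice-Studio README; the task and model names are assumptions.
from clearvoice import ClearVoice

# Load the super-resolution pipeline (MossFormer2 backbone + HiFi-GAN vocoder).
cv = ClearVoice(task='speech_super_resolution',
                model_names=['MossFormer2_SR_48K'])

# Upsample a >=16 kHz recording to 48 kHz; online_write=False returns the
# processed audio instead of writing it straight to disk.
output_wav = cv(input_path='samples/input_16k.wav', online_write=False)
cv.write(output_wav, output_path='samples/output_48k.wav')
```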
New activity in jpgallegoar/F5-Spanish about 2 months ago
New activity in PetrosStav/F5-TTS-Greek about 2 months ago

Dataset size and output quality

#2 opened about 2 months ago by Viewegger
New activity in marduk-ra/F5-TTS-German about 2 months ago

Training process details

4 replies
#2 opened about 2 months ago by Nils11
reacted to m-ric's post with 🔥 2 months ago
๐—”๐—ฟ๐—ฒ ๐˜€๐—ฐ๐—ฎ๐—น๐—ถ๐—ป๐—ด ๐—น๐—ฎ๐˜„๐˜€ ๐—ผ๐˜ƒ๐—ฒ๐—ฟ? ๐—” ๐—ฟ๐—ฒ๐—ฝ๐—ผ๐—ฟ๐˜ ๐—ณ๐—ฟ๐—ผ๐—บ ๐˜๐—ต๐—ฒ ๐—œ๐—ป๐—ณ๐—ผ๐—ฟ๐—บ๐—ฎ๐˜๐—ถ๐—ผ๐—ป ๐—ฎ๐—ป๐—ป๐—ผ๐˜‚๐—ป๐—ฐ๐—ฒ๐—ฑ ๐˜๐—ต๐—ฎ๐˜ ๐—ข๐—ฝ๐—ฒ๐—ป๐—”๐—œ ๐—ถ๐˜€ ๐˜€๐—ฒ๐—ฒ๐—ถ๐—ป๐—ด ๐—ฑ๐—ถ๐—บ๐—ถ๐—ป๐—ถ๐˜€๐—ต๐—ถ๐—ป๐—ด ๐—ฟ๐—ฒ๐˜๐˜‚๐—ฟ๐—ป๐˜€ ๐—ณ๐—ฟ๐—ผ๐—บ ๐˜€๐—ฐ๐—ฎ๐—น๐—ถ๐—ป๐—ด ๐˜‚๐—ฝ ๐˜๐—ต๐—ฒ ๐—ป๐—ฒ๐˜…๐˜ ๐—š๐—ฃ๐—ง ๐—บ๐—ผ๐—ฑ๐—ฒ๐—น๐˜€.

📊 What are scaling laws? These are empirical laws that say: "Every time you increase the compute spent in training 10-fold, your LLM's performance will go up by a predictable tick." Of course, they apply only if you train your model with the right methods.
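To make the "predictable tick" concrete: compute scaling laws are typically written as a power law, L(C) = L_inf + a·C^(-b), where C is training compute. The toy sketch below uses made-up constants (not fitted values from any paper) just to show how each 10x step in compute shrinks the reducible part of the loss by a fixed factor.

```python
# Toy compute scaling law L(C) = L_inf + a * C^(-b); the constants are
# invented for illustration, not fitted values from any publication.
def loss(compute, L_inf=1.7, a=10.0, b=0.05):
    return L_inf + a * compute ** (-b)

for c in [1e21, 1e22, 1e23, 1e24]:  # training compute in FLOPs
    print(f"{c:.0e} FLOPs -> predicted loss {loss(c):.3f}")

# Each 10x in compute multiplies the reducible term a * C^(-b) by the same
# constant 10^(-b) -- that fixed ratio is the "predictable tick".
```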

The image below illustrates this: the plots come from a Google paper, "Scaling Autoregressive Models for Content-Rich Text-to-Image Generation", and they show how quality and instruction-following improve when you scale the model up (which is equivalent to scaling up the compute spent in training).

โžก๏ธ These scaling laws have immense impact: they triggered the largest gold rush ever, with companies pouring billions into scaling up theiur training. Microsoft and OpenAI spent 100B into their "Startgate" mega training cluster, due to start running in 2028.

🤔 So, what about these reports of scaling laws slowing down?

If they are true, they would mean a gigantic paradigm shift, as the hundreds of billions poured by AI companies into scaling could be a dead end. ⛔️

But I doubt it: up to the most recent publications, scaling laws showed no signs of weakness, and the researchers at the higher end of the scale-up seem to imply that the scaling will continue.

Wait and see!
  • 1 reply
reacted to yongchanghao's post with 🔥 3 months ago
We just released a paper, NeuZip, which losslessly compresses model weights in VRAM so that larger models can run. This should be particularly useful when VRAM is insufficient during training or inference. Specifically, we look inside each floating-point number and find that the exponents are highly compressible (as shown in the figure below).

Read more about the work at NeuZip: Memory-Efficient Training and Inference with Dynamic Compression of Neural Networks (arXiv:2410.20650)
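As a quick way to see why the exponents compress so well (an illustration of the observation only, not NeuZip's actual codec), you can measure the entropy of the IEEE-754 exponent field over a Gaussian-initialized weight tensor; it typically comes out far below the 8 bits stored.

```python
# Measure the entropy of the 8-bit exponent field of float32 weights.
# Illustrates the compressibility NeuZip exploits; not the paper's code.
import numpy as np

weights = np.random.normal(0.0, 0.02, size=1_000_000).astype(np.float32)

bits = weights.view(np.uint32)
exponents = (bits >> 23) & 0xFF  # IEEE-754 single precision: 8 exponent bits

counts = np.bincount(exponents, minlength=256)
p = counts[counts > 0] / exponents.size
entropy = -(p * np.log2(p)).sum()
print(f"exponent entropy: {entropy:.2f} bits out of 8")
```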
New activity in gokaygokay/Flux-Seamless-Texture-LoRA 3 months ago

Size of the dataset?

8 replies
#1 opened 3 months ago by Viewegger