Sakuga-42M Dataset: Scaling Up Cartoon Research

Published 13 May 2024 in cs.CV (arXiv:2405.07425v1)

Abstract: Hand-drawn cartoon animation employs sketches and flat-color segments to create the illusion of motion. While recent advancements like CLIP, SVD, and Sora show impressive results in understanding and generating natural video by scaling large models with extensive datasets, they are not as effective for cartoons. Through our empirical experiments, we argue that this ineffectiveness stems from a notable bias in hand-drawn cartoons that diverges from the distribution of natural videos. Can we harness the success of the scaling paradigm to benefit cartoon research? Unfortunately, until now, there has not been a sizable cartoon dataset available for exploration. In this research, we propose the Sakuga-42M Dataset, the first large-scale cartoon animation dataset. Sakuga-42M comprises 42 million keyframes covering various artistic styles, regions, and years, with comprehensive semantic annotations including video-text description pairs, anime tags, content taxonomies, etc. We pioneer the benefits of such a large-scale cartoon dataset on comprehension and generation tasks by finetuning contemporary foundation models like Video CLIP, Video Mamba, and SVD, achieving outstanding performance on cartoon-related tasks. Our motivation is to introduce large-scaling to cartoon research and foster generalization and robustness in future cartoon applications. Dataset, Code, and Pretrained Models will be publicly available.

References (55)
  1. Learning transferable visual models from natural language supervision. In International conference on machine learning, pages 8748–8763. PMLR, 2021.
  2. Stable video diffusion: Scaling latent video diffusion models to large datasets. arXiv preprint arXiv:2311.15127, 2023.
  3. Sora. https://openai.com/sora. Accessed: 2024-5-12.
  4. Deep animation video interpolation in the wild. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pages 6587–6595, 2021.
  5. Deep geometrized cartoon line inbetweening. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 7291–7300, 2023.
  6. Internvid: A large-scale video-text dataset for multimodal understanding and generation. arXiv preprint arXiv:2307.06942, 2023.
  7. Videomamba: State space model for efficient video understanding. arXiv preprint arXiv:2403.06977, 2024.
  8. Pika. https://pika.art/. Accessed: 2024-5-3.
  9. Gen-2. https://research.runwayml.com/gen2. Accessed: 2024-5-3.
  10. Learning inclusion matching for animation paint bucket colorization. CVPR, 2024.
  11. Joint stroke tracing and correspondence for 2d animation. ACM Trans. Graph., 43(3), apr 2024.
  12. The animation transformer: Visual correspondence via segment matching. In Proceedings of the IEEE/CVF international conference on computer vision, pages 11323–11332, 2021.
  13. Sprite-from-sprite: Cartoon animation decomposition with self-supervised sprite estimation. ACM Trans. Graph., 41(6), nov 2022.
  14. Re: Draw–context aware translation as a controllable method for artistic production. arXiv preprint arXiv:2401.03499, 2024.
  15. Toonsynth: example-based synthesis of hand-colored cartoon animations. ACM Transactions on Graphics (TOG), 37(4):1–11, 2018.
  16. Globally optimal toon tracking. ACM Transactions on Graphics (TOG), 35(4):1–10, 2016.
  17. Stereoscopizing cel animations. ACM Transactions on Graphics (TOG), 32(6):1–10, 2013.
  18. Dilight: Digital light table–inbetweening for 2d animations using guidelines. Computers & Graphics, 65:31–44, 2017.
  19. Exploring inbetween charts with trajectory-guided sliders for cutout animation. Multimedia Tools and Applications, pages 1–14, 2023.
  20. Animatediff: Animate your personalized text-to-image diffusion models without specific tuning. arXiv preprint arXiv:2307.04725, 2023.
  21. Animate anyone: Consistent and controllable image-to-video synthesis for character animation. arXiv preprint arXiv:2311.17117, 2023.
  22. Panda-70m: Captioning 70m videos with multiple cross-modality teachers. arXiv preprint arXiv:2402.19479, 2024.
  23. Frozen in time: A joint video and image encoder for end-to-end retrieval. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 1728–1738, 2021.
  24. Laion-5b: An open large-scale dataset for training next generation image-text models. Advances in Neural Information Processing Systems, 35:25278–25294, 2022.
  25. Pyscenedetect. https://github.com/Breakthrough/PySceneDetect. Accessed: 2024-5-12.
  26. Improved baselines with visual instruction tuning. arXiv preprint arXiv:2310.03744, 2023.
  27. Share captioner. https://huggingface.co/Lin-Chen/ShareCaptioner. Accessed: 2024-5-12.
  28. Danbooru2021. https://gwern.net/danbooru2021. Accessed: 2024-5-12.
  29. Waifu dataset. https://github.com/thewaifuproject/waifu-dataset. Accessed: 2024-5-12.
  30. wd14-swin-v2. https://huggingface.co/SmilingWolf/wd-v1-4-swinv2-tagger-v2. Accessed: 2024-5-12.
  31. Blip-2: Bootstrapping language-image pre-training with frozen image encoders and large language models. In International conference on machine learning, pages 19730–19742. PMLR, 2023.
  32. chatgpt. https://chatgpt.com/. Accessed: 2024-5-12.
  33. Dall-e3. https://openai.com/dall-e-3. Accessed: 2024-5-12.
  34. cafe-aesthetic-model. https://huggingface.co/cafeai/cafe_aesthetic. Accessed: 2024-5-12.
  35. manga-image-translator. https://github.com/zyddnys/manga-image-translator. Accessed: 2024-5-12.
  36. Learning audio-video modalities from image captions. In European Conference on Computer Vision, pages 407–426. Springer, 2022.
  37. Conceptual 12m: Pushing web-scale image-text pre-training to recognize long-tail visual concepts. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pages 3558–3568, 2021.
  38. Microsoft coco: Common objects in context. In Computer Vision–ECCV 2014: 13th European Conference, Zurich, Switzerland, September 6-12, 2014, Proceedings, Part V 13, pages 740–755. Springer, 2014.
  39. Visual genome: Connecting language and vision using crowdsourced dense image annotations. International journal of computer vision, 123:32–73, 2017.
  40. Im2text: Describing images using 1 million captioned photographs. Advances in neural information processing systems, 24, 2011.
  41. Align your latents: High-resolution video synthesis with latent diffusion models. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 22563–22575, 2023.
  42. High-resolution image synthesis with latent diffusion models. In 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pages 10674–10685, 2022.
  43. Lora: Low-rank adaptation of large language models. arXiv preprint arXiv:2106.09685, 2021.
  44. T2i-adapter: Learning adapters to dig out more controllable ability for text-to-image diffusion models. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 38, pages 4296–4304, 2024.
  45. Gpt-4v. https://openai.com/research/gpt-4v-system-card. Accessed: 2024-5-12.
  46. Video-llama: An instruction-tuned audio-visual language model for video understanding. arXiv preprint arXiv:2306.02858, 2023.
  47. Efficient in-context learning in vision-language models for egocentric videos. arXiv preprint arXiv:2311.17041, 2023.
  48. Manga line extraction. https://github.com/ljsabc/MangaLineExtraction_PyTorch. Accessed: 2024-5-12.
  49. Anime2sketch. https://github.com/Mukosame/Anime2Sketch. Accessed: 2024-5-12.
  50. Automatic temporally coherent video colorization. In 2019 16th conference on computer and robot vision (CRV), pages 189–194. IEEE, 2019.
  51. Optical flow based line drawing frame interpolation using distance transform to support inbetweenings. In 2019 IEEE International Conference on Image Processing (ICIP), pages 4200–4204. IEEE, 2019.
  52. Deep sketch-guided cartoon video inbetweening. IEEE Transactions on Visualization and Computer Graphics, 28(8):2938–2952, 2021.
  53. Ldmvfi: Video frame interpolation with latent diffusion models. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 38, pages 1472–1480, 2024.
  54. I2vgen-xl: High-quality image-to-video synthesis via cascaded diffusion models. arXiv preprint arXiv:2311.04145, 2023.
  55. Wenhao Wang and Yi Yang. Vidprom: A million-scale real prompt-gallery dataset for text-to-video diffusion models. arXiv preprint arXiv:2403.06098, 2024.

Summary

  • The paper presents a 42-million-keyframe cartoon dataset with extensive semantic annotations and diverse content, broadening the scope of cartoon research.
  • The dataset enables fine-tuning of foundation models, yielding significant improvements in video-text retrieval and cartoon generation.
  • The work demonstrates robust gains on cartoon-specific tasks, paving the way for better animation comprehension and creation.

Sakuga-42M Dataset: Scaling Up Cartoon Research

The "Sakuga-42M Dataset: Scaling Up Cartoon Research" paper introduces the Sakuga-42M dataset to enhance cartoon research using large-scale data, addressing several limitations in existing cartoon datasets. This essay explores the dataset's construction, the integration of modern foundation models, and the implications of its introduction for the future of cartoon animation research.

Introduction

Recent advances in video analysis and generation models such as CLIP and SVD have revolutionized natural video processing. However, these models often perform suboptimally on cartoons because of a significant distributional shift between natural and cartoon data. The absence of large-scale, annotated cartoon datasets has been a major impediment to progress in cartoon research. The Sakuga-42M dataset aims to fill this gap by providing 42 million keyframes enriched with semantic annotations. The dataset supports both comprehension and generation tasks by enabling the fine-tuning of state-of-the-art video understanding and generative models.

Dataset Preparation and Composition

The Sakuga-42M dataset represents a significant leap in cartoon animation datasets in terms of both scale and diversity. It comprises 42 million keyframes split from 1.4 million clips, sourced from over 150,000 publicly available cartoon videos. The dataset is annotated with rich semantic information such as video-text pairs, anime tags, and taxonomies.
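To give a sense of what these annotations contain, a single clip record might look roughly like the sketch below; the field names and values are illustrative assumptions, not the dataset's actual schema.

```python
# Hypothetical annotation record for one Sakuga-42M clip.
# All field names and values are illustrative assumptions; consult the
# released dataset for the real schema.
clip_record = {
    "clip_id": "sakuga_0001234",
    "keyframes": ["0001234_000.png", "0001234_001.png", "0001234_002.png"],
    "caption": "A hand-drawn character turns around and runs to the left.",
    "anime_tags": ["sketch", "running", "1boy"],
    "taxonomy": {"region": "Japan", "decade": "1990s", "style": "hand-drawn"},
}
```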

Data Collection and Annotation:

Cartoon videos are collected from various Internet sources, including YouTube and Twitter, while respecting privacy norms. PySceneDetect is used to segment videos into semantic units for analysis. Keyframes extracted from the videos are then annotated with models such as wd14-swin-v2 for anime tagging and BLIP-2 for raw descriptions, which are further refined by incorporating contextual knowledge from LLMs such as ChatGPT.
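A minimal sketch of this pipeline, assuming PySceneDetect's detect API for shot segmentation, is shown below; the keyframe-extraction, tagging, captioning, and LLM-refinement steps are stand-in placeholder functions, not the authors' released code.

```python
# Minimal sketch of the annotation pipeline described above, assuming
# PySceneDetect for shot segmentation. Keyframe extraction, tagging,
# captioning, and LLM refinement are placeholders, not the paper's code.
from scenedetect import detect, ContentDetector

def extract_keyframes(video_path, start, end):
    """Placeholder: pick representative keyframes from the shot [start, end)."""
    return []

def tag_keyframes(frames):
    """Placeholder: run an anime tagger such as wd14-swin-v2 on each frame."""
    return []

def caption_keyframes(frames):
    """Placeholder: produce a raw description with a captioner like BLIP-2."""
    return ""

def refine_caption(raw_caption, tags):
    """Placeholder: ask an LLM (e.g. ChatGPT) to fuse tags and raw caption
    into a fluent video-text description."""
    return raw_caption

def annotate(video_path):
    annotations = []
    # 1. Split the video into shots / semantic units.
    for start, end in detect(video_path, ContentDetector()):
        frames = extract_keyframes(video_path, start, end)
        tags = tag_keyframes(frames)                  # 2. anime tags
        raw = caption_keyframes(frames)               # 3. raw caption
        annotations.append({
            "shot": (str(start), str(end)),
            "tags": tags,
            "caption": refine_caption(raw, tags),     # 4. LLM refinement
        })
    return annotations
```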

Diversity and Quality:

Sakuga-42M spans a wide range of artistic styles and temporal coverage, including animations from Japan, the US, and Europe from the 1950s onward. It captures a broad spectrum of animation types, from traditional hand-drawn styles to contemporary digital animation, allowing comprehensive analysis and application (Figure 1).

Figure 1: Raw cartoons from Sakuga-42M.

Foundation Models and Experiments

The paper fine-tunes several leading AI models using Sakuga-42M to demonstrate its efficacy in improving cartoon-specific tasks. These include models for video-language comprehension and generative modeling.

Video-Language Understanding:

The study fine-tuned Video CLIP and Video Mamba on Sakuga-42M for video-text retrieval. The fine-tuned models achieved significant improvements on zero-shot retrieval, outperforming baselines trained on natural-video datasets by leveraging the dataset's breadth and quality (Figure 2).

Figure 2: Pretraining of the video-LLM. Foundation models such as CLIP or Mamba are trained on keyframe videos; a timesheet predictor learns to predict the timesheet class from their embeddings and to recover the temporal repetitions.
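As a rough illustration of the retrieval objective (not the authors' training code), a CLIP-style symmetric contrastive loss over paired video and text embeddings could be written as follows:

```python
import torch
import torch.nn.functional as F

def clip_style_loss(video_emb: torch.Tensor, text_emb: torch.Tensor,
                    temperature: float = 0.07) -> torch.Tensor:
    """Symmetric InfoNCE loss for a batch of paired video/text embeddings.

    video_emb, text_emb: (batch, dim) outputs of the video and text encoders.
    """
    video_emb = F.normalize(video_emb, dim=-1)
    text_emb = F.normalize(text_emb, dim=-1)
    logits = video_emb @ text_emb.t() / temperature          # (batch, batch)
    targets = torch.arange(logits.size(0), device=logits.device)
    # Matched pairs sit on the diagonal; retrieve text from video and vice versa.
    return 0.5 * (F.cross_entropy(logits, targets) +
                  F.cross_entropy(logits.t(), targets))
```

At test time, zero-shot retrieval ranks captions (or clips) by the same cosine similarity, so no task-specific head is required.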

Video Generation:

The SVD model was fine-tuned on Sakuga-42M to generate new frames, showing improved stability and animation dynamics in the generated sequences compared to prominent generative models such as Pika and Gen-2 (Figure 3).

Figure 3: Finetuning of the generative model. The SVD base model is finetuned to generate cartoon keyframes, which are then expanded back into a video conditioned on the predicted timesheet class.
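A high-level sketch of this two-stage procedure is given below; the model interfaces are assumptions made for illustration, not the released implementation.

```python
def generate_cartoon_clip(first_frame, svd_keyframe_model,
                          timesheet_predictor, expand_by_timesheet):
    """Illustrative two-stage generation: keyframes first, then timing."""
    # Stage 1: a finetuned SVD backbone generates a sparse keyframe sequence
    # conditioned on the input frame.
    keyframes = svd_keyframe_model(first_frame)

    # Stage 2: predict the timesheet class (how keyframes are held/repeated,
    # e.g. shooting "on twos" or "on threes") and expand the keyframes back
    # into a densely timed video.
    timesheet_class = timesheet_predictor(keyframes)
    return expand_by_timesheet(keyframes, timesheet_class)
```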

Implications and Future Directions

The Sakuga-42M dataset facilitates a paradigm shift in cartoon research by supporting scalable models capable of learning robust representations of cartoons. Its introduction allows for significant improvements in comprehension and creation tasks relevant to animation. Future research could explore enhancing resolution quality, as the majority of data in Sakuga-42M is 480p, which limits the dataset's applicability for high-definition tasks. Moreover, the integration of human feedback in annotation could address challenges in generating precise descriptions.

Conclusion

The Sakuga-42M dataset offers an invaluable resource for advancing cartoon animation research, bridging the domain gap with natural videos, and setting the groundwork for novel research directions in automatic animation generation and comprehension. The dataset supports a scalable approach, which can significantly influence both academic research and industry practices in animation.
