Google researchers recently introduced Synth2, a novel approach for training Visual-Language Models (VLMs) on synthetic image-text pairs. The technique harnesses LLMs and text-to-image generation to address the limitations of manual image labelling.
The method tackles the scarcity of human-labelled data: an LLM generates captions, and a text-to-image model synthesises the corresponding images, yielding paired training data without manual annotation.
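At a high level, the data-generation loop can be sketched as follows. This is a minimal illustration only, not the paper's implementation: the `generate_caption` and `generate_image` helpers are hypothetical stand-ins for an LLM and a text-to-image model, and here they return dummy outputs so the sketch runs on its own.

```python
# Minimal sketch of synthetic image-text pair generation.
# Both helpers below are hypothetical placeholders, not real model APIs.

def generate_caption(concept: str) -> str:
    # Stand-in for an LLM prompted to write a caption about a concept.
    return f"A photo of a {concept} on a plain background."

def generate_image(caption: str) -> list:
    # Stand-in for a text-to-image model; a real pipeline would return
    # pixels or image embeddings conditioned on the caption.
    seed = sum(ord(c) for c in caption)
    return [((seed * (i + 1)) % 97) / 97.0 for i in range(16)]  # dummy vector

def build_synthetic_pairs(concepts, per_concept=2):
    """Create (image, caption) pairs entirely from generative models."""
    pairs = []
    for concept in concepts:
        for _ in range(per_concept):
            caption = generate_caption(concept)
            image = generate_image(caption)
            pairs.append((image, caption))
    return pairs

if __name__ == "__main__":
    for image, caption in build_synthetic_pairs(["dog", "bicycle"]):
        print(len(image), caption)
```

The sketch covers only the data-generation step; in the paper, the resulting synthetic pairs are combined with human-annotated data before the VLM is trained.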
Google researchers report several significant findings in terms of improved VLM performance, data efficiency, customisation, and scalability.
For instance, a VLM trained on a mix of their synthetic and human-annotated datasets showed marked improvement on image captioning tasks compared to a baseline trained exclusively on human-annotated data.
“This underscores the potential of our method to augment VLM capabilities efficiently. This is highly advantageous in scenarios where data acquisition and annotation are resource-intensive,” said the researchers.
In addition, the approach yielded comparable performance while using only a fraction of the human-labelled data, demonstrating greater data efficiency.
The method is also flexible, allowing image datasets to be customised for specific domains, and the synthetic data generation scales to support large-scale VLM development.
You can find the research paper here.