Meta has announced the release of four new AI models and additional research artifacts from Meta FAIR, as part of its commitment to fostering an open ecosystem. The releases are intended to inspire innovation in the community and advance AI responsibly.
The first of the new AI models is Meta Chameleon, a family of 7B and 34B language models that accept mixed-modal (text and image) input and produce text-only output.
Additionally, Meta Multi-Token Prediction is a pre-trained language model for code completion built on multi-token prediction: rather than predicting one token at a time, the model is trained to predict multiple future tokens simultaneously. This approach improves model capability and training efficiency while enabling faster inference.
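To make the idea concrete, the sketch below shows one common way multi-token prediction is implemented: a shared trunk feeding several output heads, each scored against targets shifted further into the future. It is a minimal, hypothetical illustration (toy transformer trunk, made-up sizes, no causal mask), not Meta's released model or training code.

```python
# Minimal multi-token prediction sketch. All names and sizes are illustrative.
import torch
import torch.nn as nn
import torch.nn.functional as F

class MultiTokenPredictor(nn.Module):
    def __init__(self, vocab_size=32000, d_model=512, n_future=4):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, d_model)
        # Shared trunk; a real language model would apply a causal mask,
        # omitted here to keep the sketch short.
        layer = nn.TransformerEncoderLayer(d_model, nhead=8, batch_first=True)
        self.trunk = nn.TransformerEncoder(layer, num_layers=2)
        # One output head per future offset: head k predicts the token k steps ahead.
        self.heads = nn.ModuleList(
            [nn.Linear(d_model, vocab_size) for _ in range(n_future)]
        )

    def forward(self, tokens):
        h = self.trunk(self.embed(tokens))        # (batch, seq, d_model)
        return [head(h) for head in self.heads]   # one logits tensor per head

def multi_token_loss(logits_per_head, tokens):
    # Sum cross-entropy over heads; head k is scored against targets shifted by k.
    loss = 0.0
    for k, logits in enumerate(logits_per_head, start=1):
        pred = logits[:, :-k, :].reshape(-1, logits.size(-1))
        target = tokens[:, k:].reshape(-1)
        loss = loss + F.cross_entropy(pred, target)
    return loss

# Toy usage: random tokens, one forward/backward pass.
model = MultiTokenPredictor()
tokens = torch.randint(0, 32000, (2, 16))
loss = multi_token_loss(model(tokens), tokens)
loss.backward()
```

At inference time the extra heads can also be used to draft several tokens per step, which is where the speed-up over strict one-token-at-a-time decoding comes from.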
Meta JASCO, another new release, is a generative text-to-music model that accepts various conditioning inputs, such as chords or beats, for greater controllability. The accompanying paper is available today, with a pretrained model to be released soon.
Meta AudioSeal is an audio watermarking model designed for localised detection of AI-generated speech, making it possible to pinpoint AI-generated segments within longer audio clips. It is available under a commercial license.
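As a rough illustration of the workflow, the sketch below embeds a watermark into a clip and then checks for it. It assumes the open-source `audioseal` Python package; the `load_generator`/`load_detector` entry points, checkpoint names, and tensor shapes are assumptions that should be verified against the actual repository.

```python
# Hypothetical AudioSeal usage sketch; names and signatures are assumptions.
import torch
from audioseal import AudioSeal

generator = AudioSeal.load_generator("audioseal_wm_16bits")   # illustrative checkpoint name
detector = AudioSeal.load_detector("audioseal_detector_16bits")

sr = 16000
wav = torch.randn(1, 1, sr)          # (batch, channels, samples): 1 s of dummy audio

# Embed an imperceptible watermark into the waveform.
watermark = generator.get_watermark(wav, sr)
watermarked = wav + watermark

# Check whether the clip carries the watermark. The detector also produces
# frame-level scores, which is what enables "localised" detection of
# AI-generated segments rather than a single clip-level verdict.
prob, message = detector.detect_watermark(watermarked, sr)
print(prob)
```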
Alongside these models, Meta is releasing additional Responsible AI (RAI) artifacts, which include research, data, and code aimed at measuring and improving the representation of geographical and cultural preferences and diversity in AI systems.
Meta emphasises that access to state-of-the-art AI should be available to everyone, not just a few Big Tech companies. The company is eager to see how the community will utilise these technologies.