
Transformers Can Now Work Pixel by Pixel, Says Meta AI’s New Study


New research from Meta AI and the University of Amsterdam has shown that transformers, a popular neural network architecture, can operate directly on the individual pixels of an image without relying on the locality inductive bias present in most modern computer vision models.

The study, exploring “Transformers on Individual Pixels,” challenges the long-held belief that locality – the notion that neighboring pixels are more related than distant ones – is a fundamental requirement for vision tasks.

Traditionally, computer vision architectures like Convolutional Neural Networks (ConvNets) and Vision Transformers (ViTs) have incorporated locality bias through techniques such as convolutional kernels, pooling operations, and patchification, all of which assume that neighboring pixels are more closely related than distant ones.
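
For illustration, here is a minimal sketch in PyTorch (not drawn from the paper or any specific model) of ViT-style patch embedding, showing how grouping each 16×16 neighborhood of pixels into a single token bakes the locality assumption in before attention is ever applied.

```python
# Minimal sketch: ViT-style patchification fuses neighboring pixels into one token.
import torch
import torch.nn as nn

class PatchEmbed(nn.Module):
    def __init__(self, img_size=224, patch_size=16, in_chans=3, dim=768):
        super().__init__()
        self.num_tokens = (img_size // patch_size) ** 2
        # A strided convolution maps each 16x16 neighborhood to a single token,
        # so nearby pixels are merged before the transformer ever sees them.
        self.proj = nn.Conv2d(in_chans, dim, kernel_size=patch_size, stride=patch_size)

    def forward(self, x):                      # x: (B, 3, 224, 224)
        x = self.proj(x)                       # (B, dim, 14, 14)
        return x.flatten(2).transpose(1, 2)    # (B, 196, dim) -- 196 patch tokens

tokens = PatchEmbed()(torch.randn(1, 3, 224, 224))
print(tokens.shape)  # torch.Size([1, 196, 768])
```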

The researchers instead introduced Pixel Transformers (PiTs), which treat each pixel as an individual token, removing any assumption about the 2D grid structure of images. Surprisingly, PiTs achieved strong results across a range of vision tasks.
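
The core idea can be sketched in a few lines of PyTorch. This is an illustration under simple assumptions (a linear projection of each pixel's channel values, position embeddings omitted), not the authors' implementation: every pixel becomes its own token, and a standard transformer encoder attends over the full set with no patching and no built-in notion of the 2D grid.

```python
# Minimal sketch of pixels-as-tokens, not the paper's exact implementation.
import torch
import torch.nn as nn

class PixelTokenizer(nn.Module):
    def __init__(self, in_chans=3, dim=256):
        super().__init__()
        # A pointwise projection: each pixel's channel values become one token.
        self.proj = nn.Linear(in_chans, dim)

    def forward(self, x):                                    # x: (B, 3, H, W)
        B, C, H, W = x.shape
        pixels = x.permute(0, 2, 3, 1).reshape(B, H * W, C)  # one row per pixel
        return self.proj(pixels)                             # (B, H*W, dim)

# A 28x28 toy image yields 784 pixel tokens; position embeddings are omitted here.
tokens = PixelTokenizer()(torch.randn(1, 3, 28, 28))
layer = nn.TransformerEncoderLayer(d_model=256, nhead=8, batch_first=True)
encoder = nn.TransformerEncoder(layer, num_layers=2)
out = encoder(tokens)            # self-attention over all 784 individual pixels
print(out.shape)                 # torch.Size([1, 784, 256])
```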

For image generation, PiTs following the architecture of Diffusion Transformers (DiTs) and operating on latent tokens from a VQGAN achieved better scores on quality metrics such as Fréchet Inception Distance (FID) and Inception Score (IS) than their locality-biased counterparts.

PiTs are computationally expensive, since treating every pixel as a token produces far longer sequences, but they challenge the assumption that locality bias is necessary in vision models. Advances in handling long sequence lengths may make PiTs more practical.
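
A back-of-the-envelope calculation (illustrative numbers for a 224×224 image, not figures from the paper) shows where the cost comes from: self-attention scales with the square of the sequence length, and pixel tokens make sequences roughly 256 times longer than standard 16×16 patch tokens.

```python
# Rough cost comparison for a 224x224 image (illustrative, not from the paper).
def attention_pairs(img_size, token_size):
    seq_len = (img_size // token_size) ** 2   # number of tokens
    return seq_len, seq_len ** 2              # pairwise attention interactions

for name, token_size in [("16x16 patches", 16), ("individual pixels", 1)]:
    seq_len, pairs = attention_pairs(224, token_size)
    print(f"{name}: {seq_len} tokens -> {pairs:,} attention pairs")

# 16x16 patches: 196 tokens -> 38,416 attention pairs
# individual pixels: 50176 tokens -> 2,517,630,976 attention pairs
```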

The study highlights the potential of reducing inductive biases in neural architectures, which could lead to more versatile and capable systems for diverse vision tasks and data modalities.

Image generation using transformers

Several image generation models, such as Midjourney, Stable Diffusion, and Invoke, could be reimagined with these techniques. Midjourney recently released a “Character Reference” feature, which it claims generates consistent characters across multiple reference images.

Stability AI, meanwhile, announced Stable Diffusion 3, which it calls its most capable text-to-image model yet, featuring significantly improved performance on multi-subject prompts, image quality, and spelling.

