Their findings, shared exclusively with MIT Technology Review, reveal a worrying trend: AI data practices risk massively concentrating power in the hands of a few dominant technology companies.
In the early 2010s, data sets came from a variety of sources, says Shayne Longpre, an MIT researcher who is part of the project.
This information came not only from encyclopedias and the web but also from sources such as parliamentary transcripts, telephone calls, and weather reports. Back then, AI datasets were specifically curated and collected from different sources to suit individual tasks, Longpre explains.
Then transformers, the architecture behind language models, were invented in 2017, and the AI industry saw performance improve the bigger the models and datasets became. Today, most AI datasets are built by indiscriminately scraping material from the internet. Since 2018, the web has been the dominant source of datasets used across all media, such as audio, images, and video, and the gap between scraped data and more curated datasets has emerged and widened.
“In foundation model development, nothing seems to matter more for capabilities than the scale and heterogeneity of the data, and the web offers that,” says Longpre. The need for scale has also massively driven the use of synthetic data.
Recent years have also seen the rise of multi-modal generative AI models, capable of generating videos and images. Like large language models, they need as much data as possible, and the best source for this has become YouTube.
For video models, as you can see in this chart, more than 70% of the data in both speech and video datasets comes from a single source.
This could be a boon for Alphabet, the parent company of Google, which owns YouTube. While text is distributed across the web and controlled by many different websites and platforms, video data is extremely concentrated on a single platform.