
In 2016, Google engineer Illia Polosukhin had a lunch meeting with a colleague, Jacob Uszkoreit. Polosukhin was frustrated by the lack of progress on his AI project, which aimed to provide helpful answers to user questions, and Uszkoreit suggested a technique called self-attention. The idea sparked an eight-person collaboration that led to the 2017 paper “Attention Is All You Need,” which introduced the transformer architecture that now underpins modern large language models.

However, eight years later, Polosukhin is not entirely satisfied with the current state of affairs. A strong advocate for open source, he worries about the secretive nature of transformer-based large language models, even from companies that claim to prioritize transparency: no one outside those companies knows what data the models are trained on or what biases they may carry. And while some companies, such as Meta, describe their models as open source, Polosukhin believes that true openness requires transparency about the data used in training.

Polosukhin fears that as large language models advance, they could become more dangerous, especially if profit motives shape their development. More capable models, he believes, could be used to manipulate people more effectively and to generate revenue, which raises ethical concerns.

Despite these worries, Polosukhin doubts that regulation alone can address the problem. Setting limits on models is a complex task, he argues, and regulators may ultimately have to rely on the companies themselves to ensure compliance. He questions whether anyone, even engineers, has the expertise to evaluate model parameters and safety margins, which casts doubt on regulators' ability to oversee the industry.

In response to these concerns, Polosukhin advocates for an open-source model of AI that integrates accountability into the technology itself. He left Google to establish the Near Foundation, a blockchain/Web3 nonprofit, with the goal of applying principles of openness and accountability to “user-owned AI.” This decentralized approach would allow everyone to have a stake in the system, promoting transparency and fairness.

Near has launched an incubation program to support startups developing applications based on this open-source model. One such application involves distributing micropayments to content creators whose work contributes to AI models. This innovative approach aims to align incentives and create a neutral platform for AI development.
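Near has not published the mechanics of that micropayment scheme, so the following is only an illustrative sketch, not a description of its actual design: a fixed reward pool split pro rata among creators according to a hypothetical attribution score. All names, scores, and amounts below are invented for illustration.

```python
from decimal import Decimal


def split_reward_pool(pool: Decimal, attribution: dict[str, Decimal]) -> dict[str, Decimal]:
    """Split a reward pool among creators in proportion to their
    attribution scores (hypothetical weights representing how much
    each creator's content contributed to a model)."""
    total = sum(attribution.values())
    if total == 0:
        # No attributed contributions: nobody receives a payout.
        return {creator: Decimal("0") for creator in attribution}
    return {
        creator: (pool * score / total).quantize(Decimal("0.000001"))
        for creator, score in attribution.items()
    }


# Example: a 10-token pool split among three creators.
payouts = split_reward_pool(
    Decimal("10"),
    {"alice": Decimal("5"), "bob": Decimal("3"), "carol": Decimal("2")},
)
print(payouts)  # alice gets 5, bob gets 3, carol gets 2 (in tokens)
```

In practice, the hard part of any such scheme is computing the attribution scores themselves; the split shown here simply assumes they exist.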

Overall, Polosukhin’s vision for the future of AI emphasizes transparency, accountability, and user ownership. By promoting an open-source model, he hopes to address the ethical and social challenges posed by advanced AI technology while empowering individuals to participate in its development.