Limit potential biases in the training data

mayaboti
Posts: 295
Joined: Mon Dec 23, 2024 3:48 am

Post by mayaboti »

Google hopes MUM will help the search giant understand these queries, target key points, and find content that answers every aspect of the question. MUM and AI algorithms: between ethics and paradoxes. The final point Nayak raises in his interview is the ethical implications of running AI models; he cites three areas Google is working on specifically, starting with minimizing the risk of bias in the training data.


Nayak says Google uses only high-quality data to filter out most biases, but he also acknowledges that even high-quality data may contain biases, and adds that Google takes steps to remove them. The second area is internal assessment: evaluations to identify any concerning patterns that may develop during training. The third is the environmental cost of running large AI models, which consume large amounts of energy. Google says its choice of model technology reduces its carbon footprint by up to 1,000x, and Nayak notes that Google has been carbon neutral since 2007.


While MUM is a new technology for Google, the search giant has been working on systems to mitigate these potential issues for many years, as Nayak revealed in his original blog post announcement: “Just as we have thoroughly tested the many BERT applications launched since 2019, MUM will go through the same process as we apply these models in Search. In particular, we will look for patterns that may indicate bias in machine learning to avoid introducing bias into our systems.”