A study conducted by the Massachusetts Institute of Technology (MIT) found that the data AI systems learn from can make them simplistic and biased: the researchers audited a series of systems and found that many of them showed striking bias. The team then developed a method to help researchers ensure that their systems are less biased.
"Computer scientists are often quick to say that the way to make these systems less biased is simply to design better algorithms, but algorithms are only as good as the data they are trained on, and our research shows that you can often make a bigger difference with better data," said Irene Chen, a doctoral student who wrote the paper with MIT professor David Sontag and postdoctoral associate Fredrik Johansson.
In one example, the group examined an income-prediction system and found that it was more likely to mislabel female employees as low-income and male employees as high-income, and that increasing the size of the dataset reduced those errors by 40 percent.
In another dataset, the researchers found that a system's ability to predict intensive-care outcomes was less accurate for Asian patients. They cautioned, however, that existing approaches to reducing discrimination would make predictions for non-Asian patients less accurate.
Chen explained that one of the biggest misconceptions is that more data is always better; instead, researchers should collect more data from the under-represented groups.
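The kind of per-group audit the researchers describe, checking both how large each subgroup is and how often the model errs on it, can be sketched as follows. This is a minimal illustration, not the study's actual code; the records below are invented example data.

```python
# Hypothetical sketch: given model predictions and true outcomes, break the
# error rate down by patient group to surface subgroup disparities.
# The records are invented illustrative data, not the study's dataset.
records = [
    # (group, true_label, predicted_label)
    ("asian", 1, 0), ("asian", 0, 0), ("asian", 1, 0), ("asian", 1, 1),
    ("non_asian", 1, 1), ("non_asian", 0, 0), ("non_asian", 1, 1),
    ("non_asian", 0, 0), ("non_asian", 0, 1), ("non_asian", 1, 1),
]

def error_rate_by_group(records):
    """Return {group: (sample_count, error_rate)} for (group, y_true, y_pred) triples."""
    stats = {}
    for group, y_true, y_pred in records:
        n, errors = stats.get(group, (0, 0))
        stats[group] = (n + 1, errors + (y_true != y_pred))
    return {g: (n, errors / n) for g, (n, errors) in stats.items()}

for group, (n, err) in error_rate_by_group(records).items():
    print(f"{group}: n={n}, error rate={err:.2f}")
```

A gap like the one printed here (a small subgroup with a noticeably higher error rate) is the signal that, per Chen's advice, targeted data collection for that group may help more than simply gathering more data overall.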
"We view this as a toolbox to help machine-learning engineers figure out which questions to ask of their data in order to diagnose why their systems are making unfair predictions," Sontag said.