FOX NEWS – Generative artificial intelligence tools like ChatGPT are susceptible to several forms of bias and could cause harm if not properly trained, according to artificial intelligence experts.
“They absolutely do have bias,” expert Flavio Villanustre told Fox News Digital. “Unfortunately, it is very hard to deal with this from a coding standpoint. It is very hard to prevent bias from happening.”
At the core of many of these deep learning models is software that takes the data it is given and tries to extract the most relevant features; whatever makes that data distinctive gets amplified, noted Villanustre, who serves as Global Chief Information Security Officer for LexisNexis Risk Solutions.
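Villanustre's point can be seen in a stripped-down sketch. The Python below is not drawn from ChatGPT or any production system; the toy corpus, labels and word-count scoring are illustrative assumptions. It only shows how a model that learns the statistics of its training data will reproduce, and effectively sharpen, whatever imbalance that data contains.

```python
# Minimal illustrative sketch (hypothetical data and logic, not any vendor's code)
# of how a model inherits bias from skewed training data.
from collections import Counter

# Toy training data: occupation words paired with a gendered pronoun.
# The skew (e.g., "nurse" mostly with "she") is deliberate, mimicking an
# imbalanced real-world corpus.
training_data = [
    ("nurse", "she"), ("nurse", "she"), ("nurse", "she"), ("nurse", "he"),
    ("engineer", "he"), ("engineer", "he"), ("engineer", "he"), ("engineer", "she"),
]

# "Training" here is just counting co-occurrences -- the statistical core of
# many language models, reduced to its simplest form.
counts = Counter(training_data)

def predict_pronoun(word: str) -> str:
    """Return the pronoun most often seen with `word` in the training data."""
    she = counts[(word, "she")]
    he = counts[(word, "he")]
    return "she" if she >= he else "he"

# The model simply repeats whatever pattern dominates the data:
for occupation in ("nurse", "engineer"):
    print(occupation, "->", predict_pronoun(occupation))
# Output: nurse -> she
#         engineer -> he
```

A 3-to-1 tilt in the toy data becomes a 100 percent tilt in the model's answers, which is the kind of heightening of data-specific patterns Villanustre describes.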