
AI risks (local and global). Part 3. Degradation of the quality of AI models

Friends and colleagues, I would like to continue the discussion of the risks associated with the active adoption of artificial intelligence.

Once again: I am not an opponent of AI, and I see huge potential in it, but I also see certain alarming trends that I would like to discuss a little.

Today I want to raise the issue of the degradation of AI models that learn from data created by other AIs.

The share of synthetic (AI-created) text, images, and audio on the Internet is growing noticeably. And if new models are trained on such data, a phenomenon known as "model collapse" (or "data collapse") arises.

Errors, simplifications, and distortions are carried over from one generation of information to the next, becoming entrenched and amplified. It is like copying a photo on a photocopier: each new generation is slightly worse than the previous one, and the more copies, the worse the quality.
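The copier analogy can be made concrete with a toy simulation (none of this code is from the original post; it is a deliberately simplified illustration). Here each "model" is just a Gaussian fitted to the previous generation's output, and each generation trains only on synthetic data sampled from the model before it:

```python
import random
import statistics

def fit_model(samples):
    """'Train' a model: estimate the mean and spread of the data.
    pstdev is the maximum-likelihood estimate, slightly biased low,
    loosely mirroring how models under-represent rare cases (the tails)."""
    return statistics.mean(samples), statistics.pstdev(samples)

def generate(mu, sigma, n, rng):
    """The model floods the web with synthetic data drawn from itself."""
    return [rng.gauss(mu, sigma) for _ in range(n)]

def simulate(generations=200, n=50, seed=7):
    rng = random.Random(seed)
    data = [rng.gauss(0.0, 1.0) for _ in range(n)]  # generation 0: "human" data
    sigmas = []
    for _ in range(generations):
        mu, sigma = fit_model(data)         # the next model trains on what is available
        sigmas.append(sigma)
        data = generate(mu, sigma, n, rng)  # ...which from now on is purely synthetic
    return sigmas

sigmas = simulate()
print(f"spread at gen 1: {sigmas[0]:.3f}, at gen 200: {sigmas[-1]:.3f}")
```

Any single run is noisy, but with small samples the estimated spread tends to drift toward zero over many generations: diversity (the tails of the distribution) is what gets lost first, which is exactly the "each copy is slightly worse" effect described above.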

Why is this dangerous, and what can it lead to?

Loss of diversity, inheritance of errors (false information becomes fixed as the norm), and the illusion of quality (models may seem smarter than they really are) - all of these are likely, perhaps inevitable, consequences of this phenomenon.

And if you consider this issue in areas with very high responsibility and risk (medicine, science, and so on), it becomes clear that applying AI there in practice demands a very careful and serious approach.

Is there any way to fix or solve this problem?

It seems to me that without uniform labeling of generated content by all AI platforms (is that even feasible?), the problem does not look very solvable.

That is, if AI-created and human-created information is not labeled in ways that distinguish one from the other, all of the above problems of "repeated copying" will (in my opinion) most likely be inevitable.
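What such labeling might buy us, in the simplest terms: if every item carried a provenance tag, a training pipeline could filter on it before data ever reaches a model. A minimal sketch (the field names and policy here are hypothetical, not from any real standard):

```python
# Hypothetical documents with a provenance label: "human", "ai", or None (unlabeled).
documents = [
    {"text": "field notes from a clinic",  "provenance": "human"},
    {"text": "chatbot-written summary",    "provenance": "ai"},
    {"text": "unlabeled forum post",       "provenance": None},
]

def training_pool(docs, allow_unlabeled=False):
    """Select documents whose provenance label permits use as training data."""
    keep = {"human"}
    return [d for d in docs
            if d["provenance"] in keep
            or (allow_unlabeled and d["provenance"] is None)]

strict = training_pool(documents)                        # only explicitly human data
lenient = training_pool(documents, allow_unlabeled=True)  # human + unlabeled
print(len(strict), len(lenient))
```

The hard part, of course, is not the filter but the labels themselves: the scheme only works if all platforms apply the tags honestly and they survive copying - which is exactly the open question raised above.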

What do you think about this topic? Will a solution to this problem be found, or does an inevitable "hollowing out" of AI performance await us?

Earlier on the topic:

AI risks (local and global). Part 1. The Black Box

AI risks (local and global). Part 2. The price of intellectual convenience

The image in the material is generated by AI

Comments (2)


But what if this "hollowing out" is not a bug but a new stage in AI evolution, where we ourselves stop understanding its "logic"?


This looks a lot like the "broken telephone" effect at digital scale. The question is who, and at what level, will take on the role of the "primary source" for AI.
