
AI risks (local and global). Part 1. The Black Box

Friends and colleagues, given the rapid development of AI technologies and their applications, I would like us to also keep in mind some potential risks and threats, which are sometimes, for whatever reason, overlooked amid the general hype around the technology.

With that in mind, let me raise a few mini-topics under the heading of "AI risks (local and global)".

In my opinion, the most serious risk of rolling out AI everywhere, all at once, and very quickly is the "black box" problem. Below are just a couple (of the many) risks and threats that follow from it:

Uncontrollability and unpredictability. When AI components are integrated into applications, services, and hardware control systems, no local developer can in fact guarantee what the end result will look like. Moreover, even fine-tuning and retraining the models does not make this factor go away, nor does it eliminate a certain "glitchiness" of AI (as far as I understand, this is essentially a mathematically unsolvable problem).
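
To make the point concrete: the output of a language model is a sample from a probability distribution over tokens, so even an unchanged model can answer the same prompt differently from run to run. A minimal sketch (the three-word vocabulary and its probabilities are invented here purely for illustration):

```python
import random

# Toy illustration of why LLM output is not guaranteed to be reproducible:
# each step samples the next token from a probability distribution.
# The vocabulary and probabilities below are invented for illustration.
next_token_probs = {"yes": 0.55, "no": 0.30, "maybe": 0.15}

def sample_token(probs: dict) -> str:
    # Weighted random choice over the model's output distribution
    return random.choices(list(probs), weights=list(probs.values()), k=1)[0]

for run in range(5):
    print(f"run {run}: {sample_token(next_token_probs)}")
# Five runs of the "same model" on the "same prompt" can print different answers.
```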

Lack of oversight. This aspect is related to the first point. The operation of these AI components is outside our control and oversight. Today the service is free, tomorrow it will cost money. Today it works, tomorrow it may stop working or start behaving differently than the integrator expects, and so on. Do you want to introduce this into medicine? Then be prepared for digitized, AI-driven business processes to grind to a halt one day, leaving a patient without medical care or with an erroneous prescription. And so on.
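
If a critical workflow is wired directly to an external model, it inherits all of the vendor's failure modes. A minimal defensive sketch of one common mitigation, assuming a hypothetical HTTP endpoint and response field (both invented here): time-box the call and escalate to a human when anything looks off.

```python
import requests

# Hypothetical vendor endpoint and response schema, invented for this sketch.
AI_ENDPOINT = "https://ai-vendor.example.com/v1/triage"

def triage_with_fallback(symptoms: str) -> str:
    """Ask the external model for a recommendation, but never let the
    workflow hard-depend on it: any failure escalates to a human."""
    try:
        resp = requests.post(AI_ENDPOINT, json={"text": symptoms}, timeout=5)
        resp.raise_for_status()
        return resp.json()["recommendation"]
    except (requests.RequestException, KeyError, ValueError):
        # Vendor is down, newly paywalled, or changed its response format:
        # degrade to a human-in-the-loop path instead of failing silently.
        return "ESCALATE_TO_CLINICIAN"
```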

Bias and external control of results. Everything in this world is connected in one way or another with politics (that is, with big capital and the wars big capitals wage against each other). AI, of course, does not stand aside: each side will use it as an effective tool, and its influence will grow more significant every year. So asking the AI to be "more objective" on certain topics will certainly not work. And everything AI is embedded in will serve the goals of the ultimate beneficiaries (who will not be the local developer, and certainly not the end customer or user).

Obviously, I'm exaggerating a bit, but in my opinion the topic is very important if we think about the future strategically.

Colleagues, what do you think? Write in the comments whether these aspects concern you, especially if you are involved in implementing or developing AI products and services.

PS: The picture was created, and the spelling was checked, by AI.

PS2: I did not correct that last sentence))

Comments (2)

If AI agents do not operate autonomously but act as assistants to staff who are ultimately responsible for the result, then none of the items listed is a problem. But if your AI agents do operate autonomously somewhere, that is already a mistake.

AI needs to be double-checked.
