If anyone ever writes a song about AI, the refrain must be, “I am scared.” When I talk with ordinary people of every education level, AI is described as a “scary thing” that must be “controlled, observed, or killed if it tries something.” Some say there should be a kill switch… But is it really that dangerous, or is this just a typical reaction to new technology?
How old are you? Did you have a TV in your room as a kid? A (fixed) landline telephone? A radio? If you didn’t have a radio in your room as a child, then you are 80 or older… All of these were once technologies considered harmful, with warnings that “some nasty people will control your mind through it,” or something similar. This recurring fear that “everybody wants to rule the world” through new technology is simply déjà vu.
This déjà vu fear has acquired new features since the invention of computers. One of them is the fear that “they will take your jobs.” Like the fear surrounding migrants, this anxiety repeats with each new technological breakthrough in computing. With AI, it has exploded. We already have predictions that in five years you will be jobless. And imagine, GPT-4 supposedly has an IQ close to Einstein’s. Ahh, I am so scared!
I work for a company that has started to test and use AI in many fields, and one thing has remained the same: the number of employees. So far, that is good news. Our lead programmer is creating an AI environment for developing new apps. In the last couple of months, small apps have emerged from that kitchen, showing that AI is significantly faster and better at programming than most programmers. However, there is a very interesting twist to this story: for AI to develop an app, you need a human. And not just any human, but a programmer. What does that really mean?
It means a couple of things. After 30-ish years of AI development, the field is still in its own Stone Age. The wheel is there; perhaps we are entering the Bronze Age, but for further advancement (creating truly independent AI, for example), we need many more years.
It also means that AI is not taking your job—at least not yet. Someone still needs to explain to AI what they want to produce, and this must be done in small steps and in detail. If you are dreaming that AI will create a clone of WordPress just by saying, “make a new CMS similar to WordPress” – nope, it doesn’t work that way.
It also means that AI is a tool. A powerful one, but still a tool. It can be used in many areas, but it always needs to be guided and observed by humans. And not just any humans—by those who know the job that AI is trying to solve.
If you are waiting for Armageddon or something similar—nah, nothing is on the horizon. We have safety measures for now, in the form of AI regulations, and we need to be careful with those. Too many laws about AI can slow down or even halt development. That is why we need platforms like SEEDIG to discuss these laws from many angles.
On the other hand, if you still think that a “standalone AI” will take over the world and we need a kill switch, let me ask you: to whom will you give that switch? The answer is not clear. What is clear is that, hype or no hype, you must follow the development of AI, or you will soon be illiterate.
*This blog is also published on gransy.blog.*