
April 25, 2023, vizologi

If we humanize AI, we will be lost.

I do not like that Artificial Intelligence is humanized or reified, that we continually compare it to people, that we attribute to it characteristics, virtues, or defects that are properly human, and that we test it more like a person than a machine. In short, we give it an entity that it does not have.

We know too little about human consciousness or how our brain works to judge whether AI can reproduce consciousness; I consider the debate sterile, and the Turing test should not be the reference for anything in this regard.

We start from a seventy-year-old testing theory, and while we have worked hard to develop a very advanced state of the art, we have not spent the time needed to evaluate and analyze its results, or to test them in depth with calm, criteria, and care.

The pace imposed by the market has left no time for debate. We thought that the day a brilliant AI came along and passed the Turing test, everything would be solved; the problem is much more complex and multidimensional.

We are in 2023, and that scientific approach does not necessarily have to be the correct one today; it is badly outdated with respect to the reality we are living. The variables are different, the environment has changed completely, and we are measuring very poorly the impact of these first steps of generative AI, perhaps because we started with no previous reference.

This leads us to evaluate and test on the fly, when it is essential that the state of the art of the testing process match that of the technology being tested. As a result, we create unjustified fear and social alarm, and we are not focusing on what is essential.

It has already been published that GPT-4 has an IQ of 155, which a priori is the approximate IQ of Elon Musk, among others. When you interact with this tool, you may mistakenly think that you have a micro Elon Musk at your disposal, less than two clicks away, and that you can customize it to any task you are working on.

Are we going to approach AI this way? Why are we continually comparing it to human IQ? What do we gain or lose? And if we understand it for what it is, is it intelligent? Yes, sure, of course, but it is not your kind of human intelligence; it is another kind of synthetic intelligence. It is data and algorithms that we humans have built; it arranges text in such a way that it seems to reason extraordinarily well, but that reasoning is not what you have as a human, and its functioning is different.

Future AI (if it is not stopped before) may have IQs of 300, 500, or 1,000, but this technology should not be evaluated with human IQ metrics. It will be an ultra-advanced knowledge system, an ASI (Artificial Super Intelligence), a type of intelligence different from yours, one that you as a human will not understand. Most likely, it will not understand you either; in any case, it should always be at your service rather than the other way around.

AI already surpassed us many years ago: in 1997, when Deep Blue beat Garry Kasparov, the human reaction was the same, an outcry that chess was going to disappear and that it no longer made sense to continue with the game. Twenty-six years have passed, and the chess industry enjoys enviable health; there are more human chess players in the world than ever.

It is a problem of control, not of intelligence that surpasses us: of ensuring 100% that these systems always answer to humans and remain under their orders, and that at no time can they perform an autonomous task by themselves. The issue is not that they will have an IQ of 1,000 or 10,000; an AI that has exceeded our maximum threshold of human IQ will probably see the problem of climate change, or how to increase GDP by three points, as peccata minuta, and will give us solutions that leave us speechless.

Are we going to give up this power? Why?

So we should not draw parallels between concepts of intelligence that are different. Neural networks try to mimic how your human brain acts, how your neurons communicate with each other, but their operation has absolutely nothing to do with your way of human reasoning, nor with your intelligence.

I see AI as a mirror; it is neither smarter nor dumber than you, regardless of the human IQ tests you make it pass. It is a reflection of yourself, a response to your human intelligence when you interact with it; it comes down to your ability to compose good prompts when interacting with a tool that should always be at your service.

The mirror concept, or theory, that I have described in previous articles is based on my day-to-day practice: on testing, observation, the scientific method, and occasional conversations with my users, who tell me that the answers our AI (GPT-3.5 at the moment, GPT-4 soon) gives them were not very intelligent, or that they expected something more, that it would do everything directly in one go.

In a polite and didactic way, I ask them to send me the prompts they launched. Then, after analyzing them, I explain that the problem is not the power or intelligence of the algorithm but the question or prompt that the human generates as input.

If you compose a very basic prompt, the response of the AI will be simple, vague, and dull; it will not surprise you. If you write a very elaborate and sophisticated prompt (which does not have to be long), the response of the AI will be outstanding and synthetically intelligent.
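To make the contrast concrete, here is a minimal sketch in Python, assuming the 2023-era OpenAI chat API (the model choice, the two prompts, and the helper function are illustrative assumptions, not the actual Vizologi implementation): the same request asked with a bare prompt and with a more elaborate one, so you can compare the two answers yourself.

```python
import openai  # assumes the 2023-era openai package and an API key in OPENAI_API_KEY

def ask(prompt: str) -> str:
    """Send a single user prompt to a chat model and return the text of its reply."""
    response = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",  # illustrative model choice
        messages=[{"role": "user", "content": prompt}],
    )
    return response["choices"][0]["message"]["content"]

# A basic prompt: expect a simple, vague, generic answer.
basic = ask("Give me a business idea.")

# A more elaborate prompt: same topic, but with a role, constraints, and a desired format.
elaborate = ask(
    "Act as a business strategist. Propose one business idea for a two-person team "
    "with a 5,000-euro budget in the urban-mobility sector. Describe the customer, "
    "the revenue model, and the first three steps to validate it, in under 150 words."
)

print("--- basic ---\n", basic)
print("--- elaborate ---\n", elaborate)
```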

But it is not about approaching the problem from the prism of interacting with a “superior entity” that has the IQ of Elon Musk, that is going to take away my work and do all my tasks in one click (including all my reports, taking the kids to school, and doing my shopping), and whose every output must be ultra-intelligent, needing no validation on my part, no editing, no modification, no work at all.

No, it doesn’t work like that; AI will demand a lot from you to reach significant advances with the help of your interaction with it. It is a game of continuous human-machine feedback. When Jordan Peterson, a reputed Canadian psychologist and intellectual, faced ChatGPT for the first time, the first thing he thought to do was throw this prompt at it:

“Write me an essay that is the 13th rule of Beyond Order, written in a style that combines the King James Bible with the Tao Te Ching.”

Jordan Peterson.

You can imagine that the answer was spectacular, even though it was an extraordinarily twisted and improbable request; but if you ask ChatGPT what color the clouds are, don’t expect miracles.

The intelligence of AI must be evaluated as a self-contained tool in itself, as a technology that must surpass and compete with itself, always at the service of the human, and not be compared with human intelligence or made to compete with us. We should not enter into that game, nor grant it an entity, nor even refer to it or call it an “entity”; without more, it is a black box of data and algorithms.

You can find black boxes anywhere in society, politics, economics, or justice, but with this one you will have to try keys until you find the treasures; it is not black because it is a box designed to do evil, but rather a misunderstood box that we have to discover through our interaction with it.

I want to rescue Asimov’s laws; just as I see the Turing test as totally outdated, I find much sense, as a starting point, in the message left by this great author whom I admire so much:

  • First Law: A robot shall not harm a human being, nor allow a human being to be harmed by inaction.
  • Second Law: A robot shall comply with orders given by human beings, except those that conflict with the First Law.
  • Third Law: A robot shall protect its own existence to the extent that this protection does not conflict with the First or Second Law.

From this simple basis, we can debate and develop everything we want. However, it is disproportionate and out of place to take the debate further than this, with esoteric, unfounded risks born of misunderstanding or of the particular interests of some experts on the subject.

When you are given free access to ChatGPT, you should know that you are interacting with a technology that has been previously tested, passing strict protocols of security, ethics, morality, privacy, and values. I wish all the individuals, companies, and public and private organizations that are raising the alarm about AI would pass these same tests of values, ethics, privacy, and security; the world would be much better, I assure you.

AI by itself is neither good nor bad; what has to be evaluated is the use that we humans give to this tool. It is still a double-edged sword, depending on the hands into which it falls: you can ask it for instructions to create an atomic bomb or for how to build a rocket to take you to the moon for a walk on the weekends. It depends on your use.

But I do not see bad intentions native to this technology, nor to many others; we humans are the ones who give it that sense.

To insist on the concept: if you ask ChatGPT or any other Large Language Model (LLM) how to make an atomic bomb, it will return absolutely nothing, or at most a warning message that the model is not designed to give you that information.

These language models come to market capped by default; they have gone through a testing phase with prior filtering. Moreover, my personal opinion is that they have gone too far with this capping before going to production; some exceptions seem very borderline or exaggerated to me. As for the criteria applied in the security filtering, I think that natively it is already regulated by excess, not by default.

Universal ways of doing evil with this technology have already been excluded, as exceptions or issues on which the AI cannot provide solutions, through prior filtering before going to market; no agent in the ecosystem wants an AI that gives instructions for harmful or dangerous acts. However, it can help us cure cancer in the coming years; let's weigh risks against gains.

Taking the same example to the Internet, there would be many ways to find an answer to that question. So let's not demonize something because legislators, politicians, and the media do not understand how it works or because it escapes their control. We are playing with gunpowder, and misinformation can lead to a scenario of unwanted regulation, reducing the freedom of citizens to use these technologies for their own benefit and that of the collective.

Then my more practical engineering side tells me that the problem, or the risk, that generative AI can bring us is solved simply by understanding the structure of the data, building filters, applying labels, and handling the exceptions in the GPT-X code; this is the simple solution with which engineers solve problems. Going beyond this is politicking, not engineering or secure technology, and the debate can go on ad infinitum, but the technical discussion ends here; it is straightforward.
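As a rough illustration of what filters, labels, and exceptions can mean in practice, here is a minimal sketch in plain Python. The blocklist, labels, and function names are hypothetical and only illustrate the idea; real vendors use far more sophisticated classifiers than keyword matching. Incoming prompts are labelled against a small blocklist, and labelled requests receive a warning instead of being passed to the model.

```python
from typing import Optional

# Hypothetical blocklist: each label maps to keywords that trigger it.
BLOCKED_TOPICS = {
    "weapons": ["atomic bomb", "build a bomb"],
    "self_harm": ["hurt myself"],
}

def label_prompt(prompt: str) -> Optional[str]:
    """Return the label of the first blocked topic found in the prompt, or None."""
    text = prompt.lower()
    for label, keywords in BLOCKED_TOPICS.items():
        if any(keyword in text for keyword in keywords):
            return label
    return None

def guarded_answer(prompt: str) -> str:
    """Apply the filter before the (hypothetical) model call."""
    label = label_prompt(prompt)
    if label is not None:
        # The exception path: return a warning instead of querying the model.
        return f"This assistant is not designed to give information on: {label}."
    return call_model(prompt)  # placeholder for the actual model call

def call_model(prompt: str) -> str:
    # Stand-in for the real generative model.
    return f"(model answer to: {prompt})"

print(guarded_answer("How do I make an atomic bomb?"))   # warning message
print(guarded_answer("How do I build a rocket to visit the moon?"))  # normal path
```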

The problems that we find and will find in generative Artificial Intelligence will be the same as, or very similar to, those we have on the Internet, because Internet data is what feeds this technology. If there is racism on the Internet, the AI will reproduce that bias; if there is hatred on a specific topic on social networks, the AI will have that bias; if there is discrimination of any kind, the AI will reproduce that discrimination.

The information and data that humans have been creating on the Internet over the years will be the reflection in the mirror in which the AI looks, and the problem is not the AI's; it is ours as a society. Perhaps, from an engineering point of view, it is even easier to address these problems by modifying the AI's code than by trying to solve them one by one on the Internet or on social networks.

If politics enters into AI, perhaps the questions that should be on the table are these: if AI ends up federated like the current Internet, with three distinct blocks, the USA, Russia, and China, what risks might there be when they communicate with each other? If the AI can carry a bias, leaning left or right, shouldn't it be neutral? Hopefully this technology will be kept clean, without any possibility of ideological indoctrination in either direction.

Will AI be configured as a democracy or a dictatorship, depending on the regime implemented in each country? Will AI be dual, with Western AIs and Eastern AIs, two different ways of looking at the world? The ethical and cultural values that a Christian may hold can be very different from those of a Muslim.

Should we develop universal values and ethics that cover the rights and freedoms of any citizen interacting with AIs anywhere in the world? What would be the standard of ethics and values covering any citizen regardless of religion, race, or background?

These questions should be on the table; the rest is noise, conflicts of interest, and politicking.

