Is It Bullshit to Learn Software Just Because ChatGPT Can Write Code?

In the 1950s, the famous mathematician Alan Turing posed the question "Can machines think?" in his landmark paper. Today, we have quite advanced artificial intelligence applications, and ChatGPT, which makes many people nervous, is one of them.

The concept of artificial intelligence has always been misunderstood, largely because of inflated expectations. Right now, people are claiming that "artificial intelligence will leave most people unemployed, including programmers" and that "humanity will soon come to an end because of artificial intelligence."

Let’s examine the artificial intelligence applications that have appeared over the years and find the answer to this question. As a software trainer, my thoughts are as follows:

Between 1950 and 1980, symbolic artificial intelligence dominated the field; this was its golden age.

One of the first artificial intelligence applications was undoubtedly computer chess, which everyone has played. At the time, a program that could play chess against you and make its own moves was called artificial intelligence.

However, it was later realized that a computer playing chess against you is not really artificial intelligence, and the reason is quite understandable: the creators of the game had hand-coded every position and move as explicit functions. Encoding each game move in a programming language is not intelligence.

Between 1980 and 2000, disappointment with symbolic artificial intelligence discouraged nearly everyone, and the field received little attention. This period is often called the AI winter: people lost hope and enthusiasm, and few notable studies or developments appeared.

Still, riding on the popularity of the 1950-1980 era, the growing film industry produced several popular AI movies after the 80s; The Terminator is one of them.

The situation today is quite different: projects developed after 2000 have both unsettled and worried people.

Looking at the present day, machine learning and deep learning, two subfields of artificial intelligence, have advanced day by day, putting the word "autonomous" at the center of attention. So what does this word "autonomous" mean?

"Autonomous" can be defined as transferring human intellectual activities to electronic systems. Some of today's breakthroughs include:

  • Near-human-level image classification,
  • Near-human-level speech recognition,
  • Near-human-level transcription of audio in video, and
  • High scores on near-human-level question answering.

Artificial intelligence may one day leave many professions, including software development, unemployed, but that will not happen anytime soon.

Although artificial intelligence can perform near-human-level tasks, what it currently does is still not at the human level. For example, ChatGPT is trained with reinforcement learning from human feedback; judging by what it does now, it is essentially inspired by human behavior.
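To make the reinforcement-learning idea concrete, here is a minimal sketch of tabular Q-learning: an agent in a five-cell corridor learns, purely from a reward signal, to walk toward the goal. This is a toy illustration of learning from rewards, not ChatGPT's actual RLHF pipeline, which fine-tunes a language model against a learned reward model at a vastly larger scale; all names and parameters below are chosen just for this example.

```python
import random

# Toy reinforcement learning: tabular Q-learning on a 5-cell corridor.
# The agent starts at cell 0 and earns a reward of +1 only upon
# reaching cell 4. It learns the "always move right" policy from
# reward signals alone -- no move was ever hand-coded.

N_STATES = 5          # cells 0..4; cell 4 is the goal
ACTIONS = [-1, +1]    # move left or move right
ALPHA, GAMMA, EPSILON = 0.5, 0.9, 0.1

q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}

random.seed(0)
for _ in range(500):                      # training episodes
    s = 0
    while s != N_STATES - 1:
        # epsilon-greedy: mostly exploit, occasionally explore
        if random.random() < EPSILON:
            a = random.choice(ACTIONS)
        else:
            a = max(ACTIONS, key=lambda act: q[(s, act)])
        s2 = min(max(s + a, 0), N_STATES - 1)   # clamp to corridor
        r = 1.0 if s2 == N_STATES - 1 else 0.0
        # Q-learning update: nudge Q toward reward + discounted future value
        best_next = max(q[(s2, b)] for b in ACTIONS)
        q[(s, a)] += ALPHA * (r + GAMMA * best_next - q[(s, a)])
        s = s2

# Greedy policy after training: which action each non-goal state prefers
policy = {s: max(ACTIONS, key=lambda act: q[(s, act)])
          for s in range(N_STATES - 1)}
print(policy)
```

The point of the sketch is the feedback loop: behavior is shaped by rewards rather than explicit rules, which is the same basic principle (at a vastly simpler scale) behind training ChatGPT with human feedback.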

For this reason, let’s pay attention to the words we use. "Like" signals mere similarity, and "inspired by" signals drawing on something else. Resembling and being inspired are what current artificial intelligence is really about; in other words, today's artificial intelligence systems are no smarter than the human brain.

François Chollet, creator of the famous deep learning library Keras, has said that talk of near-human-level intelligence "should not be taken too seriously." Keep in mind that he has contributed to and developed many artificial intelligence models himself.

Artificial intelligence models do not work like the human brain, and no scientific study has shown otherwise. The fact that artificial intelligence can perform tasks close to the human level does not mean it works the way the brain does.

So rest easy: humans are still the most intelligent beings on Earth. The more you develop yourself, the more experienced you become in your field, and the higher your potential to find a job. Remember that it is humans who lay the foundations of artificial intelligence models.

OpenAI's new and more advanced GPT-4 artificial intelligence model targets specific areas.

We were all quite impressed, and uneasy, when we first used ChatGPT. OpenAI evidently noticed this and released the more advanced, paid version: GPT-4.

Without a doubt, GPT-4 is better than previous models, but it was built with particular focus areas in mind: composing songs, editing text, writing movie scripts, drafting academic papers, coding, and learning the user's writing style. For example, after you have exchanged enough messages with GPT-4, it picks up your writing style and replies to you in it. Surprising, isn't it? We think so too.

Those who develop artificial intelligence models still do not believe artificial intelligence will surpass humans. And no, learning software is not pointless either.

Now let’s get to the answer to our question. Learning software is by no means useless or ridiculous, not now and not for the next 50 years. The engineers who built these artificial intelligence models say so themselves.

It’s also worth knowing that artificial intelligence models, especially applications such as ChatGPT, can crash, create security vulnerabilities, and consume enormous amounts of energy.

Before an artificial intelligence system can wipe out software development or any other field, it first has to turn a profit for companies, not merely be smarter than humans. The systems compared to humans today run on stadium-sized server farms, yet they still make plenty of mistakes and burn through energy. GPT-3.5 reportedly makes an average of 35-40 errors in a coding project, while GPT-4 brings that down to a range of 5-15. And a GPT-5 model is already being talked about for the near future.

Any software developer who does not adapt to new systems and technologies is doomed to unemployment.

The biggest challenge for software developers is that the industry never stops evolving. Today we have ChatGPT and GPT-4; GPT-5 will arrive soon, and new artificial intelligence models will keep multiplying.

A developer who does not keep improving cannot keep up with the technology and faces unemployment. As a software trainer, my advice is simple: never stop improving yourself.
