Would you like Artificial Intelligence to be your psychologist?

Artificial intelligence and human psychology. Imagine you have mental health problems and want to feel better by going to therapy. But this time, the one listening to you is not a person at all. It is an artificial intelligence! After talking with you, it will not only make some evaluations but perhaps also offer a diagnosis. So, would you like to be treated by an artificial intelligence? Let's look at mental health problems, often called the diseases of our age, and the artificial intelligence research that addresses them.

In fact, therapy chatbots are already in use. You can log in to services such as Woebot, Wysa, and Youper on any day of the week, at any time you want, and talk to an artificial intelligence in a chat-like conversation. Of course, the software and machine learning behind the AI you are talking to are shaped by research on human psychology. At this point, you may wonder what artificial intelligence can actually do with something as complex as the human psyche.

The Deep Patient algorithm, used in the United States in 2015, illustrates how striking the relationship between the human mind and artificial intelligence can be. A group of scientists working with Mount Sinai Hospital in New York fed the health histories of all of the hospital's patients to date into Deep Patient. Their aim was to determine how well an artificial intelligence could evaluate human diseases and how accurately it could predict future ones.

Deep Patient, which had nothing but the patients' hospital records, soon began making predictions about these people's psychiatric illnesses. Many of its results were correct or highly accurate. In light of these and similar developments since 2015, researchers are now working on algorithms that can diagnose people's mental health problems and make treatment recommendations.
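To make the general idea concrete, here is a minimal sketch of how a model can be trained on historical patient records to predict whether a given diagnosis will appear later. This is not the actual Deep Patient pipeline (which relied on unsupervised deep learning over electronic health records); the file name, column names, and choice of a random-forest classifier are illustrative assumptions only.

```python
# Simplified sketch, not the real Deep Patient system: train a classifier on
# past patient records to predict a future diagnosis. File and column names
# ("patient_histories.csv", "future_diagnosis") are hypothetical placeholders.
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

# Each row is one patient: past diagnoses, medications, and lab results encoded
# as numeric features, plus a 0/1 label saying whether the condition of
# interest appeared later in the record.
records = pd.read_csv("patient_histories.csv")
X = records.drop(columns=["future_diagnosis"])
y = records["future_diagnosis"]

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42, stratify=y
)

model = RandomForestClassifier(n_estimators=200, random_state=42)
model.fit(X_train, y_train)

# Estimated probability that each held-out patient will later receive the
# diagnosis, scored with ROC AUC against what actually happened.
pred = model.predict_proba(X_test)[:, 1]
print("ROC AUC:", roc_auc_score(y_test, pred))
```

The point of the sketch is simply that the model never "sees" the patient; it only sees records, which is exactly why the accuracy reported for systems like Deep Patient surprised researchers.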

The scientific world is divided over the possibility of artificial intelligence "becoming a psychologist". According to one group of scientists, people suffering from mental health problems may trust chatbots more, because they do not fear being judged or shamed.

Thus, they can express their feelings, problems, or fears much more openly, which makes processes such as diagnosis far easier. On the other hand, some experts think that artificial intelligence should complement the treatment applied by therapists rather than replace them. In other words, therapists could make more accurate diagnoses, or make fewer mistakes during treatment, by taking their patients' chatbot conversations into account.

However, according to others who reject chatbots, letting artificial intelligence replace a therapist may be too risky. For example, a chatbot might perceive the person it is talking to as suicidal when, in reality, that person was simply making fun of it. Conversely, it might fail to recognize a person who really is at risk of suicide.

Moreover, therapists sometimes act on experience and intuition that we cannot yet fully explain. For example, a therapist observing a patient may be troubled by the patient's body language even though nothing in their words signals a problem. The therapist can pick up on small details throughout therapy and draw inferences from them. An artificial intelligence algorithm may not be able to notice these subtle nuances.

The last open question about chatbots is accessibility. We know how difficult it is for a low-income person to see a psychologist or to get the right therapy. A chatbot, by contrast, can be reached by people from all walks of life. While this ease of access can yield very good results, it can also lead to unexpected developments. At the very least, we often hear concerns about how frightening the unpredictable progress of artificial intelligence might be.

Dr. Kate Darling, who works on human-robot interaction at the MIT Media Lab, says that artificial intelligence can be used in every area of life, but that the necessary research must be done first. Ensuring that algorithms such as chatbots are thoroughly examined before they reach the market, and that their risks and benefits are clearly established, can help prevent potential problems.
