It is a lever to be pulled to achieve a certain goal, not an end in itself. Its knowledge is also limited: the dataset it was trained on stops midway through 2021, so ChatGPT cannot answer questions about recent news, as the example below shows. However, it can learn from its conversations with users, and OpenAI plans to connect the next version of its program to the web. The responses generated by the program also suffer from multiple flaws.
They are generally of average quality, often cursory or superficial, and always politically correct. ChatGPT never commits itself and constantly hedges so as not to upset anyone, even on questions where nothing is at stake. Its answers can also become eccentric when a question is phrased ambiguously. The artificial intelligence then sometimes gives the impression of stitching together notions picked at random. The example below, about kangaroo eggs (spoiler: kangaroos don't lay eggs!), makes this fairly obvious.

But on more complex subjects, a poorly informed user is likely to take at face value an answer that is biased, truncated, or even completely false, because ChatGPT cannot identify an error in its own response. Finally, the massive use of ChatGPT raises ethical questions. Since the program has no concept of good or evil, it can respond to any request without considering its morality. This is why OpenAI engineers have put safeguards in place that prevent the chatbot from explaining how to make a bomb or how to poison a person.