ChatGPT, sweet talker | La Presse

OpenAI’s ChatGPT chatbot is arguably the most eye-opening technology of 2022. Many are trying to anticipate the effects that artificial intelligence (AI) systems like ChatGPT will have on human life. As is always the case when AI makes new inroads, the surrounding hype makes a fair and sober assessment of its capabilities and impacts nearly impossible.


Even setting aside the fanciful scenarios often conjured up by those who have spent a lot of time writing code and consuming science fiction, one must recognize that AI technologies like ChatGPT or image generators like DALL-E or Midjourney are disruptive technologies. They substantially modify well-established social practices.

ChatGPT was designed based on a hybrid approach combining supervised machine learning and reinforcement learning.

ChatGPT, which is very energy-intensive, relies on gigantic computing power and had access to a very large textual database in the first phase of its training.

In the second phase, it benefited from the feedback of human judges who evaluated the quality of its responses to queries. Note that if ChatGPT is better than its predecessors at avoiding racist, misogynistic or homophobic answers, it is because its programmers added rules of conduct to keep the software away from that kind of remark. It has not learned the art of moral reasoning.
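To make this two-phase recipe concrete, here is a deliberately toy sketch in Python. It is not OpenAI’s code: the scoring function stands in for human preference ratings, the blocklist stands in for the hand-written rules of conduct, and all names and data are invented for illustration.

```python
# Toy illustration of the two ideas above: human judges ranking candidate
# answers, and hard-coded rules of conduct bolted on afterwards.
# Purely hypothetical names and data; nothing here reflects OpenAI's system.

BLOCKED_WORDS = {"offensive_term"}  # stand-in for a hand-written guardrail list

def human_judge_score(answer: str) -> int:
    """Stand-in for a human rating collected during the feedback phase."""
    return len(answer.split())  # toy proxy: these judges happen to prefer fuller answers

def apply_conduct_rules(answer: str) -> str:
    """Hard-coded filter: the model did not learn ethics, it is simply overridden."""
    if any(word in answer.lower() for word in BLOCKED_WORDS):
        return "I can't help with that."
    return answer

candidates = [
    "Power should be concentrated.",
    "Hobbes argued for a single sovereign authority to keep the peace.",
]
best = max(candidates, key=human_judge_score)  # keep the highest-rated draft
print(apply_conduct_rules(best))
```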

OpenAI’s conversational agent produces texts that are particularly well constructed from a syntactic point of view and sometimes surprising from a semantic point of view. For example, its answer to the question submitted by the editorialist Alexandre Sirois⁠1 on “the impacts of ChatGPT on the world of work” comes close to what a fairly well-informed person could have written. It is, however, a conventional answer, and that is not at all surprising. A machine learning algorithm “learns” by spotting statistical regularities in very large data sets, and ChatGPT excels at a kind of synthesis of the information contained in its training data. Contrary to what several commentators have claimed, ChatGPT does not learn continuously the way human and non-human animals do. Only when OpenAI releases a new version of its model will we see an improvement.

Natural language processing algorithms like ChatGPT will transform the tasks of those called upon to produce content. In most cases, human beings will not simply be replaced by machines. Programs like ChatGPT could produce drafts, summaries, or even food for thought. In several fields of activity, AI can serve as a heuristic procedure whose purpose is to generate proposals to be evaluated and revised, raw material to work from.

The main reason to avoid succumbing to the frenzy surrounding each new AI breakthrough is that the algorithms understand absolutely nothing about the data and symbols they manipulate.

They also lack the common sense that would allow them to react appropriately in new situations and to correct some of their initial responses. As the now-accepted phrase goes, powerful inductive AIs like ChatGPT are stochastic parrots; their prowess rests on the statistical generalizations that their dizzying computing power makes possible.
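The “stochastic parrot” idea can be illustrated with a tiny, deliberately crude Python sketch: a bigram model that reproduces the statistical regularities of its training text without understanding a word of it. Real systems use neural networks trained on billions of tokens, but the principle of producing the statistically plausible next word is the same; the corpus and names below are invented for the example.

```python
# A toy "stochastic parrot": it memorizes which word tends to follow which
# and samples from those frequencies. Fluent-looking output, zero comprehension.
from collections import Counter, defaultdict
import random

corpus = ("citizens must exercise their sovereignty by voting for or against "
          "political proposals so that the general will is created and renewed").split()

bigram = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    bigram[prev][nxt] += 1  # count how often `nxt` follows `prev`

def parrot(start: str, length: int = 8) -> str:
    """Generate text by repeatedly sampling the next word from observed frequencies."""
    words = [start]
    for _ in range(length):
        followers = bigram.get(words[-1])
        if not followers:
            break
        choices, weights = zip(*followers.items())
        words.append(random.choices(choices, weights=weights)[0])
    return " ".join(words)

print(parrot("the"))  # e.g. "the general will is created and renewed"
```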

Made-up answers

In addition, conversational robots have an unfortunate tendency to smooth-talk, inventing answers from scratch based on the correlations they spot in the data. Two convincing examples were recently given by Alain McKenna⁠2. The Le Devoir journalist wanted to test ChatGPT by submitting two questions of political philosophy, namely what Hobbes and Rousseau thought of the principle of the separation of powers. ChatGPT claimed that Hobbes recommended the separation of political powers. This is obviously a wrong answer: Hobbes is famous for advocating the concentration of power in the hands of a single political authority in order to maintain peace between inherently selfish and violent human beings.

Equally concerning, ChatGPT also gave a wrong answer to the question about Rousseau. Here, however, the conversational robot’s response is more smooth talk and approximation than an outright error. It will be remembered that, according to Harry Frankfurt, the smooth talker is distinguished from the liar by not caring about the truth or falsity of his assertions. While the liar knows the truth and knowingly denies it, the smooth talker is indifferent to the epistemic quality of his remarks; he mixes the true, the false and the approximate in order to achieve his goal, without any consideration for rigor and truth.

While Rousseau, unlike Hobbes, is a radical democrat, he shares with him an aversion to the fragmentation of political power. For Rousseau, the people are sovereign in a more direct way than in a system of representative democracy. Citizens must exercise their sovereignty as often as possible, by “flying to the assemblies” and voting for or against political proposals. Citizens can be free while being subject to laws only if those laws are freely chosen and self-imposed.

The main idea is that political freedom requires that nothing weaken the sovereignty of the people. There is no need to distribute political power between three distinct branches. Authentic citizen deliberation about what serves the common good is infallible; this is how the “general will” is created and renewed. Rousseau evokes the figure of the “legislator”, but his role is more to enlighten the multitude than to make the laws.

In short, ChatGPT is a very bad political philosopher, far inferior to a CEGEP student who has paid attention in class.

As others have written before me, the advent of language AIs only serves as a reminder of the crucial importance of general knowledge and critical thinking. Those who will use ChatGPT to produce content will need to have the knowledge and skills to evaluate it⁠3.

I am not one of those who think that ChatGPT represents a major leap toward artificial general intelligence. Chatbots will disrupt jobs and education, but current models lack common sense, do not understand causal relationships, and fail to reason logically. In addition to forcing teachers to rethink certain pedagogical practices, AI will benefit those who know how to use it and could make professional integration more difficult for those who are already disadvantaged in the digital economy. This is an issue of the utmost importance for public authorities.

