
Talk at the 38th Annual AAAI Conference on Artificial Intelligence, in Vancouver

This week, François is in Vancouver, at the 38th Annual AAAI Conference on Artificial Intelligence, presenting our joint work on Sequentially Fair Mechanism for Multiple Sensitive Attributes.

In the standard use case of algorithmic fairness, the goal is to eliminate the relationship between a sensitive variable and a corresponding score. In recent years, the scientific community has developed a host of definitions and tools to solve this task, which work well in many practical applications. However, the applicability and effectiveness of these tools and definitions become less straightforward in the case of multiple sensitive attributes. To tackle this issue, we propose a sequential framework which progressively achieves fairness across a set of sensitive features. We accomplish this by leveraging multi-marginal Wasserstein barycenters, which extend the standard notion of Strong Demographic Parity to the case of multiple sensitive characteristics. This method also provides a closed-form solution for the optimal, sequentially fair predictor, permitting a clear interpretation of inter-sensitive-feature correlations. Our approach extends seamlessly to approximate fairness, providing a framework that accommodates the trade-off between risk and unfairness. This extension permits a targeted prioritization of fairness improvements for a specific attribute within a set of sensitive attributes, allowing for case-specific adaptation. A data-driven estimation procedure for the derived solution is developed, and comprehensive numerical experiments are conducted on both synthetic and real datasets. Our empirical findings decisively underscore the practical efficacy of our post-processing approach in fostering fair decision-making.
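To make the construction concrete in the simplest setting, here is a minimal sketch (my own illustration, not the authors’ code) of barycenter-based post-processing for a single sensitive attribute: each score is mapped through its group’s empirical quantiles onto the one-dimensional Wasserstein barycenter, i.e. the weighted average of group quantile functions; the sequential procedure of the paper applies such a map attribute by attribute. Function names and the quantile grid are illustrative choices.

```python
import numpy as np

def fair_scores(scores, groups):
    """Map raw scores, group by group, onto the 1-D Wasserstein barycenter
    of the group-wise score distributions (strong demographic parity)."""
    scores = np.asarray(scores, dtype=float)
    groups = np.asarray(groups)
    labels, counts = np.unique(groups, return_counts=True)
    weights = counts / counts.sum()              # group proportions p_s
    grid = np.linspace(0, 1, 512)                # common quantile grid
    # barycenter quantile function = weighted average of group quantile functions
    bary_q = sum(w * np.quantile(scores[groups == g], grid)
                 for g, w in zip(labels, weights))
    out = np.empty_like(scores)
    for g in labels:
        s_g = scores[groups == g]
        # empirical CDF value (rank) of each score within its own group
        ranks = np.searchsorted(np.sort(s_g), s_g, side="right") / len(s_g)
        out[groups == g] = np.interp(ranks, grid, bary_q)
    return out

# toy usage: two groups with shifted score distributions
rng = np.random.default_rng(0)
raw = np.r_[rng.normal(0.4, 0.1, 1000), rng.normal(0.6, 0.1, 1000)]
grp = np.r_[np.zeros(1000), np.ones(1000)]
adjusted = fair_scores(raw, grp)
print(adjusted[grp == 0].mean(), adjusted[grp == 1].mean())  # approximately equal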

 

The Bullshit Society

(this article was initially written in French, as La société du bullshit)

In 1986, the philosopher Harry Frankfurt defined the concept of “bullshit”, giving it a very precise meaning, sometimes translated into French as “baratin”, for “smooth talk” or “flannel”. A bullshitter is not a liar (who is aware of deceiving); he is simply indifferent to the truth, which makes him almost more dangerous (“bullshit is a greater enemy of the truth than lies are”). Bullshitters have an opinion about anything and everything, so they talk a lot about things they know almost nothing about. The fascination of recent months with conversational agents (like GPT-3 or ChatGPT1), these algorithms born to chatter, can only give us pause.

ChatGPT or text completion

A Large Language Model (LLM) is a natural language processing model that uses a large amount of textual data to learn to predict the next words and sentences in a given text. These models are trained on large datasets, such as text corpora from various sources: books, newspaper articles, web pages. GPT-32 is one such model, containing 175 billion parameters (120 times more than GPT-2, launched a year earlier) distributed in a 96-layer neural network, with a contextual memory of 2048 tokens (or “lexemes”)3, and vectorizing into a 12,288-dimensional space (I refer to Wolfram (2023) for more technical details on how ChatGPT works). This gibberish4 means that when some words are typed, the algorithm first transposes them into a (vector) space that gives them context; we speak of word embedding. The best known of these algorithms is probably Word2vec5, by Mikolov (2013), which is based on the idea (stated in the 1950s) that words appearing in similar contexts have related meanings. We can then introduce a geometry into this latent space, which allows us to interpret the fact that the vector connecting man and king is parallel to the vector connecting woman and queen, which we sometimes write as king – man + woman = queen. Another famous example is Paris – France + Spain = Madrid; in other words, Paris is to France what Madrid is to Spain. This lexical arithmetic does not rest on an understanding of geopolitics that would tell the model that Madrid is indeed the capital of Spain, but simply on co-occurrences of words in the sentences used to train the algorithm. The quality of the texts used is therefore important, and this logic explains why the algorithm reproduces biases observed in the texts, and can appear racist, or sexist. Here again, a well-known example is surgeon – man + woman = nurse.
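As an illustration of this word arithmetic, here is a short, hedged sketch using pre-trained GloVe vectors through gensim (the specific model name is an illustrative choice, and of course not what GPT-3 uses internally); the top results typically include queen and madrid:

```python
# Analogy arithmetic on pre-trained word embeddings (toy illustration).
import gensim.downloader as api

vectors = api.load("glove-wiki-gigaword-100")   # downloads the vectors once (~130 MB)
# king - man + woman ≈ ?
print(vectors.most_similar(positive=["king", "woman"], negative=["man"], topn=3))
# paris - france + spain ≈ ?
print(vectors.most_similar(positive=["paris", "spain"], negative=["france"], topn=3))
```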

Second, this model is a generative text completion tool: when it sees a succession of words, it tries to predict the words that might follow, listing the most likely ones and then choosing. On GPT-3, the “temperature” parameter introduces randomness, so that the model does not necessarily always choose the most probable continuation but is allowed a little randomness (which could be associated with a form of poetry or creativity). It is important to remember, as Li (2023) explains, that from a philosophical as well as a mathematical point of view, it is fundamentally impossible for models trained to guess the next word to learn the meaning of language; their performance is only the result of memorizing statistics and correlations that do not reflect a causal model of the process generating the sequence. To illustrate, let us imagine an algorithm trained on a single text, the Bible. If I ask it to complete the sentence “In the beginning was…”, the logical continuation for the algorithm will be “the word”, and “Let there be…” will be followed by “light” (whereas it would probably be “rock” if the training corpus were my music playlist). Here, the amount of text used to train the algorithm is astronomical. For example, on GPT-3, I can ask it to continue the beginning of a sentence:
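To make the role of the “temperature” parameter concrete, here is a minimal sketch (my own toy illustration, not OpenAI’s implementation) of how scores over a tiny vocabulary are turned into a more or less random choice of next word:

```python
import numpy as np

def sample_next_token(logits, temperature=1.0, rng=np.random.default_rng()):
    """Turn next-token scores into a choice: greedy at temperature 0,
    increasingly random as the temperature grows."""
    logits = np.asarray(logits, dtype=float)
    if temperature == 0:                      # always pick the most probable token
        return int(np.argmax(logits))
    z = logits / temperature
    probs = np.exp(z - z.max())               # numerically stable softmax
    probs /= probs.sum()
    return int(rng.choice(len(probs), p=probs))

# toy vocabulary and scores after "In the beginning was ..."
vocab = ["the", "light", "rock", "word"]
logits = [1.0, 0.5, 0.2, 2.5]
print([vocab[sample_next_token(logits, temperature=1.0)] for _ in range(5)])  # varies
print(vocab[sample_next_token(logits, temperature=0)])                         # always "word"
```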

The answer depends, among other things, on the number of words requested. The sentences are coherent and grammatically correct, showing that GPT-3 has “formal linguistic competence”, as cognitive scientists would say. But it has no “functional linguistic competence”, i.e. no reasoning ability. In other words, GPT-3 behaves exactly like the operator locked in John Searle’s “Chinese room”, the thought experiment analyzed by Cole (2004). It can nevertheless perform calculations, for example compute an integral

or, when asking for more details

Indeed, \int_{0}^{\pi/2}\cos(x)\,dx=\big[\sin(x)\big]_0^{\pi/2}=\sin(\pi/2)-\sin(0)=1-0=1. One can ask it to explain a little more. And the results are then astonishing, with the formal calculation right, but the numerical one wrong
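Such a “formal calculation right, numbers wrong” answer is easy to check with a computer algebra system; here is a minimal sketch with SymPy (my own check, not something GPT-3 does):

```python
from sympy import symbols, cos, integrate, pi, N

x = symbols("x")
result = integrate(cos(x), (x, 0, pi / 2))   # symbolic: sin(pi/2) - sin(0)
print(result)      # 1
print(N(result))   # 1.00000000000000, so the numerical value matches the formal one
```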

But sometimes, it is… weird (or like grading a student who did not quite understand the course)

I can’t help displaying a few others, just to illustrate how badly it can go wrong

or again, a correct answer, but one that mentions “the area of a triangle”

or (with a multiplication gone wild)

We also have integration by parts
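For reference, here is my own worked example of the identity being tested (not GPT-3’s output): integration by parts states that \int u\,dv=uv-\int v\,du, so for instance \int_0^{\pi/2}x\cos(x)\,dx=\big[x\sin(x)\big]_0^{\pi/2}-\int_0^{\pi/2}\sin(x)\,dx=\frac{\pi}{2}-1.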

It is clear that the creative power of the algorithm is impressive

illustration by Sir John Tenniel, 1869 (Alice in Wonderland)

I can try to add some information, such as the fact that I know mathematics (as a colleague said, “if you ask it to solve problems at your level, ask it to put itself at your level”)

Here again, we can get crazy things

One gets the impression that GPT-3 has made Lewis Carroll’s phrase6 its own: “But then,” says Alice, “if the world makes absolutely no sense, what’s to stop us inventing one?” We can also ask it for more… “personal” information, for example

then, by asking the question again,

We can ask the question again indefinitely, and have fun with the answers… but we have to admit that this is typically a bullshitter’s answer, because it is difficult to imagine that an algorithm has a favorite city. One could almost say that it is creative, even if it merely searched, in the whole corpus used to train it, for a logical answer, or rather a plausible sequence of terms to follow up on my request. One could almost stop here, and think back to all those books written about technology and the robotization of society, which claimed that robots would do the manual work, and humans the creative work. It is 2023, and while GPT-3 is inventing favorite cities (it might even answer me in alexandrines), I still have to empty the dishwasher and vacuum.

In the introduction, we mentioned that GPT-3 has no awareness of what the truth might be; it just makes conversation with us. We can tell it that it is wrong (even when it isn’t), and it will obediently agree

GPT-3 is thus an authentic producer of bullshit, as Frankfurt (1986) asserted. Worse, it can make statements that leave no room for doubt. For example

But would we prefer to read, at the end of the sentence, “but I have doubts”, which would almost complete the process of making it human7? This ambivalence is reinforced by a surprising feature of GPT-3 (and of ChatGPT as well): the words reveal themselves one after the other8.

“When I write, do I really know what I want to write? Doesn’t the text reveal itself to itself as it formulates itself?” claims Jacques Derrida in a lecture at Cornell in Binet’s (2015) novel. This is what we see when we ask GPT-3 a question, reinforcing the belief that behind the algorithm there is almost a human9.

The Turkish automaton and the parrots

We all know the Mechanical Turk, or chess-playing automaton: a supposed life-size automaton, wearing a turban, that appeared able to play chess at the end of the 18th century, and whose legendary story is told in Standage (2003).

illustration, drawing by W. de Kempelen and print by J. G. Pintz, 1783

This story is almost the opposite of ChatGPT’s: here a man was pretending to be a machine, whereas with conversational algorithms, it is algorithms that have almost convinced us that they think like people, that they can write original works. But if we open the machine, we will find no one. Yet we would like to believe it, because we are all very good at personifying anything that seems to act like a human. As Mahowald & Ivanova (2022) wrote, because of a “persistent tendency to associate fluent expression with fluent thought, it is natural – but potentially misleading – to think that if an AI model can express itself fluently, that means it thinks and feels just like humans do”. It is worth remembering that Joseph Weizenbaum, who developed ELIZA in the 1960s, one of the first conversational algorithms, wrote a book to warn people against anthropomorphizing computer programs, reminding them that his tool had no understanding of the world, but just relied on a few repetitive tricks based on keywords identified by the researchers.
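As a reminder of how modest those tricks were, here is a toy, hedged reconstruction in the spirit of ELIZA (not Weizenbaum’s original script, which also reflected pronouns and had a much richer rule set):

```python
import re

# Keyword rules: match a pattern in the user's sentence, reflect it back.
RULES = [
    (re.compile(r"\bI need (.*)", re.I), "Why do you need {0}?"),
    (re.compile(r"\bI am (.*)", re.I),   "How long have you been {0}?"),
    (re.compile(r"\bmy (\w+)", re.I),    "Tell me more about your {0}."),
]

def eliza_reply(sentence):
    for pattern, template in RULES:
        match = pattern.search(sentence)
        if match:
            return template.format(*match.groups())
    return "Please go on."   # default reply when no keyword matches

print(eliza_reply("I am worried about my thesis"))
# -> "How long have you been worried about my thesis?"
```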

ChatGPT has no understanding and no intention; it just answers us, because it has been programmed to do so. “We never talk to say nothing,” says Viktorovitch (2021). But ChatGPT talks to us while it literally has nothing to say to us; it is a simple random generator of words, or, as Bender et al. (2021) put it, a “stochastic parrot”. The paper caused quite a stir at the time: nine days before the paper was accepted for the conference, and published, Timnit Gebru, co-author of the paper, was fired from her position as co-director of the Google ethics team after refusing to remove her name from it (Margaret Mitchell, also an author, was fired two months later). This context unfortunately overshadowed the significance of the paper, which warned about the dangers of large language models (the full title is “On the Dangers of Stochastic Parrots: Can Language Models Be Too Big?”). And to continue the avian metaphor, we can recall the anecdote told by the physicist Richard Feynman in 1966, about a childhood conversation with his father: “See that bird? It’s a brown-throated thrush, but in Germany it’s called a halsenflugel, and in Chinese they call it a chung ling and even if you know all those names for it, you still know nothing about the bird – you only know something about people; what they call that bird. Now that thrush sings, and teaches its young to fly, and flies so many miles away during the summer across the country, and nobody knows how it finds its way,” and so forth. There is a difference between the name of the thing and what goes on. In other words, there is a profound difference between saying and naming things, and actually understanding them. And the exponential explosion of the complexity and size of these algorithms will not change anything10. Even if GPT-4 contains a thousand times more parameters, the algorithm will not be “a thousand times smarter”; it will just be a better mirror of our intelligence, and of our stupidity, perhaps only in an even more convincing way.

Bullshit, churn and risk

As Klein (2023) wrote, with GPT, “the cost of producing ‘bullshit’ has gone to zero”. These algorithms are bullshitters, and this is not a flaw that subsequent versions will correct; it is a characteristic of these models. In 2020, as quoted by Marcus & Davis (2020), the researcher Douglas Summers-Stay stated: “GPT is odd because it doesn’t ‘care’ about getting the right answer to a question you put to it. It’s more like an improv actor who is totally dedicated to their craft, never breaks character, and has never left home but only read about the world in books. Like such an actor, when it doesn’t know something, it will just fake it. You wouldn’t trust an improv actor playing a doctor to give you medical advice.” It is important to keep this characteristic in mind when imagining the possible uses of these discussion generators. And OpenAI’s competitors are responding, with Google having launched Google Bard a few weeks ago, this multiplication of the offer suggesting that there is a massive demand for this kind of tool.

The “reasoning” powers of the model are at best superficial, but more likely non-existent. As Mielczareck (2021) argues, “if bullshitters are ever more numerous, it is also because they respond to the injunctions of their time.” Again, it is important to keep in mind Frankfurt’s (2009) assertion that “bullshit is a greater enemy of the truth than lies are”: bullshitters are far more dangerous than liars. The bullshitter is the one who is caught with his hand in the jam jar, looks at you with a smile, and tells you that it isn’t him, while continuing to stuff himself. And who could tweet “whatever” or “who cares”.

As we have seen, blindly trusting GPT-3 and blindly following it in its hallucinations11 can only be done by putting one’s critical sense on hold. So what is ChatGPT for? It is important to keep in mind that the algorithm is developed by OpenAI, a company12 whose official purpose is “to ensure that artificial general intelligence benefits all of humanity”, but which also intends to take advantage of this media hype to make money. Etymologically, the word “bullshit” was used in thieves’ slang to refer to the empty wallet that replaces the full one that has just been stolen, so that the victim does not notice. At the beginning of the 20th century, the word was used to describe the bluff of a businessman, a courtier or a sycophant.

The irony is that ChatGPT is probably close to this concept: the seductive vision of “artificial intelligence” is, for many, the promise that a sufficiently powerful technology will be able to replace human workers or, more importantly, to make them more precarious and undermine them. For there is a political message in this technology. The constant injunction that these tools are here to stay, and that we just have to learn to live with them, is frightening. As we have seen, it is important to distinguish what would be a temporary flaw from a profound characteristic, because these kinds of tools will never be more than “stochastic parrots”. And in order to seriously quantify the risks, one should imagine the possible uses of this tool. For example, if one wishes to use a large language model to answer health questions, the accuracy and reliability of the answers are probably essential. Note also the alert issued by researchers at cp<r>, who had spotted that some recent cyberattacks had been made possible by GPT-3.

In June 2021, the GitHub platform launched GitHub Copilot, to generate code: for example, “write a python code that, from a sentence, will return the same sentence by putting all the words in reverse order”, or other more complicated (and more useful) code. To train such a code generation model, the algorithm learned from millions of programs found online on dedicated sites. And very often, the algorithm copies code found elsewhere, looking more like a search engine than a code generator. This is actually more efficient, because code invented from scratch has little chance of actually working when run. And unfortunately, several pieces of code returned by the algorithm were protected by copyright, and several lawsuits are pending. Wikipedia constantly cites its sources, as Charpentier (2018) mentioned, but not GPT-3, which simply plagiarizes its training data. Should we admire this, and assert, as Pablo Picasso13 (or Banksy) would have said, that “bad artists imitate; great artists steal”? Gary Marcus spoke of “pastiche” to describe ChatGPT. In its original sense, a pastiche is a forgery, and one can indeed question what this algorithm is really doing.
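For the record, the kind of code the quoted prompt asks for fits in a couple of lines; this is my own illustration, not actual Copilot output:

```python
def reverse_words(sentence: str) -> str:
    """Return the sentence with its words in reverse order."""
    return " ".join(reversed(sentence.split()))

print(reverse_words("the great artists steal"))   # "steal artists great the"
```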

As mentioned earlier, one advance between GPT-3 and ChatGPT may be an intermediate layer for receiving human instructions14 to improve the algorithm, achieved by RLHF, Reinforcement Learning from Human Feedback. Perrigo (2023) recounts how OpenAI went through the company Sama, which employed staff in Kenya to analyze samples of text identified as potentially “sensitive” (about murder, suicide, torture, incest, etc.) in order to flag this content to the algorithm, so that it would avoid producing similar, hateful or pornographic texts. The psychological impact on these workers (paid less than $2 per hour) was such that the contract between Sama and OpenAI was broken. The irony is that, in the end, these language-generating algorithms are not a Turkish automaton; they are not human, and to make them look “harmless”, their creators had to dehumanize real humans.

References

Anderson, C. (2008). The end of theory: The data deluge makes the scientific method obsolete. Wired magazine, 16(7), 16-07.

Bender, E., Gebru, T., McMillan-Major, A. & Shmitchell, S. (2021). On the Dangers of Stochastic Parrots: Can Language Models Be Too Big? FAccT ’21: Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency, March 2021, p. 610–623.

Binet, L. (2015). La septième fonction du langage. Grasset.

Charpentier, A. (2018) Fake news, Wikipedia et Blockchain : Vérité et Consensus. Risques.

Chomsky, N. (2002). On nature and language. Cambridge University Press.

Cole, D. (2004), “The Chinese Room Argument”, The Stanford Encyclopedia of Philosophy

Frankfurt, H. G. (2009). On bullshit. Princeton University Press.

Grice, H. P. (1975). Logic and conversation. In Speech acts (pp. 41-58). Brill.

Klein, E. (2023). A Skeptical Take on the A.I. Revolution: The A.I. expert Gary Marcus asks: What if ChatGPT isn’t as intelligent as it seems? New York Times, January 6.

Li, K. (2023). Do Large Language Models learn world models or just surface statistics?. The Gradient,

Mahowald, K. & Ivanova, A. (2022). Google’s powerful AI spotlights a human cognitive glitch: Mistaking fluent speech for fluent thought. The Conversation, June 24.

Marantz, A. (2023). “It’s Not Possible for Me to Feel or Be Creepy”: An Interview with ChatGPT. The New Yorker.

Marcus, G., & Davis, E. (2020). GPT-3, Bloviator: OpenAI’s language generator has no idea what it’s talking about. Technology Review.

Marcus, G. (2022) How come GPT can seem so brilliant one minute and so breathtakingly dumb the next?  garymarcus.substack.com

McCulloch, B. (2023) The risks of Large Language Models (such as ChatGPT). Vux.

Mielczareck, É. (2021). Anti Bullshit: Post-vérité, nudge, storytelling: quand les mots n’ont plus de sens (et comment y remédier). Editions Eyrolles

Mikolov, T. (2013). Efficient Estimation of Word Representations in Vector Space. arXiv.

Perrigo, B. (2023). Exclusive: OpenAI Used Kenyan Workers on Less Than $2 Per Hour to Make ChatGPT Less Toxic. Time, January 18, 2023.

Standage, T. (2003). The Mechanical Turk: The True Story of the Chess Playing Machine That Fooled the World. Penguin Books.

Viktorovitch, V. (2021). Le pouvoir rhétorique. Seuil.

Weizenbaum, J. (1966). ELIZA — a computer program for the study of natural language communication between man and machine. Communications of the ACM, 9(1), 36-45.

Weizenbaum, J. (1976). Computer Power and Human Reason: From Judgment To Calculation, translated as Puissance de l’Ordinateur et Raison de l’Homme – du Jugement au Calcul, 1981, Éditions d’informatique.

Wolfram, S. (2023). What Is ChatGPT Doing … and Why Does It Work?

1. ChatGPT (Generative Pre-trained Transformer) is based on GPT-3 (Generative Pre-trained Transformer 3) from OpenAI, released in 2020.
2. More precisely text-davinci-003, with a “temperature” of 1 (in order to increase the randomness and get new answers when asking the same question).
3. We won’t come back to this aspect, but this contextual memory allows the algorithm to attach a specific word to a pronoun, provided that it has not been mentioned too far in the text.
4. GPT-3 is a marvel of engineering, due to its impressive scale, almost proving Anderson (2008) right when he evoked the end of theory, the victory of engineers over fundamental researchers.
5. We can mention BERT or Transformer, developed a few years later.
6. Unfortunately this alleged quote does not seem to appear anywhere in Lewis Carroll’s work. After all, as GPT-3 would say, if the quotation absolutely does not exist, what’s stopping us from inventing it?
7. Can we expect to read anything other than “I have doubts”, the algorithm having no awareness of what the truth might be?
8. It is also surprising that, given that the words appear one after the other so slowly, there are not more errors, especially when a calculation is made. If the answer 2 is given for the integral calculation, and then justified, it agrees that the final answer does indeed correspond to the first answer given.
9. However, when asked, the algorithm always answers that it is not human. This avoids reliving one of the first scenes of the television dystopia Westworld, where a man asking an android “Are you real?” is answered “If you can’t tell, does it matter?”

10. Even if, for many computer science researchers, “there’s no data like more data”, in other words “nothing beats more data” if the model is not good enough (thus embracing Anderson’s (2008) assertion).
11. Hallucinations are a psychic phenomenon in which a waking subject experiences perceptions or sensations without any external object giving rise to them. This disconnection from reality is reminiscent of the behavior of this algorithm.
12. Many researchers have pointed out the irony of the name for a company that is not very open and transparent, much more eager to offer a platform to “democratize” the tool than to allow trusted third parties (academics) to better understand how the tool works.
13. In the Bristol museum there is a stone engraved with the Pablo Picasso quote “Bad artists imitate, great artists steal”, where Banksy has crossed out Picasso’s name to put his own.
14. This previous model is InstructGPT, “a more truthful and less toxic GPT-3.”