One of the goals of my report is to draw attention to the motives and issues that led Andrei Kolmogorov to found our field.
In 1945, in a letter to Luzin, Kolmogorov announced the beginning of "research in the field of logical foundations of mathematics"; in this I see the germs of a very large new movement, growing out of the results of Turing and Church. It was his return to the foundational questions he had taken up in the 1920s-1930s with his works on intuitionism and the logic of problems.
In 1947, Kolmogorov used his own money to buy ten mechanical «Felix» calculators for his faculty at Moscow State University, for the practical work of students in their mathematical studies.
At the end of the 1940s, abstract mathematical constructions began to be implemented in the form of universal computers in the USA, the USSR, and Great Britain.
In mathematics, Kolmogorov's interest in the emerging digital era was expressed in a transition from the general theory of computability at his seminar on recursion in the early 1950s to attempts to clarify and specify the concept of an algorithm, and not just of a computable function: the general definition of algorithm in the Encyclopedia and the Kolmogorov-Uspensky graph algorithms (1953). In the 1950s his works and talks on information theory showed a drift from continuous representations of its concepts to discrete and finite ones. At the Moscow Mathematical Society his talk of November 27, 1951 was entitled "Algebra of two-bit functions of two-bit variables and its application to the theory of relay-contact circuits", and that of December 8, 1953: "Concepts of 'information' in mathematical statistics and in the Shannon theory of information transmission".
The 1960s brought a series of public speeches about cybernetics, computing, and artificial intelligence; the formulation of the multiplication complexity problem, which resulted in the Karatsuba algorithm; and the work with Barzdin on realizing networks in 3D space.
His report at the International Congress of Mathematicians in Nice in 1970 begins with the section "On the increasing role of finite mathematics".
And here I want to share my impressions of Kolmogorov's texts and of my conversations with him in the last years of his life. Kolmogorov was interested in specific "finitenesses", possibly very large ones.
In his speeches of the 1960s Kolmogorov talked about different ranks of numbers: small numbers, for which a person can sort through all the objects within the limit; medium numbers, for which any individual object is still accessible to a person; large numbers, for which a person can no longer reach a specific object, only its elements. He also mentions super-large objects, for which there is «no need».
The desire to distinguish finite objects by their complexity, that is, their degree of randomness, led Kolmogorov to his 1963 paper in Sankhya. In this work he expressed dissatisfaction with the gap between the lower and upper bounds. Twenty years later, when the paper was republished, he repeated that the problem of closing the gap was still open. The gap was closed forty years later by Muchnik and the author.
Sixty years ago Kolmogorov introduced the main definition of the complexity of an object. It happened alongside his experiments on predicting the continuation of real Russian texts, a "human ChatGPT" (following C. Shannon's 1950 experiments with English). In 1970 the famous survey prepared by his students Levin and Zvonkin was published. Later Zvonkin and I began research and practical work with children on learning with objects as small as Kolmogorov's small numbers. Péter Gács, who had obtained his results on common information (with János Körner), joined Kolmogorov's group in Moscow.
In 1972 Levin built his theory of universal search problems (NP-completeness). (That year I was graduating from Moscow State University.) Could consideration of today's practical search problems have led Kolmogorov to questions about the complexity of solving these problems on Kolmogorov-simple inputs, and about complexity relative to a Big Data oracle?
The participants of the conference have probably presented many times, to various audiences, Kolmogorov's definition of complexity, ingenious in its simplicity, and his theorem on the existence of an optimal way to describe finite objects. At the same time, we usually say, apologetically, that the additive constant in the theorem is inevitable, due to a trivial consideration: any given finite object can be hidden inside a specific description method, thereby reducing the complexity of that object to (almost) zero.
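In standard notation (the formulas below are a routine paraphrase, not a quotation), the theorem and the trivial consideration read as follows:
\[
  C_F(x) \;=\; \min\{\, |p| : F(p) = x \,\}, \qquad
  C_U(x) \;\le\; C_F(x) + c_F \quad \text{for every description method } F \text{ and all } x,
\]
where $U$ is an optimal (universal) description method and the constant $c_F$ does not depend on $x$. The hiding trick: for a fixed object $x_0$ take the description method $F_{x_0}$ with $F_{x_0}(\varepsilon) = x_0$ (here $\varepsilon$ is the empty string); then $C_{F_{x_0}}(x_0) = 0$, while the constant $c_{F_{x_0}}$ must be at least about $C_U(x_0)$.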
At the same time, Kolmogorov himself, speaking about real-life texts, expressed confidence that their complexity could be measured to within hundreds, and possibly dozens, of bits: "One should, however, think that the various 'reasonable' options presented here will lead to estimates of 'complexity' that differ by hundreds rather than tens of thousands of bits. Therefore, such quantities as the 'complexity' of the text of the novel 'War and Peace' can be considered definite with practical unambiguity." (from "Three approaches...")
Levin expressed the same belief when designing his algorithmic language: "It seems plausible that the constants that arise when comparing the [complexity] functions below with functions defined in other natural ways will not exceed several hundred bits. The given design allows for several variations and, possibly, some improvements not noted here. We think that in one of these variants, the size of the constant does not exceed even a few dozen bits."
One can try to resolve the emerging substantive contradiction by talking not about arbitrary methods of description, but about some "reasonable", or "natural", ones.
It seems, however, that these approaches will not solve the specified problem. It is unclear why a description method in which the real object is "embedded" should be considered unnatural; algorithms with such "embeddedness" have become especially popular in the era of machine learning and ChatGPT. On the other hand, for real objects we get an arbitrary spread in the constant: the inaccuracy turns out to be comparable to the size of the object.
We believe that a fairly obvious modification of the original definition makes it possible to improve the situation, or at least to make the statements of Kolmogorov and Levin formally correct. Namely, as the measure of the complexity of an object with respect to a given description method one can take the sum of the usual complexity and the length of the description method (encoded in binary characters). We can choose a man-made language for specifying computable functions, for example one of the languages proposed by Levin, Chaitin, Tromp, etc., the language of Turing machines, the Game of Life, and so on. The description of a universal function can be chosen to be very short, for example several hundred binary characters. By modifying the language one can, of course, reduce the universal algorithm to a few bits.
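In symbols (our own notation for the proposal just described), the modified measure is
\[
  \widetilde{C}_F(x) \;=\; C_F(x) + \ell(F), \qquad
  \widetilde{C}(x) \;=\; \min_F \bigl( C_F(x) + \ell(F) \bigr),
\]
where $\ell(F)$ is the length of the binary encoding of the description method $F$ in the chosen language. Hiding a particular object $x_0$ inside $F$ no longer helps: it makes $C_F(x_0)$ close to zero, but increases $\ell(F)$ by roughly the same amount.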
Another term in the additive constant arises from the need to encode pairs (in the absence of prefix coding). This term can be taken to be, for example, the doubled logarithm of the length of the simulated algorithm. It is clear that for the description methods we are interested in this term will not exceed a few bits; for description methods of size comparable to the size of the Universe, we get dozens of bits.
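For concreteness, here is one standard encoding of such a pair (an illustration of ours, not necessarily the exact scheme intended): write the length $\ell(F)$ in binary, double every bit of this binary record, append the separator $01$, and then append $F$ and $p$ themselves:
\[
  \langle F, p\rangle \;=\; b_1 b_1\, b_2 b_2 \cdots b_k b_k \, 01 \, F\, p,
  \qquad b_1 b_2 \cdots b_k \text{ being the binary record of } \ell(F).
\]
The decoder reads bits in pairs until it meets $01$, recovers $\ell(F)$, and then splits the rest into $F$ and $p$. The overhead beyond $|F| + |p|$ is $2\lceil \log_2 \ell(F)\rceil + 2$ bits, i.e. roughly the doubled logarithm of the length of the simulated description method.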
So, in our presentations of Kolmogorov complexity for "real-life situations" we can bound the constant by 200. The Kolmogorov-Levin idea can be implemented: it is enough to restrict ourselves to "real" methods of description instead of appealing to the "naturalness" of description methods. The complexity becomes absolute to within tens of bits.
Kolmogorov's considerations of large numbers and of complexity lead us to another idea. Researchers have long been concerned about the very reality of large mathematical objects, in particular numbers. In 1935 Bernays questioned the possibility of writing the number 67**(257**729) in Arabic (decimal) notation. Petr Vopěnka talked about different horizons of our perception. Alexander Yesenin-Volpin doubted the possibility of reaching large powers of two by adding one at a time. (Cf. Kolmogorov's super-large objects.) It seems more natural to doubt the existence of numbers for which there is no description that fits in the Universe. In fact, it looks even more natural to measure the existence, or "realness", of abstract objects (such as numbers) by their complexity, and to consider an object physically unreal if its complexity is large enough.
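A rough back-of-the-envelope estimate (ours, not Bernays') shows the scale of his example:
\[
  \log_{10} 67^{\,257^{729}} \;=\; 257^{729}\,\log_{10} 67 \;\approx\; 1.83\cdot 257^{729},
  \qquad
  \log_{10} 257^{729} \;=\; 729\,\log_{10} 257 \;\approx\; 1757 ,
\]
so the decimal record of Bernays' number has about $1.83\cdot 257^{729}$ digits, and this digit count itself has about 1757 decimal digits; the commonly cited $10^{80}$ particles of the observable Universe are negligible by comparison. At the same time the Kolmogorov complexity of the number is tiny: the expression 67**(257**729) describes it completely in 14 characters.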
And one last point. Mathematics cannot prove its own consistency. But is it possible to prove that there is no proof of a contradiction whose size is bounded by the Universe? Steps in this direction were taken, for example, by Parikh, Dragalin, Sazonov, and Pudlák. However, even here, by "mathematical inertia", the results were primarily asymptotic.
It seems the time has come to take finiteness more seriously, as Kolmogorov began to do.
Let me return to the story and its continuation today. In 1980, Kolmogorov became the head of the Department of Mathematical Logic and Theory of Algorithms and invited me to help him with a seminar on complexity. His main interest was in finite objects and the degree of their randomness. That year Vereshchagin, Muchnik and Shen graduated from our department. The wonderful mathematician Andrei Muchnik left us in 2007; Vereshchagin and Shen continue the seminar with their students, and I will mention some of their recent results. Onoprienko joined our department a couple of years ago; she continues to work on Kolmogorov's logic of problems (Medvedev, Levin, Melikhov, Vereshchagin).