Our ability to construct new knowledge collectively, to innovate when confronted with multiple stakes, and to develop artificial-intelligence mechanisms depends intrinsically on our capacity to confront our models with “facts” at any stage of the development process, and from multiple points of view. This confrontation is the acid test. It is fundamentally statistical, since “facts” always come in the form of finite series of “events” indexed in a space-time referential, while our models tend to abstract our conceptualizations away from any particular realization.
Until now, two main obstacles have prevented, both scientifically and pragmatically, any major progress in the mastering of these geneses:
- There is no generally accepted physical theory of probability, as Kolmogorov himself put it. Even though probabilities are pervasive in our practices, they are not clearly distinguished from statistics. They denote some incomplete knowledge without providing any general indication of what they qualify or of what this incompleteness consists in. This lack of scientificity hampers our capacity to develop virtual environments, especially when certification processes are at stake and require agreements on responsibilities and relevant criteria.
- There is no general scientific framework for mastering the genesis of any model of scientific character, that is, a method that would fix the conditions to be respected at design time for a model to be granted a “refutable” character, making it possible to turn that model into a reference at any stage of the development process, until evidence to the contrary.
These major flaws show through multiple symptoms, among which we would stress:
- Our incapacity to bridge the conceptual gap between quantum physics and classical physics,
- Our incapacity to give a general physical and measurable character to such concepts as entropy and complexity,
- The qualitative and fuzzy character of such key domains as “Systemic” or “System Engineering”, which claim a scientific character while being unable to define their particular standing among other disciplines such as physics, computing, and the human sciences, and which eventually boil down to a “state of mind” whose efficiency has not been demonstrated, despite massive investments.
We are convinced that overcoming these shortcomings requires constructive and systematically relativized methods that meet Schrödinger’s expectations: explicitly introducing human geneses into the formalism of concepts, without losing scientificity.
We regard the Method of Relativized Conceptualization (MRC) of Prof. M. Mugur-Schächter as the appropriate starting point. MRC emerges from a long meditation on the very specific cognitive situation entailed by quantum physics. It expresses in a logical way the paradigms that its author pinpointed beneath the mathematical formalism. It results in a relativized method of knowledge building which eliminates, by construction, the false absolutes and paradoxes surreptitiously introduced by our spontaneously absolutizing way of thinking.
On this basis, H. Boulouet has undertaken the construction of a formalized Relativized Systemic (RS), which consists in a very specific algebra. It makes it possible to construct, on a factual/conceptual basis, such hitherto physically undefined notions as “persistence”, “continuity”, “state”, “evolution”, and “system”. This is achieved through adequate combinations of MRC “basic or transferred descriptions”, formalized in an ad hoc representation space, defined according to the Category Theory formalism and endowed with two quite specific kinds of relativized operators, “+” and “*”.
RS applies to knowledge building as well as to anticipation processes (forecasting, innovation, hypotheses, …). It leads to a physical definition of the concept of “probability law” that makes explicit the inference mechanisms between our concepts of general application and the flow of events, as we capture, describe, or imagine them. It endows the Turing Machine with a clear physical meaning and thereby turns this purely mathematical concept into a key tool for qualifying controllability. It also carries along a relative definition of “entropy” and of “complexity” of general application, which can be computed out of RS models and factually estimated.
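To make the idea of a probability law and an entropy “factually estimated” from a finite series of events concrete, the following generic sketch may help. It is our illustration only, not part of the RS formalism: the event labels, the frequency estimator, and the use of Shannon entropy are assumptions chosen for simplicity.

```python
from collections import Counter
from math import log2

def empirical_law(events):
    """Estimate a probability law as relative frequencies
    over a finite series of captured event labels."""
    counts = Counter(events)
    n = len(events)
    return {e: c / n for e, c in counts.items()}

def shannon_entropy(law):
    """Shannon entropy (in bits) of a probability law
    given as a dict mapping event labels to probabilities."""
    return -sum(p * log2(p) for p in law.values() if p > 0)

# A finite series of "events", as they might be captured
# and labeled in some space-time referential.
trace = ["a", "a", "b", "a", "b", "c", "a", "a"]
law = empirical_law(trace)      # {'a': 0.625, 'b': 0.25, 'c': 0.125}
h = shannon_entropy(law)        # about 1.30 bits
```

Such an estimate is always relative to the chosen labeling of events and to the finite series at hand, which is precisely the kind of relativization the text insists on.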
RS is fundamentally a constructivist method in which any existing entity comes as a concept built from scattered traces in space and time. As such, RS may potentially foster a new and unifying approach to Artificial Intelligence algorithms. Relativized System Engineering (RSE), a development of RS, puts forth in particular the different types of parameters that may be adjusted, manually or automatically, to trigger different learning modes, possibly run in parallel, depending on the stakes.
Such a conceptual construction, like any scientific construct with pragmatic ambitions, had to be assessed against the situations which initially fostered its construction. As a small five-person team (with V. Brindejonc, Eric Campo, B. Massy-de-la-Chesneraye, and F. Fleuchey), we prototyped and successfully experimented, in the automotive industry between 2005 and 2010, with two first conceptual and tooled applications: Relativized System Engineering (RSE) and Relativized Information Management (RIM) methods applied to Product Lifecycle Management. RIM was initiated by F. Fleuchey in 2004 on his own basis, drawing on neuroscience, before being brought within the RS framework in 2009. The RSE project on automated testing at integration time, VETESS, was labeled by the French competitiveness cluster “Vehicle of the Future” and received public funding in 2008 (one million euros). But the gap with existing standards and with common sense, together with the established interests involved, prevented us from going any further.
We nevertheless continue to think that all this, following MRC, constitutes a major breakthrough, a real upheaval in our way of thinking “Reality”, with potentially tremendous impact. And because we are convinced that it will bring a decisive competitive edge to those who endorse these principles, we cannot help thinking that they will emerge again somewhere, sometime, possibly from other people we do not even know, if we are not given the opportunity to develop our activities further.
We remain, for our part, on the lookout for any opportunity, wherever it originates, that would make it possible to pursue this operational and scientific construction.