Cyberia, as conceptualized by Escobar and lived in by all of us, is a magical place. It is the place where I met Pierre Lévy anew (sic!) three times.
“Hello Teodora, we follow each other on Twitter, I am happy to see you there!” was the first time I met Pierre, in the KGC community (thanks, Ellie and team, for all this work and these connections). That was in January. Little did I know back then that, totally independently of this, in the quiet space of reading throughout my PhD research, I would meet Pierre again, this time through his writing.
And last, the third “anew” happened in the process of preparing what you are now reading, dear reader: I found out that Pierre had been thinking about our semiotic trails, and the way we make sense of them with a view to collective intelligence, long before (by at least a decade) I realized the connection between our private and public lives from the perspective of data.
It was in 1992, yes, several years before Tim Berners-Lee shared that he still had a dream for the Web to be “less of a TV and more of a sea of interactive knowledge” (ref. Links, Fractals and Information Plumbing), that Pierre published “Les arbres de connaissances” [the trees of knowledge], a system created with Michel Authier, composed of cartography software and the exchange of knowledge among communities, generating a virtual encyclopedia that changes constantly (ref. http://www.fronteiras.com/en/lecturers/pierre-levy).
Clearly, for all of us interested in the way meaning works its magic through social, algorithmic and cultural systems and layers of shared or colliding understandings, Pierre Lévy’s work is a must-visit planet.
Holding a Master’s degree in History of Science from Sorbonne University, a Ph.D. in Sociology from EHESS and a second Ph.D. in Communication, Sociology and Sciences of Information from the University of Grenoble, Pierre works at the intersection of philosophy, religion, science, business and technology. An associate professor at the University of Montreal and the CEO of INTLEKT Metadata Inc., Pierre is also the inventor of IEML (Information Economy MetaLanguage) and the author of 15 books translated into several languages, including (in English): Collective Intelligence (1994), Becoming Virtual (1995), Cyberculture (1997) and The Semantic Sphere (2011), as we find on his personal blog.
But… enough “frozen” concepts. Put on your cyberspace suit and let’s move on to the outer space of Lévy’s understanding of meaning, language and us, the virtual humans in dire need of a semantic coordinate system.
When did you first feel the paradigm shift, the yet-unseen fabric of a then-emerging cyberculture?
I can date my awareness back to 1979, when I was a student in history at the Sorbonne University in Paris. At the time, I was taking Michel Serres’ course (https://en.wikipedia.org/wiki/Michel_Serres) on the history of science and philosophy of communication. Serres recommended to his students an official report commissioned by the French government on “the computerization of society” (https://fr.wikipedia.org/wiki/Rapport_Nora-Minc).
The report foresaw a convergence of the telecommunications and computer industries. That same year I was taking a methodology course on the use of databases in historical research as well as a geography course where our teacher had talked to us about digital cartography.
All this made me realize that we were approaching the threshold of a new civilization. This is why my master’s thesis, the following year (1980), was entitled “Communication, teaching and knowledge in a computerized society”. I did it under the supervision of Michel Serres, and he encouraged me to continue in this direction.
What is your definition of culture? And if we talk about AI, what are we cultivating when taking care of our algorithmic fields (and their inevitably emerging marginalia)?
“Culture” refers to the interdependence between the symbolic, institutional and technical systems of a society. The media play an important role in shaping culture because they support our activity of symbolic manipulation.
Writing has made symbols durable, printing and electronic media have automated their transmission. AI, which is just the cutting edge of computing, differs from earlier media because it automates the transformation of symbols and accelerates the long-running historical trend toward externalization and socialization of our cognitive functions.
Humanists and social scientists should be much more involved in the development of AI and take responsibility for designing new avenues.
How is the hermeneutic circle changing (and is it?) with the advent of the Web and, further, with the emergence of the algorithmic medium?
If we consider the new medium from the angle of coding, it is digital. If we consider it from an operative perspective, it is algorithmic. There is a third aspect that is very important: the emergence of a global interconnected memory where every text is virtually linked to any other.
If the hermeneutic circle is the necessity to understand the work in order to interpret the text and to understand the text in order to interpret the work, then yes, the hermeneutic circle is certainly complicated by the advent of the new medium. The limits of the work, the limits of the corpus, the limits of the library are now a problem. What is the new context? How far does the circle extend? On the other hand, to overcome the new problems, we have new means of automatic analysis (still very rudimentary, despite the advertising discourse of AI) and new means of collaboration. This makes me hope for a revolution in the humanistic sciences comparable to that of the natural sciences in the 17th and 18th centuries after the triumph of printing.
How is the semantic sphere related to the semiosphere?
As early as the beginning of the 20th century, Lotman and several Russian theorists, but also Teilhard de Chardin, evoked a form of life more abstract than the organic biosphere – the life of symbolic meaning – a life that is mainly supported by the human species.
The semantic sphere I am talking about refers to the same object, with the difference that the semiosphere – or the noosphere – is conceived here as a scientifically knowable universe.
The necessary condition to explore this universe with the means of science is to adopt a calculable semantic coordinate system (like the geographic coordinate system) and to use the current computing power and the available data for something other than marketing and propaganda.
I conceive this coordinate system of the world of the mind as a language (capable of expressing and translating everything, like any language) with a mathematical grammar and a compact self-explanatory dictionary: it is IEML. By the way, we already use language as a semantic coordinate system for our memory. But because natural languages are not computable, we have to use a specially designed tool to make the semantic sphere scientifically knowable.
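To make the geographic analogy a bit more tangible for the technically inclined reader, here is a toy sketch of mine (not IEML, whose actual coordinate system is a full language with a mathematical grammar, not a list of numbers): once concepts have calculable coordinates, semantic distance becomes something a machine can compute.

```python
import math

# Invented "semantic coordinates" for a few concepts -- numbers made up
# purely for illustration; IEML's real coordinates are linguistic, not numeric.
coords = {
    "dog":  (0.9, 0.1, 0.3),
    "wolf": (0.8, 0.2, 0.4),
    "poem": (0.1, 0.9, 0.7),
}

def semantic_distance(a: str, b: str) -> float:
    """Euclidean distance between two concepts' coordinates."""
    return math.dist(coords[a], coords[b])

print(semantic_distance("dog", "wolf"))  # small: nearby meanings
print(semantic_distance("dog", "poem"))  # large: distant meanings
```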
You speak about semantic interoperability of formats vs. semantic interoperability of concepts. Can you please give an example or a metaphor?
Everybody agrees that RDF and other standards of the WWW consortium are about format, not about meaning. The meaning is not given by the format but by the content. In the case of the Semantic Web, the semantic content of a concept is given by two things: a reference to a URI and triples (or connections).
The reference to a URI is like the meaning of a proper noun. It is just a pointer to an object; it does not connect to all the other signifieds of the language, like the meaning of a common noun in a dictionary, which is recursively and circularly defined by other words. The triples, or connections, are limited to a single ontology. It looks a little like the semantic network inherent in a natural language, but the network of concepts of an ontology maps a small portion of the universe, is not part of a self-defining language and is not necessarily compatible with other ontologies (no semantic interoperability).
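To see this concretely, here is a minimal sketch (mine, not Pierre’s), using the rdflib Python library and a hypothetical example.org namespace, of how a Semantic Web concept is grounded: a URI that merely points, plus triples declared within one ontology.

```python
from rdflib import Graph, Namespace
from rdflib.namespace import RDFS

EX = Namespace("http://example.org/ontology#")  # hypothetical ontology namespace

g = Graph()
# "Dog is a subclass of Animal" -- meaningful only within this ontology's triples.
g.add((EX.Dog, RDFS.subClassOf, EX.Animal))

# EX.Dog is a proper-noun-like pointer: it does not connect to all the other
# signifieds of a language, only to whatever triples this ontology declares.
for s, p, o in g:
    print(s, p, o)
```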
The reason why the Semantic Web uses referential meaning (URIs) and local, rigid logical definitions (triples) instead of a real all-encompassing and supple language is well known: natural languages are not computable because they are not regular. By using a language that is as expressive as natural languages but regular – and therefore computable – we could solve the problem of semantic interoperability. We build knowledge graphs or ontologies in IEML (ultimately: triples), but every concept – category or relation – is grounded in a self-defining language instead of being grounded in URIs. Concepts of distinct ontologies are all ultimately grounded in the same IEML dictionary, which explains itself using the same grammar. This is not *against* the Semantic Web standards because, of course, we may use IEML inside RDF files. It is just that “semantics” in IEML knowledge graphs does not mean what it means in the (mainly referential) theory of meaning of the Semantic Web.
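As an illustration of that grounding idea, here is a deliberately naive sketch of the principle (not IEML’s real grammar or tooling): two independent ontologies whose concepts all resolve into one shared, self-defining dictionary are comparable by construction.

```python
# One shared dictionary in which every entry is defined by other entries --
# a crude stand-in for a self-explaining language.
SHARED_DICTIONARY = {
    "human":   {"animate", "social"},
    "machine": {"artifact", "process"},
}

ontology_a = {"person": "human"}       # ontology A's concept -> dictionary entry
ontology_b = {"utilisateur": "human"}  # ontology B's concept -> same entry

def interoperable(concept_a: str, concept_b: str) -> bool:
    """Two concepts are comparable because both are grounded in one dictionary."""
    return ontology_a[concept_a] == ontology_b[concept_b]

print(interoperable("person", "utilisateur"))  # -> True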
I don’t have a problem with a calculable model of language in its pragmatic function, yet I struggle to see language as calculable when it comes to its poetic function…
The purpose of IEML is techno-scientific and not primarily poetic. But let’s discuss its poetic dimension anyway. It is true that IEML tends to be univocal (which is not the case with natural languages). This could of course limit its poetic power. On the other hand, the formal and sensitive aspects of poetry could be greatly enhanced by IEML: expression by icons, animated images, sounds, music, etc. In addition, IEML texts can be transformed by algorithms much more easily than texts in natural languages. All this opens new possibilities for literature in the continuity of the Oulipo movement (https://en.wikipedia.org/wiki/Oulipo). You can control and create a lot of effects with a language that is understood by computers… Semiosis is the process of transformation, interaction and interpretation of signs. By *adding* to our semantic ecosystem a language with new properties, designed to exploit all the potentialities of the digital medium, we can only make semiosis more complex, not poorer.
In practical terms, could we say that with IEML we talk (declare) what we want to model – the world in our heads (and in the shared understanding of our community/tradition) – to make it shareable, reusable and ultimately semantically interoperable?
Yes, exactly. It is a tool for formal modelling that is as supple and expressive as natural languages, and at the same time shareable and interoperable. By the way, it will be manipulated with words in natural languages, icons, etc. An understanding of the (simple and regular) grammar is recommended, but you won’t have to learn new words to use IEML. Users will write and read it in their mother tongues.
What is the tacit knowledge lying in between levels of abstraction?
Well, tacit knowledge is inexhaustible and infinite in all directions and between all levels. This is nice because it means that the process of creating explicit knowledge from tacit knowledge will never end.
Pierre, in your article “For a paradigm shift in artificial intelligence”, in the paragraph “Semantics in AI” you talk about data and metadata, saying “In computer science, references or real individuals (the realities we are talking about) become data while general categories become headings, fields or metadata that are used to classify and find data.”
But isn’t one of the main propositions (and breakthroughs) of Semantic Web technologies that every piece of data is a first-class citizen?
The distinction between data and metadata is obviously relative. In the IEML grammar, there is a distinction between a “semantic category” (the metadata) and a “reference” (the data). But the reference may contain a semantic category and the semantic category is obviously also a data since it is manipulated by algorithms. Conceptual distinction does not mean hard separation.
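A small sketch of that relativity (my toy model, not IEML’s actual grammar): a “semantic category” classifies a “reference”, yet the category itself is just another data structure that algorithms manipulate.

```python
from dataclasses import dataclass, field

@dataclass
class Category:          # "metadata": a general category
    label: str
    parents: list["Category"] = field(default_factory=list)

@dataclass
class Reference:         # "data": the real individual we are talking about
    name: str
    category: Category   # classified by a category...

animal = Category("animal")
dog = Category("dog", parents=[animal])
rex = Reference("Rex", category=dog)

# ...but the category is itself data, manipulated like any other structure:
print([p.label for p in rex.category.parents])  # -> ['animal']
```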
Can the Open World Assumption satisfy the imperative of interpretative openness?
The open world assumption is not enough. Interpretative openness is above all a social and cultural issue. For me, any text (in the most general sense) may be the basis for unlimited, open-ended interpretations. In what corpus do you interpret this text? From what point of view? With what practical agenda, with what questions in mind? And so on. If we are able to make sense of the multiplicity of interpretations and compare them, maybe we will be less tempted to reduce their diversity.
Who’s Gonna Interview the Interviewer?
Pierre to Teodora: Why is the digital metamorphosis of text so interesting for you? (What desire do you fulfill in this intellectual pursuit?)
Thank you for this question. I never thought about intellectual pursuit as being driven by a desire. Until now I have framed it as a need to make the web a better text and to help people do more with words. Yet it is actually driven by my own desires:
- The desire to connect to the many meanings of one and the same thing and to the quantum essence of the state we exist in through words.
- The desire to dance with words in the endless process of sense-making, without being bound to understandings that see the world from only one perspective.
- The desire to connect to differences without wanting to make “e pluribus unum” [out of many, one], but rather “Ne varietatem timeamus” [let us not fear variety].
A Thought Experiment Instead of an Epilogue
And last, Pierre, a thought experiment instead of an epilogue. We are archeologists. We explore not Antiquity but Early Cyberage. The question is:
What myths is Early Cyberage led by and are there universal realia out (t)here?
There is clearly a myth of artificial intelligence: an autonomous machine gifted with super-human cognitive abilities. I do not share this myth. Machines are there to augment and feed human intelligence from the data produced by humans (and by the rest of the universe). My North Star points toward a human collective intelligence capable of reflecting itself in the mirror of cyberspace. Universal realia should be negotiated in this context.
Further reading to continue your internal dialogue.
Some of Pierre’s writings:
The rabbit hole is calling you (us :) – https://dev.intlekt.io/
Thanks for reading!