Amit Sheth, LexisNexis Ohio Eminent Scholar at Wright State University and executive director of Kno.e.sis, has been walking the talk of semantic web technologies for more than a decade.
On his exciting and challenging journey toward more meaning in man-machine collaboration (symbiosis, as you will often read him calling it), Dr. Sheth wears several hats at a time.
At Kno.e.sis, the Ohio Center of Excellence in Knowledge-Enabled Computing, home to the largest US academic research group in the Semantic Web area, Dr. Sheth and his team work on a wide range of technical advances in computer science to find semantic solutions for real-world problems in social networking, healthcare, life sciences, national defence, and manufacturing science.
Wearing his innovator hat, Prof. Sheth is constantly advancing along the steep road from computing that processes big data to computing that processes human experience in a meaningful way. The first to file and be granted a patent for “creating a semantic web and its applications in browsing, searching, profiling, personalization and advertising” in 2001, Sheth is currently working towards his vision of Computing for Human Experience.
A bright scholar himself (his work has been cited by 31,622 publications), Amit Sheth inspires and prepares bright scholars in turn. Working with his students is a part of his professional journey that is close to his heart and mind. As an educator, Dr. Sheth is intensely devoted to teaching and mentoring. At Wright State University he guides students through pioneering ideas in the world of semantic web applications.
Meaning is Web-like, So Are Relationships
I first met Dr. Sheth, and this strong sense of commitment to helping others on their semantic web journey, through a comment under a post of mine:

The Philosophy of Meaning, excerpt from Cool URIs don’t change
Back then, Dr. Sheth shared:
Computing has been all about translating a rather complex and messy real world into a well-structured (if possible, mathematically or statistically characterized), simplified computational representation. So the use of simpler data representations such as links, trees, hierarchies, Horn clauses/description logics, etc. was in vogue.
Recently, however, we are seeing a significant increase in our capability to model more of the real world's complexity, and we have the ability to meet demanding computing power needs. This has meant a move towards graphs, probabilistic representations, layered learning, and the ability to factor in many parameters/features/attributes. So there is reason to be optimistic about recognizing and computing meaning.
A year later, I am honored to have Dr. Sheth in a Dialogue about his work and his vision of semantic web technologies. Busy with 50 diplomas to review and the WWW2017 Semantics & Knowledge Track to chair with Ramanathan V. Guha (until recently, of Google), Dr. Sheth did find the time, space, and mind to share thoughts about the Semantic Web, man-machine symbiosis, and computing for better human experience.
The Semantics of the Cost of a Pizza and the State of Semantic Web Technologies Today
Dr. Sheth, what attracted you to semantic computing in the first place?
I have an interesting anecdote to share that is relevant to this question. In my first job at Honeywell's research division, I worked on DDTS, one of the three major heterogeneous distributed database management systems of the 1980s. Jim Larson, my mentor during that job, introduced me to the concept of federated databases; my most cited paper, on Federated Databases, was with Jim. Around that time, I got an opportunity to give the very first tutorial on heterogeneous distributed database integration at the 1987 International Conference on Data Engineering, so I started learning and talking about data(base) integration, and realized the limitations of the syntactic and representational approach and the need for semantics.
During a visit to Venice, I ended up paying 8,500 lira (which included a service charge and two taxes) compared to the advertised price of 3,000 lira, and that got me thinking about the semantics (meaning) of the cost of a pizza: the advertised menu price or the price the customer pays for the same item!
In 2001, you were awarded the first patent for a commercial semantic web application in browsing, searching, profiling, personalization, and advertising.
Today, more than a decade later, where do you see semantic web technologies and their application?
I address this point in my blog post 15 years of Semantic Search and Ontology-enabled Semantic Applications. Consider the three reference points: the first patent involving the semantic web and its applications (filed in 2000, well before the widely cited Scientific American article that advocated AI-based agents that can plan a trip for you, something that is still not possible), the associated keynote I gave, also in 2000, and related papers (Semantic Content on the Web, Semantic Enhancement Engine). These showed that:
(a) we can scalably create and maintain ontologies (what we also called a WorldModel, roughly the same as what many refer to as a knowledge graph today) in a largely automated manner by assimilating structured data and knowledge from multiple sources;
(b) we can use these ontologies/knowledge graphs in conjunction with lexical/linguistic techniques and machine learning for better information extraction, semantic annotation, and semantic enrichment (sketched below); and
(c) we can build rich semantic applications, including semantic search/browsing/personalization/advertisement, encompassing an associated dynamically built rich-media reference object (what some today call an information box, but with relevant information and knowledge from an ontology/knowledge graph and other related context).
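To make point (b) concrete, here is a minimal sketch of knowledge-based semantic annotation; the three-entry ontology fragment is a hypothetical stand-in for a real WorldModel/knowledge graph, and production systems layer lexical/linguistic analysis and ML on top of this kind of lookup:

```python
# A minimal, illustrative sketch of knowledge-based semantic annotation.
# The ontology fragment below is hypothetical, not the actual WorldModel.
from typing import Dict, List, Tuple

# Tiny stand-in for an ontology/knowledge graph: surface form -> class
ONTOLOGY: Dict[str, str] = {
    "aspirin": "Drug",
    "asthma": "Condition",
    "honeywell": "Company",
}

def annotate(text: str) -> List[Tuple[str, str]]:
    """Return (token, class) pairs for tokens found in the ontology."""
    tokens = text.lower().replace(",", " ").split()
    return [(t, ONTOLOGY[t]) for t in tokens if t in ONTOLOGY]

print(annotate("Aspirin is sometimes avoided for asthma patients"))
# -> [('aspirin', 'Drug'), ('asthma', 'Condition')]
```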
It would be interesting to compare the Semantic Web search engine from the year 2000 (Semantic search engine that reads your mind) with the semantic search that everyone started to pay attention to when Google rolled out its version in 2013. For me, this reaffirmed my earlier experience with federated databases: the most pioneering ideas and early implementations arise in academia or a startup (even if it was a commercial system with customers), and it takes 10 to 15 years for a big company to bring them to the market, when a large customer base is ready to appreciate and adopt them.
Your latest project “Semantic, Cognitive, and Perceptual Computing: Advances toward Computing for Human Experience” is about human-centered computing. What is it that you find most difficult in the development of this idea and its further real-world applications?
From the nontechnical perspective, the difficulty is that current scientific and technical work gives more attention to using AI and related technology to replace humans, and there is less effort and attention given to thinking about computing for human experience.
From the technical perspective, we are trying to build increasingly intelligent systems, where intelligence is defined in reference to the human brain. In spite of progress in cognitive science/systems, brain informatics/science, neuroscience, etc., understanding how the human brain manifests intelligence, and taking inspiration from it while differentiating between and then synthesizing the components we label as semantics/semiotics, cognition, and perception, is challenging; we are just taking early steps. As for the real-world applications, I do have a better intuition and handle: we are pursuing personalized digital health solutions for asthma (in children), dementia, and other conditions, involving physical (sensor/IoT), cyber, and social data (see the kHealth project), where we see clear applications of this line of research.
Teaching and Researching in the field of Semantic Web Technologies
As an educator you have many successful students, winning awards and building remarkable citation records. What do you start your classes with when you meet them for the first time?
While my students would typically take the classes I offer (Web Information Systems, Semantic Web, Web 3.0, including social and sensor data analysis), and I emphasize some core courses my colleagues offer (e.g., algorithms), this question does not have a simple answer.
A professor's approach would likely start with how we select our research (especially PhD) students; here is mine: What do professors actually care about in selecting research students? Then, each PhD student is still treated very differently, as they come with different preparation (BS vs. MS, programming skills, language/communication skills, social skills: all important components of what makes a student successful), and I pay attention to addressing their deficiencies, both technical (including software engineering) and nontechnical.
I also use their placement at top industry and national labs as a very critical component of their development, and use my network to ensure they get to work with good mentors during their internships (my PhD students do 3 or 4 internships). Our very rich, team-oriented research ecosystem, with high-impact projects using real-world data, robust prototypes and systems, exceptional computational and physical infrastructure, active collaboration with domain experts and end users in the projects, and excellent funding allowing all the travel a student needs, does the rest of the magic.
In the end, it comes down to spending time with them. I have an open-door policy for students: they do not need appointments with me, and I am available to them until 9:30 pm every day.
Many of the projects you've worked on have resulted in commercial products. What is the path from research to business deployment? How do you recognize that a technology is worth researching and developing further?
Usually my choice of research does not consider commercialization at the start. It focuses on investigating the topics we get to define and design. In other words, I prefer to build my sandbox and hope that others see the value and come and play.
We prefer to be first, second, or third in working on a topic, and avoid incremental work on well-defined and established topics. We also pick more problem- or application-driven work with high impact potential. For example, we started looking at the use of social media to study prescription drug abuse two years before the White House declared a national initiative to curb it. It so happened that we got our NIH funding on the topic the same week the White House announced that initiative, but by then we already had a two-year head start and preliminary work in the area, allowing us to be the first to carry out research on that topic (through analysis of social big data, as opposed to traditional qualitative interviews and surveys). I also recognized, and likely defined or used for the first time, the concepts of smart data (coined in 2004), semantic sensor web (coined in 2007), citizen sensing (coined in 2008), and semantic perception (initiated in 2010), and our team carried out early prototypes and, in some cases, high-value application demonstrations. Most projects end up with a tool or application used at least in research, and when the right market opportunity becomes clear, I try to commercialize it or engineer a technology transfer. Around one in five projects ends up seeing robust operational use in the real world, and one in ten results in commercialization.
Because I have done startups by licensing technology that my students and I developed in my university projects, and continue to advise and co-found startups now, I have a good sense of when a technology has commercial potential. In such a case, I use my technology transfer/licensing/commercialization knowledge and experience to find an appropriate path, even though I now shy away from personal involvement in operational matters, given that Kno.e.sis is quite large, with 60+ funded researchers (mainly graduate students and postdocs).
Semantic Web Matters
Where are we on our way to a full-blown Semantic Web as an interoperability technology?
The success of the Semantic Web as an interoperability technology, especially that based on the relevant W3C standards (RDF/RDFS, SPARQL, OWL), is modest. Given that a Semantic Web language has a higher level of expressiveness, it clearly offers an advantage over alternatives, at least in its ability to capture more semantics representationally (e.g., compared with using a conceptual/semantic data model whose metadata and mappings are then stored in relational databases, or using graphs with unlabeled edges or without types). However, it takes time (as it did for relational databases) for the technologies that use these standards to mature before one gets the features, performance, and reliability needed to support real-world applications. The higher the expressiveness, the more engineering challenges there are in getting the performance needed at scale, and the steeper the learning curve in using these technologies effectively, with added benefits when used right.
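As a small illustration of that representational advantage, here is a minimal sketch using rdflib; the ex: namespace and the facts are hypothetical. Typed nodes and labeled edges can be queried by relationship, which an unlabeled graph cannot express directly:

```python
# A minimal sketch of RDF expressiveness with rdflib (hypothetical facts).
from rdflib import Graph, Namespace, RDF

EX = Namespace("http://example.org/")
g = Graph()
g.bind("ex", EX)

g.add((EX.aspirin, RDF.type, EX.Drug))                       # typed node
g.add((EX.reyeSyndrome, RDF.type, EX.Condition))
g.add((EX.aspirin, EX.contraindicatedFor, EX.reyeSyndrome))  # labeled edge

# SPARQL selects by relationship type, not mere connectivity
q = """SELECT ?drug ?cond WHERE {
  ?drug a ex:Drug ; ex:contraindicatedFor ?cond .
}"""
for drug, cond in g.query(q, initNs={"ex": EX}):
    print(drug, cond)
```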
Nevertheless, Semantic Web technology maturity is less of an issue than the fundamental challenge related to the semantics necessary for supporting interoperability. For example, consider today's ontology alignment work (including the way the annual competition on this topic is set up), which still has an overwhelming focus on same_as or equality relationships between concepts/entities. Relationships are at the heart of semantics (and by extension the Semantic Web), so the real challenge, that of capturing richer forms of link/relationship semantics, is still not addressed well. Examples of such challenges are: deciding whether two objects are the same (and avoiding the misuse of same_as), how they are related, how to disambiguate them, and how to map them, especially when the two objects are related or similar but not the same, or are related differently in different contexts, and so on. So I feel we are making progress towards broader support for semantics by moving from keywords/strings to entities/things to relationships/domain models/events, but it is taking time.
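A few triples make the distinction concrete; this is a hedged sketch (with hypothetical entities) contrasting strict identity with weaker or richer relations:

```python
# A sketch of relationship semantics beyond owl:sameAs (hypothetical facts).
from rdflib import Graph, Namespace
from rdflib.namespace import OWL, SKOS

EX = Namespace("http://example.org/")
g = Graph()

g.add((EX.Mumbai, OWL.sameAs, EX.Bombay))               # strict identity holds
g.add((EX.Tylenol, SKOS.closeMatch, EX.acetaminophen))  # related (brand vs. compound), not identical
g.add((EX.aspirin, EX.derivedFrom, EX.salicylicAcid))   # richer, domain-specific relation
```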
The most important impact of the Semantic Web is neither technologies such as triple stores and reasoning, nor interoperability and integration (a long-standing challenge where we have seen only incremental advances); it is the ascendancy of knowledge and the impressive gains that the use of knowledge has given to AI technologies, including NLP, machine learning, and their applications such as chatbots and question answering. First, the Semantic Web provided the background for the development of large, reusable knowledge bases or knowledge graphs, whether standards-based (e.g., LOD) or built by community and collective effort (e.g., schema.org). Second, there is the realization that further advances in highly successful AI technologies such as NLP and ML will not come without the use of (background or domain) knowledge.
On Cognition, Intuition, Machine Learning and Man-Machine Symbiosis
What is hardest for algorithms to mimic when it comes to cognition?
Intuition. Secondarily, perception with anticipation.
What do you think is the worst misunderstanding when it comes to machines/machine learning (ML)?
I am not sure what the worst misunderstanding is, but one of the important misunderstandings is that ML does much for intelligence, or is in itself sufficient to solve some of the more complex problems that humans are good at solving. Learning patterns from data and using training data that humans provide to assist with classification are important, and have been shown to scalably solve a number of interesting and useful problems. But this is far from intelligence. I will just make two points: (a) a number of perceptive experts in ML (including Pedro Domingos and Oren Etzioni) and natural language processing (NLP) have increasingly talked about the importance of using knowledge to improve ML and NLP, and (b) ML, which is a bottom-up technique/process, must be combined with top-down techniques/processes.
With respect to the former point, i.e., the importance of exploiting knowledge, I am reminded of our work during 1999-2002 on the semantic search/browsing/personalization/advertisement service developed by Taalee, when we combined a knowledge-based classifier (which utilized an extensive populated ontology, or background knowledge) with multiple machine learning techniques to demonstrate much better semantic annotation than we could achieve with ML alone (see Figure 3 of the 2002 publication on the Semantic Enhancement Engine). With respect to the latter, I am reminded of work on cognitive models, which served as inspiration for some of our work on semantic perception, which we started around 2007. A good contemporary example of what inspires our ongoing work of incorporating intelligence in computing by combining top-down and bottom-up processes is Top Brain, Bottom Brain. And while we continue to improve computational intelligence, I am a firm believer in man and machines working together, rather than trying to replace humans by machines, as emphasized in my perspective on Computing for Human Experience and by the following picture, which I used in my Asian Semantic Web 2008 keynote.

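To make the hybrid (top-down plus bottom-up) point concrete, here is a minimal sketch, not the Taalee system itself; the toy ontology, documents, and labels are all hypothetical:

```python
# A toy sketch of hybrid classification: knowledge-derived features (top-down)
# concatenated with lexical features (bottom-up). All data is hypothetical.
import numpy as np
from scipy.sparse import hstack
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

ONTOLOGY = {"aspirin": "Drug", "ibuprofen": "Drug",
            "asthma": "Condition", "dementia": "Condition"}
CLASSES = sorted({"Drug", "Condition"})

def kb_features(doc):
    # One binary feature per ontology class mentioned in the document
    found = {ONTOLOGY[t] for t in doc.lower().split() if t in ONTOLOGY}
    return [1.0 if c in found else 0.0 for c in CLASSES]

docs = ["aspirin relieves pain", "asthma worsens in winter",
        "ibuprofen reduces fever", "dementia affects memory"]
labels = ["pharma", "health", "pharma", "health"]

vec = TfidfVectorizer()
X_text = vec.fit_transform(docs)                     # bottom-up lexical signal
X_kb = np.array([kb_features(d) for d in docs])      # top-down knowledge signal
clf = LogisticRegression().fit(hstack([X_text, X_kb]), labels)

# Classify a new document using both signals together
new = "ibuprofen and aspirin interactions"
X_new = hstack([vec.transform([new]), np.array([kb_features(new)])])
print(clf.predict(X_new))  # knowledge features reinforce the lexical signal
```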
What do machines have to do with collective intelligence?
Intelligence, and hence collective intelligence, is hard to define. In a rather narrow form, collective intelligence captured as structured knowledge can be exploited by machines to mimic, exhibit, or gain human-like intelligence. But what a machine (an algorithm, or a software system with the support of sensors) can capture from human activities (including decision making) that exhibit intelligence is quite limited for now. Once we advance our ability to observe and then understand multimodal observations (humans are much better at using different senses and modalities than machines are so far) and improve our multidisciplinary research (leveraging computer science together with cognitive science and other disciplines), we can work towards improving the abilities machines can exhibit in this respect.
Now that computer power needs are met, what is the next big challenge in digitally mapping the complexity and “messiness” of our world?
We keep coming up with more challenges and are always in need of more computer power, so I won't say that the computer power need has been met. For example, the growth of data and its associated complexity (variety, velocity, etc.) has far outstripped our ability to process it: less than 0.5% ever gets analyzed, and even less is actually used to drive timely decisions or lead to actions. Nevertheless, I would agree with the underlying point of your question that we can afford to address other challenges. I can think of at least two major challenges:
- Raise the expressiveness of our computational systems: for too long we have simplified the complexity of real-world problems because we had models and representations of limited expressiveness, or limited computing power; we can now afford to capture more of the nuances, context, and complexities of the real world in our computational world. We are moving from relational databases to graph databases, and from logic to probabilistic graph models; all these afford more expressiveness (see the sketch after this list).
- Deal with multimodal data at a semantic level to support cognitive systems: our senses capture observations/data of various modalities, and our brain is able to process them simultaneously, using one modality to help process another (e.g., combining the words we hear in a speech with something we see in the visual material the speaker is sharing).
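Here is the sketch promised above: a minimal illustration (hypothetical health-related facts, with networkx standing in for a graph database) of the expressiveness gained when edges carry types and confidences:

```python
# A minimal sketch: a labeled property graph whose edges carry relation types
# and confidences (hypothetical facts), which a plain relational row or an
# unlabeled edge does not capture directly.
import networkx as nx

G = nx.MultiDiGraph()
G.add_edge("patient42", "wheezing", relation="exhibits", confidence=0.8)
G.add_edge("wheezing", "asthma", relation="indicates", confidence=0.6)
G.add_edge("pollen", "asthma", relation="triggers", confidence=0.9)

# Traverse only the edges whose semantics (and certainty) we trust
for u, v, data in G.edges(data=True):
    if data["confidence"] >= 0.75:
        print(f'{u} --{data["relation"]}--> {v}')
```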
Who’s gonna interview the interviewer?
Teodora: And before we end our Dialogue with the Quick Favourites section, I would be grateful if you would ask me a question or two.
Amit Sheth: I hope you are enjoying the exhilaration and managing the exhaustion that come with new motherhood. I recently read a book, Unfinished Business, authored by a very successful woman, that is perhaps quite relevant. Do you have thoughts on this area?
Thank you for a wonderful question. Exhilaration and exhaustion are excruciating and extraordinarily beautiful at the same time :) I am grateful to be blessed with a not-yet Alexander the Great and to be able to read and write from time to time. I think I am doing great, with varying success.
Speaking of unfinished business, and being honest and brave enough to confess: I am learning to thrive on unfinished business. Now, I know this sounds terrifying and at odds with the notion that unfinished tasks use too much (brain) processing power, yet there's something hilarious about circles not yet circled, flying around, waiting to be closed when the time comes.
The short answer to your question is: I am learning patience and surrendering to the greater force called life, to the invisible hand, even to the open world assumption, to bring it back to the semantic web paradigm. As for men/women inequality and the idea that Mommy is the only person who can care for the baby: I am not the kind of person who would even accept that idea.
To tell the truth, I've never even thought that I live in a man's world; I live in a Universe where all stars are beautiful and serve a purpose, no matter their gender or brightness. And all these stars sooner or later come to the world of diapers :)
Now Back to Dr. Sheth with his Quick Favourites: a Semantic Web technology, a paradox and a data challenge
Favourite Semantic Web technology application
MediaAnywhere and the other semantic search, browsing, and personalization applications we built in the 1999-2002 time frame, especially given how things have been unfolding more recently, with knowledge graphs being built and used to develop similar capabilities now. I discuss this in some detail in 15 years of Semantic Search and Ontology-enabled Semantic Applications. My more recent favorites are of course the startups I work with; each has semantic technology at its core, whether or not it uses the W3C Semantic Web standards: ezDI (which uses an extensive knowledge graph of medical concepts to significantly improve clinical natural language understanding), Cognovi Labs (based on Kno.e.sis' Twitris technology, which uses semantics supported by background knowledge bases for annotating, analyzing, and getting actionable insights, primarily from social data; see the surprising success in predicting #Brexit in this TechCrunch article), and Edamam, which uses the Semantic Web for the nutrition space. Of course, I am thrilled to see other applications that apply knowledge-enabled semantic techniques at Web scale, such as Google's semantic search.
Favourite paradox in computing
Expressiveness versus computability. Many compromise the former in favor of the latter. For me, if the richness of the real world is not captured in the model, the representation, or what is computed upon, I find the results of the computation uninteresting and not useful enough.
Favourite data challenge that hasn’t been overcome yet
Ability to decide in a robust manner how two objects or concepts are related in the context of interest. In other words, robust disambiguation techniques, especially for data that are schematically (even by modality) different but semantically (by meaning or use) the same, related, or relevant.