Posted: February 28, 2017

New Technique of Reciprocal Mapping Adds 40% to Scope

We are pleased to announce that KBpedia, the large-scale dedicated knowledge graph for knowledge-based artificial intelligence (or KBAI), was released in a greatly expanded version today. The new KBpedia v.1.40 was expanded by 40%, to 54,000 concepts, via a new method called reciprocal mapping, which I covered in an article last week.

Knowledge graphs, technically known as ontologies, are normally expanded by mapping external knowledge systems to concepts that already reside in the target graph. This poses problems when the new source has different structure or much greater detail than the target graph has on its own. I likened this in my article last week to the “Swiss cheese problem” of gaps in coverage when aligning or combining knowledge graphs.

Reciprocal mapping is a new artificial intelligence method for identifying detailed structure in source knowledge bases, and then identifying the proper placement points for that new structure within the target knowledge graph. These candidate placements are then run against a series of logic and consistency tests to ensure the placements, and the scope of the added structure in the now-expanded knowledge graph, remain coherent. Candidates that pass these tests are then manually vetted for final acceptance before committing to a new build.

This reciprocal mapping method was applied with “clean” Wikipedia categories as the source and KBpedia as the target. After all logic and consistency tests, KBpedia was expanded by nearly 15,000 new concepts. The same process was also used to add missing definitions and new synonyms to KBpedia.

Frédérick Giasson, Cognonto’s CTO, developed new graph embedding techniques coupled with machine learning to automate the generation of candidates for this reciprocal mapping process. This two-step process of standard mappings followed by reciprocal mappings can be applied to any external knowledge base. The technique means we can achieve the ‘highest common denominator’, capturing the full structure of both source and target knowledge bases when mapping. Reciprocal mapping overcomes prior gaps in integrating enterprise knowledge into computable knowledge bases.

The new version 1.40 of the online KBpedia may be browsed, searched and inspected on the Cognonto Web site. The site also provides further documentation on how to browse the graph and how to search it.

Knowledge graphs are under constant change and need to be extended with specific domain information for particular enterprise purposes. The combinatorial aspects of adding new external schema or concepts to an existing store of concepts can be extensive. KBpedia, with its already tested and logical knowledge structure, is a computable foundation for guiding and testing new mappings. Such expanded versions may be tailored for any domain and enterprise need.

The KBpedia knowledge structure combines six (6) public knowledge bases — Wikipedia, Wikidata, OpenCyc, GeoNames, DBpedia and UMBEL — into an integrated whole. These core KBs are supplemented with mappings to more than a score of additional leading vocabularies. The entire KBpedia structure is computable, meaning it can be reasoned over and logically sliced-and-diced to produce training sets and reference standards for machine learning and artificial intelligence. KBpedia greatly reduces the time and effort traditionally required for data preparation and tuning common to AI tasks. KBpedia was first released in October 2016, though it has been under active development for more than six years.

Posted by Mike Bergman on February 28, 2017 in Cognonto, KBpedia, Knowledge-based Artificial Intelligence
Permalink: https://www.mkbergman.com/2026/new-kbpedia-release-greatly-expands-knowledge-structure/
Posted: February 22, 2017

Möbius Band I - M.C. Escher

An Advance Over Simple Mapping or Ontology Merging

The technical term for a knowledge graph is ontology. As artifacts of information science, artificial intelligence, and the semantic Web, knowledge graphs may be constructed to represent the general nature of knowledge, in which case they are known as upper ontologies, or for domain or specialized purposes. Ontologies in these senses were first defined more than twenty years ago, though as artifacts they have been used in computer science since the 1970s. The last known census of ontologies in 2007 indicated there were more than 10,000 then in existence, though today’s count is likely in excess of 40,000 [1]. Because of the scope and coverage of these general and domain representations, and the value of combining them for specific purposes, key topics of practical need and academic research have been ontology mappings or ontology mergers, known collectively as ontology alignment. Mapping or merging makes sense when we want to combine existing representations across domains of interest.

At Cognonto, ontology alignment is a central topic. Our KBpedia knowledge structure is itself the result of mapping nearly 30 different information sources, six of which are major knowledge bases such as Wikipedia and Wikidata. When applied to domain problems, mapping of enterprise schema and data is inevitably an initial task. Mapping techniques and options are thus of primary importance to our work with knowledge-based artificial intelligence (KBAI). When mapping to new sources we want to extract the maximum value from each contributing source without devolving to the tyranny of the lowest common denominator. We further need to retain the computability of the starting KBpedia knowledge structure, which means maintaining the logic, consistency, and coherence when integrating new knowledge.

We are just now completing an update to KBpedia that represents, we think, an important new option in ontology alignment. We call this option reciprocal mapping; it adds an important new tool to our ontology mapping toolkit. I provide a high-level view and rationale for reciprocal mapping in this article, which accompanies a more detailed use case by Fred Giasson that discusses implementation details and provides sample code.

The Mapping Imperative and Current Mappings

Cognonto, like other providers of services in the semantic Web and in artificial intelligence applied to natural languages, views mapping as a central capability. Mapping is central because all real-world knowledge problems amenable to artificial intelligence best express their terminology, concepts, and relations between concepts in a knowledge graph. Typically, that mapping relies upon a general knowledge graph, or upper ontology, or multiples of them, as the mapping targets for translating the representation of the domain of interest to a canonical form. Knowledge graphs, as a form, can represent differences in semantic meaning well (you say car, I say auto) while supporting inference and reasoning. For these problems, mapping must be seen as a core requirement.

There is a spectrum of approaches for how to actually conduct these mappings. At the simplest and least accurate end of the spectrum are string matching methods, sometimes supplemented by regular expression processing and heuristic rules. An intermediate set of methods uses concepts already defined in a knowledge base as a way to “learn” representations of those concepts; while there are many techniques, two that Cognonto commonly applies are explicit semantic analysis and word embedding. Most of these intermediate methods require some form of supervised machine learning or other ML techniques. At the more state-of-the-art end of the spectrum are graph embeddings or deep learning, which also capture context and conceptual relationships as codified in the graph.
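As a toy illustration of the least-accurate end of this spectrum, here is a minimal Python sketch that proposes mapping candidates from normalized string similarity alone; the labels and the 0.9 threshold are invented for the example, not drawn from KBpedia.

```python
# A minimal sketch of the simplest end of the spectrum: candidate mappings
# generated by normalized string similarity alone. Labels and threshold are
# invented for this example.
from difflib import SequenceMatcher

def normalize(label):
    """Lowercase and keep only alphanumerics and spaces for comparison."""
    return "".join(ch for ch in label.lower() if ch.isalnum() or ch.isspace()).strip()

def candidate_mappings(source_labels, target_labels, threshold=0.9):
    """Yield (source, target, score) pairs whose similarity clears the threshold."""
    for s in source_labels:
        for t in target_labels:
            score = SequenceMatcher(None, normalize(s), normalize(t)).ratio()
            if score >= threshold:
                yield s, t, score

# "Automobile" matches "Automobiles", but "Car" finds nothing; that blindness
# to the car/auto pairing is the semantic gap the embedding methods address.
for s, t, score in candidate_mappings(["Automobile", "Car"], ["Automobiles", "Currency"]):
    print(f"{s} -> {t} ({score:.2f})")
```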

Aside from the simple string match approaches, all of the intermediate and state-of-the-art methods use machine learning. Depending on the method, these machine learners require developing either training sets or corpuses as a reference basis for tuning the learners. These references need to be manually scoped, as in the case of training corpuses for unsupervised learning, or manually scored into true and false positives and negatives for training sets for supervised learning. Cognonto uses all of these techniques, but importantly supplements them with logic tests and scripts applied to the now-modified knowledge graph, testing for the coherence and consistency issues that may arise from the new mappings.

Items failing those tests are fixed or dropped. Though not a uniform practice by others, Cognonto also adheres to a best practice that requires candidate mappings to be scored and manually inspected before final commitment to the knowledge graph. The knowledge graph is a living artifact, and must also be supported by proper governance and understood workflows. Properly constructed and maintained knowledge graphs can power much work from tagging to classification to question answering. Knowledge graphs are one of the most valuable information assets an enterprise can create.

We have applied all of these techniques in various ways to map the six major knowledge bases that make up the core of KBpedia, plus to the other 20 common knowledge graphs mapped to KBpedia. These mappings constitute our current version 1.20 of KBpedia, the one active on the Cognonto Web site. All of these nearly 30 mappings use KBpedia (A) as the mapping target for the contributing KBs (B). All version 1.20 mappings are thus of this B → A form.

Reciprocal Mappings to Extend Scope

This is well and good, and is the basis for how we have populated what is already in the KBpedia knowledge graph, but what of the opposite? In other words, there are concepts and local graph structure within the contributing KBs that do not have tie-in points (targets) within the existing KBpedia. This is particularly true for Wikipedia with its (mostly) comprehensive general content. From the perspective of Wikipedia (B), KBpedia v1.20 (A) looks a bit like Swiss cheese, with holes and gaps in coverage. Any source knowledge graph is likely to have rich structure and detail in specific areas beyond what the target graph may contain.

As part of our ongoing KBpedia improvement efforts, the next step in a broader plan in support of KBAI, we have been working for some time on a form of reverse mapping. This process analyzes coverage in B to augment the scope of A (KBpedia). This kind of mapping, which we call reciprocal mapping, poses new challenges because candidate new structure must be accurately placed and logically tested against the existing structure. Fred recently wrote up a use case that covers this topic in part.

We are now nearly complete with this reciprocal mapping from Wikipedia to KBpedia (B → (A + B)). Reciprocal mapping results in new concepts and graph structure being added to KBpedia (now A + B). It looks like this effort, which we will finalize and release soon, will add nearly 40% to the scope of KBpedia.

This expansion effort is across-the-board, not focused on or limited to any particular domain, topic or vocabulary. KBpedia’s coverage will likely still be inadequate for many domain purposes. Nonetheless, the effort does provide a test bed for showing how external vocabularies may be mapped and what benefits may arise from an expanded KBpedia scope, irrespective of domain. We will report these before-and-after results here and on the Cognonto Web site when we release.

Knowledge graphs are under constant change and need to be extended with domain-specific information for particular purposes. KBpedia is perhaps not the typical ontology mapping case, since its purpose is to be a coherent, consistent, feature-rich structure to support machine learning and artificial intelligence. Yet, insofar as domain knowledge graphs aspire to support similar functionality, our new methods for reciprocal mapping may be of interest. In any case, effective means at acceptable time and cost must be found for enhancing or updating knowledge graphs, and reciprocal mapping is a new tool in the arsenal to do so.

The Reciprocal Mapping Process

Wikipedia is a particularly rich knowledge base for the reciprocal mapping process, though other major KBs are also suitable, including smaller domain-specific ones. In conducting reciprocal mapping, we observed several differences from our standard B → A mapping. First, categories within the source KB are the appropriate basis for mapping to the equivalent concept (class) structure in KBpedia; it is the category structure in the source KB that establishes a similar graph structure. Second, whatever the source knowledge base, we need to clean its categories to make sure they correspond to the actual subsumption structure reflected in the target KBpedia. Cleaning involves removing administrative and so-called “compound” categories, ones of convenience that are not reflective of natural classes. In the case of Wikipedia, this cleaning eliminates about 80% of the starting categories. Third, we also need to capture structural differences in the source knowledge graph (B). Possible category matches fall into three kinds:

  • leaf categories, which represent child extensions to existing KBpedia terminal nodes;
  • near-leaf categories, which also extend existing KBpedia terminal nodes, but which are themselves parents to additional child structure in the source; and
  • core categories, which tie into existing intermediate nodes in KBpedia that are not terminal nodes.

By segregating these structural differences, we are able to train more precise placement learners.
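To make the cleaning and structural classification steps concrete, here is a hedged Python sketch; the marker lists, toy category data, and helper names are illustrative assumptions, not Cognonto's production rules.

```python
# A hedged sketch of the cleaning and structural classification described
# above. Marker lists, toy data, and helper names are illustrative only.
ADMIN_MARKERS = ("wikipedia", "articles", "stubs", "templates", "maintenance")
COMPOUND_MARKERS = (" by ", " in ", " from ")    # categories of convenience

def is_clean(category):
    """Drop administrative and 'compound' categories before mapping."""
    c = category.lower()
    return not any(m in c for m in ADMIN_MARKERS + COMPOUND_MARKERS)

def classify_kind(category, source_children, attachment_point, terminal_nodes):
    """Classify a candidate by its KBpedia attachment point and source structure."""
    if attachment_point in terminal_nodes:       # extends a KBpedia terminal node
        return "near-leaf" if source_children.get(category) else "leaf"
    return "core"                                # ties into an intermediate node

print(is_clean("Wikipedia maintenance"))         # False -> removed in cleaning
children = {"Currencies of Africa": ["CFA franc"]}
print(classify_kind("Currencies of Africa", children, "Currency", {"Currency"}))
# -> 'near-leaf': attaches under a terminal node yet has its own child structure
```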

From our initial simple mapping (B → A) we already have thousands of KBpedia reference concepts mapped to related Wikipedia categories. What we want to do is use this linkage to propose a series of new sub-classes that we could add to KBpedia, based on the sub-categories that exist in Wikipedia for each of these mappings. The challenge we face by proceeding in this way is that our procedure potentially creates tens of thousands of new candidates. Because the Wikipedia category structure has a completely different purpose than the KBpedia knowledge graph, and because Wikipedia’s creation rules are completely different than KBpedia’s, many candidates are inconsistent or incoherent to include in KBpedia. A cursory inspection shows that most of the candidate categories need to be dropped. Reviewing tens of thousands of new candidates manually is not tenable; we need an automatic way to rank potential candidates.

The way we automate this process is to use an SVM classifier trained over graph-based embedding vectors generated using the DeepWalk method [2]. DeepWalk learns the sub-category patterns that exist in the Wikipedia category structure in an unsupervised manner; the result is a graph embedding vector for each candidate node. Our initial B → A maps enable us to quickly create training sets with thousands of pre-classified sub-categories. We split this labeled data 75% for training and 25% for cross-validation. We also employ hyperparameter optimization techniques to converge on the best learner configuration. Once these three steps are completed, we classify all of the proposed sub-categories and create a list of potential sub-class-of candidates to add into KBpedia, which is then validated by a human. These steps are more fully described in the detailed use case.
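For the curious, here is a condensed, illustrative sketch of such a pipeline in Python, approximating DeepWalk with truncated random walks fed to gensim's skip-gram Word2Vec and using scikit-learn for the SVM and grid search. The toy graph format, parameter grid, and function names are assumptions; the detailed use case documents the actual implementation.

```python
# An illustrative sketch, not Cognonto's implementation: DeepWalk-style
# embeddings (truncated random walks + skip-gram) followed by an SVM with
# grid-searched hyperparameters.
import random
from gensim.models import Word2Vec          # skip-gram over walks ~= DeepWalk
from sklearn.svm import SVC
from sklearn.model_selection import GridSearchCV, train_test_split

def random_walks(adjacency, num_walks=10, walk_length=40):
    """Truncated random walks over a {category: [sub-categories]} graph."""
    walks = []
    for _ in range(num_walks):
        for node in adjacency:
            walk = [node]
            while len(walk) < walk_length and adjacency.get(walk[-1]):
                walk.append(random.choice(adjacency[walk[-1]]))
            walks.append(walk)
    return walks

def train_placement_classifier(adjacency, labeled):
    """labeled: {category: 1 (accept) / 0 (reject)} from the prior B -> A maps."""
    emb = Word2Vec(random_walks(adjacency), vector_size=128, window=5,
                   min_count=1, sg=1).wv
    X = [emb[c] for c in labeled]
    y = list(labeled.values())
    # 75% training / 25% cross-validation split, as described above
    X_tr, X_cv, y_tr, y_cv = train_test_split(X, y, test_size=0.25, random_state=42)
    grid = GridSearchCV(SVC(), {"C": [0.1, 1, 10], "gamma": ["scale", 0.01]}, cv=3)
    grid.fit(X_tr, y_tr)
    return grid.best_estimator_, grid.best_estimator_.score(X_cv, y_cv)
```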

We use visualization and reference “gold” standards as means to further speed the time to a solution. We visualize interim passes and tests using the TensorFlow Projector web application. Here, for example, is a 3D representation of some of the clusters of concepts:

SuperTypes View

Another way we might visualize things is to investigate the mappings by topic. Here, for example, we highlight the Wikipedia categories that have the word “wine” in them.

Wine Concepts View

While the visualizations are nice and help us to understand the mapping space, we are most interested in finding the learner configurations that produce the “best” results. Various statistics such as precision, recall, accuracy and others can help us determine that. The optimization target is really driven by the client. If you want the greatest coverage and can accept some false positives, then you might favor recall. If you only want a smaller set of correct results, then you would likely favor precision. Other objectives might emphasize other measures, such as accuracy or AUC.

The reference “gold” standards in the scored training sets provide the basis for computing all of these statistics. We score the training sets as to whether a given mapping is true or false (correct or not). (False mappings need to be purposefully introduced.) Then, when we parse the test candidates against the training set, we note whether the learner result is either positive or negative (indicated as correct or indicated as not correct). When we match the test to the training set, we thus get one of four possible scores: true positives (TP), false positives (FP), true negatives (TN) and false negatives (FN). Those four simple scoring categories are sufficient for calculating any of the statistical measures.
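A small worked example shows how these four counts yield the statistics just discussed; the gold-standard labels and predictions below are made up for illustration.

```python
# A worked example of the four scoring categories and the statistics derived
# from them (guards against empty denominators are omitted for brevity).
def scores(y_true, y_pred):
    tp = sum(1 for t, p in zip(y_true, y_pred) if t and p)
    fp = sum(1 for t, p in zip(y_true, y_pred) if not t and p)
    tn = sum(1 for t, p in zip(y_true, y_pred) if not t and not p)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t and not p)
    return {"TP": tp, "FP": fp, "TN": tn, "FN": fn,
            "precision": tp / (tp + fp),     # favor to trim false positives
            "recall": tp / (tp + fn),        # favor for greater coverage
            "accuracy": (tp + tn) / len(y_true)}

# 1 = true (correct) mapping in the gold standard; predictions from the learner
print(scores([1, 1, 1, 0, 0, 0], [1, 1, 0, 1, 0, 0]))
# -> TP=2, FP=1, TN=2, FN=1; precision = recall = accuracy = 0.67
```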

We capture the reciprocal mapping process in a repeatable pipeline that reports these various statistical measures, enabling rapid refinements in parameters and methods to achieve the best-performing model, according to how we have defined “best” per the discussion above. Once appropriate candidate categories are generated using this optimized model, the results are inspected by a human to make selection decisions. We then run these selections against the logic and coherency tests for the now-modified graph, and keep, modify or drop the final candidate mappings depending on how they meet the tests. The semi-automatic methods in this use case can be applied to extending KBpedia with any external schema, ontology or vocabulary.

This semi-automated process takes 5% of the time it would normally take to conduct this entire process by comparable manual means. We know, since we have been doing the manual approach for nearly a decade.

The Snake Eating Its Tail

As we add each new KB to KBpedia or expand its scope through these mapping methods, verified to pass all of our logic and satisfiability tests, we continue to have a richer structure for testing coherence and consistency in the next iteration. This accretive process means there is more structure, more assertions, and more relations against which to test new mappings. The image that comes to mind is that of the ouroboros, one of the oldest images of fertility. The snake or serpent eating its tail signifies renewal.

KBpedia has already progressed through this growth and renewal process hundreds of times. Our automated build scripts mean we can re-generate KBpedia on a commodity machine from scratch in about 45 minutes. If we add all of the logic, consistency and satisfiability checks, a new build can be created in about two hours. This most recent reciprocal mapping effort adds about 40% more nodes to KBpedia’s current structure. Frankly, this efficient and clean build process is remarkable for one of the largest knowledge structures around, with about 55,000 nodes and 20 million mapped instances. Using the prior KBpedia as the starting structure, we have been able to achieve this expansion, with even better logical coherence of the graph, in just a few hundred hours of effort.

The mapping methods discussed herein can extend KBpedia using almost any external source of knowledge, even one with a completely different structure than KBpedia, built in a completely different way and for a different purpose. A variety of machine learning methods can reduce the effort required to add new concepts or structure by 95% or more. Machine learning techniques can filter potential candidates automatically, greatly reducing the time a human reviewer must spend making final decisions about additions to the knowledge graph. A workable and reusable pipeline leads to fast methods for testing and optimizing the parameters used in the machine learning methods. The systematic approach to the pipeline and the use of positive and negative training sets mean that tuning the approach can be largely automated and rapidly vetted.

This is where the real value of KBpedia resides. It is already an effective foundation for guiding and testing new domain extensions. KBpedia now shows itself to be the snake capable of consuming (mapping to), and thereby growing from, nearly any new knowledge base.


[1] A simple Google search of https://www.google.com/search?q=filetype:owl (OWL is the Web Ontology Language, one of the major ontology formats) shows nearly 39,000 results, but there are multiple ontology languages available, such as RDF, RDFS, and others (though use of any of these languages does not necessarily imply the artifact is a vocabulary or ontology).
[2] Perozzi, B., Al-Rfou, R., & Skiena, S. (2014). DeepWalk: Online Learning of Social Representations. In Proceedings of the 20th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining (pp. 701-710). ACM.
Posted: February 15, 2017

First in a New Series Explaining Non-Machine Learning Applications and Use Cases

This article is the first installment in a new, occasional series describing non-machine learning use cases and applications for Cognonto’s KBpedia knowledge graph. Most of these articles will center around the general use and benefits of knowledge graphs, but best practices and other applications will also be discussed. Prior to this, our use cases have centered on machine learning and knowledge-based artificial intelligence (KBAI). These prior use cases, plus the ones from this series, may be found on the Cognonto Web site under the Use Cases main menu item.

This kick-off article deals with browsing the KBpedia knowledge structure, found under the Knowledge Graph main menu link on the Cognonto Web site. KBpedia combines six public knowledge bases — Wikipedia, Wikidata, GeoNames, OpenCyc, DBpedia and UMBEL — and their concepts, entity types, attributes and relations. (Another 20 general vocabularies are also mapped into the KBpedia structure.) KBpedia is organized as a knowledge graph. This article describes the various components of the graph and how to browse and inspect them. Since client knowledge graphs also build off of the initial KBpedia structure, these same capabilities apply to client versions as well.

The example we present herein is based on the concept of ‘currency’, which you may interactively inspect for yourself online.

Uses of the Knowledge Graph

The uses for browsing a knowledge graph include:

  • Learning about individual concepts and entities
  • Discovering related concepts and entities
  • Understanding the structure and typologies of the knowledge graph
  • Tracing conceptual lineages
  • Exploring inferences based on the logical assertions in the graph
  • General grazing and discovery, and many more; plus
  • Parallel access to structured and semantic search.

These uses, of course, do not include the work-related tasks in natural language processing or knowledge-based artificial intelligence.

KBpedia and the Graph

This combined KBpedia knowledge structure contains more than 39,000 reference concepts (RCs), organized into a knowledge graph as defined by the KBpedia Knowledge Ontology (KKO). KKO is a logically organized and computable structure that supports inference and reasoning.

About 85% of the RCs are themselves entity types — that is, 33,000 natural classes of similar entities such as astronauts or zoo animals — organized into about 30 “core” typologies that are mostly disjoint (non-overlapping) with one another. By definition an entity type is also a ‘reference concept’, or RC.

KBpedia’s typologies provide a powerful means for slicing-and-dicing the knowledge structure. The individual entity types provide the tie-in points to about 20 million individual entities. The remaining RCs are devoted to other logical divisions of the knowledge graph, specifically attributes, relations and topics.

It is this structure, plus often connections to another 20 leading external vocabularies, that forms the basis of the KBpedia Knowledge Graph.

For the standard Cognonto browser, each RC has a record with potentially eight (8) main panels or sections, each of which is described below:

  • Header
  • Core Structure
  • External Linkages
  • Typologies
  • Entities
  • Aspect-related Entities
  • Broader Concepts
  • Narrower Concepts.

Panels are only displayed when there are results for them.

Header

Each entry begins with a header:

Header to Knowledge Graph Entry

Above the header to the left is the listing for the current KBpedia version and its date of release. Next to it is a link for sending an email to a graph administrator should there be a problem with the current entry. Above the header to the right is the search box, itself the topic of another application case.

The Header consists of these possible entries:

  • prefLabel, URI, image — the prefLabel is the name or “title” for the RC. While the name has no significant meaning in and of itself (the meaning for the RC is a result of all specifications and definitions, including relations to other objects, for the concept), the prefLabel does provide a useful shorthand or handle for referring to the concept. The URI is the full Web reference to the concept, such as http://kbpedia.org/kko/rc/Currency. If there is an image for the RC, it is also displayed
  • semset — the entries here, also known as altLabels, are meant to inclusively capture all roughly equivalent references to the concept, including synonyms, slang, jargon and acronyms
  • definition — the readable text definition or description of the RC; some live links may be found in the definition.

Core Structure

The Core Structure for KBpedia is the next panel. Two characteristics define what is a core contributor to the KBpedia structure: 1) the scale and completeness of the source; and 2) its contribution of a large number of RCs to the overall KKO knowledge graph. The KBs in the core structure play a central role in the scope and definition of KBpedia. This core structure of KBpedia is supplemented by mappings to about 20 additional external linkages, which are highly useful for interoperability purposes, but do not themselves contribute as much to the RC scope of the KKO graph. The Core Structure is derived from the six (6) main knowledge bases — OpenCyc, UMBEL, GeoNames, DBpedia, Wikipedia and Wikidata.

The conceptual relationships in the KBpedia Knowledge Ontology (KKO) are largely drawn from OpenCyc, UMBEL, or Wikipedia, though any of the other sources may contribute local knowledge graph structure. Additional reference concepts are contributed primarily from GeoNames. Wikidata contributes the bulk of the instance data, though instance records are actually drawn from all sources. DBpedia and Wikidata are also the primary sources for attribute characterizations of the instances. Instance data, by definition, are not part of the core structure.

Here is the Core Structure panel:

Core Structure for a Knowledge Graph Entry

The Core Structure panel, like the other panels, has a panel title followed by a brief description. The Core Structure panel lists the equivalent class (owl:equivalentClass), parent super classes (kko:superClassOf), child sub classes (rdfs:subClassOf), or a closely related concept (kko:isCloselyRelated) (not shown). These relationships define the edges between the nodes in the graph structure, and are also the basis for logical inferencing.

Sub-classes and super-classes may be determined either as direct assertions or as inferred from parent-child relationships in the Knowledge Graph. An inferred relationship includes any of the parent or child ancestors; a direct one includes only the immediate child or parent. Picking one of these links restricts the display to the concepts related to that category. Like familial relationships, the closer a concept is to its lineage relation, the more likely it is to share attributes or characteristics with that relation. Such lineage inferences arise from the relations in the KBpedia Knowledge Ontology (KKO).
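Here is a minimal sketch of that direct-versus-inferred distinction using the Python rdflib library against a local copy of the knowledge graph; the file name is hypothetical, and the namespace follows the URI pattern shown in the Header section above.

```python
# A minimal rdflib sketch of direct versus inferred sub-classes. The local
# file name is hypothetical; the namespace follows the URI pattern shown
# earlier (http://kbpedia.org/kko/rc/...).
from rdflib import Graph, Namespace, RDFS

KKO_RC = Namespace("http://kbpedia.org/kko/rc/")
g = Graph().parse("kbpedia_reference_concepts.n3")   # hypothetical local copy

currency = KKO_RC.Currency

# Direct sub-classes: explicit rdfs:subClassOf assertions only.
direct = set(g.subjects(RDFS.subClassOf, currency))

# Inferred sub-classes: the transitive closure down the lineage
# (children, grandchildren, and so on).
inferred = set(g.transitive_subjects(RDFS.subClassOf, currency)) - {currency}

print(len(direct), "direct vs", len(inferred), "inferred sub-classes")
```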

Each of the related concepts is presented as a live link, which if clicked, will take you to a new entry for that concept. Some of the icons and information for equivalent classes are discussed under other panels below.

External Linkages

In addition to the Core Structure, KBpedia RCs are linked to thousands of classes defined in nearly 20 external ontologies used to describe all kinds of public and private datasets. Some of the prominent external vocabularies include schema.org, the major structured data system for search engines, and Dublin Core, a key vocabulary from the library community. Other external vocabularies cover music, organizations, projects, social media, and the like.

Here is how the External Linkages panel looks, which has many parallels to the Core Structure panel:

External Linkages for a Knowledge Graph Entry

The external links, like the core ones, are shown as live links with an icon associated to each source. For RCs that are entity types, the entry might also display the count of entities (orange background with count) or related-aspect entities (blue background with count) linked to that RC (either directly or inferred, depending on the option chosen). Clicking on the specific RC link will take you to that reference concept. Clicking on the highlighted background will take you to a listing of the entities for that RC (based on either its direct or inferred option).

Also, like the short descriptions on each of these panels, clicking the more link expands the description available:

Getting More Information

Entities

Entities are distinct, nameable, individual things. There are more than 20 million of them in the baseline KBpedia.

Entities may be physical objects or conceptual works or discrete ideas, so long as they may be characterized by attributes shared by other instances within similar kinds or types. Entities may be parts of other things, so long as they have a distinct identity and character. Entities with shared attributes that are the essences of the things may be grouped into natural types, called entity types. These entity types may be further related to other entity types in natural groupings or hierarchies depending on the attributes and their essences that are shared among them.

Here is how the general Entities panel appears:

Entities for a Knowledge Graph Entry

In this case, for currency, there are 2003 instances (individual entities) in the current KBpedia knowledge base. The first few of these are shown in the panel, with the live links taking you to an entity report for that instance. Similarly, you can click the Browse all entities button, which allows you to scroll through the entire listing of entities. Here is how that subsidiary page, in part, appears:

Entities Listing for a Knowledge Graph Entry

Nearly 85%, or 33,000, of the reference concepts within the KBpedia Knowledge Ontology (KKO) are entity types, the natural classes of entities. They are key leverage points for interoperability and mapping. Instances (or entities) are related to the KKO graph via the rdf:type predicate, which assigns an entity to one or more parental classes. It is through this link that you view the individual entities.
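Continuing the same hedged rdflib sketch from above, an entity listing amounts to following rdf:type, optionally through the sub-class closure when the inferred option is chosen:

```python
# Entity listings follow rdf:type, optionally through the sub-class closure.
# As before, the local file name is hypothetical.
from rdflib import Graph, Namespace, RDF, RDFS

KKO_RC = Namespace("http://kbpedia.org/kko/rc/")
g = Graph().parse("kbpedia_reference_concepts.n3")   # hypothetical local copy

def entities_for(graph, rc, inferred=False):
    """Instances typed to the RC, optionally including all descendant types."""
    types = set(graph.transitive_subjects(RDFS.subClassOf, rc)) if inferred else {rc}
    return {e for t in types for e in graph.subjects(RDF.type, t)}

print(len(entities_for(g, KKO_RC.Currency)), "direct entities")
print(len(entities_for(g, KKO_RC.Currency, inferred=True)), "with inference")
```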

Aspect-related Entities

Entities may also be characterized according to one or more of about 80 aspects. Aspects help to group related entities by situation, not by identity or definition. Aspects thus provide a secondary means for organizing entities independent of their nature, but helpful for placing the entity in real-world contexts. Not all aspects relate to a given entity.

The Aspects panel has a similar presentation to the other panels:

Aspects for a Knowledge Graph Entry

If an entity with a related aspect occurs in the knowledge system, its aspect label will be shown, followed by a listing of the top entities for that aspect. Each of these entities is clickable, which will take you to the standard entity record. A Browse all entities button means there are more entities for that aspect than the short listing will allow; click on it to paginate through the full listing of related entities.

Note, as well, on this panel that we are also highlighting the down arrow at the upper right of the panel. Clicking that causes the entire panel to collapse, leaving only the title. Clicking on the arrow again causes the panel to expand. This convention applies to all of the panels discussed here.

Typologies

About 85% of all of the reference concepts (RCs) in KBpedia represent classes of entities, which themselves are organized into about 30 core typologies. Most of these typologies are disjoint (lack overlap) from one another, which provides an efficient mechanism for testing subsets and filtering entities into smaller groups for computational purposes. (Another 30 or so SuperTypes provide extended organization of these entities.)

The Typologies panel follows some of the standard design of the other panels. Only the typologies to which the current entry belongs, in this case currency, are shown:

Typologies for a Knowledge Graph Entry

As noted, the major groupings of types reside in core typologies, which is where the largest degree of disjointness occurs. There are some upper typologies (such as Living Things over Plants, Animals, etc.) that are used mostly for organizational purposes; these are the extended ones. The core typologies are the key ones to focus upon for distinguishing large groupings of entities.

Concept Hierarchies

The last panel section for a concept presents both the parental (Broader) and child (Narrower) concepts for the current entry (again, in this case, currency). Broader concepts represent the parents (or grandparental lineage in the case of inference) for the current reference concept. The broader concept relationship is expressed using the transitive kko:superClassOf property. This property is the inverse of the rdfs:subClassOf property. Narrower concepts represent the children (or grandchild lineages in the case of inference) for the current RC. The narrower concept relationship is expressed using the transitive rdfs:subClassOf property. This property is the inverse of the kko:superClassOf property.

Here is the side-by-side panel presentation for these relationships:

Broader and Narrower Classes for a Knowledge Graph Entry

Like some of the prior panels, it is possible to toggle between direct and inferred listings of these related concepts. If the RC is an entity type, it may also show counts for all entities subsumed under that type (orange color) or that have aspects of that type (blue color). Clicking on these count icons will take you to a listing of these entities.

Client Variations

This browsing and discovery use case is based on the standard configuration and the baseline KBpedia. Client variants may change the design and functionality of the application. More importantly, however, client applications are invariably extensions to the base KBpedia knowledge structure. These sometimes have some typologies removed because they are not relevant, but more likely have been expanded with the mapping of domain schema, vocabularies, and instances. In these cases, the actual content to be browsed may differ significantly from what is shown.

This article is part of an occasional series describing non-machine learning use cases and applications for Cognonto’s KBpedia knowledge graph. Most center around the general use and benefits of knowledge graphs, but best practices and other applications are also discussed. Prior machine learning use cases, and the ones from this series, may be found on the Cognonto Web site under the Use Cases main menu item.

Posted by Mike Bergman on February 15, 2017 in Cognonto, KBpedia, Semantic Web Tools
Permalink: https://www.mkbergman.com/2022/browsing-the-kbpedia-knowledge-graph/
Posted: February 6, 2017

Charles Sanders Peirce

Applying the Mindset of His Universal Categories to New Problems

Last year I described in an article, The Importance of Being Peirce, how Charles Sanders Peirce, the late 19th century logician and polymath of the first order, provided a very powerful framework with his universal categories to capture the needs of knowledge representation. That article outlined Peirce’s categories of Firstness, Secondness and Thirdness, and how they informed those needs, especially in the areas of context, meaning and perspective. These areas, grounded in the idea of Thirdness, have been missing linchpins in nearly all upper ontologies to date. As we come to understand knowledge graphs as a central feature of knowledge-based artificial intelligence (KBAI), how we bring these concepts into our representations of the world is of utmost importance [1].

In this article, I want to expand on that theme by talking about how this Peircean mindset can help inform answers to new problems, problems that Peirce did not directly address himself. Indeed, the problems that set this context are machine learning and natural language understanding, all driven by computers and electronic data unimagined in Peirce’s day. Because my views come from my own context, something that Peirce held as an essence of Thirdness, I cannot fairly say that my views are based on Peirce’s own views. Who knows if he would endorse my views more than a century after his death? But my take on these matters is the result of much reading, thought, repeat reading and study of Peirce’s writings. So while I cannot say my views are based on Peirce, I can certainly say that my views are informed by him. And they continue to be so.

As we use Peircean principles to address new problems, I think it is important to describe how Peirce’s views inform that process. This is hard to convey because it tries to explicate one of the most subtle aspects of Thirdness, what I call herein mindset. Thus, while last year’s noted article covers the what of Peirce’s universal categories, this article attempts to explain how we think about and develop that mindset [2].

Peirce is Not Frozen in Amber

There are philosophers, logicians and scholars who study Peirce as a passion, many for a living. There is a society devoted to Peirce, many Web sites such as Arisbe at Indiana University, online forums including one for biosemiotics, annual conferences, and many individuals with their own Web sites and writings who analyze and pronounce strong views as to what Peirce meant and how he should be interpreted. Though Peirce was neglected by many during the heyday of analytical philosophy throughout the 20th century, that is rapidly changing. The reason for Peirce’s ascendancy, I think, is exactly due to the Internet, with its ties to knowledge representation and artificial intelligence. Peircean views are directly relevant to those topics. His writings in logic, semiosis (signs), pragmatism, existential graphs, classification, and how to classify are among the most direct of this relevancy.

But relevant does not mean agreed upon, and researchers understand Peirce through their own lenses, as the idea of Peirce’s Thirdness affirms. Most Peircean scholars acknowledge changes in Peirce’s views over time, particularly from his early writings in the 1860s to those after the turn of the century and up until his death in 1914. Where Peirce did undergo major changes or refinements in understanding, Peirce himself was often the first to explain those changes. Peirce also had strong views about the need to be precise in naming things, best expressed in his article on The Ethics of Terminology [3]. These views led him to often use obscure terms or his own constructions to avoid sloppy understanding of common terms; he also proposed a variety of defining terms throughout the life of many of his concepts in his quest for precision. So even if the ideas and concepts remained essentially unchanged, his terminology did not. Further, when his friend William James began writing on pragmatism, a term first proffered by Peirce but explained by James in ways with which Peirce did not wholly agree, Peirce shifted his own concept to the term pragmaticism.

I can appreciate Peirce’s preference for precision in how he describes things. I can also appreciate scholars sometimes concentrating more on literalness than meaning. But the use of obfuscatory terms or concentrating on labels over the conceptual is a mistake. When looking for precise expression for new ideas I try to harken to key Peircean terms and concepts, but I sometimes find alternative descriptions within Peirce’s writings that communicate better to modern sensibilities. Concepts attempt to embody ideas, and while it is useful to express those concepts with clear, precise and correct terminology, it is the idea that is real, not the label. In Peirce’s worldview, the label is only an index. I concur. In the semantic Web, this is sometimes referred to as “things, not strings.”

The Nature of Knowledge

“. . . the machinery of the mind can only transform knowledge, but never originate it, unless it be fed with facts of observation.” (CP 5.392) (see How to Make Our Ideas Clear.)

That we live in an age of information and new technologies and new developments is a truth clear to all. These developments lead to a constant barrage of new facts. What we believe and how we interpret that new information is what we call knowledge. New facts connect to or change our understanding of old “facts”; those connections, too, are a source of new knowledge. Our powers of observation, learning and discovery; our interactions and consensus-building with communities; and the methods of scientific inquiry all cause us to test, refine and sometimes revise or discard what we thought to be prior truths. Knowledge is thus dynamic, constantly growing, and subject to revision.

Peirce was a firm believer in reality and truth. But because we never can have all of the facts, and how we understand or act upon those facts is subject to our own contexts and perspectives (the Thirdness under which real things are perceived), “truth” as we know it can never be absolute. We can be confident in our beliefs about the correctness of assertions, but we can never be absolutely certain. While in some objective reality there is indeed absolute truth, we can only approximate an understanding of this truth. This view was a central tenet of Peirce’s mindset, which he called fallibilism:

“. . . I used for myself to collect my ideas under the designation fallibilism; and indeed the first step toward finding out is to acknowledge you do not satisfactorily know already; so that no blight can so surely arrest all intellectual growth as the blight of cocksureness; and ninety-nine out of every hundred good heads are reduced to impotence by that malady — of whose inroads they are most strangely unaware!” (CP 1.13)

Just because our knowledge may be incomplete or false, Peirce does not advocate that anything goes. There is objective reality and there is truth, whether or not we can see or touch or think about it. Right beliefs are grounded in shared community concepts of truth; testing and falsifying our assumptions helps peel back the layers of truth, improving our basis for right action.

The quest for truth is best embodied in the scientific method, where we constantly revise and test what we think we know. When the fact emerges that does not conform to this worldview, we need to stand back and test anew the assumptions that went into our belief. Sometimes that testing causes us to change our belief. New knowledge and innovation are grounded in this process. Terminology is also important in this task. Our ways of testing and communicating knowledge are dependent on how accurately we are capturing the objective truth and how well we can describe and communicate it to others, who are also pursuing the same quest for truth. But, because the information needed for such knowledge is never complete, nor is the societal consensus for how we describe what we observe about the truths for which we quest, our understanding is always incomplete and fallible.

We thus hardly live in an either-or world. Shades of gray, what information is available, and differences of perspective, context and shared meaning each affect what we call knowledge. Binary or dyadic upper ontologies (in the domain of knowledge representation), the most common form, cannot by definition capture these nuances. Peirce’s most effective argument for Thirdness resides in providing perspective to dyadic structures. A thirdness is required to stand apart from the relation, or to express relations dealing with relations, such as giving (A gives B to C). The ability to embrace this thirdness is the major structural choice within KBpedia.

We also hardly live in a world of complete information. A key reason why two agents or parties may not agree or share the same knowledge of an idea is the difference in the information available or employed by each of them. This difference can be one of scope, the nature of the information, or the nature of the agent. Differences in information pepper Peirce’s examples and arguments. Peirce had a very precise view of information as the product of the characteristics of a subject, which he called depth, times the outward relations of that subject, which he called breadth. The nature of information is that it is not equal at all times to all agents or interpreters. In the realm of semantic technologies, the logical framework to capture this truth of the real world is known as the open world assumption (or OWA) [4]. It is a topic we have written about for years. Though the OWA terminology was not available in Peirce’s time, the idea is certainly part of his mindset.

Being Informed by the Categories

Peirce has historically been known best as the father of pragmatism (pragmaticism, see above). The central ideas behind Peircean pragmatism are how to think about signs and representations (semiosis), how to logically reason and handle new knowledge (abduction), statistics, making economic and efficient research choices, how to categorize, and the importance and process of the scientific method. All of these contributions are grounded in Peirce’s universal categories of Firstness, Secondness and Thirdness. And herein lies the key to being informed by Peirce when it comes to representing new knowledge, categorization, or problem-solving: It is the mindset of Thirdness and the nature of Firstness and Secondness that provides guidance to knowledge-based artificial intelligence.

I continue to assemble examples of Firstness, Secondness and Thirdness across Peirce’s writings. I probably have assembled 100 such trichotomies, parts of which I’ve published before [5]. Each of these trichotomies is embedded in Peirce’s writings, which need to be read and re-read to appreciate the viewpoint behind each specific triad. It is through such study that the mindset of the universal categories may be grokked. I’ve also spoken in practical terms for how I see this mindset applied to questions of categorization and emerging knowledge in knowledge bases, knowledge representation, and artificial intelligence (KBAI) [2].

From the perspective of KBAI, being informed by Peirce thus means that, firstly, we need to embrace terminology that is precise for concepts and relations so as to communicate effectively within our communities. Secondly, we need to capture the right particular things of concern in our knowledge domain and connect them using those relations. This mindset naturally leads to a knowledge graph structure. And, thirdly, we need to organize our knowledge domain by general types based on logical, shared attributes, but also embrace a process for expanding that structure with acceptable effort to deal with new information or emergent knowledge. Changes in Firstness or Secondness are reasoned over in Thirdness, beginning the process anew.

And that leads to a final observation about mindset, especially with regard to Thirdness. Continuity is an aspect of Thirdness, and discovery of new knowledge is itself a process. Concepts around space and time also become clearer when they can be embedded in an understanding of continuity. Peirce effectively argues for why three is the highest n-ary relation that cannot be decomposed, or reduced to a simpler form. The brilliance of Peirce’s mindset is that first, second and third are a sufficient basis to bootstrap how to represent the world. Knowledge cannot be represented without an explicit thirdness.


[1] Most references to Peirce herein are from the electronic edition of The Collected Papers of Charles Sanders Peirce, reproducing Vols. I-VI, Charles Hartshorne and Paul Weiss, eds., 1931-1935, Harvard University Press, Cambridge, Mass., and Arthur W. Burks, ed., 1958, Vols. VII-VIII, Harvard University Press, Cambridge, Mass. The citation scheme is volume number using Arabic numerals followed by section number from the collected papers, shown as, for example, CP 1.208.
[2] In a still earlier article, “A Foundational Mindset: Firstness, Secondness, Thirdness” (M.K. Bergman, 2016, AI3:::Adaptive Information blog, March 21, 2016), in its concluding sections I attempted to explain the how of applying Peirce’s universal categories to the questions of categorization and representing new and emerging information.
[3] See further CP 2.219-226. Also an earlier article that helps provide Peirce’s views on communications is, How to Make Our Ideas Clear.
[4] I’ve written much over the years on OWA. See especially M.K. Bergman, 2009, “The Open World Assumption: Elephant in the Room,” AI3:::Adaptive Information blog, December 21, 2009.
[5] See further the citation in [2].
Posted: January 25, 2017

Eight Cognonto Use Cases are Now Available

Since its initial release in September, we have continued to refine Cognonto’s KBpedia knowledge structure that integrates six major knowledge bases (Wikipedia, Wikidata, OpenCyc, GeoNames, DBpedia and UMBEL), plus mappings to another 20 leading ones. KBpedia provides a foundation for knowledge-based artificial intelligence (KBAI) by supporting the (nearly) automatic creation of training corpuses and positive and negative training sets and feature sets for deep, unsupervised and supervised machine learning.

Our most recent efforts have been to expand the scope and completeness of KBpedia, largely based on filling gaps in the current structure using local Wikipedia categories. This ongoing effort is making sure that the overall KBpedia structure represents the best amalgam of structure and content from KBpedia’s contributing knowledge bases. There should be an announcement of a new KBpedia release arising from these current efforts soon.

However, in the process of enhancing the Cognonto Mapper for this expansion, two new use cases have resulted from our efforts. The first use case outlines how we have used the DeepWalk graph embedding model to expand KBpedia using Wikipedia category information. The second use case, again using DeepWalk, presents a fast method for accurate concept disambiguation.

With these two additions, Cognonto now has eight diverse use cases.

Each use case is summarized according to the problem, our approach to solving it, and the benefits that result. The use cases themselves present general workflows and code snippets for how each was tackled.

We will continue to publish use cases using Cognonto’s technologies and KBpedia as they arise. Also, stay tuned for the expanded KBpedia release.