Posted: July 13, 2020

New KBpedia ID Property and Mappings Added to Wikidata

Wikidata editors last week approved adding a new KBpedia ID property (P8408) to their system, and we have therefore followed up by adding nearly 40,000 mappings to the Wikidata knowledge base. We have another 5,000 to 6,000 mappings that we will add in the coming weeks. Thereafter, we will continue to increase the cross-links, as we partially document below.

This milestone is one I have had in my sights for at least the past few years. We want both to: 1) provide a computable overlay to Wikidata; and 2) increase our use of, and reliance on, Wikidata’s unparalleled structured data resources as we move KBpedia forward. Below I give a brief overview of the status of Wikidata, share some high-level views of our experience in proposing and then mapping a new Wikidata property, and conclude with some thoughts on where we might go next.

The Status of Wikidata

Wikidata is the structured data initiative of the Wikimedia Foundation, the open-source group that oversees Wikipedia and many other notable Web-wide information resources. Since its founding in 2012, Wikidata’s use and prominence have exceeded expectations. Today, Wikidata is a multilingual resource with structured data for more than 95 million items, characterized by nearly 10,000 properties. Items are being added to Wikidata at a rate of nearly 5 million per month. A rich ecosystem of human editors and bots patrols the knowledge base and its entries to enforce data quality and consistency. The ecosystem includes tools for bulk loading of data with error checks, search including structured SPARQL queries, and navigation and visualization. Errors and mistakes in the data occur, but the system ensures such problems are removed or corrected as discovered. Thus, as the data has grown, so have its quality and usefulness.

From KBpedia’s standpoint, Wikidata represents the most complete complementary instance data and characterization resource available. As such, it is the driving wheel and stalking horse (to mix eras and technologies) to guide where and how we need to incorporate data and its types. These have been the overall motivators for us to embrace a closer relationship with Wikidata.

As an open governance system, Wikidata has adopted its own data models, policies, and approval and ingest procedures for adopting new data or characterizations (properties). You might find it interesting to review the process and ongoing dialog that accompanied our proposal for a KBpedia ID as a property in Wikidata. As of one week ago, KBpedia ID was assigned Wikidata property P8408. To date, more than 60% of Wikidata properties have been such external identifiers, and IDs are the fastest-growing category of properties. Since most properties that relate to internal entity characteristics have already been identified and adopted, we anticipate mappings to external systems will continue to be a dominant feature of the growth in Wikidata properties to come.

Our Mapping Experience

There are many individuals who spend considerable time monitoring and overseeing Wikidata. I am not one of them. I had never before proposed a new property to Wikidata, and had proposed only one actual Q item (Q is the standard prefix for an entity or concept in Wikidata) for KBpedia prior to proposing our new property.

Like much else in the Wikimedia ecosystem, there are specific templates in place for proposing a new Q item or a new P property (see the examples of external identifiers, here). Since there are about 10,000 times more Q items than properties, the path for getting a new property approved is more stringent.

Then, once a new property is granted, there are specific paths, such as QuickStatements, that need to be followed in order to submit new data items (Q ids) or characteristics (property assignments to Q ids). I made some newbie mistakes in my first bulk submissions, and fortunately had a knowledgeable administrator (@Epidosis) guide me through making the fixes. For example, we had to back out about 10,000 updates because I had used the wrong form for referencing a claim. Once corrected, we were able to upload the mappings again.
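To make the mechanics concrete, here is a minimal Python sketch of how one might generate a QuickStatements V1 batch for KBpedia ID (P8408) claims, each carrying a ‘reference URL’ (S854) and ‘retrieved’ (S813) reference. The Q-item/KBpedia-ID pairs and the KBpedia URL pattern shown are hypothetical placeholders, not our actual batch:

```python
# Sketch: emit QuickStatements V1 commands (tab-separated) that assert a
# KBpedia ID (P8408) claim plus a reference on each Wikidata item.
# The Q-item/KBpedia-ID pairs below are hypothetical placeholders.

mappings = [
    ("Q146", "Cat"),      # hypothetical pairing, for illustration only
    ("Q7377", "Mammal"),  # hypothetical pairing, for illustration only
]

def quickstatements_lines(pairs, retrieved="+2020-07-08T00:00:00Z/11"):
    for qid, kbp_id in pairs:
        # V1 format: item, property, quoted value, then S-prefixed
        # source properties (S854 = reference URL, S813 = retrieved).
        yield "\t".join([
            qid, "P8408", f'"{kbp_id}"',
            "S854", f'"https://kbpedia.org/kko/rc/{kbp_id}"',  # assumed URL pattern
            "S813", retrieved,
        ])

with open("kbpedia_batch.tsv", "w") as f:
    f.write("\n".join(quickstatements_lines(mappings)))
```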

As one might imagine, updates and changes are being submitted into the system at all times by multiple human agents and (some) bots. Facilities like QuickStatements are designed to enable batch uploads, and allow re-submissions due to errors. You might want to see what is currently active on the system by checking out the current status.

With multiple inputs and submitters, it takes time for large numbers of claims to be uploaded. In the case of our 40,000 mappings, we also accompanied each of them with a source and an update-date characterization, leading to a total upload of more than 120,000 claims. We split our submissions into multiple parts or batches, and then re-submitted if initial claims errored out (for example, if the base claim had not yet been fully registered, the next subsidiary claims might error for lack of a registered subject; on a second pass, the subject would be there and so no error). We ran our batches at off-peak times for both Europe and North America, but the runs still took about 12 hours in total.
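This two-pass retry logic is simple enough to sketch in Python. This is not our actual pipeline; submit_batch() is a hypothetical stand-in for whatever upload mechanism (QuickStatements or otherwise) is used, assumed to return the claims that errored:

```python
# Sketch of the two-pass batch pattern described above: claims that fail
# because their subject statement has not yet registered are retried
# after the first full pass completes.

def chunks(claims, size=500):
    """Split the claim list into batches of a manageable size."""
    for i in range(0, len(claims), size):
        yield claims[i:i + size]

def upload_with_retry(claims, submit_batch, passes=2):
    pending = list(claims)
    for _ in range(passes):
        failed = []
        for batch in chunks(pending):
            # submit_batch() is assumed to return the claims that errored
            failed.extend(submit_batch(batch))
        if not failed:
            break
        pending = failed  # subsidiary claims usually succeed on a later pass
    return pending        # anything still failing needs manual review
```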

Once loaded, the internal quality controls of Wikidata kick in. Both bots and human editors monitor concepts, and both can flag (and revert) the mapping assignments made. After three days of being active on Wikidata, we had a dozen reverts of initially uploaded mappings, representing about 0.03% of our suggested mappings, which is gratifyingly low. Still, we expect to hear of more such errors, and we are committed to fixing all that are identified. But, at this juncture, it appears our initial mappings were of pretty high quality.

We had a rewarding learning experience in uploading mappings to Wikidata and found much good will and assistance from knowledgeable members. Naturally, everything should be checked in advance to ensure quality assertions when preparing uploads to Wikidata. But, even so, the system and its editors appear quite capable of identifying and enforcing quality controls and constraints as encountered. Overall, I found the entire data upload process to be impressive and rewarding. I am quite optimistic that this ecosystem will continue to improve moving forward.

The results of our external ID uploads and mappings can be inspected with SPARQL queries against the KBpedia ID property on Wikidata.
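As one representative query, here is a small Python sketch against the public Wikidata Query Service that counts the items now carrying a KBpedia ID claim (the User-Agent string is an arbitrary placeholder):

```python
# Count Wikidata items that carry a KBpedia ID (P8408) claim, via the
# public Wikidata Query Service. Requires the 'requests' package.
import requests

QUERY = """
SELECT (COUNT(?item) AS ?mapped) WHERE {
  ?item wdt:P8408 ?kbpediaId .
}
"""

resp = requests.get(
    "https://query.wikidata.org/sparql",
    params={"query": QUERY},
    headers={
        "Accept": "application/sparql-results+json",
        "User-Agent": "kbpedia-mapping-check/0.1 (example)",  # placeholder
    },
)
resp.raise_for_status()
count = resp.json()["results"]["bindings"][0]["mapped"]["value"]
print(f"Items with a KBpedia ID: {count}")
```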

As of this writing, the KBpedia ID is now about the 500th most prevalent property on Wikidata.

What is Next?

Wikidata is clearly a dynamic data environment. Not only are new items being added by the millions, but existing items are being better characterized and related to external sources. Dealing with the immense scales involved requires automated quality-checking bots along with human editors committed to the data integrity of their domains and items. To engage in a large-scale mapping such as KBpedia’s also requires a commitment to the Wikidata ecosystem and model.

Initiatives that appear immediately relevant to what we have put in place relating to Wikidata include to:

  • Extend the current direct KBpedia mappings to fix initial mis-assignments and to extend coverage to remaining sections of KBpedia
  • Add additional cross-mappings that exist in KBpedia but have not yet been asserted in Wikidata (for example, there are nearly 6,000 such UNSPSC IDs)
  • Add equivalent class (P1709) and possible superproperties (P2235) and subproperties (P2236) already defined in KBpedia
  • Where useful mappings are desirable, add missing Q items used in KBpedia to Wikidata
  • And, most generally, also extend mappings to the 5,000 or so shared properties between Wikidata and KBpedia.

I have been impressed as a user of Wikidata for some years now. This most recent experience also makes me enthusiastic about contributing data and data characterizations directly.

To Learn More

The KBpedia Web site provides a working KBpedia explorer and demo of how the system may be applied to local content for tagging or analysis. KBpedia splits between entities and concepts, on the one hand, and splits predicates into attributes, external relations, and pointers or indexes, on the other, all informed by Charles Peirce‘s writings related to knowledge representation. KBpedia was first released in October 2016 with some open source aspects, and was made fully open in 2018. KBpedia is partially sponsored by Cognonto Corporation. All resources are available under the Creative Commons Attribution 4.0 International (CC BY 4.0) license.

Posted: June 15, 2020

KBpedia: New Version Finally Meets the Hurdle of Initial Vision

I am pleased to announce that we released a powerful new version of KBpedia today with e-commerce and logistics capabilities, as well as significant other refinements. The enhancement comes from adding the United Nations Standard Products and Services Code (UNSPSC) as KBpedia’s seventh core knowledge base. UNSPSC is a comprehensive and logically organized taxonomy for products and services, organized into four levels, with third-party crosswalks to economic and demographic data sources. It is a leading standard for many industrial, e-commerce, and logistics applications.

This was a heavy lift for us. Given the time and effort involved, Fred Giasson, KBpedia’s co-editor, and I decided to also tackle a host of other refinements we had on our plate. All told, we devoted many thousands of person-hours and more than 200 complete builds from scratch to bring this new version to fruition. I can proudly say that this version finally meets the starting vision we had when we first began KBpedia’s development. It is a solid baseline from which to build all sorts of applications and to make broad outreach for adoption in 2020. Because of the extent of changes in this new version, we have leapfrogged KBpedia’s version numbering from 2.21 to 2.50.

KBpedia is a knowledge graph that provides a computable overlay for interoperating and conducting machine learning across its constituent public knowledge bases of Wikipedia, Wikidata, GeoNames, DBpedia, schema.org, OpenCyc, and, now, UNSPSC. KBpedia now contains more than 58,000 reference concepts and their mappings to these knowledge bases, structured into a logically consistent knowledge graph that may be reasoned over and manipulated. KBpedia acts as a computable scaffolding over these broad knowledge bases with the twin goals of data interoperability and knowledge-based artificial intelligence (KBAI).

KBpedia is built from an expandable set of simple text ‘triples‘ files, specified as tuples of subject-predicate-object (EAVs to some, such as Kingsley Idehen), that enable the entire knowledge graph to be constructed from scratch. This process enables many syntax and logic tests, especially for consistency, coherency, and satisfiability, to be invoked at build time. A build may take from one to a few hours on a commodity workstation, depending on the tests. The build process outputs validated ontology (knowledge graph) files in the standard W3C OWL 2 semantic language and mappings to individual instances in the contributing knowledge bases.
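For a flavor of what such a build entails, here is a minimal Python sketch using rdflib: parse a set of simple triples files, fail fast on syntax errors, and serialize the merged graph. The file names are hypothetical, and the sketch omits the consistency and satisfiability reasoning the real build performs:

```python
# Minimal sketch of a triples-file build step: parse each source file
# (syntax errors halt the build), merge, and serialize the result.
from rdflib import Graph

SOURCE_FILES = [            # hypothetical file names
    "kko-structure.nt",
    "rc-subsumptions.nt",
    "rc-mappings.nt",
]

graph = Graph()
for path in SOURCE_FILES:
    try:
        graph.parse(path, format="nt")  # the syntax check happens here
    except Exception as exc:
        raise SystemExit(f"Build halted: {path} failed to parse: {exc}")

print(f"Merged graph holds {len(graph)} triples")
# The real build would now run an OWL reasoner for consistency,
# coherency, and satisfiability before accepting the output.
graph.serialize("kbpedia-reference-concepts.owl", format="xml")
```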

As Fred notes, we continue to streamline and improve our build procedures. Major changes like what we have just gone through, be it adding a main source like UNSPSC or swapping out or adding a new SuperType (or typology), often require multiple build iterations to pass the system’s consistency and satisfiability checks. We need these build processes to be as easy and efficient as possible, which also was a focus of our latest efforts. One of our next major objectives is to release KBpedia’s build and maintenance codes, perhaps including a Python option.

Incorporation of UNSPSC

Though UNSPSC is consistent with KBpedia’s existing three-sector economic model (raw products, manufactured products, services), adding it did require structural changes throughout the system. With more than 150,000 listed products and services in UNSPSC, incorporating it had to be balanced against KBpedia’s existing generality and scope. The approach was to include 100% of the top three levels of UNSPSC — segments, families, and classes — plus the more common and expected product and service ‘commodities’ in its fourth level. This design maintains balance while providing a framework to tie in any remaining UNSPSC commodities of interest to specific domains or industries. The approach led to integrating 56 segments, 412 families, 3,700+ classes, and 2,400+ commodities into KBpedia. Since some 1,300 of these additions overlapped with existing KBpedia reference concepts, we checked, consolidated, and reconciled all duplicates.
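One nicety of UNSPSC is that its codes are eight digits, two per level, so the segment-family-class-commodity hierarchy can be recovered mechanically. A small Python sketch (the sample code is illustrative only):

```python
# Recover the four UNSPSC levels from an eight-digit code: two digits
# each for segment, family, class, and commodity.

def unspsc_levels(code: str) -> dict:
    assert len(code) == 8 and code.isdigit(), "UNSPSC codes are 8 digits"
    return {
        "segment":   code[:2] + "000000",
        "family":    code[:4] + "0000",
        "class":     code[:6] + "00",
        "commodity": code,
    }

print(unspsc_levels("43211508"))  # an illustrative code in segment 43
# {'segment': '43000000', 'family': '43210000',
#  'class': '43211500', 'commodity': '43211508'}
```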

We fully specified and integrated all added reference concepts (RCs) into the existing KBpedia structure, and then mapped these new RCs to all seven of KBpedia’s core knowledge bases. Through this process, for example, we were able to greatly expand the coverage of UNSPSC items on Wikidata from 1,000 or so Q (entity) identifiers to more than 6,500. Contributing such mappings back to the community is another effort our KBpedia project will undertake next.

Lastly, with respect to UNSPSC, I will provide a separate article on why we selected it as KBpedia’s products and services template, how we did the integration, and what we found along the way. For now, the quick point is that UNSPSC is well structured and organized according to the three-sector model of the economy, which matches well with Peirce’s three universal categories underlying our design of KBpedia.

Other Major Refinements

These changes were broad in scope. Effecting them took time and broke open core structures. Opportunities to rebuild the structure in cleaner ways arise when the Tinkertoys get scattered and then re-assembled. Some of the other major refinements the project undertook during the builds necessary to create this version were to:

  • Further analyze and refine the disjointedness between KBpedia’s 70 or so typologies. Disjoint assertions are a key mechanism for sub-set selections, various machine learning tasks, querying, and reasoning (a minimal example of such an assertion follows this list)
  • Increase the number of disjointedness assertions 62% over the prior version, resulting in better modularity. (Note, however, that the number of RCs affected by these improvements is lower than this percentage suggests, since many were already specified in prior disjoint pools)
  • Add 37% more external mappings to the system (DBpedia and UNSPSC, principally)
  • Complete 100% of the definitions for RCs across KBpedia
  • Greatly expand the altLabel entries for thousands of RCs
  • Improve the naming consistency across RC identifiers
  • Further clean the structure to ensure that a given RC is specified only once to its proper parent in an inheritance (subsumption) chain, which removes redundant assertions and improves maintainability, readability, and inference efficiency
  • Expand and update the explanations within the demo of the upper KBpedia Knowledge Ontology (KKO) (see kko-demo.n3). This non-working ontology included in the distro makes it easier to relate the KKO upper structure to the universal categories of Charles Sanders Peirce, which provides the basic organizational framework for KKO and KBpedia, and
  • Integrate the mapping properties for core knowledge bases within KBpedia’s formal ontology (as opposed to only offering as separate mapping files); see kbpedia-reference-concepts-mappings.n3 in the distro.
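As noted in the first bullet above, here is a minimal sketch of what a typology disjointness assertion looks like, using rdflib; the typology names and the KKO namespace IRI are illustrative stand-ins rather than exact distro identifiers:

```python
# Assert pairwise disjointness between two typology classes: no
# individual may then be a member of both, which supports sub-set
# selection and cleaner reasoning.
from rdflib import Graph, Namespace
from rdflib.namespace import OWL, RDF

KKO = Namespace("http://kbpedia.org/ontologies/kko#")  # assumed IRI

g = Graph()
g.bind("kko", KKO)
g.add((KKO.Animals, RDF.type, OWL.Class))
g.add((KKO.Plants, RDF.type, OWL.Class))
g.add((KKO.Animals, OWL.disjointWith, KKO.Plants))

print(g.serialize(format="turtle"))
```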

Current Status of the Knowledge Graph

The result of these structural and scope changes was to add about 6,000 new reference concepts to KBpedia and then remove the duplicates, resulting in a total of more than 58,200 RCs in the system. This increases KBpedia’s size by about 9% over the prior release. KBpedia is now structured into about 73 mostly disjoint typologies under the scaffolding of the KKO upper ontology. KBpedia has fully vetted, unique mappings (nearly all one-to-one) to these key sources:

  • Wikipedia – 53,323 (including some categories)
  • DBpedia – 44,476
  • Wikidata – 43,766
  • OpenCyc – 31,154
  • UNSPSC – 6,553
  • schema.org – 842
  • DBpedia ontology – 764
  • GeoNames – 680
  • Extended vocabularies – 249.

The mappings to Wikidata alone link to more than 40 million unique Q instance identifiers. These mappings may be found in the KBpedia distro. Most of the class mappings are owl:equivalentClass, but a minority may be subClass, superClass, or isAbout predicates as well.
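The shape of these mapping assertions is straightforward; here is a minimal rdflib sketch, with representative rather than actual IRIs and hypothetical Q ids:

```python
# Illustrative mapping assertions: most class mappings are equivalences,
# with a minority of looser sub-/superclass links.
from rdflib import Graph, Namespace
from rdflib.namespace import OWL, RDFS

RC = Namespace("http://kbpedia.org/kko/rc/")       # representative IRI base
WD = Namespace("http://www.wikidata.org/entity/")

g = Graph()
g.add((RC.Toucan, OWL.equivalentClass, WD.Q25222))   # hypothetical Q id
g.add((RC.ToucanSpecies, RDFS.subClassOf, WD.Q5113)) # hypothetical looser link
```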

KBpedia also includes about 5,000 properties, organized into a multi-level hierarchy of attributes, external relations, and representations, most derived from Wikidata and schema.org. Exploiting these properties and sub-properties is also one of the next priorities for KBpedia.

To Learn More

The KBpedia Web site provides a working KBpedia explorer and demo of how the system may be applied to local content for tagging or analysis. KBpedia splits between entities and concepts, on the one hand, and splits predicates into attributes, external relations, and pointers or indexes, on the other, all informed by Charles Peirce‘s prescient theories of knowledge representation. Mappings to all external sources are provided in the linkages to the external resources file in the KBpedia downloads. (A larger inferred version is also available.) The external sources keep their own record files; KBpedia distributions provide the links. However, you can access these entities through the KBpedia explorer on the project’s Web site (see these entity examples for cameras, cakes, and canyons; clicking on any of the individual entity links will bring up the full instance record; such reach-throughs are straightforward to construct). See the GitHub site for further downloads.

KBpedia was first released in October 2016 with some open source aspects, and was made fully open in 2018. KBpedia is partially sponsored by Cognonto Corporation. All resources are available under the Creative Commons Attribution 4.0 International (CC BY 4.0) license.

Posted: December 15, 2019

[Image: ‘Dazzle’ by Shigeki Matsuyama, via https://www.vice.com/en_uk/article/wnp3mn/dazzle-camouflage-room-sized-optical-illusion]

The Choice Between Class and Instance Depends on Your Point of View

Readers of this blog know that I use the open-source Protégé ontology editor to build and maintain our knowledge graphs. Besides the usefulness of the tool, there is also an informative user mail list that discusses the Protégé application and modeling choices that may arise when using it [1]. A recent thread, ‘How to Relate Different Classes,’ is but one example of an issue one might encounter on this list [2]. As one of the frequent commenters on the list, Michael DeBellis, noted about this thread [3], “I think this is a common issue with modeling, what to make a class and what to make an instance.”

Michael is indeed correct that the distinction between classes and instances is a frequent topic, one that I have touched upon in various ways through the years. The liveliness of this recent thread convinced me it would be helpful to pull together how one chooses between a class and an instance in a knowledge graph. The topic is also critical to the questions of knowledge representation and interoperability, two key uses for knowledge graphs. So, let’s look at this question of class vs. instance from the aspects of the nature of knowledge, modeling, and practical considerations.

Epistemological Issues

Epistemology is simply the study of the nature of knowledge. It gets at the questions: What is knowledge? What is belief? What is justification for action? How can we acquire and validate knowledge? Is knowledge infallible? Are there different kinds of knowledge?

Charles Sanders Peirce and his theory of signs were intimately related to these questions, as well as to how we express and convey knowledge to others. Since, as humans, we communicate through language as symbols, what we mean and intend to convey when expressing these symbols is also of utmost importance to how we understand and refine knowledge as a community process. My recent book has a number of chapters mostly if not exclusively related to these topics [4,5]. Many of the points in this section are drawn from those chapters.

We can illustrate some of the tricky epistemology issues associated with the nature of language using the example of the ‘toucan’ bird often used in discussions of semantic technologies. When we see something, or point to something, or describe something in words, or think of something, we are, of course, using proxies in some manner for the actual thing. If the something is a ‘toucan’ bird, that bird does not reside in our head when we think of it. The ‘it’ of the toucan is a ‘re-presentation’ of the real, dynamic toucan. The representation of something is never the actual something but is itself another thing — that is, a sign — that conveys to us the idea of the real something. In our daily thinking we rarely make this distinction. (For this we should be thankful; otherwise, our flow of thoughts would be wholly jangled.) Nonetheless, the difference is real, and we should be conscious of it when we are trying to be precise in representing knowledge.

How we ‘re-present’ something is also not uniform or consistent. For the toucan bird, perhaps we make caw-caw bird noises or flap our arms to indicate we are referring to a bird. Perhaps we point at the bird. Alternatively, perhaps we show a picture of a toucan or read or say aloud the word “toucan” or see the word embedded in a sentence or paragraph, as in this one, that also provides additional context. How quickly or accurately we grasp the idea of ‘toucan’ is partly a function of how closely associated one of these accompanying signs may be to the idea of toucan bird. Probably all of us would agree that arm flapping is not nearly as useful as a movie of a toucan in flight or seeing one scolding from a tree branch to convey the ‘toucan’ concept.

The question of what we know and how we know it fascinated Peirce over the course of his intellectual life. He probed this relationship between the real or actual thing, the object, with how that thing is represented and understood. (Also understand that Peirce’s concept of the object may range from individual or particular things to classifications or generalities.) This triadic relationship between immediate object, representation, and interpretation forms a sign and is the basis for the process of sign-making and understanding, what Peirce called semiosis [6].

Even the idea of the object, in this case, the toucan bird, is not necessarily so simple. The real thing itself, an actual toucan bird, has characters and attributes. How do we ‘know’ this real thing? Bees, like many insects, may perceive different coloration for the toucan because they can see in the ultraviolet spectrum, while we do not. On the other hand, most mammals in the rainforest would also not perceive the reds and oranges of the toucan’s feathers, which we readily see. The ‘toucan’ object is thus perceived differently by bees, humans, and other animals. Beyond physical attributes, this actual toucan may be healthy, happy, or sad, nuances beyond our perception that only some fellow toucans may perceive. Though humans, through our ingenuity, may create devices or technologies that expand our standard sensory capabilities to make up for some of these perceptual gaps, our technology will never make our knowledge fully complete. Given limits to perceptions and the information we have on hand, we can never completely capture the nature of the dynamic object, the real toucan bird.

Things get murkier still when we try to convey to others what we mean by the ‘toucan’ bird. For example, when we inspect what might be a description of a toucan on Wikipedia, we see that the term more broadly represents the family of Ramphastidae, which contains five genera and forty different species. The picture we use to refer to ‘toucan’ may be, say, that of the keel-billed toucan (Ramphastos sulfuratus). However, if we view the images of a list of toucan species, we see just how physically divergent various toucans are from one another. Across all species, average sizes vary by more than a factor of three with great variation in bill sizes, coloration, and range. Further, if I assert that the picture of the toucan is that of my pet keel-billed toucan, Pretty Bird, then we can also understand that this representation is for a specific individual bird, and not the physical keel-billed toucan species as a whole. The point is not a lesson on toucans, but an affirmation that distinctions between what we think we may be describing occur over multiple levels. The meaning of what we call a ‘toucan’ bird is not embodied in its label or even its name, but in the accompanying referential information that places the referent into context.

If, in our knowledge graph we intend to convey all of these broader considerations, then we are best defining ‘toucan’ as a class. On the other hand, if we are discussing the individual Pretty Bird toucan or are describing ‘toucan’ and average attributes in relation to a wider context of many types of other birds including eagles and wrens, then perhaps treating the ‘toucan’ as an instance is the better approach. Context and what we intend to convey are essential components to how we need to represent our knowledge. Whether something is an ‘instance’ or a ‘class’ is but the first of the distinctions we need to convey, and those may often vary by context.

Modeling Issues

Because these principles are universal, let’s shift our example to ‘truck’ [7]. In the English language, one of the ways we distinguish between an instance and a class is guided by the singular and plural (though English is notorious for its many different plural forms and exceptions). The attributes we assign to a term differ depending on whether we are discussing ‘trucks’, which we think about more in terms of transport purpose, brands, model, and model year; or a ‘truck’, which has a particular driver, engine, transmission, and mileage. Here is one way to look at such ‘truck’ distinctions (for this discussion, we’ll skip the ABox and TBox, another modeling topic importantly grounded in description logics [8]):

[Figure: Different Views of ‘Truck’]

To accommodate the twin views of class and individual, we could double the number of entities in our knowledge graphs by separately modeling single instances or plural classes, but that rapidly balloons the size of our graphs. What is more efficient is an approach that would enable us to combine both the organization of concepts and their relations and set members with the description and characterization of these concepts as things unto themselves. As our examples of ‘toucans’ and ‘trucks’ show, this dual treatment is a natural and common way to refer to things for most any domain of interest. Further, class and sub-class relationships enable us to construct tree-like hierarchies over which we can infer or inherit attributes and characteristics between parents and children.

For modeling purposes, we also want our graphs to be decidable, which importantly means we can reason over our knowledge graphs with an expectation that we can get definitive answers (even if the answer is “don’t know”) in a reasonable computation time. It is for these reasons that we have chosen the standard OWL 2 as the representation language for our knowledge graphs (in addition to other benefits [9]). A proper OWL 2 knowledge graph is decidable, and it handles both class and instance views using the metamodeling technique of “punning” [10]. Objects in OWL 2 are named with IRIs (internationalized Web links). The trick with “punning” is to evaluate the object based on how it is used contextually; the IRI is shared but its referent may be viewed as either a class or instance based on context. Any entity declared as a class and with an asserted object or data property is punned. Thus, objects used both as concepts (classes) and individuals (instances) are allowed and standard OWL 2 reasoners may be used against them.
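Here is a minimal rdflib sketch of punning in practice: the same IRI is declared as a class and also carries its own data property assertion, so a DL reasoner treats the two uses as distinct views (names and IRIs are illustrative):

```python
# OWL 2 punning: ex:Truck is both a class (with a member) and the
# subject of its own data property assertion.
from rdflib import Graph, Literal, Namespace
from rdflib.namespace import OWL, RDF

EX = Namespace("http://example.org/kg#")

g = Graph()
g.bind("ex", EX)

# Class view: Truck is a class with a member.
g.add((EX.Truck, RDF.type, OWL.Class))
g.add((EX.myRedTruck, RDF.type, EX.Truck))

# Individual view: the same IRI carries its own assertion, which puns
# ex:Truck as an individual under the OWL 2 DL semantics.
g.add((EX.Truck, EX.averagePayloadKg, Literal(1000)))

print(g.serialize(format="turtle"))
```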

Other Practical Issues

We’ve already discussed context, inference, and decidability, but Igor Toujilov highlighted in the mail thread another important benefit of using class over instance declarations in a knowledge graph. The example he provided was based on drug development [11]:

However from my point of view (software engineering), many modern drugs are developed as a specialisation of existing drugs, i.e. by bringing new features to existing drugs. So, some new drug can be considered as a subclass of an existing drug. This is similar to object-orientated design in software: to bring new features, establish a subclass and implement it.

For example, methylphenidate can be considered as a superclass of Ritalin. If an earlier version of your ontology represents methylphenidate as an individual, then it would be difficult to represent Ritalin in later versions without breaking backward compatibility with existing interoperable applications.

This example shows that the preferable approach in ontology development is: use classes instead of individuals, if there is any chance you would need subclasses in the future.

Since knowledge is constantly dynamic and growing, it would seem prudent advice to allow for expansion of the things in your knowledge graph. Classes are the better choice in this instance (pun intended).
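Igor’s point is easy to render concretely. A sketch of the class-based modeling he recommends, with illustrative IRIs:

```python
# Modeling the drug as a class keeps the door open for later
# specialization: Ritalin slots in as a subclass without breaking
# existing applications.
from rdflib import Graph, Namespace
from rdflib.namespace import OWL, RDF, RDFS

EX = Namespace("http://example.org/drugs#")

g = Graph()
g.add((EX.Methylphenidate, RDF.type, OWL.Class))
g.add((EX.Ritalin, RDF.type, OWL.Class))
g.add((EX.Ritalin, RDFS.subClassOf, EX.Methylphenidate))

# Had Methylphenidate been declared only as an individual, adding
# Ritalin beneath it later would break backward compatibility.
```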

Like any language, there is a trade-off in OWL 2 between expressivity and reasoning efficiency [12]. Some prefer a less-constrained RDF and RDFS construct for their knowledge graphs. This approach allows virtually any statement to be asserted and is a least-common denominator for dealing with data encountered in the wild. However, one loses the punning and decidability advantages of OWL 2, and has a less-powerful framework for staging training sets and corpora for machine learning, another key motivation for our own knowledge graphs.

One could also choose a more powerful modeling language such as Datalog or Common Logic to gain the advantages of OWL 2, plus more. We have nothing critical to say about making such a choice. For our use cases, though, we do like the broader use and tools afforded by the use of OWL 2 and other W3C standards. Finding your own ‘sweet spot’ means understanding some of these knowledge representation trade-offs in context with your anticipated applications.


[2] Protégé user email list, ‘How to Relate Different Classes’, https://mailman.stanford.edu/pipermail/protege-user/2019-November/010890.html, Nov 9, 2019.
[4] Bergman, M. K. Information, Knowledge, Representation. in A Knowledge Representation Practionary: Guidelines Based on Charles Sanders Peirce (ed. Bergman, M. K.) 15–42 (Springer International Publishing, 2018). doi:10.1007/978-3-319-98092-8_2.
[5] Bergman, M. K. A KR Terminology. in A Knowledge Representation Practionary: Guidelines Based on Charles Sanders Peirce (ed. Bergman, M. K.) 129–149 (Springer International Publishing, 2018). doi:10.1007/978-3-319-98092-8_7.
[6] Peirce actually spelled it “semeiosis,” though other philosophers, such as Ferdinand de Saussure, employed the shorter term “semiosis.” I use the more common shorter term due to greater familiarity.
[7] Bergman, M. K. Metamodeling in Domain Ontologies. AI3:::Adaptive Information https://www.mkbergman.com/913/metamodeling-in-domain-ontologies/ (2010).
[8] See, for example, my four-part series on description logics, beginning with Bergman, M. K. Making Linked Data Reasonable using Description Logics, Part 1, AI3:::Adaptive Information https://www.mkbergman.com/474/making-linked-data-reasonable-using-description-logics-part-1/ (2009).
[9] See Bernardo Cuenca Grau, Ian Horrocks, Boris Motik, Bijan Parsia, Peter Patel-Schneider and Ulrike Sattler, 2008. “OWL2: The Next Step for OWL,” see http://www.comlab.ox.ac.uk/people/ian.horrocks/Publications/download/2008/CHMP+08.pdf; and also see the OWL 2 Quick Reference Guide by the W3C, which provides a brief guide to the constructs of OWL 2, noting the changes from OWL 1.
[10] “Punning” was introduced in OWL 2 and enables the same IRI to be used as a name for both a class and an individual. However, the direct model-theoretic semantics of OWL 2 DL accommodates this by understanding the class Truck and the individual Truck as two different views on the same IRI, i.e., they are interpreted semantically as if they were distinct. See further Pascal Hitzler et al., eds., 2009. OWL 2 Web Ontology Language Primer, a W3C Recommendation, 27 October 2009; see http://www.w3.org/TR/owl2-primer/.
[12] OWL has historically been described as trying to find the proper tradeoff between expressive power and efficient reasoning support. See, for example, Grigoris Antoniou and Frank van Harmelen, 2003. “Web Ontology Language: OWL,” in S. Staab and R. Studer, eds., Handbook on Ontologies in Information Systems, Springer-Verlag, pp. 76-92. See http://www.few.vu.nl/~frankh/postscript/OntoHandbook03OWL.pdf.

Posted: April 9, 2019

A Knowledge Representation Practionary: $25 to SpringerLink Subscribers

After resolving some technical glitches, Springer has finally made my new book, A Knowledge Representation Practionary: Guidelines Based on Charles Sanders Peirce, available in paperback form. The 464-page book is available for $24.99 under Springer’s MyCopy program.

MyCopy is available to SpringerLink subscribers who have access to the computer science collection. Most universities and larger tech firms and knowledge organizations are current subscribers. You should be able to log in to your local library, go to SpringerLink, and then search for A Knowledge Representation Practionary. If you are a qualified subscriber, you will see the MyCopy banner on the results page. (If you are not a subscriber, you should be able to find a friend or colleague who is and repeat this process using their account.) After choosing to buy, you will be guided through the standard transaction screens.

As Springer states, “MyCopy books are only offered to patrons of a library with access to at least one Springer Nature eBook subject collection and are strictly for individual use only.” 

Though printed only in monochrome, my figures render well. The cover is in color and the paper quality is high. I waited to get a copy myself before recommending it, and I think the overall quality is quite good.

To learn more about my AKRP book, including a listing of the table of contents, see the initial announcement. Also, of course, Springer subscribers have access to the free eBook, and hardcopy and other versions are available. Unfortunately, I think prices are unreasonably high for non-Springer subscribers. Please let me know if you need to find a cheaper alternative.

BTW, if you read and like my book (or even otherwise!), I encourage you to provide your rating to Goodreads.

Posted: January 2, 2019

A Knowledge Representation Practionary: Available in Print or E-book Forms

I’m pleased that shortly before Christmas my new book, A Knowledge Representation Practionary: Guidelines Based on Charles Sanders Peirce (Springer), became available in hardcopy form. The e-book had been available for about two weeks prior to that.

The 464-page book is available from Springer or Amazon (or others). See my earlier announcement for book details and the table of contents.

Individuals with a Springer subscription may get a softcover copy of the e-book for $24.99 under Springer’s MyCopy program. The standard e-book is available for $129 and hardcover copies are available for $169; see the standard Springer order site. Students or individuals without Springer subscriptions who cannot afford these prices should contact me directly for possible alternatives. I will do what I can to provide affordable choices.