Posted: May 7, 2019

KBpedia is a More than Capable Successor

After nearly a dozen years of active service, we are retiring UMBEL (Upper Mapping and Binding Exchange Layer) at the end of the year. UMBEL was one of the first knowledge graphs designed to help disparate content interoperate on the Internet. It was based on the (also now-retired) OpenCyc version of the Cyc knowledge structure. Its development and use heavily influenced and informed the KBpedia knowledge graph, an improved successor designed to also support machine learning and knowledge-based artificial intelligence, or KBAI.

As of this announcement, all further development on UMBEL has ceased. It will be formally retired on December 31, 2019.

While Fred Giasson and I are ceasing our roles as editors of UMBEL and are ending our support of its Web sites and code, we would be pleased to transfer any rights to the system, including UMBEL’s Web addresses and intellectual property, to any entity willing to keep these resources active. Some users may want to see the system continue for their own purposes. The specific resources available are:

Realistically, with the emergence of KBpedia, we do not anticipate much demand for these resources; nonetheless, we would be pleased to discuss transfer of these assets. Please contact me directly if you are interested.

UMBEL has been a great source of learning and a basis for many of our customer engagements over the past decade. UMBEL helped point the way for us to ‘super types’, the importance of disjointedness, the essential need for broad alternative labels (‘semsets’), and the importance of a modular architecture and typologies. It is with much friendship and thanks that we bid adieu to UMBEL and its users and customers.

Posted by AI3's author, Mike Bergman Posted on May 7, 2019 at 9:46 pm in Big Structure, Ontologies, UMBEL | Comments (0)
The URI link reference to this post is: https://www.mkbergman.com/2229/umbel-to-be-retired-by-end-of-2019/
Posted: April 4, 2017

AI Brings an End to an Era

OpenCyc is no longer available to the public. Without notice and with only some minor statements on Web pages, Cycorp has pulled OpenCyc from the marketplace. It appears this change occurred in March 2017. After 15 years, the abandonment of OpenCyc represents the end of one of the more important open source knowledge graphs of the early semantic Web.

OpenCyc was the first large-scale, open-source knowledge base provided in OWL format. It was available in a form usable by the semantic Web before Wikipedia was, though it never assumed the prominent position that DBpedia did in helping to organize semantic Web content.

OpenCyc was first announced in July 2001, with the first release occurring in 2002. By release 0.9, OpenCyc had grown to include some 47,000 concepts in an OWL distribution. By the time of OpenCyc’s last version 4.0 release in mid-2012, the size of the system had grown to some 239,000 concepts. This last version also included significant links to DBpedia, WordNet and UMBEL, among other external sources, as well as references to about 19 K places, 26 K organizations, 13 K persons, and 28 K business-related things. Over the course of its lifetime, OpenCyc was downloaded at least 60,000 times, perhaps more than 100,000, and was a common reference in many research papers and other semantic Web projects [1].

At the height of its use, the distribution of OpenCyc included not only the knowledge graph, but also a Java-based inference engine, a browser for the knowledge base, and specifications of the CycL language and the Cyc API for application development.

The company has indicated it may offer a cloud option in the future for research or educational purposes, but the date and plans are unspecified. Cycorp will continue to support its ResearchCyc and EnterpriseCyc versions.

Reasons for the Retirement

Cycorp’s Web site states OpenCyc was discontinued because OpenCyc was “fragmented” and was confused by the technical community with the other versions of Cyc. Current verbiage also indicates that OpenCyc was an “experiment” that “proved to be more confusing than it was helpful.” We reached out to senior Cycorp officials for additional clarification on the reasons for the retirement but have not received a response.

I suspect the reasons for the retirement go deeper than this. As recently as last summer, senior Cycorp officials were claiming a new major release of OpenCyc was “imminent”.  There always appeared to be a tension within the company about the use and role of an open source version. Key early advocates for OpenCyc, including John De Oliveira, Stephen Reed and Larry Lefkowitz, are no longer with the company. The Cyc Foundation established to support the open source initiative was quietly shut down in 2015. The failure last year of the major AI initiative known as Lucid.ai, which was focused on a major commercialization push behind Cyc and reportedly to be backed by “hundreds of millions of dollars” of venture capital that never materialized, also apparently took its toll on company attention and resources.

Whatever the reasons, and there are likely others, it is hard to see how a 15-year effort could be characterized as experimental. While versions of OpenCyc v 4.0 can still be downloaded from third parties, including a fork, it is clear this venerable contributor to the early semantic Web will soon no longer be available, third parties or not.

Impact on Cognonto

OpenCyc is one of the six major core knowledge bases that form the nucleus of Cognonto‘s KBpedia knowledge structure. This linkage to OpenCyc extends back to UMBEL, another of the six core knowledge bases. UMBEL is itself a subset extraction of OpenCyc [2].

As we began the KBpedia effort, it was clear to us that major design decisions within Cyc (all versions) were problematic to our modeling needs [3]. Because of its common-sense nature, Cyc places a major emphasis on the “tangibility” of objects, including “partial tangibility”. We also found (in our view) major modeling issues in how Cyc handles events vs. actions vs. situations. KBpedia’s grounding in the logic and semiosis of Charles Sanders Peirce was at odds with these basic ontological commitments.

I have considered at various times writing one or more articles on the differences we came to see with OpenCyc, but felt it was perhaps snarky to get into these differences, given the different purposes of our systems. We continue to use portions of OpenCyc with important and useful subsumption hierarchies, but have replaced the entire upper structure with one better reflective of our approach to knowledge-based artificial intelligence (KBAI). We will continue to retain these existing relations.

Thus, fortunately, given our own design decisions from some years back, the retirement of OpenCyc will have no adverse impact on KBpedia. However, UMBEL, as a faithful subset of OpenCyc designed for possible advanced reasoning, may be impacted. We will wait to see what form any new Cycorp initiatives take before making decisions regarding UMBEL. Again, however, KBpedia remains unaffected.

Fare Thee Well!

So, it is with sadness and regret that I bid adieu to OpenCyc. It was a noble effort to help jump-start the early semantic Web, and one that perhaps could have had more of an impact had there been greater top-level commitment. But, like many things in the Internet, generations come and go at ever increasing speed.

OpenCyc certainly helped guide our understanding and basis for our own semantic technology efforts, and for that we will be eternally grateful to the system and its developers and sponsors. Thanks for a good ride!


[1] You can see some of these statistics yourself from the Wayback Machine of the Internet Archive using the URLs of http://www.opencyc.org/, http://www.cyc.com/, https://sourceforge.net/projects/opencyc/ and http://cycfoundation.org.
[2] The intent of UMBEL is to provide a lightweight scaffolding for relating concepts on the Web to one another. About 99% of UMBEL is a direct subset extraction of OpenCyc. This design approach was purposeful to allow systems linked to UMBEL to further reach through to Cyc (OpenCyc, but other versions as well) for advanced reasoning.
[3] I discuss some of these design decisions in M.K. Bergman, 2016. “Threes All the Way Down to Typologies,” blog post on AI3:::Adaptive Information, October 3, 2016.

Posted by AI3's author, Mike Bergman Posted on April 4, 2017 at 1:18 pm in KBpedia, Ontologies, UMBEL | Comments (2)
The URI link reference to this post is: https://www.mkbergman.com/2034/fare-thee-well-opencyc/
Posted: July 18, 2016

NLP and ML Gold Standards: Reference Standards are Not Just for Academics

It is common — if not nearly obligatory — for academic researchers in natural language processing (NLP) and machine learning (ML) to compare the results of their new studies to benchmark reference standards. I outlined some of the major statistical tests in a prior article [1]. The requirement to compare research results to existing gold standards makes sense: it provides an empirical basis for how the new method compares to existing ones, and by how much. Precision, recall, and the combined F1 score are the most prominent amongst these statistical measures.

Of course, most enterprise or commercial projects are done for proprietary purposes, with results infrequently published in academic journals. But, as I argue in this article, even though enterprise projects are geared to the bottom line and not the journal byline, the need for benchmarks, and reference and gold standards, is just as great — perhaps greater — for commercial uses. But there is more than meets the eye with some of these standards and statistics. Why following gold standards makes sense and how my company, Structured Dynamics, does so are the subjects of this article.

A Quick Primer on Standards and Statistics

The most common scoring methods to gauge the “accuracy” of natural language or supervised machine learning analysis involve statistical tests based on the ideas of negatives and positives, true or false. We can measure our correct ‘hits’ by applying our statistical tests to a “gold standard” of known results. This gold standard provides a representative sample of what our actual population looks like, one in which we have characterized in advance whether each result in the sample is true or not for the question at hand. Further, we can use this same gold standard over and over again to gauge improvements in our test procedures.

‘Positive’ and ‘negative’ are simply the assertions (predictions) arising from our test algorithm of whether or not there is a match or a ‘hit’. ‘True’ and ‘false’ merely indicate whether these assertions proved to be correct or not as determined by the reference standard. A false positive is a false alarm, a “crying wolf”; a false negative is a missed result. Combining these thoughts leads to a confusion matrix, which lays out how to interpret the true and false, positive and negative results:

+-------------+----------------------+----------------------+
| Correctness |              Test Assertion                 |
|             | Positive             | Negative             |
+-------------+----------------------+----------------------+
| True        | TP (True Positive)   | TN (True Negative)   |
| False       | FP (False Positive)  | FN (False Negative)  |
+-------------+----------------------+----------------------+

These four characterizations — true positive, false positive, true negative, false negative — now give us the ability to calculate some important statistical measures.

The first metric captures the concept of coverage. In standard statistics, this measure is called sensitivity; in IR (information retrieval) and NLP contexts it is called recall. Basically it measures the ‘hit’ rate for identifying true positives out of all potential positives, and is also called the true positive rate, or TPR:

\mathit{TPR} = \mathit{TP} / P = \mathit{TP} / (\mathit{TP}+\mathit{FN})

Expressed as a fraction of 1.00 or a percentage, a high recall value means the test has a high “yield” for identifying positive results.

Precision is the complementary measure to recall, in that it measures how many of the positive identifications are actually true:

\text{precision}=\frac{\text{number of true positives}}{\text{number of true positives}+\text{false positives}}

Precision is something, then, of a “quality” measure, also expressed as a fraction of 1.00 or a percentage. It is the positive predictive value, defined as the proportion of true positives among all positive results (both true positives and false positives).

Thus, recall gives us a measure as to the breadth of the hits captured, while precision is a statement of whether our hits are correct or not. We also see why false positives need to be a focus of attention in test development: they directly lower precision and efficiency of the test.

That precision and recall are complementary and linked is reflected in one of the preferred overall measures of IR and NLP statistics, the F-score, which is a weighted (β-adjusted) harmonic mean of precision and recall. The general formula for positive real β is:

F_\beta = (1 + \beta^2) \cdot \frac{\mathrm{precision} \cdot \mathrm{recall}}{(\beta^2 \cdot \mathrm{precision}) + \mathrm{recall}}.

which can be expressed in terms of TP, FN and FP as:

F_\beta = \frac {(1 + \beta^2) \cdot \mathrm{true\ positive} }{(1 + \beta^2) \cdot \mathrm{true\ positive} + \beta^2 \cdot \mathrm{false\ negative} + \mathrm{false\ positive}}\,

In many cases the simple harmonic mean is used, meaning a beta of 1; this is the F1 statistic:

F_1 = 2 \cdot \frac{\mathrm{precision} \cdot \mathrm{recall}}{\mathrm{precision} + \mathrm{recall}}

But F1 displays a tension. Either precision or recall may be improved to achieve an improvement in F1, but with divergent benefits or effects. What is more highly valued? Yield? Quality? These choices dictate what areas of improvement need to receive focus. As a result, the weight of beta can be adjusted to favor either precision or recall.

Accuracy is another metric that can factor into this equation, though it is a less referenced measure in the IR and NLP realms. Accuracy is the statistical measure of how well a binary classification test correctly identifies or excludes a condition:

\text{accuracy}=\frac{\text{number of true positives}+\text{number of true negatives}}{\text{number of true positives}+\text{false positives} + \text{false negatives} + \text{true negatives}}

An accuracy of 100% means that the measured values are exactly the same as the given values.

All of the measures above simply require the measurement of false and true, positive and negative, as do a variety of predictive values and likelihood ratios. Relevance, prevalence and specificity are some of the other notable measures that depend solely on these metrics in combination with total population [2].
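To make these definitions concrete, here is a minimal Python sketch (an illustration of my own, not tied to any particular statistics library) that computes the measures above directly from the four confusion-matrix counts:

```python
def recall(tp, fn):
    """Sensitivity, or true positive rate (TPR)."""
    return tp / (tp + fn)

def precision(tp, fp):
    """Positive predictive value."""
    return tp / (tp + fp)

def accuracy(tp, fp, tn, fn):
    """Proportion of all assertions, positive and negative, that are correct."""
    return (tp + tn) / (tp + fp + tn + fn)

def f_beta(tp, fp, fn, beta=1.0):
    """F-score: beta > 1 favors recall, beta < 1 favors precision, beta = 1 is F1."""
    p, r = precision(tp, fp), recall(tp, fn)
    return (1 + beta**2) * p * r / (beta**2 * p + r)

# Arbitrary example counts, purely for illustration
tp, fp, tn, fn = 50, 10, 30, 10
print(recall(tp, fn), precision(tp, fp), f_beta(tp, fp, fn), accuracy(tp, fp, tn, fn))
```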

Not All Gold Standards Shine

Gold standards that themselves contain false positives and false negatives, by definition, immediately introduce errors. These errors make it difficult to test and refine existing IR and NLP algorithms, because the baseline is skewed. And, because gold standards also often inform training sets, errors there propagate into errors in machine learning. It is also important to include true negatives in a gold standard, in the likely ratio expected by the overall population, so as to improve overall accuracy [3].

There is a reason that certain standards, such as the NYT Annotated Corpus or the Penn Treebank [4], are often referenced as gold standards. They have been in public use for some time, with many errors edited from the systems. Vetted standards such as these may have inter-annotator agreements [5] in the range of 80% to 90% [4].  More typical use cases in biomedical notes [6] and encyclopedic topics [7] tend to show inter-annotator agreements in the range of 75% to 80%.
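Inter-annotator agreement itself can be computed in several ways; a common choice for two annotators is Cohen's kappa, which discounts the agreement expected by chance. The following is a small, hypothetical sketch (not the method used in the studies cited above):

```python
from collections import Counter

def cohens_kappa(labels_a, labels_b):
    """Cohen's kappa for two annotators over the same items:
    (observed agreement - chance agreement) / (1 - chance agreement)."""
    assert len(labels_a) == len(labels_b)
    n = len(labels_a)
    observed = sum(a == b for a, b in zip(labels_a, labels_b)) / n
    counts_a, counts_b = Counter(labels_a), Counter(labels_b)
    chance = sum((counts_a[k] / n) * (counts_b[k] / n)
                 for k in set(counts_a) | set(counts_b))
    return (observed - chance) / (1 - chance)

# Two hypothetical annotators labeling ten instances as positive or negative
a = ["pos", "pos", "neg", "pos", "neg", "neg", "pos", "pos", "neg", "pos"]
b = ["pos", "neg", "neg", "pos", "neg", "neg", "pos", "pos", "pos", "pos"]
print(round(cohens_kappa(a, b), 2))   # 0.57 for this toy example
```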

A proper gold standard should also be constructed to provide meaningful input to performance statistics. Per above, we can summarize these again as:

  • TP = standard provides labels for instances of the same types as in the target domain; manually scored
  • FP = manually scored for test runs based on the current configuration; test indicates as positive, but deemed not true
  • TN = standard provides somewhat similar or ambiguous instances from disjoint types labeled as negative; manually scored
  • FN = manually scored for test runs based on the current configuration; test indicates as negative, but deemed not true.
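To make the four categories above more tangible, here is a toy sketch of how a manually scored reference file might be tallied; the instances, field names, and the person-recognizer scenario are all invented for illustration:

```python
# Hypothetical rows of a manually scored reference standard: each instance
# carries the test's assertion (positive/negative) and the human judgment
# of whether that assertion is correct.
standard = [
    {"instance": "Abraham Lincoln",   "assertion": "positive", "correct": True},   # TP
    {"instance": "Lincoln, Nebraska", "assertion": "positive", "correct": False},  # FP
    {"instance": "Lincoln Logs",      "assertion": "negative", "correct": True},   # TN
    {"instance": "Mary Todd Lincoln", "assertion": "negative", "correct": False},  # FN
]

counts = {"TP": 0, "FP": 0, "TN": 0, "FN": 0}
for row in standard:
    key = ("T" if row["correct"] else "F") + ("P" if row["assertion"] == "positive" else "N")
    counts[key] += 1
print(counts)   # {'TP': 1, 'FP': 1, 'TN': 1, 'FN': 1}
```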

It is further widely recognized that the best use for a reference standard is when it is constructed in exact context to its problem domain, including the form and transmission methods of the message. A reference standard appropriate to Twitter is likely not a good choice to analyze legal decisions, for example.

So, we can see many areas by which gold, or reference, standards may not be constructed equally:

  1. They may contain false positives
  2. They have variable inter-annotator agreement
  3. They have variable mechanisms, most with none, for editing and updating the labels
  4. They may lack sufficient inclusion of negatives
  5. They may be applied to an out-of-context domain or circumstance.

Being aware of these differences and seeking hard information about them are essential considerations whenever a serious NLP or ML project is being contemplated.

Seemingly Good Statistics Can Lead to Bad Results

We may hear quite high numbers for some NLP experiments, sometimes in the mid-90% to higher range. Such numbers sound impressive, but what do they mean and what might they not be saying?

We humans have a remarkable ability to see when things are not straight, level or plumb. We have a similar ability to spot errors in long lists and orders of things. While a claimed accuracy of even, say, 95% sounds impressive, applied to a large knowledge graph such as UMBEL [8], with its 35,000 concepts, it translates into 1,750 misassignments. That sounds like a lot, and it is. Yet misassignments of some nature occur within any standard. When they occur, they are sometimes glaringly obvious, like being out of plumb. It is actually pretty easy to find most errors in most systems.

Still, for the sake of argument, let’s accept we have applied a method that has a claimed accuracy of 95%. But, remember, this is a measure applied against the gold standard. If we take the high-end of the inter-annotator agreements for domain standards noted above, namely 80%, then we have this overall accuracy within the system:

.80 x .95 = 0.76

Whoa! Now, using this expanded perspective, for a candidate knowledge graph the size of UMBEL — that is, about 35 K items — we could see as many as 8,400 misassignments. Those numbers now sound really huge, and they are. They are unacceptable.
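The compounding is easy to generalize: overall accuracy is roughly the product of the standard's agreement level and the method's measured accuracy, and expected misassignments scale with the size of the graph. A quick sketch of that arithmetic:

```python
def expected_misassignments(n_items, annotator_agreement, measured_accuracy):
    """Rough estimate: errors compound multiplicatively across the analysis path."""
    overall_accuracy = annotator_agreement * measured_accuracy
    return n_items * (1 - overall_accuracy)

# 0.80 agreement x 0.95 claimed accuracy over a 35 K concept graph
print(round(expected_misassignments(35_000, 0.80, 0.95)))   # 8400
```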

A couple of crucial implications result from this simple analysis. First, it is essential to take a holistic view of the error sources across the analysis path, including and most especially the reference standards. (They are, more often than not IMO, the weak link in the analysis path.) And, second, getting the accuracy of reference standards as high as possible is crucial to training the best learners for the domain problem. We discuss this second implication next.

How to Get the Standards High

There is a reason the biggest Web players are in the forefront of artificial intelligence and machine learning. They have the resources — and most critically the data — to create effective learners. But, short of the biggest of Big Data, how can smaller players compete in the NLP and machine learning front?

Today, we have high-quality (though with many inaccuracies) public data sets: millions of entity types and concepts in all languages from Wikipedia, a complementary set of nearly 20 million entities in Wikidata, and thousands of other high-quality public datasets. For a given enterprise need, if this information can be coherently organized, structured to the maximum extent, and subject to logic and consistency tests for typing, relationships, and attributes, we have the basis to train learners with standards of unprecedented accuracy. (Of course, proprietary concepts and entity data should also figure prominently into this mix.) Indeed, this is the premise behind Structured Dynamics’ efforts in knowledge-based artificial intelligence.

KBAI is based on a curated knowledge base eating its own tail, working through cycles of consistency and logic testing to reduce misassignments, while continually seeking to expand structure and coverage. There is a network effect to these efforts, as adding and testing structure or mapping to new structures and datasets continually gets easier. These efforts enable the knowledge structure to be effectively partitioned for training specific recognizers, classifiers and learners, while also providing a logical reference structure for adding new domain and public data and structure.

This basic structure — importantly supplemented by the domain concepts and entities relevant to the customer at hand — is then used to create reference structures for training the target recognizers, classifiers and learners. The process of testing and adding structure identifies previously hidden inconsistencies. As corrected, the overall accuracy of the knowledge structure to act in a reference mode increases. At Structured Dynamics, we began this process years ago with the initial UMBEL reference concept structure. To that we have mapped and integrated a host of public data systems, including OpenCyc, Wikipedia, DBpedia, and, now, Wikidata. Each iteration broadens our scope and reduces errors, leading to a constantly more efficient basis for KBAI.

An integral part of that effort is to create gold standards for each project we engage. You see, every engagement has its own scope and wrinkles. Besides domain data and contexts, there are always specific business needs and circumstances that need to be applied to the problem at hand. The domain coverage inevitably requires new entity or relation recognizers, or the mapping of new datasets. The content at hand may range from tweets to ads to Web pages (or portions thereof) to academic papers, with specific tests and recognizers, from copyrights to section headings, informing new learners. Every engagement requires its own reference standards. Being able to create these efficiently and with a high degree of accuracy is a competitive differentiator.

SD’s General Approach to Enterprise Standards

Though Structured Dynamics’ efforts are geared to enterprise projects, and not academic papers, the best practices of scientific research still apply. We insist upon the creation of gold standards for every discrete recognizer, classifier or learner we undertake for major clients. This requirement is not a hard argument to make, since we have systems in place to create initial standards and can quantify the benefits from the applied standards. Since major engagements often involve the incorporation of new data and structure, new feature recognizers, or bespoke forms of target content, the gold standards give us the basis for testing all wrinkles and parameters. The cost advantages of testing alternatives efficiently are demonstrable. On average, we can create a new reference standard in 10-20 labor hours (each for us and the client).

Specifics may vary, but we typically seek about 500 true positive instances per standard, with 20 or so true negatives. (As a note, there are more than 1,900 entity and relation types in Wikidata — 800 and 1,100 types, respectively — that meet this threshold. However, it is also not difficult to add hundreds of new instances from alternative sources.) All runs are calibrated with statistics reporting. In fact, any of our analytic runs may invoke the testing statistics, which are typically presented like this for each run:

True positives:  362
False positives:  85
True negatives:  19
False negatives:  45

+--------------+------------+
| key          | value      |
+--------------+------------+
| :precision   | 0.8098434  |
| :recall      | 0.8894349  |
| :specificity | 0.1826923  |
| :accuracy    | 0.7455969  |
| :f1          | 0.84777516 |
| :f2          | 0.8722892  |
| :f0.5        | 0.82460135 |
+--------------+------------+
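As a quick sanity check, the figures in a run like this can be reproduced directly from the four counts; a few lines of Python suffice:

```python
tp, fp, tn, fn = 362, 85, 19, 45

precision = tp / (tp + fp)                          # 0.8098...
recall = tp / (tp + fn)                             # 0.8894...
specificity = tn / (tn + fp)                        # 0.1826...
accuracy = (tp + tn) / (tp + fp + tn + fn)          # 0.7455...
f1 = 2 * precision * recall / (precision + recall)  # 0.8477...
```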

When we are in active testing mode we are able to iterate parameters and configurations quickly, and discover thrusts that have more or less effect on desired outcomes. We embed these runs in electronic notebooks using literate programming to capture and document our decisions and approach as we go [9]. Overall, the process has proven (and improved!) to be highly effective.

We could conceivably lower the requirement for 500 true positive instances as we see the underlying standards improve. However, since gathering this minimum number of examples has become systematized, we really have not had reason for testing and validating smaller standard sizes. We are also not seeking definitive statistical test values but a framework for evaluating different parameters and methods. In most cases, we have seen our reference sets grow over time as new wrinkles and perspectives emerge that require testing.

In all cases, the most important factor in this process has been to engage customers in manual review and scoring. More often than not we see client analysts understand and detect patterns that then inform improved methods. Both we, as the contractor, and the client gain a stake in, and an understanding of, the importance of reference standards.

Clean, vetted gold standards and training sets are thus a critical component to improving our client’s results — and our own knowledge bases — going forward. The very practice of creating gold standards and training sets needs to receive as much attention as algorithm development because, without it, we are optimizing algorithms to fuzzy objectives.


[1] M.K. Bergman, 2015. “A Primer on Knowledge Statistics,” AI3:::Adaptive Information blog, May 18, 2015.
[2] By bringing in some other rather simple metrics, it is also possible to expand beyond this statistical base to cover such measures as information entropy, statistical inference, pointwise mutual information, variation of information, uncertainty coefficients, information gain, AUCs and ROCs. But we’ll leave discussion of some of those options until another day.
[3] George Hripcsak and Adam S. Rothschild, 2005. “Agreement, the F-measure, and Reliability in Information Retrieval.” Journal of the American Medical Informatics Association 12, no. 3 (2005): 296-298.
[4] See Eleni Miltsakaki, Rashmi Prasad, Aravind K. Joshi, and Bonnie L. Webber, 2004. “The Penn Discourse Treebank,” in LREC. 2004. For additional useful statistics and an example of high inter-annotator agreement, see Eduard Hovy, Mitchell Marcus, Martha Palmer, Lance Ramshaw, and Ralph Weischedel, 2006. “OntoNotes: the 90% Solution,” in Proceedings of the Human Language Technology Conference of the NAACL, Companion Volume: Short Papers, pp. 57-60. Association for Computational Linguistics, 2006.
[5] Inter-annotator agreement is the degree of agreement among raters or annotators of scoring or labeling for reference standards. The phrase embraces or overlaps a number of other terms for multiple-judge systems, such as inter-rater agreement, inter-observer agreement, or inter-rater reliability. See also Ron Artstein and Massimo Poesio, 2008. “Inter-coder Agreement for Computational Linguistics,” Computational Linguistics 34, no. 4 (2008): 555-596. Also see Kevin A. Hallgren, 2012. “Computing Inter-rater Reliability for Observational Data: An Overview and Tutorial,” Tutorials in Quantitative Methods for Psychology 8, no. 1 (2012): 23.
[6] Philip V. Ogren, Guergana K. Savova, and Christopher G. Chute, 2007. “Constructing Evaluation Corpora for Automated Clinical Named Entity Recognition,” in Medinfo 2007: Proceedings of the 12th World Congress on Health (Medical) Informatics; Building Sustainable Health Systems, p. 2325. IOS Press, 2007. This study shows inter-annotator agreement of .75 for biomedical notes.
[7] Veselin Stoyanov and Claire Cardie, 2008. “Topic Identification for Fine-grained Opinion Analysis,” in Proceedings of the 22nd International Conference on Computational Linguistics, Volume 1, pp. 817-824. Association for Computational Linguistics, 2008. This study shows inter-annotator agreement of ~76% for fine-grained topics. David Newman, Jey Han Lau, Karl Grieser, and Timothy Baldwin, 2010. “Automatic Evaluation of Topic Coherence,” in Human Language Technologies: The 2010 Annual Conference of the North American Chapter of the Association for Computational Linguistics, pp. 100-108. Association for Computational Linguistics, 2010. This study shows inter-annotator agreement in the .73 to .82 range.
[8] UMBEL (Upper Mapping and Binding Exchange Layer) is a logically organized knowledge graph of about 35,000 concepts and entity types that can be used in information science for relating information from disparate sources to one another. This open-source ontology was originally developed by Structured Dynamics, which still maintains it. It is used to assist data interoperability and the mapping of disparate datasets.
[9] Fred Giasson, Structured Dynamics’ CTO, has been writing a series of blog posts on literate programming and the use of Org-mode as an electronic notebook. I have provided a broader overview of SD’s efforts in this area; see M.K. Bergman, 2016. “Literate Programming for an Open World,” AI3:::Adaptive Information blog, June 27, 2016.

Posted by AI3's author, Mike Bergman Posted on July 18, 2016 at 10:59 am in Knowledge-based Artificial Intelligence, UMBEL | Comments (0)
The URI link reference to this post is: https://www.mkbergman.com/1964/gold-standards-in-enterprise-knowledge-projects/
Posted: June 6, 2016

Design is Aimed to Improve Computability

In the lead up to our most recent release of UMBEL, I began to describe our increasing reliance on the use of typologies. In this article, I’d like to expand on our reasons for this design and the benefits we see.

‘Typology’ is not a common term within semantic technologies, though it is used extensively in such fields as archaeology, urban planning, theology, linguistics, sociology, statistics, psychology, anthropology and others. In the base semantic technology language of RDF, a ‘type’ is what is used to declare an instance of a given class. This is in keeping with our usage, where an instance is a member of a type.

Strictly speaking, ‘typology’ is the study of types. However, as used within the fields noted, a ‘typology’ is the result of the classification of things according to their characteristics. As stated by Merriam-Webster, a ‘typology’ is “a system used for putting things into groups according to how they are similar.” Though some have attempted to make learned distinctions between typologies and similar notions such as classifications or taxonomies [1], we think this idea of grouping by similarity is the best way to think of a typology.

In Structured Dynamics’ usage as applied to UMBEL and elsewhere, we are less interested in the sense of ‘typology’ as comparisons across types and more interested in the classification of types that are closely related, what we have termed ‘SuperTypes’. In this classification, each of our SuperTypes gets its own typology. The idea of a SuperType, in fact, is exactly equivalent to a typology, wherein the multiple entity types with similar essences and characteristics are related to one another via a natural classification. I discuss elsewhere how we actually go about making these distinctions of natural kinds [2].

In this article, I want to stand back from how a typology is constructed and focus more on the use and benefits of typologies. Below I discuss the evolution of our typology design and the benefits that accrue from the ‘typology’ approach, and then conclude with some of the application areas for which this design is most useful. All of this discussion is in the context of our broader efforts in KBAI, or knowledge-based artificial intelligence.

Evolution of the Design

I wish we could claim superior intelligence or foresight in how our typology design came about, but it was truthfully an evolution of needing to deal with pragmatic concerns in our use of UMBEL over the past near-decade. The typology design has arisen from the intersection of: 1) our efforts with SuperTypes, and creating a computable structure that uses powerful disjoint assertions; 2) an appreciation of the importance of entity types as a focus of knowledge base terminology; and 3) our efforts to segregate entities from other key constructs of knowledge bases, including attributes, relations and annotations. Though these insights may have resulted from serendipity and practical use, they have brought a new understanding of how best to organize knowledge bases for artificial intelligence uses.

The Initial Segregation into SuperTypes

We first introduced SuperTypes into UMBEL in 2009 [3]. The initiative arose because we observed about 90% of the concepts in UMBEL were disjoint from one another. Disjoint assertions are computationally efficient and help better organize a knowledge graph. To maximize these benefits we did both top-down and bottom-up testing to derive our first groupings of SuperTypes into 29 mostly disjoint types, with four non-disjoint (or cross-cutting and shared) groups [3]. Besides computational efficiency and its potential for logical operations, we also observed that these SuperTypes could aid our ability to drive display widgets (such as being able to display geolocational types on maps).

All entity classes within a given SuperType are thus organized under the SuperType (ST) itself as the root. The classes within that ST are then organized hierarchically, with children classes having a subClassOf relation to their parent. By the time of UMBEL’s last release [4], this configuration had evolved into the following 31 largely disjoint SuperTypes, organized into 10 or so clusters or “dimensions”:

  • Constituents: Natural Phenomena, Area or Region, Location or Place, Shapes, Forms, Situations
  • Time-related: Activities, Events, Times
  • Natural Matter: Atoms and Elements, Natural Substances, Chemistry
  • Organic Matter: Organic Chemistry, Biochemical Processes
  • Living Things: Prokaryotes, Protists & Fungus, Plants, Animals, Diseases
  • Agents: Persons, Organizations, Geopolitical
  • Artifacts: Products, Food or Drink, Drugs, Facilities
  • Information: Audio Info, Visual Info, Written Info, Structured Info
  • Social: Finance & Economy, Society

Current SuperType Structure of UMBEL

We also used the basis in SuperTypes to begin cleaving UMBEL into modules, with geolocational types being the first to be separated. We initially began splitting into modules as a way to handle UMBEL’s comparatively large size (~ 30K concepts). As we did so, however, we also observed that most of the SuperTypes could also be segregated into modules. This architectural view and its implications were another reason leading to the eventual typology design.

A Broadening Appreciation for the Pervasiveness of Entity Types

The SuperType tagging and possible segregation of STs into individual modules led us to review other segregations and tags. Given that the SuperTypes were all geared to entities and entity types — and further represented about 90% of all concepts in UMBEL — we began to look at entities as a category with more care and attention. This analysis took us back to the beginnings of entity recognition and tagging in natural language processing. We saw the progression of understanding from named entities and just a few entity types, to the more recent efforts in so-called fine-grained entity recognition [5].

What was blatantly obvious, but which had been previously overlooked by us and other researchers investigating entity types, was that most knowledge graphs (or upper ontologies) were themselves made up largely of entity types [5]. In retrospect, this should not be surprising. Most knowledge graphs deal with real things in the world, which, by definition, tend to be entities. Entities are the observable, often nameable, things in the world around us. And how we organize and refer to those entities — that is, the entity types — constitutes the bulk of the vocabulary for a knowledge graph.

We can see this progression of understanding moving from named entities and fine-grained entity types, all the way through to an entity typology — UMBEL’s SuperTypes — that then becomes the tie-in point for individual entities (the ABox):


Evolving Sophistication of Entity Types

The key transition is moving from the idea of discrete numbers of entity types to a system and design that supports continuous interoperability through an “accordion-like” typology structure.

The General Applicability of ‘Typing’ to All Aspects of Knowledge Bases

The “type-orientation” of a typology was also attractive because it offers a construct that can be applied to all other (non-entity) parts of the knowledge base. Actions can be typed; attributes can be typed; events can be typed; and relations can be typed. A mindset around natural kinds and types helps define the speculative grammar of KBAI (the topic of a forthcoming article). We can thus represent these overall structural components in a knowledge base as:

Typology View of a Knowledge Base

The shading in the figure indicates what is external to the scope of UMBEL.

The intersection of these three factors — SuperTypes, an “accordion” design for continuous entity types, and overall typing of the knowledge base — set the basis for how to formalize an overall typology design.

Formalizing the Typology Design

We have first applied this basis to typologies for entities, based on the SuperTypes. Each SuperType becomes its own typology with natural boundaries and a hierarchical structure. No instances are allowed in the typology; only types.

Initial construction of the typology first gathers the relevant types (concepts) and automatically evaluates those concepts for orphans (unconnected concepts) and fragments (connected portions missing intermediary parental concepts). For the initial analysis, there are likely multiple roots, multiple fragments, and multiple orphans. We want to get to a point where there is a single root and all concepts in the typology are connected. Source knowledge bases are queried for the missing concepts and evaluated again in a recursive manner. Candidate placements are then written to CSV files and evaluated with various utilities, including crucially manual inspection and vetting. (Because the system bootstraps what is already known and structured in the system, it is important to build the structure with coherent concepts and relations.)
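As a rough illustration of the kind of structural check involved (a simplified sketch of my own, not Structured Dynamics' actual build scripts), orphans and fragments can be detected by testing whether each candidate concept can reach the typology root by following its parent links:

```python
from collections import defaultdict

def unconnected_concepts(edges, root):
    """edges: (child, parent) subClassOf pairs; returns concepts that cannot
    reach the root by following parent links (orphans and fragments)."""
    parents = defaultdict(set)
    concepts = set()
    for child, parent in edges:
        parents[child].add(parent)
        concepts.update((child, parent))

    def reaches_root(concept, seen):
        if concept == root:
            return True
        if concept in seen:
            return False
        seen.add(concept)
        return any(reaches_root(p, seen) for p in parents[concept])

    return {c for c in concepts if not reaches_root(c, set())}

# Hypothetical fragment: 'Raptor' is missing its connection up to 'Animals'
edges = [("Mammals", "Animals"), ("Primates", "Mammals"), ("Eagle", "Raptor")]
print(unconnected_concepts(edges, root="Animals"))   # {'Eagle', 'Raptor'} (order may vary)
```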

Once the overall candidate structure is completed, it is then analyzed against prior assignments in the knowledge base. ST disjoint analysis, coherent inferencing, and logical placement tests again prompt the creation of CSV files that may be viewed and evaluated with various utilities, but, again, ultimately manually vetted.

The objective of the build process is a fully connected typology that passes all coherency, consistency, completeness and logic tests. If errors are subsequently discovered, the build process must be run again with possible updates to the processing scripts. Upon acceptance, each new type added to a typology should pass a completeness threshold, including a definition, synonyms, guideline annotations, and connections. The completed typology may be written out in both RDF and CSV formats. (The current UMBEL and its typologies are available here.)

Integral to the design must be build, testing and maintenance routines, scripts, and documentation. Knowledge bases are inherently open world [6], which means that the entities and their relationships and characteristics are constantly growing and changing due to new knowledge underlying the domain at hand. Such continuous processing and keeping track of the tests, learnings and workflow steps place a real premium on literate programming, about which Fred Giasson, SD’s CTO, is now writing [7].

Because of the very focused nature of each typology (as set by its root), each typology can be easily incorporated or excised from a broader structure. Each typology is rather simple in scope and simple in structure, given its hierarchical nature. Each typology is readily maintained and built and tested. Typologies pose relatively small ontological commitments.

Benefits of the Design

The simple bounding and structure of the typology design makes each typology understandable merely by inspecting its structure. But the typologies can also be read into programs such as Protégé in order to inspect or check complete specifications and relationships.

Because each typology is (designed to be) coherent and consistent, new concepts or structures may be related to any part of its hierarchical design. This gives these typologies an “accordion-like” design, similar to the multiple levels and aggregation made possible by an accordion menu:

An ‘Accordion’ Design Accommodates Different Granularities

The combination of logical coherence with a flexible, accordion structure gives typologies a unique set of design benefits. Some have been mentioned before, but to recap they are:

Computable

Each type has a basis — ranging from attributes and characteristics to hierarchical placement and relationship to other types — that can inform computability and logic tests, potentially including neighbor concepts. Ensuring that type placements are accurate and meet these tests means that the now-placed types and their attributes may be used to test the placement and logic of subsequent candidates. The candidates need not be only internal typology types, but may also be used against external sources for classification, tagging or mapping.

Because the essential attributes or characteristics across typologies in an entire domain can differ broadly — such as entities v attributes, living v inanimate things, natural things v man-made things, ideas v physical objects, etc. — it is possible to make disjointedness assertions between entire groupings of natural classes. Disjoint assertions, combined with logical organization and inference, provide a typology design that lends itself to reasoning and tractability.
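To illustrate what such a disjointness assertion looks like in practice, here is a small sketch using the rdflib Python library; the namespace and class URIs are placeholders, not actual UMBEL identifiers:

```python
from rdflib import Graph, Namespace, RDF, RDFS
from rdflib.namespace import OWL

EX = Namespace("http://example.org/supertypes/")   # placeholder namespace
g = Graph()

# Declare two SuperType roots and assert that they share no instances
for st in (EX.Animals, EX.Facilities):
    g.add((st, RDF.type, OWL.Class))
g.add((EX.Animals, OWL.disjointWith, EX.Facilities))

# A type placed under one SuperType is, under OWL semantics, also disjoint
# from the other SuperType (a reasoner, not rdflib itself, draws that inference)
g.add((EX.Primates, RDFS.subClassOf, EX.Animals))

print(g.serialize(format="turtle"))
```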

The internal process to create these typologies also has the beneficial effect of testing placements in the knowledge graph and identifying gaps in the structure as informed by fragments and orphans. This computability of the structure is its foundational benefit, since it determines the accuracy of the typology itself and drives all other uses and services.

Pluggable and Modular

Since each typology has a single root, it is readily plugged into or removed from the broader structure. This means the scale and scope of the overall system may be easily adjusted, and the existing structure may be used as a source for extensions (see next). Unlike more interconnected knowledge graphs (which can have many network linkages), typologies are organized strictly along these lines of shared attributes, which is both simpler and also provides an orthogonal means for investigating type-class membership.

Interoperable

The idea of nested, hierarchical types organized into broad branches of different typologies also provides a very flexible design for interoperating with a diversity of world views and degrees of specificity. A typology design, logically organized and placed into a consistent grounding of attributes, can readily interoperate with these different world views. So far, with UMBEL, this interoperable basis is limited to concepts and things, since only the entity typologies have been initially completed. But, once done, the typologies for attributes and relations will extend this basis to include full data interoperability of attribute:value pairs.

Extensible

A typology design for organizing entities can thus be visualized as a kind of accordion or squeezebox, expandable when detail requires, or collapsed to more coarse-grained when relating to broader views. The organization of entity types also has a different structure than the more graph-like organization of higher-level conceptual schema, or knowledge graphs. In the cases of broad knowledge bases, such as UMBEL or Wikipedia, where 70 percent or more of the overall schema is related to entity types, more attention can now be devoted to aspects of concepts or relations.

Each class within the typology can become a tie-in point for external information, providing a collapsible or expandable scaffolding (the ‘accordion’ design). Via inferencing, multiple external sources may be related to the same typology, even though at different levels of specificity. Further, very detailed class structures can also be accommodated in this design for domain-specific purposes. Moreover, because of the single tie-in point for each typology at its root, it is also possible to swap out entire typology structures at once, should design needs require this flexibility.

Testable and Maintainable

The only sane way to tackle knowledge bases at these structural levels is to seek consistent design patterns that are easier to test, maintain and update. Open world systems must embrace repeatable and largely automated workflow processes, plus a commitment to timely updates, to deal with the constant, underlying change in knowledge.

Listing of Broad Application Areas

Some of the more evident application areas for this design — and in keeping with current client and development activities for Structured Dynamics — are the following:

  • Domain extension — the existing typologies and their structure provide a ready basis for adding domain details and extensions;
  • Tagging — there are many varieties of tagging approaches that may be driven from these structures, including, with the logical relationships and inferencing, ontology-based information tagging (see the sketch after this list);
  • Classification — the richness of the typology structures means that any type across all typologies may be a possible classification assignment when evaluating external content, if the overall system embracing the typologies is itself coherent;
  • Interoperating datasets — the design is based on interoperating concepts and datasets, and provides more semantic and inferential means for establishing MDM systems;
  • Machine learning (ML) training — the real driver behind this design is lowering the costs for supervised machine learning via more automated and cost-effective means of establishing positive and negative training sets. Further, the system’s feature richness (see next) lends itself to unsupervised ML techniques as well; and
  • Rich feature set — a design geared from the get-go to emphasize and expose meaningful knowledge base features [8] perhaps opens up many new fruitful avenues for machine learning and other AI. More expressed structure may help in the interpretability of latent feature layers in deep learning. In any case, more and coherent structure with testability can only be goodness for KBAI going forward.
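As a toy illustration of the tagging item above, a semset-style lookup simply maps alternative labels to their typology types and scans text for them; the labels and type names below are invented, and real ontology-based tagging is considerably richer:

```python
import re

# Hypothetical semsets: alternative labels mapped to typology types
semsets = {
    "acetylsalicylic acid": "Drugs",
    "aspirin": "Drugs",
    "nyse": "FinanceEconomy",
    "new york stock exchange": "FinanceEconomy",
}

def tag(text):
    """Return (label, type) pairs for semset labels found in the text."""
    found = []
    lowered = text.lower()
    for label, supertype in semsets.items():
        if re.search(r"\b" + re.escape(label) + r"\b", lowered):
            found.append((label, supertype))
    return found

print(tag("Aspirin sales on the NYSE rose last quarter."))
# [('aspirin', 'Drugs'), ('nyse', 'FinanceEconomy')]
```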

One Building Block Among Many

The progressions and learning from the above were motivated by the benefits that could be seen with each structural change. Over nearly a decade, as we tried new things and structured more things, we discovered more and adapted our UMBEL design accordingly. The benefits we see from this learning are not just additive to benefits that might be obtained by other means, but they are systemic. The ability to make knowledge bases computable — while simultaneously increasing the feature space for training machine learners — at much lower cost should be a keystone enabler at this particular point in AI’s development. Lowering the costs of creating vetted training sets is one way to improve this process.

Systems that can improve systems always have more leverage than individual innovations. The typology design outlined above is the result of the classification of things according to their shared attributes and essences. The idea is that the world is divided into real, discontinuous and immutable ‘kinds’. Expressed another way, in statistics, typology is a composite measure that involves the classification of observations in terms of their attributes on multiple variables. In the context of a global KB such as Wikipedia, about 25,000 entity types are sufficient to provide a home for the millions of individual articles in the system.

As our next article will discuss, Charles Sanders Peirce’s consistent belief that the real world can be logically conceived and organized provides guidance for how we can continue to structure our knowledge bases into computable form. We now have a coherent base for treating types and natural classes as an essential component to that thinking. These insights are but one part of the KB innovations suggested by Peirce’s work.


[1] See, for example, Alberto Marradi, 1990. “Classification, Typology, Taxonomy“, Quality & Quantity 24, no. 2 (1990): 129-157.
[2] M.K. Bergman, 2015. “‘Natural Classes’ in the Knowledge Web,” AI3:::Adaptive Information blog, July 13, 2015.
[3] M.K. Bergman, 2009. “‘SuperTypes’ and Logical Segmentation of Instances,” AI3:::Adaptive Information blog, September 2, 2009.
[4] umbel.org, “New, Major Upgrade of UMBEL Released,” UMBEL press release, May 10, 2016 (for UMBEL v. 1.50 release)
[5] M.K. Bergman, 2016. “How Fine Grained Can Entity Types Get?,” AI3:::Adaptive Information blog, March 8, 2016.
[6] M.K. Bergman, 2009. “The Open World Assumption: Elephant in the Room,” AI3:::Adaptive Information blog, December 21, 2009.
[8] M.K. Bergman, 2015. “A (Partial) Taxonomy of Machine Learning Features,” AI3:::Adaptive Information blog, November 23, 2015.
Posted: May 11, 2016

UMBEL Version 1.50 Fully Embraces a Typology Design, Gets Other Computability Improvements

The year since the last major release of UMBEL (Upper Mapping and Binding Exchange Layer) has been spent in a significant re-think of how the system is organized. Four years ago, in version 1.05, we began to split UMBEL into a core and a series of swappable modules. The first module adopted was in geographical information; the second was in attributes. This design served us well, but it was becoming apparent that we were on a path of multiple modules. Each of UMBEL’s major so-called ‘SuperTypes’ — that is, major cleavages of the overall UMBEL structure that are largely disjoint from one another, such as between Animals and Facilities — was amenable to the module design. This across-the-board potential cleavage of the UMBEL system caused us to stand back and question whether a module design alone was the best approach. Ultimately, after much thought and testing, we adopted instead a typology design that brought additional benefits beyond simple modularity.

Today, we are pleased to announce the release of these efforts in UMBEL version 1.50. Besides standard release notes, this article discusses this new typology design, and explains its uses and benefits.

Basic UMBEL Background

The Web and enterprises in general are characterized by growing, diverse and distributed information sources and data. Some of this information resides in structured databases; some resides in schema, standards, metadata, specifications and semi-structured sources; and some resides in general text or media where the content meaning is buried in unstructured form. Given these huge amounts of information, how can one bring together what subsets are relevant? And, then for candidate material that does appear relevant, how can it be usefully combined or related given its diversity? In short, how does one go about actually combining diverse information to make it interoperable and coherent?

UMBEL thus has two broad purposes. UMBEL’s first purpose is to provide a general vocabulary of classes and predicates for describing and mapping domain ontologies, with the specific aim of promoting interoperability with external datasets and domains. UMBEL’s second purpose is to provide a coherent framework of reference subjects and topics for grounding relevant Web-accessible content. UMBEL presently has about 34,000 of these reference concepts drawn from the Cyc knowledge base, organized into 31 mostly disjoint SuperTypes.

The grounding of information mapped by UMBEL occurs by common reference to the permanent URIs (identifiers) for UMBEL’s concepts. The connections within the UMBEL upper ontology enable concepts from sources at different levels of abstraction or specificity to be logically related. Since UMBEL is an open source extract of the OpenCyc knowledge base, it can also take advantage of the reasoning capabilities within Cyc.

UMBEL in Linked Open Data

Diagram showing linked data datasets. UMBEL is near the hub, below and to the right of the central DBpedia.

UMBEL’s vocabulary is designed to recognize that different sources of information have different contexts and different structures, and meaningful connections between sources are not always exact. UMBEL’s 34,000 reference concepts form a knowledge graph of subject nodes that may be related to external classes and individuals (instances and entities). Via this coherent structure, we gain some important benefits:

  • Mapping to other ontologies — disparate and heterogeneous datasets and ontologies may be related to one another by mapping to the UMBEL structure
  • A scaffolding for domain ontologies — more specific domain ontologies can be made interoperable by using the UMBEL vocabulary and tying their more general concepts into the UMBEL structure
  • Inferencing — the UMBEL reference concept structure is coherent and designed for inferencing, which supports better semantic search and look-ups
  • Semantic tagging — UMBEL, and ontologies mapped to it, can be used as input bases to ontology-based information extraction (OBIE) for tagging text or documents; UMBEL’s “semsets” broaden these matches and can be used across languages
  • Linked data mining — via the reference ontology, direct and related concepts may be retrieved and mined and then related to one another
  • Creating computable knowledge bases — with complete mappings to key portions of a knowledge base, say, for Wikipedia articles, it is possible to use the UMBEL graph structure to create a computable knowledge source, with follow-on benefits in artificial intelligence and KB testing and improvements, and
  • Categorizing instances and named entities — UMBEL can bring a consistent framework for typing entities and relating their descriptive attributes to one another.

UMBEL is written in the semantic Web languages of SKOS and OWL 2. It is a class structure used in linked data, along with other reference ontologies. Besides data integration, UMBEL has been used to aid concept search, concept definitions, query ranking, ontology integration, and ontology consistency checking. It has also been used to build large ontologies and for online question answering systems [1].
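Because UMBEL is distributed in these W3C formats, it can be loaded and queried with off-the-shelf tooling. A minimal sketch using the rdflib Python library (the file name is an assumption about a local copy of the distribution; adjust to the actual release files):

```python
from rdflib import Graph

g = Graph()
g.parse("umbel.n3", format="n3")   # assumed local copy of the UMBEL reference concepts

# List a few reference concepts and their preferred labels via SKOS
query = """
PREFIX skos: <http://www.w3.org/2004/02/skos/core#>
SELECT ?concept ?label
WHERE { ?concept skos:prefLabel ?label }
LIMIT 10
"""
for concept, label in g.query(query):
    print(concept, label)
```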

Including OpenCyc, UMBEL has about 65,000 formal mappings to DBpedia, PROTON, GeoNames, and schema.org, and provides linkages to more than 2 million Wikipedia pages (English version). All of its reference concepts and mappings are organized under a hierarchy of 31 different SuperTypes, which are mostly disjoint from one another. Development of UMBEL began in 2007. UMBEL was first released in July 2008. Version 1.00 was released in February 2011.

Summary of Version 1.50 Changes

These are the principal changes between the last public release, version 1.20, and this version 1.50. In summary, these changes include:

  • Removed all instance or individual listings from UMBEL; this change does NOT affect the punning used in UMBEL’s design (see Metamodeling in Domain Ontologies)
  • Re-aligned the SuperTypes to better support computability of the UMBEL graph and its resulting disjointedness
  • These SuperTypes were eliminated with concepts re-assigned: Earthscape, Extraterrestrial, Notations and Numbers
  • These new SuperTypes were introduced: AreaRegion, AtomsElements, BiologicalProcesses, Forms, LocationPlaces, and OrganicChemistry, with logically reasoned assignments of RefConcepts
  • The Shapes SuperType is a new ST that is inherently non-disjoint because its concepts are shared with about half of the RefConcepts
  • Situations is an important ST, overlooked in prior efforts, that helps better establish context for Activities and Events
  • Made re-alignments in UMBEL’s upper structure and introduced additional upper-level categories to better accommodate these refinements in SuperTypes
  • A typology was created for each of the resulting 31 disjoint STs, which enabled missing concepts to be identified and added, and the concepts within each given ST to be better organized
  • The broad adoption of the typology design for all of the (disjoint) SuperTypes also meant that prior module efforts, specifically Geo and Attributes, could now be made general to all of UMBEL. This re-integration also enabled us to retire these older modules without affecting functionality
  • The tests and refinements necessary to derive this design led us to create flexible build and testing scripts, documented via literate programming (using Clojure)
  • Updated all mappings to DBpedia, Wikipedia, and schema.org
  • Incorporated donated mappings to five additional LOV vocabularies [2]
  • Tested the UMBEL structure for consistency and coherence
  • Updated all prior UMBEL documentation
  • Expanded and updated the UMBEL.org Web site, with access to and demos of UMBEL.

UMBEL’s SuperTypes

The re-organizations noted above have resulted in some minor changes to the SuperTypes and how they are organized. These changes have made UMBEL more computable with a higher degree of disjointedness between SuperTypes. (Note, there are also organizational SuperTypes that work largely to aid the top levels of the knowledge graph, but are explicitly designed to NOT be disjoint. Important SuperTypes in this category include Abstractions, Attributes, Topics, Concepts, etc. These SuperTypes are not listed below.)
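
In OWL 2 terms, this disjointedness amounts to disjointness axioms over the SuperType root classes, while the organizational SuperTypes simply carry no such axioms. A minimal sketch, with hypothetical SuperType URIs:

    @prefix owl: <http://www.w3.org/2002/07/owl#> .
    @prefix st:  <http://umbel.org/umbel/st/> .   # hypothetical SuperType namespace

    # Disjointness can be asserted pairwise ...
    st:Animals  owl:disjointWith  st:Facilities .

    # ... or over a whole set of SuperType roots at once.
    [] a owl:AllDisjointClasses ;
       owl:members ( st:Animals st:Plants st:Facilities st:Products ) .

    # Organizational SuperTypes such as Abstractions, Attributes, or Topics
    # carry no disjointness axioms, so they remain free to overlap.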

UMBEL thus now has 31 largely disjoint SuperTypes, organized into 10 or so clusters or “dimensions”:

Constituents
  • Natural Phenomena
  • Area or Region
  • Location or Place
  • Shapes
  • Forms
  • Situations
Time-related
  • Activities
  • Events
  • Times
Natural Matter
  • Atoms and Elements
  • Natural Substances
  • Chemistry
Organic Matter
  • Organic Chemistry
  • Biochemical Processes
Living Things
  • Prokaryotes
  • Protists & Fungus
  • Plants
  • Animals
  • Diseases
Agents
  • Persons
  • Organizations
  • Geopolitical
Artifacts
  • Products
  • Food or Drink
  • Drugs
  • Facilities
Information
  • Audio Info
  • Visual Info
  • Written Info
  • Structured Info
Social
  • Finance & Economy
  • Society

These disjoint SuperTypes provide the basis for the typology design described next.

The Typology Design

After a few years of working with SuperTypes, it became apparent that each SuperType could become its own “module”, with its own boundaries and hierarchical structure. Since nearly 90% of the reference concepts across the UMBEL structure are themselves entity classes, organizing these properly allows us to achieve a maximum of disjointness, modularity, and reasoning efficiency. Our early experience with modules pointed the way to a design for each SuperType that was as distinct and disjoint from other STs as possible. And, through a logical design of natural classes [3] for the entities in that ST, we could achieve a flexible, ‘accordion-like’ design that provides entity tie-in points from the general to the specific for each given SuperType. This design makes it possible to interoperate across both fine-grained and coarse-grained datasets. For specific domains, the same approach allows even finer-grained domain concepts to be effectively integrated.

All entity classes within a given SuperType are thus organized under the SuperType itself as the root. The classes within that ST are then organized hierarchically, with child classes having a subClassOf relation to their parent. Each class within the typology can become a tie-in point for external information, providing a collapsible or expandable scaffolding (the ‘accordion’ design). Via inferencing, multiple external sources may be related to the same typology, even though they are at different levels of specificity. Further, very detailed class structures can also be accommodated in this design for domain-specific purposes. Moreover, because of the single tie-in point for each typology at its root, it is also possible to swap out entire typology structures at once, should design needs require this flexibility.
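
A minimal Turtle sketch may help make the ‘accordion’ concrete. The class names below are hypothetical; what matters is the pattern of a SuperType root, an internal subClassOf hierarchy, and external classes tying in at whatever level of specificity fits them:

    @prefix rdfs: <http://www.w3.org/2000/01/rdf-schema#> .
    @prefix rc:   <http://umbel.org/umbel/rc/> .
    @prefix st:   <http://umbel.org/umbel/st/> .    # hypothetical SuperType root
    @prefix ext:  <http://example.org/vocab/> .

    # The internal typology: classes nested under the SuperType root
    rc:Beverage        rdfs:subClassOf  rc:FoodOrDrink .
    rc:Beer            rdfs:subClassOf  rc:Beverage .
    rc:FoodOrDrink     rdfs:subClassOf  st:FoodOrDrink .

    # External tie-ins at different levels of specificity
    ext:DrinkMenuItem  rdfs:subClassOf  rc:Beverage .   # coarse-grained source
    ext:CraftAle       rdfs:subClassOf  rc:Beer .       # fine-grained source

    # A reasoner infers that ext:CraftAle is also a rc:Beverage and falls under
    # the st:FoodOrDrink root, so the two sources interoperate at either level.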

We have thus generalized the earlier module design so that every (mostly) disjoint SuperType now has its own separate typology structure. The typologies provide the flexible lattice for tying external content together at various levels of specificity. Further, the STs and their typologies may be removed or swapped out at will to deal with specific domain needs. The design also dovetails nicely with UMBEL’s build and testing scripts. Indeed, the evolution of these scripts via literate programming has also been a reinforcing driver for being able to test and refine the complete ST and typology structure.
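
Because each typology hangs from a single root, adding or dropping one can be handled at the level of whole files. A sketch, with hypothetical module file names, of how a domain ontology might pull in just the typologies it needs:

    @prefix owl: <http://www.w3.org/2002/07/owl#> .

    # A domain ontology importing only the typology modules it needs;
    # removing an owl:imports line swaps that entire typology out at its root.
    <http://example.org/my-domain-ontology>
        a            owl:Ontology ;
        owl:imports  <http://umbel.org/typologies/FoodOrDrink.n3> ,   # hypothetical
                     <http://umbel.org/typologies/Products.n3> .      # module files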

Still a Work in Progress

Though UMBEL retains the same mission as when the system was first formulated nearly a decade ago, we also see its role expanding. The two key areas of expansion are in UMBEL’s use to model and map instance data attributes and in acting as a computable overlay for Wikipedia (and other knowledge bases). These two areas of expansion are still a work in progress.

The mapping to Wikipedia is now about 85% complete. While we are testing automated mapping mechanisms, because of its central role we also need to vet all UMBEL-Wikipedia mapping assignments. This effort is pointing out areas of UMBEL that are over-specified, under-specified, and sometimes duplicative or in error. Our goal is to get to a 100% coverage point with Wikipedia, and then to exercise the structure for machine learning and other tests against the KB. These efforts will enable us to enhance the semsets in UMBEL as well as to move toward multilingual versions. This effort, too, is still a work in progress.

Despite these desired enhancements, we are using all aspects of UMBEL and its mappings both to aid these expansions and to test the existing mappings and structure. These efforts are demonstrating the virtuous circle of improvements that is at the heart of UMBEL’s purpose.

Where to Get UMBEL and Learn More

The UMBEL Web site provides various online tools and Web services for exploring and using UMBEL. The UMBEL GitHub site is where you can download the UMBEL Vocabulary or the UMBEL Reference Concept ontology, both under a Creative Commons Attribution 3.0 license. Other documents and backup are also available from that location.

Technical specifications for UMBEL and its various annexes are available from the UMBEL wiki site. You can also download a PDF version of the specifications from there. You are also welcome to participate on the UMBEL mailing list or LinkedIn group.


[2] Courtesy of Jana Vataščinová (University of Economics, Prague) and Ondřej Zamazal (University of Economics, Prague, COSOL project).
[3] See, for example, M.K. Bergman, 2015. “‘Natural Classes’ in the Knowledge Web,” AI3:::Adaptive Information blog, July 13, 2015.

Posted by AI3's author, Mike Bergman Posted on May 11, 2016 at 8:55 am in Linked Data, Structured Dynamics, UMBEL | Comments (0)
The URI link reference to this post is: https://www.mkbergman.com/1946/new-major-upgrade-of-umbel-released/
The URI to trackback this post is: https://www.mkbergman.com/1946/new-major-upgrade-of-umbel-released/trackback/