I have been assembling for some time a listing of semantic Web-related software applications and tools. My first partial listing had about 50 sources. I recently noted the W3C’s semantic Web wiki listing of about 70 sources. I then came across the EU’s AKT (Advanced Knowledge Technologies) project, which also has about 75 tools compiled. Protégé also has a fairly long list of plugins, though unfortunately it is not well organized. Complicating matters further is the listing of natural language processing tools at the Natural Language Software Registry, another fantastic resource, particularly in the annotation and information extraction arena.
Semantic Web tool sets range from comprehensive engineering environments to specific converters, editors and the like. The overall workflow extends from acquiring the initial content, annotating or tagging it according to existing or newly built ontologies, and reconciling heterogeneities, to storing and managing the resulting RDF or OWL with subsequent querying and inferencing.
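To make the storage-and-query end of that workflow concrete, here is a minimal sketch using RDFLib (listed below); the file name people.rdf is a placeholder for any local RDF/XML document, and the same load-and-inspect pattern applies, with different APIs, to Jena, Sesame and the other frameworks in the list:

    # Minimal sketch: load an RDF file and inspect it with RDFLib.
    # "people.rdf" is a hypothetical file name; substitute your own data.
    from rdflib import Graph, Namespace

    FOAF = Namespace("http://xmlns.com/foaf/0.1/")

    g = Graph()
    g.parse("people.rdf", format="xml")  # parse the document as RDF/XML

    # Print every resource that carries a foaf:name property, with its name.
    for person, name in g.subject_objects(FOAF.name):
        print(person, name)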
There are certainly more tools extant, and I chose to exclude some marginal ones (Sourceforge, for example, has more than 200 semantic Web-related projects, but the vast majority appear moribund, with no actual software to download).
Thus, listed below is today’s most comprehensive list of 175 semantic Web software tools and applications. I am now further characterizing these offline as open source vs. proprietary and categorizing them according to the semantic Web workflow. I may later post those expansions.
I also welcome tool suggestions. I think the ESW tools listing is the best ongoing home for such a compilation, but so far I do not like seeing vendors use hype to characterize their tools rather than the more dispassionate descriptions offered by practitioners.
NAME (URL) | DESCRIPTION |
3store | A core C library that uses MySQL to store its raw RDF data and caches, forming an important part of the infrastructure required to support a range of knowledgeable services |
4Suite 4RDF | The 4Suite 4RDF is an open-source platform for XML and RDF processing implemented in Python with C extensions |
ActiveRDF | ActiveRDF is a library for accessing RDF data from Ruby programs. It can be used as data layer in Ruby-on-Rails. You can address RDF resources, classes, properties, etc. programmatically, without queries |
Adaptiva | A user-centred ontology building environment, based on using multiple strategies to construct an ontology, minimising user input by using adaptive information extraction |
Aduna Metadata Server | The Aduna Metadata Server automatically extracts metadata from information sources, like a file server, an intranet or public web sites. The Aduna Metadata Server is a powerful and scalable store for metadata |
AeroText | Entity extraction engine from Lockheed Martin |
AJAX Client for SPARQL | AJAX Client for SPARQL is a simple AJAX client that can be used for running SELECT queries against a service and then integrating them with client-side Javascript code |
AKT Research Map | A competence map for members of the AKT project |
AKT-Bus | An open, lightweight, Web standards-based communication infrastructure to support interoperability among knowledge services. |
AllegroGraph | Franz Inc’s AllegroGraph is a system to load, store and query RDF data. It includes a SPARQL interface and RDFS reasoning. It has a Java and a Prolog interface |
Alembic | The Alembic Workbench project from Mitre has as its goal the creation of a natural language engineering environment for the development of tagged corpora |
Almo | An ontology-based workflow engine in Java |
Altova SemanticWorks | Visual RDF and OWL editor that auto-generates RDF/XML or nTriples based on visual ontology design |
Amilcare | An adaptive information extraction tool designed to support document annotation for the Semantic Web. |
ANNIE – Open Source Information Extraction | An open-source robust information extraction system |
Aperture | Aperture is a Java framework for extracting and querying full-text content and metadata from various information systems (e.g. file systems, web sites, mail boxes) and the file formats (e.g. documents, images) occurring in these systems |
Applications of FCA in AKT | Formal Concept Analysis (FCA) is used in a variety of application scenarios in AKT in order to perform concept-based domain analysis and automatically deduce a taxonomy lattice of that domain. |
Aqua | AQUA is a system that answers questions written in English. It combines several technologies: natural language processing, logic, information retrieval and ontologies.
ARC | ARC is a lightweight, SPARQL-enabled RDF system for mainstream Web projects. It is written in PHP and has been optimized for shared Web environments |
Armadillo | Exploits the redundancies apparent in the Internet, combining many information sources to perform document annotation with minimal human intervention. |
ArtEquAkt | A system that automatically extracts information about artists from the web, populates an ontology, then uses the knowledge to generate personalised biographies. |
Automatic Support for Enterprise Modelling and Workflow | Knowledge management using multi-modelling techniques and how modelling activities may be assisted with automation based on formal methods. |
BBN OWL Validator | An OWL validator from BBN
Bibster | A semantics-based bibliographic peer-to-peer system |
Bossam | Bossam, a rule-based OWL reasoner (free, well-documented, closed-source) |
Brahms | Brahms is a fast main-memory RDF/S storage, capable of storing, accessing and querying large ontologies. It is implemented as a set of C++ classes |
BuddySpace | Instant messaging with custom map visualizations, semantics of presence (beyond ‘offline’/’online’/’away’ status) and value-added web services (group alerts, bots, inferences via personal profiles) |
Callisto | The Callisto annotation tool was developed to support linguistic annotation of textual sources for any Unicode-supported language with annotation support from jATLAS |
CASD | A tool for producing system architecture diagrams from service and data descriptions. |
Cerebra Server | A technology platform that is used by enterprises to build model-driven applications and highly adaptive information integration infrastructure; company recently bought by webMethods |
COCKATOO | A knowledge acquisition tool which can be used to produce a set of cases for use with a Case-Based Reasoning system. |
COHSE – Conceptual Open Hypermedia Services Environment | COHSE researches methods to significantly improve the quality, consistency and breadth of linking of WWW documents at retrieval and authoring time.
CS AKTiveSpace | CS AKTiveSpace is a smart browser interface for a Semantic Web application that provides ontologically motivated information about the UK computer science research community. |
ClassAKT | A text classification web service for classifying documents according to the ACM Computing Classification System. |
Compendium | Compendium is a semantic, visual hypertext tool for supporting collaborative domain modelling and real time meeting capture |
ConRef | A service discovery system which uses ontology mapping techniques to support different user vocabularies |
ConcepTool | A system to model, analyse, verify, validate, share, combine, and reuse domain knowledge bases and ontologies, reasoning about their implication. |
Corese | Corese stands for Conceptual Resource Search Engine. It is an RDF engine based on Conceptual Graphs (CG) and written in Java. It enables the processing of RDF Schema and RDF statements within the CG formalism, provides a rule engine and a query engine accepting the SPARQL syntax |
cwm | The Closed World Machine (cwm) is a data manipulator, rules processor and query system that mostly uses the Notation3 textual RDF syntax. It also has incomplete OWL Full support and SPARQL access. It is written in Python
Cypher | Cypher generates RDF and SeRQL representations of natural language statements and phrases
D2R Server | D2R Server turns relational databases into SPARQL endpoints; it is based on Jena’s Joseki
D3E – Digital Document Discourse Environment | D3E enables the easy conversion of websites or structured documents into interactive discussion sites |
Deep Query Manager | Search federator for deep Web sources
DOME | A programmable XML editor which is being used in a knowledge extraction role to transform Web pages into RDF, and is available as Eclipse plug-ins. DOME stands for DERI Ontology Management Environment
DOSE | A distributed platform for semantic annotation |
Drive | Drive is an RDF parser written in C# for the .NET platform |
ekoss.org | A collaborative knowledge sharing environment where model developers can submit advertisements |
Ellogon | Ellogon is a multi-lingual, cross-platform, general-purpose language engineering environment, based on the earlier TIPSTER approach |
Endeca | Facet-based content organizer and search platform |
Eprep | An add-on for the Eprints document archive which uses text extraction to automatically create the bibliographic metadata needed for the submission of a new document. |
eServices | The e-Services framework provides advanced scholarly services (in particular visualisations) using distributed metadata. |
Euler | Euler is an inference engine supporting logic based proofs. It is a backward-chaining reasoner enhanced with Euler path detection. It has implementations in Java, C#, Python, Javascript and Prolog. Via N3 it is interoperable with W3C Cwm |
ExtrAKT | ExtrAKT is a tool for extracting ontologies from Prolog knowledge bases. |
F-Life | F-Life is a tool for analysing and maintaining life-cycle patterns in ontology development. |
FaCT++ | FaCT++ is an OWL DL Reasoner implemented in C++ |
Fastr | Fastr is a parser for term and variant recognition. Fastr takes as input a corpus and a list of terms and outputs the indexed corpus in which terms and variants are recognized
Floodsim | A prototype system which demonstrates the benefits of applying semantically rich service descriptions (expressed using Semantic Web technologies) to Web Services. |
FOAF-o-matic | Online FOAF generator |
FOAM | Framework for ontology alignment and mapping |
Foxtrot | Foxtrot is a recommender system which represents user profiles in ontological terms, allowing inference, bootstrapping and profile visualization. |
FreeLing | FreeLing is an open source language analysis tool suite. The FreeLing package consists of a library providing language analysis services (such as morphological analysis, date recognition, PoS tagging, etc.). The current version (1.2) of the package provides tokenizing, sentence splitting, morphological analysis, NE detection, date/number/currency recognition, PoS tagging, and chart-based shallow parsing
GATE – General Architecture for Text Engineering | GATE is a stable, robust, and scalable open-source infrastructure which allows users to build and customise language processing components, while it handles mundane tasks like data storage, format analysis and data visualisation. |
Gnowsis | A semantic desktop environment |
GrOWL | Open source graphical ontology browser and editor |
HAWK | OWL repository framework and toolkit |
Heart of Gold | Heart of Gold is a middleware for the integration of deep and shallow natural language processing components. It provides a uniform and flexible infrastructure for building applications that use Robust Minimal Recursion Semantics (RMRS) and/or general XML standoff annotation produced by NLP components |
HELENOS | A Knowledge discovery workbench for the semantic Web |
I-X Process Panels | The I-X tool suite supports principled collaborations of human and computer agents in the creation or modification of some product. |
Identify Knowledge Base | Identify Knowledge Base is a tool for topic identification over knowledge bases
IF-Map | IF-Map is an Information Flow based ontology mapping method. It is based on the theoretical grounds of logic of distributed systems and provides an automated streamlined process for generating mappings between ontologies of the same domain. |
ILP for Information Extraction | To overcome the knowledge acquisition bottleneck, we apply Inductive Logic Programming techniques to learn Information Extraction rules. |
Internet Reasoning Service | The Internet Reasoning Service provides a number of tools that support the publication, location, composition and execution of heterogeneous web services specified using semantic Web technology
IODT | IBM’s toolkit for ontology-driven development |
IsaViz | IsaViz is a visual authoring tool for browsing and authoring RDF models represented as graphs. Developed by Emmanuel Pietriga of W3C and Xerox Research Centre Europe. |
Jambalaya | Protégé plug-in for visualizing ontologies |
Jastor | Open source Java code generator that emits Java Beans from ontologies |
Javascript RDF/Turtle parser | A Javascript RDF/Turtle parser that can be used with Jibbering
Jena | Jena is a Java framework for constructing Semantic Web applications. It provides a programmatic environment for RDF, RDFS, OWL and SPARQL, and includes a rule-based inference engine. It can also be used as an RDF database via its Joseki layer. See the Jena discussion list for more information
Jibbering | Jibbering, a simple Javascript RDF parser and query utility
Joseki | Jena’s Joseki layer offers an RDF Triple Store facility with SPARQL interface (see also the entry on Jena) |
JRDF | JRDF Java RDF Binding is an attempt to create a standard set of APIs and base implementations to RDF using Java. Includes a SPARQL GUI. |
KAON | Open source ontology management infrastructure |
KAON2 | KAON2 is an infrastructure for managing OWL-DL, SWRL, and F-Logic ontologies. It is capable of manipulating OWL-DL ontologies, and queries can be formulated using SPARQL
Kazuki | Generates a java API for working with OWL instance data directly from a set of OWL ontologies |
KIM Platform | KIM is a software platform for the semantic annotation of text, automatic ontology population, indexing and retrieval, and information extraction from Ontotext |
KnoZilla | |
Knowledge Broker | The knowledge broker addresses the problem of knowledge service location in distributed environments. |
Kowari | Open source database for RDF and OWL |
KRAFT – I-X TIE | Supports collaboration among members of a virtual organisation by integrating workflow and communication technology with constraint solving. |
LingPipe | LingPipe is a suite of Java tools designed to perform linguistic analysis on natural language data. LingPipe’s flexibility and included source make it appropriate for research use. Version 1.0 tools include a statistical named-entity detector, a heuristic sentence boundary detector, and a heuristic within-document coreference resolution engine |
LinguaStream | LinguaStream is an integrated experimentation environment (IEE) targeted to researchers in Natural Language Processing. LinguaStream allows processing streams to be assembled visually, picking individual components in a “palette” (the standard set contains about fifty components, and is easily extensible using a Java API, a macro-component system, and templates). Some components are specifically targeted to NLP, while others solve various issues related to document engineering (especially to XML processing). Other components are to be used in order to perform computations on the annotations produced by the analysers, to visualise annotated documents, to generate charts, etc. |
LinKFactory | Language & Computing’s LinKFactory is an ontology management tool; it provides an effective and user-friendly way to create, maintain and extend extensive multilingual terminology systems and ontologies (English, Spanish, French, etc.). It is designed to build, manage and maintain large, complex, language-independent ontologies.
Lucene | Apache Lucene is a high-performance, full-featured text search engine library written entirely in Java. It is a technology suitable for nearly any application that requires full-text search, especially cross-platform. It is open source |
LuMriX | A commercial search engine using semantic Web technologies |
Magpie | Magpie supports the interpretation of web documents through on-the-fly ontologically based enrichment. Semantic services can be invoked either by the user or be automatically triggered by patterns of browsing activity |
Melita | Melita is a semi-automatic annotation tool using an adaptive information extraction engine (Amilcare) to support the user in document annotation.
MetaMatrix | Semantic vocabulary mediation and other tools |
Metatomix | Commercial semantic toolkits and editors |
MindRaider | Open source semantic Web outline editor |
MnM | MnM is an annotation tool which provides both automated and semi-automated support for annotating web pages with semantic contents. MnM integrates a web browser with an ontology editor and provides open APIs to link to ontology servers and to integrate information extraction tools
Model Futures OWL Editor | Simple OWL tools, featuring UML (XMI), ErWin, thesaurus and imports |
Mulgara | The Mulgara Semantic Store is an Open Source, massively scalable, transaction-safe, purpose-built database for the storage and retrieval of RDF, written in Java. It is an active fork of Kowari |
Muskrat-II | Given a set of knowledge bases and problem solvers, the Muskrat system will try to identify which knowledge bases could be combined with which problem solvers to solve a given problem.
MyPlanet | MyPlanet allows users to create a personalised version of a web based newsletter using an ontologically based profile. |
Net OWL | Entity extraction engine from SRA International |
NMARKUP | NMARKUP helps the user build ontologies by detecting nouns in texts and by providing support for the creation of an ontology based on the entities extracted. |
Nokia Semantic Web Server | An RDF based knowledge portal for publishing both authoritative and third party descriptions of URI denoted resources |
ONTOCOPI | A tool which uncovers Communities of Practice by analysing the connectivity of instances in the 3store knowledge base.
OntoEdit/OntoStudio | Engineering environment for ontologies |
OntoMat Annotizer | Interactive Web page OWL and semantic annotator tool |
OntoPortal | Enables the authoring and navigation of large semantically-powered portals |
OpenLink Data Spaces (ODS) | ODS is a distributed collaborative application platform for creating Semantic Web applications such as: blogs, wikis, feed aggregators, etc., with built-in SPARQL support and incorporation of shared ontologies such as SIOC, FOAF, and Atom OWL. ODS is an application of OpenLink Virtuoso and is available in Open Source and Commercial Editions |
Oracle Spatial 10g | Oracle Spatial 10g includes an open, scalable, secure and reliable RDF management platform |
Oyster | Peer-to-peer system for storing and sharing ontology metadata |
OWL Consistency checker | OWL Consistency checker (based on Pellet) |
OWL-DL Validator | WonderWeb OWL-DL Validator |
OWLJessKB | OWLJessKB is a description logic reasoner for OWL. The semantics of the language are implemented using Jess, the Java Expert System Shell. It currently covers most of the common features of OWL Lite, plus some and minus some
OWLIM | OWLIM is a high-performance semantic repository, packaged as a Storage and Inference Layer (SAIL) for the Sesame RDF database |
OWLViz | OWLViz is a visual editor for OWL and is available as a Protégé plug-in
Pellet | Pellet is an open-source Java based OWL DL reasoner. It can be used in conjunction with both Jena and OWL API libraries; it can also be downloaded and be included in other applications |
Piggy Bank | A Firefox-based semantic Web browser |
Pike | A dynamic programming (scripting) language similar to Java and C for the semantic Web |
pOWL | Semantic Web development platform |
Protégé | Open source visual ontology editor written in Java with many plug-in tools |
RACER | A collection of Projects and Tools to be used with the semantic reasoning engine RacerPro |
RacerPro | RacerPro is an OWL reasoner and inference server for the Semantic Web |
rdfabout.com’s Validator | RDF/XML and N3 validator |
RDF Gateway | Intellidimension’s RDF Gateway is an RDF Triple database with RDFS reasoning and SPARQL interface |
RDF InferEd | Intellidimension’s RDF InferEd is an authoring environment with the ability to navigate and edit RDF documents |
RDFLib | RDFLib, an RDF library for Python, including a SPARQL API. The library also contains both in-memory and persistent Graph backends
RDFReactor | Access RDF from Java using inferencing |
RDF Server | The RDF server of the PHP RAP environment |
RDFStore | RDFStore is an RDF storage system with Perl and C APIs and SPARQL facilities
RDFSuite | The ICS-FORTH RDFSuite provides open source, high-level, scalable tools for the Semantic Web. The suite includes the Validating RDF Parser (VRP), the RDF Schema-Specific DataBase (RSSDB) and support for the RDF Query Language (RQL)
Redland | The Redland RDF Application Framework is a set of free software libraries that provide support for RDF. It provides parsers for RDF/XML, Turtle, N-Triples, Atom and RSS; has SPARQL and GRDDL implementations; and has language interfaces to C#, Python, Obj-C, Perl, PHP, Ruby, Java and Tcl
RelationalOWL | Automatically extracts the semantics of virtually any relational database and transforms this information into RDF/OWL
ReTAX+ | ReTAX is an aid that helps a taxonomist create a consistent taxonomy; in particular, it provides suggestions as to where a new entity could be placed whilst retaining the integrity of the revised taxonomy (cf. problems in ontology modelling).
Refiner++ | REFINER++ is a system which allows domain experts to create and maintain their own Knowledge Bases, and to receive suggestions as to how to remove inconsistencies, if they exist. |
Seamark Navigator | Siderean’s Seamark Navigator provides a platform to combine Web search pages with product catalog databases, document servers, and other digital information from both inside and outside the enterprise |
Semantic Annotation with MnM | MnM is a semantic annotation tool which provides manual, automated and semi-automated support for annotating web pages with ‘semantics’, i.e., machine interpretable descriptions. |
Semantical | Open source semantic Web search engine |
SemanticWorks | A visual RDF/OWL Editor from Altova |
Semantic Mediawiki | Semantic extension to the MediaWiki wiki software
Semantic Net Generator | Utility for generating topic maps automatically |
SemWeb | SemWeb for .NET supports persistent storage in MySQL, PostgreSQL and SQLite; has been tested with 10-50 million triples; supports SPARQL
Sesame | Sesame is an open source RDF database with support for RDF Schema inferencing and querying. It offers a large scale of tools to developers to leverage the power of RDF and RDF Schema |
SMART | System for Managing Applications based on RDF Technology |
SMORE | OWL markup for HTML pages |
SPARQL | Query language for RDF (a minimal endpoint-query sketch follows this list)
SPARQLer | SPARQL query demo and service |
SPARQLette | A SPARQL demo query service |
SPARQL JavaScript Library | The SPARQL JavaScript Library interfaces to the SPARQL Protocol and interprets the return values as part of an AJAX framework
SWCLOS | A semantic Web processor using Lisp |
SWI-Prolog | SWI-Prolog is a comprehensive Prolog environment, which also includes an RDF Triple store. There is also a separate Prolog library to handle OWL |
Swish | Swish is a framework for performing deductions in RDF. It has similar features to CWM. It is written for Haskell developers |
Swoogle | A semantic Web search engine with 1.5 M resources |
SWOOP | A lightweight ontology editor |
TopBraid Composer | TopQuadrant’s TopBraid Composer is a complete standards-based platform for developing, testing and maintaining Semantic Web applications
Tucana Suite | Northrop Grumman’s Tucana Suite is an industrial quality version of the Kowari metastore |
Turtle | Terse RDF “Triple” language |
Visualisations for the CS AKTive Portal | Maps are used to geographically illustrate knowledge from the Triplestore, such as highlighting the locations in the UK that are active in a particular research area. |
VisualText | VisualText® is an integrated development environment for building information extraction systems, natural language processing systems, and text analyzers
W3C’s RDF Validator | W3C’s RDF Validator |
WebOnto | WebOnto supports the browsing, creation and editing of ontologies through coarse grained and fine grained visualizations and direct manipulation. |
Wilbur | Wilbur is Nokia Research Center’s toolkit for programming Semantic Web applications that use RDF, written in Common Lisp
WSMO Studio | A semantic Web service editor compliant with WSMO as a set of Eclipse plug-ins |
WSMT Toolkit | The Web Service Modeling Toolkit (WSMT) is a collection of tools for use with the Web Service Modeling Ontology (WSMO), the Web Service Modeling Language (WSML) and the Web Service Execution Environment (WSMX) |
WSMX | Execution environment for dynamic use of semantic Web services |
XML Army Knife | XML Army Knife |
XMP | A labeling technology from Adobe that enables data about a file to be embedded as metadata into the file itself. |
YARS | YARS (Yet Another RDF Store) is a data store for RDF in Java and allows for querying RDF based on a declarative query language, which offers a somewhat higher abstraction layer than the APIs of RDF toolkits such as Jena or Redland |
Zotero | Firefox add-on (in development) that allows the auto-completion of online citations
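Since so many of the tools above either expose or consume SPARQL endpoints (Joseki, D2R Server, SPARQLer, SPARQLette and most of the triple stores), here is a minimal, hedged sketch of querying such an endpoint over the SPARQL protocol from Python. The endpoint address is a placeholder, and real services may differ in the result formats they offer:

    import urllib.parse
    import urllib.request

    # Placeholder endpoint address -- substitute a real SPARQL service
    # (e.g., one exposed by Joseki, Sesame or D2R Server).
    ENDPOINT = "http://example.org/sparql"

    QUERY = """
    SELECT ?s ?p ?o
    WHERE { ?s ?p ?o }
    LIMIT 10
    """

    # The SPARQL protocol passes the query text as an HTTP "query" parameter.
    url = ENDPOINT + "?" + urllib.parse.urlencode({"query": QUERY})
    request = urllib.request.Request(
        url, headers={"Accept": "application/sparql-results+xml"})
    with urllib.request.urlopen(request) as response:
        print(response.read().decode("utf-8"))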
Hi everybody!

TermExtractor, my master thesis, is online at the address http://lcl2.di.uniroma1.it.

TermExtractor is a FREE and high-performing software package for terminology extraction. The software helps a web community extract and validate relevant terms in its domain of interest by submitting an archive of domain-related documents in any format (txt, pdf, ps, dvi, tex, doc, rtf, ppt, xls, xml, html/htm, chm, wpd and also zip archives).

TermExtractor extracts terminology consensually referred to in a specific application domain. The software takes as input a corpus of domain documents, parses the documents, and extracts a list of “syntactically plausible” terms (e.g., compounds, adjective-noun pairs, etc.). Document parsing assigns greater importance to terms that appear in salient text layouts (titles, bold, italics, underlining, etc.). Two entropy-based measures, called Domain Relevance and Domain Consensus, are then used. Domain Consensus selects only the terms that are consensually referred to throughout the corpus documents; Domain Relevance selects only the terms that are relevant to the domain of interest, and is computed with reference to a set of contrastive terminologies from different domains. Finally, the extracted terms are further filtered using Lexical Cohesion, which measures the degree of association of all the words in a terminological string.
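To give a flavour of the entropy-based scoring, here is a much-simplified, illustrative sketch in Python of a consensus-style measure; it is not the exact formula TermExtractor uses, and the counts in the example are made up:

    import math

    def domain_consensus(term_counts_per_doc):
        """Entropy of one candidate term's distribution across the corpus.

        term_counts_per_doc holds the raw counts of the term, one entry
        per document.  A term spread evenly across many documents (high
        entropy) shows more 'consensus' than one concentrated in a
        single document.
        """
        total = sum(term_counts_per_doc)
        if total == 0:
            return 0.0
        entropy = 0.0
        for count in term_counts_per_doc:
            if count > 0:
                p = count / total
                entropy -= p * math.log(p, 2)
        return entropy

    # Made-up example: a term that occurs in four of five documents.
    print(domain_consensus([3, 2, 0, 4, 1]))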
—
Francesco Sclano
home page: http://lcl2.di.uniroma1.it/~sclano
msn: francesco_sclano@yahoo.it
skype: francesco978