I recently wrote about WOA (Web-oriented architecture), a term coined by Nick Gall, and how it represented a natural marriage between RESTful Web services and RESTful linked data. There was, of course, a method behind that posting to foreshadow some pending announcements from UMBEL and Zitgist.
Well, those announcements are now at hand, and it is time to disclose some of the method behind our madness.
As Fred Giasson notes in his announcement posting, UMBEL has just released some new Web services with fully RESTful endpoints. We have been working on the design and architecture behind this for some time and, all I can say is, it’s UMBELievable!
As Fred notes, there is further background information on the UMBEL project — which is a lightweight reference structure based on about 20,000 subject concepts and their relationships for placing Web content and data in context with other data — and the API philosophy underlying these new Web services. For that background, please check out those references; that is not my main point here.
We discussed much in coming up with the new design for these UMBEL Web services. Most prominent was taking seriously a RESTful design and grounding all of our decisions in the HTTP 1.1 protocol. Given the shared approaches between RESTful services and linked data, this correspondence felt natural.
What was perhaps most surprising, though, was how complete and well suited HTTP was as a design and architectural basis for these services. Sure, we understood the distinctions of GET and POST and persistent URIs and the need to maintain stateless sessions with idempotent design, but what we did not fully appreciate was how content and serialization negotiation and error and status messages also were natural results of paying close attention to HTTP. The UMBEL Web services design now embraces content and serialization negotiation and HTTP’s full complement of error and status codes.
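To make that concrete, here is a minimal sketch of how an Accept header can drive both the serialization returned and the HTTP status code. This is illustrative only, not the actual UMBEL implementation; the supported media types and the server default are assumptions:

```python
# Sketch (not UMBEL's actual code) of HTTP-driven content negotiation:
# the client's Accept header, rather than some custom URL parameter,
# selects the serialization, and failure maps to a standard status code.

SUPPORTED = {
    "application/rdf+xml": "rdfxml",
    "text/turtle": "turtle",
    "application/json": "json",
}

def negotiate(accept_header):
    """Return (HTTP status, serialization) for a raw Accept header."""
    offers = []
    for part in accept_header.split(","):
        media, _, params = part.strip().partition(";")
        q = 1.0                                   # default quality value
        for p in params.split(";"):
            if p.strip().startswith("q="):
                q = float(p.strip()[2:])
        offers.append((q, media.strip()))
    for _, media in sorted(offers, reverse=True):  # highest q first
        if media in SUPPORTED:
            return 200, SUPPORTED[media]
        if media in ("*/*", "application/*", "text/*"):
            return 200, "rdfxml"                   # server's default form
    return 406, ""                                 # 406 Not Acceptable
```

Note that the error path needs no invention: HTTP already defines 406 Not Acceptable for exactly this case, which is the point made above about status messages falling naturally out of the protocol.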
There are likely other services out there that embrace this full extent of RESTful design (though we are not aware of them). What we are finding most exciting, though, is the ease with which we can extend our design into new services and to mesh up data with other existing ones. This idea of scalability and distributed interoperability is truly, truly powerful.
It is almost like, sure, we knew the words and the principles behind REST and a Web-oriented architecture, but had really not fully taken them to heart. As our mindset now embraces these ideas, we feel like we have now looked clearly into the crystal ball of data and applications. We very much like what we see. WOA is most cool.
For lack of a better phrase, Zitgist maintains an internal plan that it calls its ‘Grand Vision’ for moving forward. Though something of a living document, this reference describes how Zitgist is going about its business and development. It does not describe our markets or products (of course, other internal documents do that), but our internal development approaches and architectural principles.
Just as we have seen a natural marriage between RESTful Web services and RESTful linked data, there are other natural fits and synergies. Some involve component design and architecting for pipeline models. Some involve the natural fit of domain-specific languages (DSLs) to common terminology and design, too. Still others involve use of such constructs in both GUIs and command-line interfaces (CLIs), again all built from common language and terminology that non-programmers and subject matter experts alike can readily embrace. Finally, some is a preference for Python to wrap legacy apps and to provide a productive scripting environment for DSLs.
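As a hedged illustration of that pipeline-plus-DSL pattern (this is a sketch invented for this post, not Zitgist’s actual code; every name in it is made up), a few lines of Python suffice to give a pipeline a vocabulary that subject matter experts can read, and that a GUI or CLI could equally drive:

```python
# Illustrative sketch of the pattern described above: a tiny "pipeline"
# DSL whose stage names read like the domain language, so the same
# terminology serves programmers and non-programmers alike.

class Pipeline:
    """Chainable sequence of named processing stages."""
    def __init__(self):
        self.steps = []

    def step(self, func):
        self.steps.append(func)
        return self                      # fluent chaining

    def run(self, data):
        for func in self.steps:          # apply each stage in order
            data = func(data)
        return data

# Domain vocabulary expressed as plain functions:
def tokenize(text):
    return text.split()

def lowercase(tokens):
    return [t.lower() for t in tokens]

clean_text = Pipeline().step(tokenize).step(lowercase)
```

With this, `clean_text.run("Semantic Web")` yields `["semantic", "web"]`; the pipeline declaration itself doubles as documentation of the processing flow.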
If one can step back a bit and realize there are some common threads to the principles behind RESTful Web services and linked data, that very same mindset can be applied to many other architectural and design issues. For us, at Zitgist, these realizations have been like turning on a very bright light. We can see clearly now, and it is pretty UMBELievable. These are indeed exciting times.
BTW, I would like to thank Eric Hoffer for the very clever play on words with the UMBELievable tag line. Thanks, Eric, you rock!
I have been a consistently vocal critic of “Web 3.0” as a moniker for current Web trends, specifically as some sort of branding replacement for the semantic Web. My specific postings on the term Web 3.0, likely with lame attempts at derision, are on record as:
Then, in relation to an article on ReadWriteWeb regarding a keynote on semantics and advertising at last week’s Web 3.0 Conference & Expo in Santa Clara, CA, I added a comment joining others criticizing the use of version numbers to describe the semantic Web. My comment was simply: “Please, Squash that Web 3.0 Cockroach (see http://www.mkbergman.com/?p=406).”
I count Dan as a colleague and a friend and do not want to engage in a flame war across multiple sites. So, herein, I address his rhetorical question, “How shall we call it [Web 3.0] instead, Mike? Please indulge us.”
It just so happens I have been writing on this very topic for quite some time.
(BTW, while others commented on the proper role or not of semantics and advertising, that is not my issue. I have never criticized advertising or marketing and will not do so with respect to semantic Web techniques, either. I fully expect semantic technologies to be applied everywhere appropriate and where they can work, which most certainly includes better targeting of ads and messages.)
So, what is my problem with the use of Web 3.0 for semantic Web-related trends and activities?
Simply answered, it is because Web 3.0 means nothing.
It can mean anything or everything depending on who is pushing the idea. I dare anyone pushing this term to point to a consensus, authoritative, or “official” understanding, or any grounding in language or usage, for what it means.
I find it ironic that the semantic Web, which is at heart about meaning and data interoperability, could potentially accept a naming or branding that is itself meaningless. Simply answered, that is my issue and has been since the term Web 3.0 was first floated.
Moreover, rejection of the term Web 3.0 does not carry with it the logical fallacy of therefore not wanting simpler ways to explain semantic Web-related concepts and technologies to the broader public. Nor does rejection of the term Web 3.0 carry with it the logical fallacy of not wanting effective branding, effective terminology or effective business models.
What rejection of the term Web 3.0 does mean is that we can and should do better in conveying what our collective endeavor is really all about. And with meaning.
I have two answers to Dan’s rhetorical question: structured Web and linked data. I further place both of these concepts into context.
Many have given their own bona fide attempts at describing semantic Web aliases and where they stand within a development continuum. Some of this is the natural attempt by some to want to “name” a space and time. Some of it is undoubtedly a reaction to the glazed look many in the public get when “semantic Web” is first put before them. Some of it is likely a reaction to the fact that the semantic Web has been slow to develop or has not yet met its initial promise and (perhaps) hype.
Over the past two years, I have pointed to two concepts as part of an ongoing continuum eventually leading to the semantic Web: the broader, bridging structured Web and one of its current expressions, linked data. I first tried to capture this continuum in a diagram from July 2007:
Document Web → Structured Web → Semantic Web
To further a common language, I have also put forward my own working definitions of these concepts:
I have repeatedly discussed these themes and ideas since I first criticized attempts at the Web 3.0 branding:
Those entries marked with an asterisk [*] are the most central ones dealing with either structured Web or linked data as terminology.
Over two years, about one in six of my blog posts has been devoted strictly to meaning and terminology. I agree proper branding for our collective endeavor is very important. In the end, it is not important whether my views hold sway, but that we are able to effectively explain and sell our products and services. There is still considerable outreach and communication required with the marketplace.
Of the two concepts, I prefer the terminology of structured Web because it seems to be readily understood and appreciated by enterprise clients. But we have also embraced linked data because it is gaining mindshare and conveys (imo) a concrete and correct image.
Terminology adoption is a function of both providers and consumers. Consumers vote with their attention and their wallets; providers through their branding and positioning, which naturally should be attentive to what is resonant in the marketplace.
Right now, in the emerging markets around the semantic Web and its related technologies expressing the current evolution of this continuum, the naming issues are still largely in the purview of the providers. Consumers are still few. I think most would agree that terminology is still unsettled.
It is legitimate to question whether the provider community is doing itself a disservice using ‘semantic Web’ as a hook. It is healthy to seek better terminology if it can be found. Any attempt to find clear and compelling language is to be applauded.
But, insofar as we are selling improved meaning and data interoperability, let’s find language that conveys those advantages. Web 3.0 directly contradicts that fundamental message: the term itself conveys nothing.
Well, come on now, let’s do smile a bit. There are worse things than calling the term Web 3.0 a cockroach. After all, cockroaches have existed for 340 million years, well in advance of humans, and can be found everywhere. More than 5,000 species have been identified. Some claim cockroaches will be here long after humans are gone.
Still, as for me, I’m hoping the term Web 3.0 is squashed well in advance of that time. Meanwhile, I appreciate the invitation to indulge in meaningful branding and terminology.
An earlier popular entry of this AI3 blog was “99 Wikipedia Sources Aiding the Semantic Web”. Each academic paper or research article in that compilation was based on Wikipedia for semantic Web-related research. Many of you suggested additions to that listing. Thanks!
Wikipedia continues to be an effective and unique source for many information extraction and semantic Web purposes. Recently, I needed to update my own research and found that many valuable new papers have been added to the literature.
I thus decided to make a compilation of such papers a permanent feature — which I’ve named SWEETpedia — and to update it on a periodic basis. You can now find the most recent version under the permanent SWEETpedia page link.
Hint, hint: Check out this link to see the 163 Wikipedia research sources!
NOTE: If you know of a paper that I’ve overlooked, please suggest it as a comment to this posting and I will add it to the next update.
For starters, the report summarizes the size and status of the English-version Wikipedia with a more discerning eye than usual:
Articles and related pages: 5,460,000
Lists and stubs: 620,000
Links between category and subcategory: 740,000
Links between category and article: 7,270,000
The size, scope and structure of Wikipedia make it an unprecedented resource for researchers engaged in natural language processing (NLP), information extraction (IE) and semantic Web-related tasks. Further, the more than 250 language versions of Wikipedia also make it a great resource for multi-lingual and translation studies.
In the eight months since posting the semantic Web-related research papers using Wikipedia, my SWEETpedia listing has grown by about 63%: there are 63 new papers, bringing the total to 163.
Of course, these are not the only academic papers published about or using Wikipedia. The SWEETpedia listing is specifically related to structure, term, or semantic extractions from Wikipedia. Other research about frequency of updates or collaboration or growth or comparisons with standard encyclopedias may also be found under Wikipedia’s own listing of academic studies.
This graph indicates the growth in use of Wikipedia as a source of semantic Web research. It is hard to tell whether the effort is plateauing; the apparent slight dip in 2008 is too recent to support any conclusion. For example, the current SWEETpedia listing adds 35% more 2007 papers to the earlier records, and it is likely that many 2008 papers will likewise only be discovered later in 2009. Many of the venues at which these papers get presented can be somewhat obscure, and new researchers keep entering the field.
However, we can conclude that Wikipedia is assuming a role in semantic Web and natural language research never before seen for other frameworks.
As noted, the new 82-page technical report by Olena Medelyan et al. from the University of Waikato in New Zealand, Mining Meaning from Wikipedia, is now the must-have reference for all things related to the use of Wikipedia for semantic Web and natural language research.
Olena and her co-authors, Catherine Legg, David Milne and Ian Witten, have each published much in this field and were some of the earliest researchers tapping into the wealth of Wikipedia.
They first note the many uses to which Wikipedia is now being put:
These types of uses then enable the authors to place various research efforts and papers into context. They do so via four major clusters of relevant tasks related to language processing and the semantic Web:
There are many interesting observations throughout this report. There are also useful links to related tools, supporting and annotated datasets, and key researchers in the field.
I highly recommend this report as the essential starting point for anyone first getting into these research topics. Many of the newly added references to the SWEETpedia listing arose from this report. Reading the report is useful grounding to know where to look for specific papers in a given task area.
Though clearly the authors have their own perspectives and research emphases, they do an admirable job of being complete and even-handed in their coverage. Basic review reports such as this play an important role in helping to focus new research and make it productive.
Excellent job, folks! And, thanks!
In the longer version, Nick describes WOA as based on the architecture of the Web that he further characterizes as “globally linked, decentralized, and [with] uniform intermediary processing of application state via self-describing messages.”
WOA is a subset of the service-oriented architectural style. He describes SOA as comprising discrete functions that are packaged into modular and shareable elements (“services”) that are made available in a distributed and loosely coupled manner.
Representational state transfer (REST) is an architectural style for distributed hypermedia systems such as the World Wide Web. It was named and defined in Roy Fielding’s 2000 doctoral thesis; Roy is also one of the principal authors of the Hypertext Transfer Protocol (HTTP) specification.
REST provides principles for how resources are defined and used and addressed with simple interfaces without additional messaging layers such as SOAP or RPC. The principles are couched within the framework of a generalized architectural style and are not limited to the Web, though they are a foundation to it.
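The uniform interface at the heart of those principles can be sketched in a few lines. This is a toy in-memory illustration, not any real service: resources are addressed by path, and the standard HTTP verbs are the only operations, with no custom RPC-style method layer:

```python
# A hedged sketch of REST's uniform interface: every resource is
# addressed by URI path and manipulated only through standard verbs.

resources = {}  # toy in-memory store: path -> representation

def handle(method, path, body=None):
    """Dispatch a request through the uniform interface; return (status, body)."""
    if method == "GET":                  # safe: no side effects
        return (200, resources[path]) if path in resources else (404, None)
    if method == "PUT":                  # idempotent: repeating changes nothing
        resources[path] = body
        return (200, body)
    if method == "DELETE":               # idempotent as well
        resources.pop(path, None)
        return (204, None)
    return (405, None)                   # anything else: Method Not Allowed
```

The design choice worth noting is that no new verbs are ever invented; adding a new kind of resource means adding data, not methods, which is what keeps such interfaces simple and uniformly cacheable.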
REST and WOA stand in contrast to earlier Web service styles that are often known by the WS* acronym (WSDL and the like). (Much has been written on RESTful Web services v. “big” WS*-based ones; one of my own postings goes back to an interview with Tim Bray in November 2006.)
Shortly after Nick coined the WOA acronym, REST luminaries such as Sam Ruby gave the meme some airplay. From an enterprise and client perspective, Dion Hinchliffe in particular has expanded and written extensively on WOA. Besides his own blog, Dion has discussed WOA several times on his Enterprise Web 2.0 blog for ZDNet.
Largely due to these efforts (and, some would claim, the difficulties associated with earlier WS* Web services), enterprises are paying much greater heed to WOA. It is increasingly being blogged about and highlighted at enterprise conferences.
While exciting, that is not what is most important in my view. What is important is that the natural connection between WOA and linked data is now beginning to be made.
Linked data is a set of best practices for publishing and deploying data on the Web using the RDF data model. The data objects are named using Web uniform resource identifiers (URIs), emphasize data interconnections, and adhere to REST principles.
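The practice can be sketched in a few lines of Python. The URI below is illustrative (DBpedia is a well-known linked-data publisher, but the endpoint is not guaranteed live), and the request is built but not sent, to keep the example self-contained:

```python
import urllib.request

# Sketch of the linked-data practice just described: a thing is named
# with an HTTP URI, and dereferencing that URI with content negotiation
# returns RDF data about it rather than an HTML page.
uri = "http://dbpedia.org/resource/Semantic_Web"
req = urllib.request.Request(uri, headers={"Accept": "application/rdf+xml"})

# urllib.request.urlopen(req) would typically follow a 303 redirect to
# the RDF document that describes the resource (omitted to stay offline).
```

Note that nothing here goes beyond plain HTTP: the same GET and Accept-header machinery that serves browsers serves data consumers, which is exactly the REST correspondence argued above.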
Most recently, Nick began picking up the theme of linked data on his new Gartner blog. Enterprises now appreciate the value of an emerging service aspect based on HTTP and accessible by URIs. The idea is jelling that enterprises can now process linked data architected in the same manner.
I think the similar perspectives between REST Web services and linked data become a very natural and easily digested concept for enterprise IT architects. This is a receptive audience because it is these same individuals who have experienced first-hand the challenges and failures of past hype and complexity from non-RESTful designs.
It helps immensely, of course, that we can now look at the major Web players such as Google and Amazon and others — not to mention the overall success of the Web itself — to validate the architecture and associated protocols for the Web. The Web is now understood as the largest machine ever designed by humans, and one that has been operational every second of its existence.
Many of the same internal enterprise arguments that are being made in support of WOA as a service architecture can be applied to linked data as a data framework. For example, look at Dion’s 12 Things You Should Know About REST and WOA and see how most of the points can be readily adapted to linked data.
So, enterprise thought leaders are moving closer to what we now see as the reality and scalability of the Web done right. They are getting close, but there is still one piece missing.
I admit that I have sometimes tended to think of enterprise systems as distinct from the public Web. And, for sure, there are real and important distinctions. But from an architecture and design perspective, enterprises have much to learn from the Web’s success.
With the Web we see the advantages of a simple design, of universal identifiers, of idempotent operations, of simple messaging, of distributed and modular services, of simple interfaces, and, frankly, of openness and decentralization. The core foundations of HTTP and adherence to REST principles have led to a system of such scale and innovation and (growing) ubiquity as to defy belief.
So, the first observation is that the Web will be the central computing touchstone and framework for all computing systems for the foreseeable future. There simply is no question that interoperating with the Web is now an enterprise imperative. This truth has been evident for some time.
But the reciprocal truth is that these realities are themselves a direct outcome of the Web’s architecture and basic protocol, HTTP. The false dichotomy of enterprise systems as being distinct from the Web arises from seeing the Web solely as a phenomenon and not as one whose basic success should be giving us lessons in architecture and design.
Thus, we first saw the emergence of Web services as an important enterprise thrust — we wanted to be on the Web. But that was not initially undertaken consistent with Web design — which is REST or WOA — but rather as another “layer” in the historical way of doing enterprise IT. We were not of the Web. As the error of that approach became evident, we began to see the trend toward “true” Web services that are now consonant with the architecture and design of the actual Web.
So, why should these same lessons and principles not apply as well to data? And, of course, they do.
If there is one area in which enterprises have been abject failures for more than 30 years, it is data interoperability. ETL and enterprise busses and all sorts of complex data warehousing and EAI and EIA mumbo jumbo have kept many vendors fat and happy, but few enterprise customers so. On almost every single dimension, these failed systems have violated the basic principles now in force on the Web: simplicity, uniform interfaces, and the like.
OK, so how many of you have read the HTTP specifications? How many understand them? What do you think the fundamental operational and architectural and design basis of the Web is?
HTTP is often described as a communications protocol, but it really is much more. It represents the operating system of the Web as well as the embodiment of a design philosophy and architecture. Within its specification lies the secret of the Web’s success. REST and WOA quite possibly require nothing more to understand than the HTTP specification.
Of course, the HTTP specification is not the end of the story, just the essential beginning for adaptive design. Other specifications and systems layer upon this foundation. But, the key point is that if you can be cool with HTTP, you are doing it right to be a Web actor. And being a cool Web actor means you will meet many other cool actors and be around for a long, long time to come.
An understanding of HTTP can provide similar insights with respect to data and data interoperability. Indeed, the fancy name of linked data is nothing more than data on the Web done right — that is, according to the HTTP specifications.
Just as packets rely on routers, and the names in a URI on resolution to a physical device, to reach their proper location, data or information on the Web needs similar context. And one mechanism by which such context can be provided is some form of logical referencing framework by which information can be routed to its right “neighborhood”.
I am not speaking of routing to physical locations now, but the routing to the logical locations about what information “is about” and what it “means”. On the simple level of language, a dictionary provides such a function by giving us the definition of what a word “means”. Similar coherent and contextual frameworks can be designed for any information requirement and scope.
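A toy sketch may make this logical routing idea more tangible. The reference structure below is invented for illustration (UMBEL’s real structure holds about 20,000 subject concepts, not three entries): it maps terms to the subject “neighborhood” a piece of content is about, much as a dictionary maps a word to its meaning:

```python
# Stand-in for a coherent reference framework: terms route content
# to the logical neighborhood of what it "is about". All entries here
# are invented for illustration.

REFERENCE = {
    "mortgage": "Finance",
    "interest rate": "Finance",
    "protein": "Biochemistry",
}

def route(text):
    """Return the subject neighborhoods a text fragment touches."""
    lowered = text.lower()
    return {concept for term, concept in REFERENCE.items() if term in lowered}
```

Real frameworks do far more (relationships among concepts, disambiguation, confidence scoring), but the essential function is the same: a shared structure against which heterogeneous content can be located.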
Of course, enterprises have been doing similar things internally for years by adopting common vocabularies and the like. Relational data schema are one such framework even if they are not always codified or understood by their enterprises as such.
Over the past decade or two we have seen trade and industry associations and standards bodies, among others, extend these ideas of common vocabularies and information structures such as taxonomies and metadata to work across enterprises. This investment is meaningful and can be quite easily leveraged.
As Nick notes, efforts such as XBRL offer one vocabulary that can help provide this “routing” in the context of financial data and reporting. So, too, can UMBEL as a general reference framework of 20,000 subject concepts. Indeed, our unveiling of the recent LOD constellation points to a growing set of vocabularies and classes available for such contexts. Literally thousands and thousands of such existing structures can be converted to Web-compliant linked data to provide the information routing hubs necessary for global interoperability.
And, so now we come down to that missing piece. Once we add context as the third leg to this framework stool to provide semantic grounding, I think we are now seeing the full formula powerfully emerge for the semantic Web:
SW = WOA + linked data + coherent context
This simple formula becomes a very powerful combination.
Just as older legacy systems can be exposed as Web services, and older Web services can be turned into WOA ones compliant with the Web’s architecture, we can transition our data in similar ways.
The Web has been pointing us to adaptive design for both services and data since its inception. It is time to finally pay attention.
The past two weeks have seen an interesting emergence of new perspectives on the ‘deep Web’. The deep Web, a term Thane Paulsen and I coined for my oft-quoted study from 2000, The Deep Web: Surfacing Hidden Value, is the phenomenon of database-backed content served from interactive Web search forms.
Because deep Web content is dynamic and produced only on request, it has been difficult for traditional search engines to index. It is also huge and of high quality (though likely not the 100x to 500x multiple of the standard ‘surface’ Web that I estimated in that first study).
This is the most recent of the three notable events over the past two weeks, and came out on Tuesday. Maureen Flynn-Burhoe of the oceanflynn @ Digg blog has produced a very informative and comprehensive timeline of deep Web and related developments from 1980 to the present (database-backed content and early Web precursors, of course, precede the Web itself and the term ‘deep Web’).
I have been directly involved in this field since 1994 and have not yet seen such a comprehensive treatment. She cites studies noting “hundreds of thousands” of deep Web sites and the faster growth of dynamic (database-served) as opposed to static (‘surface’) content on the Web.
As someone directly involved in estimating the size of the deep Web, I appreciate the analytic difficulties and take all of the estimates (my own older ones included!) with a grain of salt. Nonetheless, the deep Web is important, its content is huge, often of unique and high quality, and it deserves serious attention by Web scientists.
Great job, Maureen! I always appreciate thorough researchers. (BTW, I suspect you might also like the Timeline of Information History.)
The next notable event was the publishing of Searching the Deep Web by Alex Wright in the Communications of the ACM (October 2008). Alex had first written about the deep Web for Salon magazine in 2004 and had given nice attention to my company at that time, BrightPlanet.
In this current update, Alex does an excellent job of characterizing current status and research in search techniques for the deep Web. I also liked the fact he used our fishing analogy of trawling for standard search crawlers versus direct angling in the deep Web (see our earlier figure at upper left).
As some may recall, Google has stepped up its activities in this area, an event I reported on a few months back. Those perspectives, and others from some other notable figures, are included in Alex’s piece as well.
My own contribution to the piece was to suggest that RDF and semantic Web approaches offer the next evolutionary stage in deep Web searching. Alex was able to take that theme and get some great perspectives on it. I also appreciate the accuracy of my quotes, which gives me confidence in the quality of the rest of the story.
Without a doubt there is high quality in the deep Web and bringing structure and semantic characterization to it through metadata is a task of some consequence.
For myself, I chose to move beyond the deep Web when its focus seemed stuck in a document-level perspective on retrieval and analysis. However, there is much to be learned from the techniques used to select and access deep Web content, which could be readily transferable to linked data.
Thanks, Alex, for making these prospects clearer! Maybe it is time to dust off some of my old stuff!
This emerging joining of the deep Web and semantics is actually taking place through the efforts of a number of academic researchers. Recently and prominently has been James Geller from the New Jersey Institute of Technology and his colleagues Soon Ae Chun and Yoo Jung. Their recently published paper, Toward the Semantic Deep Web, shows how ontologies and semantic Web constructs can be combined to more effectively extract information from the deep Web. They call this combination the ‘semantic deep Web.’
The authors posit that the structured roots of deep Web content lend themselves to better ontology learning from the Web. They also point to the usefulness of deep Web structure to annotations.
That such confluences are occurring between the semantic and deep “Webs” is a function of focused academic attention and the growing maturity of both perspectives. This year, for example, saw the inauguration of the first Workshop on Advances in Accessing Deep Web (ADW 2008). As part of the International Conference on Business Information Systems (BIS 2008), this meeting saw a lot of elbow rubbing with semantic Web and enterprise topics.
It might seem strange (indeed, sometimes it does to me) to envision structured database content being served through a Web form and then converted via ontologies and other means to semantic Web formats. After all, why not go direct to the data?
And, of course, direct conversion is less lossy and more efficient.
But, one interesting point is that semantic Web techniques are increasingly working as a structure-extraction layer wrapping the standard Web. In that regard, starting with inherently structured source data — that is, the deep Web — can lead to higher quality inputs across the distributed, heterogeneous content of the Web.
Given the impossibility of everyone starting with the same premises and speaking the same languages and concepts, semantic Web mediation methods offer a way to overcome the Tower of Babel. And, when the starting content itself is inherently structured and (generally) of higher quality — that is, the deep Web — the logic of the combination becomes more obvious.
Interested in learning more about the deep Web? I first recommend the resources posted at the bottom of Flynn-Burhoe’s timeline. And, for a very thorough treatment, I also recommend Denis Shestakov’s Ph.D. thesis from earlier this year. It has a bibliography of some 115 references.