Posted: April 30, 2006

Despite page ranking and other techniques, the scale of the Internet is straining the ability of commercial search engines to deliver truly relevant content.  This observation is not new, but its relevance is growing.  Similarly, the integration and interoperability challenges facing enterprises have never been greater.  One approach to addressing these needs, among others, is to adopt semantic Web standards and technologies.

The image is compelling:  targeted and unambiguous information from all relevant sources, served in usable bite-sized chunks.  It sounds great; why isn’t it happening?

There are clues — actually, reasons — why semantic Web technology is not being embraced on a broad scale.  I have argued elsewhere that enterprises or specific organizations will be the initial adopters and promoters of these technologies.  I still believe that to be the case.  The complexity and the lack of a network effect ensure that the semantic Web will not initially arise from the public Internet.

Parallels with Knowledge Management

Paul Warren, in  “Knowledge Management and the Semantic Web: From Scenario to Technology,” IEEE Intelligent Systems, vol. 21, no. 1, 2006, pp. 53-59, has provided a structured framework for why these assertions make sense.  This February online article is essential reading for anyone interested in semantic Web issues (and has a listing of fairly classic references).

If you can get past the first silly paragraphs regarding Sally the political scientist and her research example (perhaps in a separate post I will provide better real-world examples from open source intelligence, or OSINT), Warren actually begins to dissect the real issues and challenges in effecting the semantic Web.  It is this latter two-thirds or so of Warren’s piece that is essential reading.

He does not organize his piece in the manner listed below, but real clues emerge in his repeated pointing to the need for “semi-automatic” methods to make the semantic Web a reality.  Fully a dozen such references are provided.  Relatedly, in second place, are multiple references to the need for or value of “reasoning algorithms.”  In any case, here are some of the areas Warren notes as needing “semi-automatic” methods:

  • Assign authoritativeness
  • Learn ontologies
  • Infer better search requests
  • Mediate ontologies (semantic resolution)
  • Support visualization
  • Assign collaborations
  • Infer relationships
  • Extract entities
  • Create ontologies
  • Maintain and evolve ontologies
  • Create taxonomies
  • Infer trust
  • Analyze links
  • etc.

These challenges are not listed by relevance, but in the order encountered in reading the Warren piece.  Tagging, extracting, classifying and organizing are all pretty intense tasks that certainly cannot be done solely manually while still scaling.

Keep It Simple, Stupid

The lack of “simple” approaches is posited as another reason for slow adoption of the semantic Web.  In the article “Spread the word, and join it up,” in the April 6 Guardian, SA Matheson reports Tim O’Reilly as saying:

“I completely believe in the long-term vision of the semantic web – that we’re moving towards a web of data, and sophisticated applications that manipulate and navigate that data web.  However, I don’t believe that the W3C semantic web activity is what’s going to take us there… It always seemed a bit ironic to me that Berners-Lee, who overthrew many of the most cherished tenets of both hypertext theory and SGML with his ‘less is more and worse is better’ implementation of ideas from both in the world wide web, has been deeply enmeshed in a theoretical exercise rather than just celebrating the bottom-up activity that will ultimately result in the semantic web… It’s still too early to formalise the mechanisms for the semantic web. We’re going to learn by doing, and make small, incremental steps, rather than a great leap forward.”

There is certainly much need for simplicity to encourage voluntary adoption of semantic Web approaches, at least until the broad benefits and network effects of the semantic Web are actually realized. However, simplicity and broad use are but two of the factors limiting adoption; others include incentives, self-interest and rewards.

As Warren points out in his piece:

Although knowledge workers no doubt believe in the value of annotating their documents, the pressure to create metadata isn’t present. In fact, the pressure of time will work in a counter direction. Annotation’s benefits accrue to other workers; the knowledge creator only benefits if a community of knowledge workers abides by the same rules. In addition, the volume of information in this scenario is much greater than in the services scenario. So, it’s unlikely that manual annotation of information will occur to the extent required to make this scenario work. We need techniques for reducing the load on the knowledge creator.

Somehow we keep coming back to the tools and automated ways to ease the effort and workflow necessary to put all of this semantic Web infrastructure in place. These aids are no doubt important — perhaps critical — but in my mind they still shortchange the most determinant dynamic of semantic Web technology adoption: the imperatives of the loosely federated, peer-to-peer broader Web v. enterprise adoption.

Oligarchical (Enterprise) Control Precedes the Network Effect

There are some analogies between service-oriented architectures and their associated standards, and the standards contemplated for the semantic Web.  Both are rigorous, prescribed, and meant to be intellectually and functionally complete.  (In fact, most of the WS-* standards are the SOA counterparts to those of the semantic Web.)  The past week has seen some very interesting posts on the tensions of “SOA versus Web 2.0,” triggered by John Hagel’s post:

. . . a cultural chasm separates these two technology communities, despite the fact that they both rely heavily on the same foundational standard – XML. The evangelists for SOA tend to dismiss Web 2.0 technologies as light-weight “toys” not suitable for the “real” work of enterprises.  The champions of Web 2.0 technologies, on the other hand, make fun of the “bloated” standards and architectural drawings generated by enterprise architects, skeptically asking whether SOAs will ever do real work. This cultural gap is highly dysfunctional and IMHO precludes extraordinary opportunities to harness the potential of these two complementary technology sets.

This theme was picked up by Dion Hinchcliffe, among others.  Dion posts consistently on this topic in his ZDNet Enterprise Web 2.0 and Web 2.0 blogs, and is always a thoughtful read.   In his response to Hagel’s post, Hinchcliffe notes that “… these two cultures are generally failing to cross-pollinate like they should, despite potentially ‘extraordinary opportunities.’”

Kitchen and garage coders playing around with cool mashups while surfing and blogging and posting pictures to Flickr are supposedly a different “culture” than buttoned-down IT geeks (even if they wear T-shirts or knit shirts).  But, in my experience, these differences have more to do with the claim on time than with different tribes of people.  From a development standpoint, we are talking about the same people; the real distinction is whether they are on payroll time or personal time.

I like the graphic Hinchcliffe offers in discussing the SaaS model in the enterprise and the possibility that it is the emerging form.  You can take this graphic and say the left-hand side of the diagram is corporate time, the right-hand side personal time.

Web 2.0 Enterprise Directions

I make this distinction because where systems may go is perhaps more usefully viewed in terms of imperatives and opportunities v. some form of “culture” clash.  On the broad Web, there is no control other than broadly accepted standards; there is no hegemony; there is only what draws attention and can be implemented in a decentralized way.  This impels simpler standards and simpler, loosely coupled integrations.  We thus see mashups and simpler Web 2.0 sites like social bookmarking.   The drivers are not “complete” solutions to knowledge creation and sharing, but what is fun, cool and gets buzz.

The corporate, or enterprise, side, on the other hand, has a different set of imperatives and, as importantly, a different set of control mechanisms to set higher and more constraining standards to meet those imperatives.   SOA and true semantic Web standards like RDF-S or OWL can be imposed, because the sponsor can either require them or pay for them.  Of course, this oligarchic control still does not ensure adherence (IT departments were not able to prevent PC adoption 20 years ago), so it is important that productivity tools, workflows and employee incentives also be aligned with the desired outcomes.
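Standards such as RDF-S and OWL build on a simple primitive worth keeping in view: the RDF triple. The following plain-Python sketch (hypothetical example data, no RDF library) shows how every statement reduces to a (subject, predicate, object) triple that can be queried by pattern:

```python
# Toy triple store: each fact is a (subject, predicate, object) tuple.
# Names like "ex:Acme" are hypothetical, illustrating namespaced terms.
triples = {
    ("ex:Acme",   "rdf:type",      "ex:Supplier"),
    ("ex:Acme",   "ex:locatedIn",  "ex:Chicago"),
    ("ex:Widget", "ex:suppliedBy", "ex:Acme"),
}

def match(s=None, p=None, o=None):
    """Return all triples agreeing with the bound (non-None) positions."""
    return sorted(t for t in triples
                  if (s is None or t[0] == s)
                  and (p is None or t[1] == p)
                  and (o is None or t[2] == o))

print(match(p="rdf:type"))  # [('ex:Acme', 'rdf:type', 'ex:Supplier')]
```

Real RDF stores add URIs, schema vocabularies and inference on top of this same basic model, which is part of what makes enterprise imposition of these standards tractable.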

So, what we are likely to see, indeed are seeing now, is that more innovation and experimentation in “looser” ways will take place in Web 2.0 by lots of folks, many of them on their personal time away from the office.  Enterprises, on the other hand, will take the near-term lead on more rigorous and semantically demanding integration and interoperability using semantic Web standards.

Working Both Ends to the Middle

I guess, then, this puts me squarely in the optimists’ camp where I normally reside.  (I also come squarely from an enterprise perspective since that is where my company resides.)   I see innovation at an unprecedented level with Web 2.0, mashups and participatory media, matched with effort and focus by leading enterprises to climb the data federation pyramid while dealing with very real and intellectually challenging semantic mediation.  Both ends of this spectrum are right, both will instruct, and therefore both should be monitored closely.

Warren gets it right when he points to prior knowledge management challenges as also informing the adoption challenges for the semantic Web in enterprises:

Currently, the main obstacle for introducing ontology-based knowledge management applications into commercial environments is the effort needed for ontology modeling and metadata creation. Developing semiautomatic tools for learning ontologies and extracting metadata is a key research area…. Having to move out of a user’s typical working environment to ‘do knowledge management’ will act as a disincentive, whether the user is creating or retrieving knowledge…. I believe there will be deep semantic interoperability within organizational intranets. This is already the focus of practical implementations, such as the SEKT (Semantically Enabled Knowledge Technologies) project, and across interworking organizations, such as supply chain consortia. In the global Web, semantic interoperability will be more limited.

My suspicion is that Web 2.0 is the sandbox where the tools, interfaces and approaches will emerge that help overcome these enterprise obstacles.  But we will still look strongly to enterprises for much of the money and the W3C for the standards necessary to make it all happen within semantic Web imperatives.

Posted by AI3's author, Mike Bergman Posted on April 30, 2006 at 7:14 pm in Information Automation, Semantic Web | Comments (1)
Posted: April 4, 2006

Author's Note: An earlier blog series by me has now been turned into a PDF white paper under the auspices of BrightPlanet Corp. The citation for this effort is:

M.K. Bergman, "Why Are $800 Billion in Document Assets Wasted Annually?” BrightPlanet Corporation White Paper, April 2006, 27 pp.

Download: PDF copy of the full report (27 pp, 203 KB)

It is a tragedy of no small import when $800 billion in readily available savings from creating, using and sharing documents is wasted in the United States each year. How can waste of such magnitude occur right before our noses? And how can this waste occur so silently, so insidiously, and so ubiquitously that none of us can see it?

This free white paper attempts to address these questions. This report is the result of a series of posts in response to an earlier white paper I authored under BrightPlanet sponsorship entitled, Untapped Assets: The $3 Trillion Value of U.S. Enterprise Documents. [1]

This full report integrates information from earlier blog postings.

Public and enterprise expenditures to address the wasted document assets problem remain comparatively small, with growth in those expenditures flat in comparison to the rate of document production. This report attempts to bring attention and focus to the various ways that technology, people, and process can bring real document savings to our collective pocketbooks.

[1] Michael K. Bergman, "Untapped Assets: The $3 Trillion Value of U.S. Enterprise Documents," BrightPlanet Corporation White Paper, July 2005, 42 pp. The paper contains 80 references, 150 citations, and many data tables.

Posted by AI3's author, Mike Bergman Posted on April 4, 2006 at 10:29 am in Adaptive Information, Document Assets, Information Automation | Comments (0)
Posted: March 26, 2006

It is a tragedy of no small import when $800 billion in readily available savings from creating, using and sharing documents is wasted in the United States each year. How can waste of such magnitude  — literally equivalent to almost 8% of gross domestic product or more than 40% of what the nation spends on health care [1] — occur right before our noses? And how can this waste occur so silently, so insidiously, and so ubiquitously that none of us can see it?

Let me repeat. The topic is $800 billion in annual waste in the U.S. alone, perhaps equivalent to as much as $3 trillion globally, that can be readily saved each year with improved document management and use. Achieving these savings does not require Herculean efforts, simply focused awareness and the application of best practices and available technology. As the T.D. Waterhouse commercial says, “You can do this.”

This entry concludes a series of posts resulting from an earlier white paper I authored under BrightPlanet sponsorship. Entitled, Untapped Assets: The $3 Trillion Value of U.S. Enterprise Documents,[2] that paper documented via many references and databases the magnitude of the poor use of document assets within enterprises. The paper was perhaps the most comprehensive look to date at the huge expenditures document creation and use occupy within our modern knowledge economy, and first quantified the potential $800 billion annual savings in overcoming readily identifiable waste.

Simply documenting the magnitude of expenditures and savings was mind-blowing. But what became more perplexing was why something so huge and so amenable to corrective action was virtually invisible to policy or business attention. The vast expenditures and potential savings surfaced in the research quite obviously begged the question: why is no one seeing this?

I then began this series to look at whether document use savings fit other classes of “big” problems, such as high blood pressure as a silent killer, global warming from odorless and colorless greenhouse gases, or the underfunding of cost-effective water systems and sanitation by international aid agencies. There seems to be something inherently more difficult about ubiquitous problems with broadly shared responsibilities.

The series began in October of last year and concludes with this summary.  Somehow, however, I suspect the issues touched on in this series are still poorly addressed and will remain a topic for some time to come.

The series looked at four major categories:

This summary wraps up the series.

I can truthfully conclude that I haven’t yet fully put my finger on the compelling reason(s) why broad, universal problems such as document use and management remain a low priority and have virtually no visibility, despite the very real savings that current techniques and processes can bring. But I think some of the relevant factors are covered in these topics.

The arguments in Part I are pretty theoretical. They first ask whether it is in the public interest to strive for improvements in “information” efficiency, some of which may be applicable to the private sector with possible differentials in gains. They second question the rhetoric of “information overload,” which can lead to a facile resignation about whether the whole “information” problem can be meaningfully tackled. One dog that won’t hunt is the claim that computers intensify the information problem of private gain v. societal benefit because more stuff can now be processed. Such arguments are diversions that obfuscate the deserved and concentrated public policy attention that could bring real public benefits, and soon. Why else do we not see tax and economic policies that could enrich our populace by hundreds of billions of dollars annually?

Part II argues that barriers to collaboration, many cultural but others social and technical, help to prevent a broader consensus about the importance of document reuse (read:  “information” and “knowledge”). Document reuse is likely the single largest reservoir of potential waste reductions. One real problem is the lack of top leadership within the organization to encourage collaboration and efficiency in document use and management through appropriate training and rewards, and through commitments to install effective document infrastructures.

Part III revisits prior failings and high costs in document or content initiatives within the enterprise. Perceptions of past difficulties color the adoption of new approaches and technologies. The lack of standards, confusing terminology, some failed projects, immaturity of the space, and the absence as yet of a dominant vendor have prevented more widespread adoption of what are clearly needed solutions to pressing business content needs. There are no accepted benchmarks by which to compare vendor performance and costs. Document use and management software can be considered to be at a point similar to where structured data was 15 years ago, at the nascent emergence of the data warehousing market. Growth in this software market will require substantial improvements in TCO and scalability, along with a general increase in awareness of the magnitude of the problem and the available means to solve it.

Part IV looks at what might be called issues of attention, perception or psychology. These factors limit the embrace of meaningful approaches to improve document access and use and to achieve meaningful cost savings. Document intelligence and document information automation markets still fall within the category of needing to “educate the market.”  Since this category is generally dreaded by most venture capitalists (VCs), that perception also acts to limit the financing of fresh technologies and entrepreneurship.

The conclusion is that public and enterprise expenditures to address the wasted document assets problem remain comparatively small, with growth in those expenditures flat in comparison to the rate of document production. Hopefully, this series — plus, also hopefully, ongoing dialog and input from the community — can continue to bring attention and focus to the various ways that technology, people, and process can bring real document savings to our collective pocketbooks.

[1] According to the U.S. Dept of Health and Human Services, the nation spent $1.9 trillion on health care in 2004; see

[2] Michael K. Bergman, “Untapped Assets: The $3 Trillion Value of U.S. Enterprise Documents,” BrightPlanet Corporation White Paper, July 2005, 42 pp. The paper contains 80 references, 150 citations, and many data tables.

NOTE: This posting concludes a series looking at why document assets are so poorly utilized within enterprises.  The magnitude of this problem was first documented in a BrightPlanet white paper by the author titled, Untapped Assets:  The $3 Trillion Value of U.S. Enterprise Documents.  An open question in that paper was why more than $800 billion per year in the U.S. alone is wasted and available for improvements, but enterprise expenditures to address this problem remain comparatively small and with flat growth in comparison to the rate of document production.  This series is investigating the various technology, people, and process reasons for the lack of attention to this problem.

Posted by AI3's author, Mike Bergman Posted on March 26, 2006 at 9:46 pm in Adaptive Information, Document Assets, Information Automation | Comments (0)
Posted: March 23, 2006

Author’s Note: This is an online version of a paper that Mike Bergman recently released under the auspices of BrightPlanet Corp. The citation for this effort is:

M.K. Bergman, “Tutorial:  Internet Languages, Character Sets and Encodings,” BrightPlanet Corporation Technical Documentation, March 2006, 13 pp.

Download: PDF copy of this posting (13 pp, 79 KB)

Broad-scale, international open source harvesting from the Internet poses many challenges in use and translation of legacy encodings that have vexed academics and researchers for many years. Successfully addressing these challenges will only grow in importance as the relative percentage of international sites grows in relation to conventional English ones.

A major challenge in internationalization and foreign source support is “encoding.” Encodings specify the arbitrary assignment of numbers to the symbols (characters or ideograms) of the world’s written languages needed for electronic transfer and manipulation. One of the first encodings developed in the 1960s was ASCII (numerals, plus a-z; A-Z); others developed over time to deal with other unique characters and the many symbols of (particularly) the Asiatic languages.

Some languages have many character encodings, and some encodings, for example those for Chinese and Japanese, have very complex systems for handling their large numbers of unique characters. Two different encodings can be incompatible by assigning the same number to two distinct symbols, or vice versa. Unicode set out to consolidate many different encodings, each using its own separate code pages, into a single system that could represent all written languages within the same character encoding. There are a few Unicode techniques and formats, the most common being UTF-8.
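Such incompatibilities are easy to demonstrate. In this Python sketch (the two legacy encodings are chosen purely for illustration), the very same byte decodes to entirely different symbols:

```python
# One byte, two legacy encodings, two different symbols.
raw = bytes([0xE6])
print(raw.decode("iso-8859-1"))    # Latin small ligature "æ"
print(raw.decode("windows-1251"))  # Cyrillic small letter "ж"
```

Without knowing which encoding produced the bytes, the text cannot be interpreted at all, which is exactly the problem Unicode set out to solve.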

The Internet was originally developed via efforts in the United States funded by ARPA (later DARPA) and NSF, extending back to the 1960s. At the time of its commercial adoption in the early 1990s via the World Wide Web protocols, it was almost entirely dominated by English by virtue of this U.S. heritage and the emergence of English as the lingua franca of the technical and research community.

However, with the maturation of the Internet as a global information repository and means for instantaneous e-commerce, today’s online community now approaches 1 billion users from all existing countries. The Internet has become increasingly multi-lingual.

Efficient and automated means to discover, search, query, retrieve and harvest content from across the Internet thus require an understanding of the source human languages in use and the means to encode them for electronic transfer and manipulation. This Tutorial provides a brief introduction to these topics.

Internet Language Use

Yoshiki Mikami, who runs the UN’s Language Observatory, has an interesting way to summarize the languages of the world. His updated figures, plus some other BrightPlanet statistics are:[1]



  • Active Human Languages
  • Language Identifiers (based on ISO 639)
  • Human Rights Translation (UN’s Universal Declaration of Human Rights, UDHR)
  • Unicode Languages (see text)
  • DQM Languages (estimate based on prevalence, BT input)
  • Windows XP Languages (from Microsoft)
  • Basis Tech Languages (based on Basis Tech’s Rosette Language Identifier, RLI)
  • Google Search Languages (from Google)

[The language counts in the original table were not preserved.]

There are nearly 7,000 living languages spoken today, though most have few speakers and many are becoming extinct. About 347 (or approximately 5%) of the world’s languages have at least one million speakers and account for 94% of the world’s population. Of this amount, 83 languages account for 80% of the world’s population, with just 8 languages with greater than 100 million speakers accounting for about 40% of total population. By contrast, the remaining 95% of languages are spoken by only 6% of the world’s people.[2]

This prevalence is shown by the fact that the UN’s Universal Declaration of Human Rights (UDHR) has only been translated into those languages generally with 1 million or more speakers.

The remaining items on the table above enumerate languages that can be represented electronically, or are “encoded.” More on this topic is provided below.

Of course, native language does not necessarily equate to Internet use, with English predominating because of multi-lingualism, plus the fact that richer countries or users within countries exhibit greater Internet access and use.

The most recent comprehensive figures for Internet language use and prevalence are from the Global Reach Web site for late 2004, with only percentage figures shown for ease of reading for those countries with greater than a 1.0% value:[3] [4]

[Table: percent of 2003 Internet users, global population, and Web pages by language, including a EUROPEAN (non-English) grouping; the data were not preserved.]
English speakers show nearly a five-fold greater share of Internet use than sheer population numbers would suggest, and about an eight-fold greater share of English Web pages. However, various census efforts over time have shown a steady decrease in this English prevalence (data not shown).

Virtually all European languages show higher Internet prevalence than actual population would suggest; Asian languages show the opposite. (African languages are even less represented than population would suggest; data not shown.)

Internet penetration appears to be about 20% of global population and growing rapidly. It is likely that the percentages of Web users and of the pages the Web is written in will continue to converge toward real population percentages. Thus, over time, and likely within the foreseeable future, users and pages should more closely approximate the figures shown in the rightmost column of the table above.

Script Families

Another useful starting point for understanding languages and their relation to the Internet is a 2005 UN publication from a World Summit on the Information Society. This 113 pp. report can be found at[5]

Languages have both a representational form and meaning. The representational form is captured by scripts, fonts or ideograms. The meaning is captured by semantics. In an electronic medium, it is the representational form that must be transmitted accurately. Without accurate transmittal of the form, it is impossible to manipulate that language or understand its meaning.

Representational forms fit within what might be termed script families. Script families are not strictly alphabets or even exact character or symbol matches. They represent similar written approaches and some shared characteristics.

For example, English and its German and Romance language cousins share very similar, but not identical, alphabets.  Similarly, the so-called CJK languages (Chinese, Japanese, Korean) share a similar approach of using ideograms without white space between tokens or punctuation.

At the highest level, the world’s languages may be clustered into the following script families:[6]

  • Latin (key languages: Romance (European), Slavic (some), Vietnamese, Malay, Indonesian)
  • Cyrillic (Russian, Slavic (some), Kazakh, Uzbek)
  • Arabic (Arabic, Urdu, Persian, Pashtu)
  • Hanzi (Chinese, Japanese, Korean)
  • Indic (Hindi, Tamil, Bengali, Punjabi, Sanskrit)
  • Other (Thai, Greek, Hebrew, Georgian, Assyrian, Armenian)

[The original table’s million-user and percent-of-total columns were not preserved.]

Note that English and the Romance languages fall within the Latin script family, and the CJK languages within Hanzi.  The “Other” category is a large catch-all, including Greek, Hebrew, many African languages, and more.  However, besides Greek and Hebrew, most specific languages of global importance are included in the other named families.  Also note that, due to differences in sources, total user counts do not equal those in earlier tables.

Character Sets and Encodings

In order to take advantage of the computer’s ability to manipulate text (e.g., displaying, editing, sorting, searching and efficiently transmitting it), communications in a given language needs to be represented in some kind of encoding. Encodings specify the arbitrary assignment of numbers to the symbols of the world’s written languages. Two different encodings can be incompatible by assigning the same number to two distinct symbols, or vice versa. Thus, much of what the Internet offers with respect to linguistic diversity comes down to the encodings available for text.

The most widely used encoding is the American Standard Code for Information Interchange (ASCII), a code devised during the 1950s and 1960s under the auspices of the American National Standards Institute (ANSI) to standardize teletype technology. This encoding comprises 128 character assignments (7-bit) and is suitable primarily for North American English.[6]
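The 7-bit limit is easy to see in practice. This Python sketch uses the standard "ascii" codec, which implements exactly this 128-character assignment:

```python
# ASCII handles code points 0-127 only; anything beyond raises an error.
print("cafe".encode("ascii"))  # b'cafe'
try:
    "café".encode("ascii")
except UnicodeEncodeError as err:
    print("not encodable in ASCII:", err.reason)
```

Every character outside that original teletype repertoire, from accented Latin letters to entire non-Latin scripts, required some other encoding.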

Historically, other languages that did not fit in the ASCII 7-bit character set (a-z; A-Z) pretty much created their own character sets, sometimes with local standards acceptance and sometimes not. Some languages have many character encodings and some encodings, particularly Chinese and Japanese, have very complex systems for handling the large number of unique characters. Another difficult group is Hindi and the Indic language family, with speakers that number into the hundreds of millions. According to one University of Southern California researcher, almost every Hindi language web site has its own encoding.[7]

The Internet Assigned Numbers Authority (IANA) maintains a master list of about 245 standard charset (“character set”) encodings and 550 associated aliases to the same, used in one manner or another on the Internet.[8] [9] Some of these electronic encodings were created by large vendors with a stake in electronic transfer such as IBM, Microsoft, Apple and the like. Other standards result from recognized standards organizations such as ANSI, ISO, Unicode and the like. Many of these standards date back as far as the 1960s; many others are specific to certain countries.
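This proliferation of names and aliases shows up in everyday tooling. As a small sketch (Python's codec registry, not the IANA list itself, but it mirrors the same alias problem), several common names all resolve to one canonical encoding:

```python
import codecs

# Several alias names, one underlying encoding.
for alias in ("latin-1", "iso-8859-1", "l1", "cp819"):
    print(alias, "->", codecs.lookup(alias).name)  # all resolve to iso8859-1
```

Harvesting software therefore has to normalize declared charset names before it can even begin decoding.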

Earlier estimates showed on the order of 40 to 250 languages per named encoding type. While no firm estimate exists, if one assumes 100 languages for each of the IANA-listed encodings, there could be on the order of 25,000 specific language-encoding combinations in use on the Internet based on these “standards” alone. Beyond these, perhaps thousands of one-off language encodings are also extant.

Whatever the numbers, clearly it is critical to identify accurately the specific encoding and its associated language for any given Web page or database site. Without this accuracy, it is impossible to electronically query and understand the content.
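When a page or site fails to declare its encoding, one crude fallback is trial decoding, sketched below in Python (the function name and candidate list are illustrative only; production charset detectors rely on statistical models of byte frequencies instead):

```python
def sniff_decode(raw: bytes,
                 candidates=("utf-8", "shift_jis", "iso-8859-1")):
    """Return (text, encoding) for the first candidate that decodes cleanly.

    A crude stand-in for real charset detection; ISO-8859-1 accepts any
    byte sequence, so it serves as the catch-all last resort."""
    for enc in candidates:
        try:
            return raw.decode(enc), enc
        except UnicodeDecodeError:
            continue
    return raw.decode("iso-8859-1", errors="replace"), "iso-8859-1"

text, enc = sniff_decode("日本語".encode("utf-8"))
print(enc)  # utf-8
```

The ordering matters: strict, structured encodings like UTF-8 reject most foreign byte sequences and so can be tried first, while permissive single-byte encodings must come last.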

As might be suspected, this topic too is very broad. For a very comprehensive starting point on all topics related to encodings and character sets, please see the I18N Guy Web site (“I18N” stands for “internationalization”) at


In the late 1980s, there were two independent attempts to create a single unified character set. One was the ISO 10646 project of the International Organization for Standardization (ISO), the other was the Unicode Project organized by a consortium of (initially mostly US) manufacturers of multi-lingual software. Fortunately, the participants of both projects realized in 1991 that two different unified character sets did not make sense and they joined efforts to create a single code table, now referred to as Unicode. While both projects still exist and publish their respective standards independently, the Unicode Consortium and ISO/IEC JTC1/SC2 have agreed to keep the code tables of the Unicode and ISO 10646 standards compatible and closely coordinated.

Unicode sets out to consolidate many different encodings, each using its own separate code pages, into a single system that can represent all written languages within one character encoding. Unicode is first a set of code tables that assign an integer number, called a code point, to every character. Unicode then defines several methods, generally prefixed “UTF,” for representing a sequence of such code points as a sequence of bytes.

In UTF-8, the most common method, every code point from 0-127 is stored in a single byte. Only code points 128 and above are stored using 2, 3 or 4 bytes (the original design allowed up to 6). This method has the advantage that English text looks exactly the same in UTF-8 as it did in ASCII, so ASCII is a proper subset of UTF-8. More unusual characters such as accented letters, Greek letters or CJK ideograms may need several bytes to store a single code point.
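These variable byte widths are easy to verify directly; a short Python check:

```python
# UTF-8 is variable-width: ASCII code points stay one byte each, while
# accented letters, Greek letters and CJK ideograms take two or three.
samples = {"A": 1, "é": 2, "λ": 2, "中": 3}
for ch, expected in samples.items():
    assert len(ch.encode("utf-8")) == expected
```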

The traditional store-it-in-two-bytes method for Unicode is called UCS-2 (because it uses two bytes); its variable-width successor is UTF-16 (because it uses 16-bit units, with surrogate pairs for code points above 65,535, which UCS-2 cannot represent). There is something called UTF-7, which is a lot like UTF-8 but guarantees that the high bit of every byte is zero. There is also UCS-4, which stores each code point in four bytes and so has the nice property that every single code point occupies the same number of bytes; its modern equivalent is UTF-32, which likewise uses 32 bits per code point but requires more storage. Regardless, UTF-7, -8, -16, and -32 all have the property of being able to store any code point correctly.
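The differences between these forms show up clearly when one code point is encoded each way. This Python sketch compares a BMP character with a supplementary-plane character (the musical treble clef, U+1D11E), which forces UTF-16 to use a surrogate pair:

```python
# One code point, several byte-level representations.
han = "中"      # U+4E2D, inside the 16-bit range
clef = "𝄞"      # U+1D11E, outside it

assert len(han.encode("utf-8")) == 3
assert len(han.encode("utf-16-le")) == 2    # one 16-bit unit
assert len(han.encode("utf-32-le")) == 4    # always four bytes

assert len(clef.encode("utf-8")) == 4
assert len(clef.encode("utf-16-le")) == 4   # surrogate pair: two 16-bit units
assert len(clef.encode("utf-32-le")) == 4
```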

BrightPlanet, along with many others, has adopted UTF-8 as its standard Unicode method for processing all string data. Tools are available to convert nearly any existing character encoding into a UTF-8 encoded string; Java supplies these tools, as does Basis Technology, one of BrightPlanet’s partners in language processing.
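The text names Java and Basis Technology tools for this conversion; Python's built-in codecs do the same job for common legacy encodings and make a handy stand-in for a sketch of the decode-then-re-encode step:

```python
# Convert a legacy single-byte encoding into canonical UTF-8.
legacy = "Привет".encode("windows-1251")   # Cyrillic text in a legacy code page
text = legacy.decode("windows-1251")       # legacy bytes -> Unicode string
utf8 = text.encode("utf-8")                # canonical internal form

assert utf8.decode("utf-8") == "Привет"
assert len(legacy) == 6 and len(utf8) == 12   # one byte vs. two bytes per letter
```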

As presently defined, Unicode supports about 245 common languages written in a variety of scripts (see notes at the end of the table):[10]



Language / Script(s) / Some Country Notes

Abaza Cyrillic
Abkhaz Cyrillic
Adygei Cyrillic
Afrikaans Latin
Ainu Katakana, Latin Japan
Aisor Cyrillic
Albanian Latin [2]
Altai Cyrillic
Amharic Ethiopic Ethiopia
Amo Latin Nigeria
Arabic Arabic
Armenian Armenian, Syriac [3]
Assamese Bengali Bangladesh, India
Assyrian (modern) Syriac
Avar Cyrillic
Awadhi Devanagari India, Nepal
Aymara Latin Peru
Azeri Cyrillic, Latin
Azerbaijani Arabic, Cyrillic, Latin
Badaga Tamil India
Bagheli Devanagari India, Nepal
Balear Latin
Balkar Cyrillic
Balti Devanagari, Balti [2] India, Pakistan
Bashkir Cyrillic
Basque Latin
Batak Batak [1], Latin Philippines, Indonesia
Batak toba Batak [1], Latin Indonesia
Bateri Devanagari (aka Bhatneri) India, Pakistan
Belarusian Cyrillic (aka Belorussian, Belarusan)
Bengali Bengali Bangladesh, India
Bhili Devanagari India
Bhojpuri Devanagari India
Bihari Devanagari India
Bosnian Latin Bosnia-Herzegovina
Braj bhasha Devanagari India
Breton Latin France
Bugis Buginese [1] Indonesia, Malaysia
Buhid Buhid Philippines
Bulgarian Cyrillic
Burmese Myanmar
Buryat Cyrillic
Bahasa Latin (see Indonesian)
Catalan Latin
Chakma Bengali, Chakma [1] Bangladesh, India
Cham Cham [1] Cambodia, Thailand, Viet Nam
Chechen Cyrillic Georgia
Cherokee Cherokee, Latin
Chhattisgarhi Devanagari India
Chinese Han
Chukchi Cyrillic
Chuvash Cyrillic
Coptic Greek Egypt
Cornish Latin United Kingdom
Corsican Latin
Cree Canadian Aboriginal Syllabics, Latin
Croatian Latin
Czech Latin
Danish Latin
Dargwa Cyrillic
Dhivehi Thaana Maldives
Dungan Cyrillic
Dutch Latin
Dzongkha Tibetan Bhutan
Edo Latin
English Latin, Deseret [3], Shavian [3]
Esperanto Latin
Estonian Latin
Evenki Cyrillic
Faroese Latin Faroe Islands
Farsi Arabic (aka Persian)
Fijian Latin
Finnish Latin
French Latin
Frisian Latin
Gaelic Latin
Gagauz Cyrillic
Garhwali Devanagari India
Garo Bengali Bangladesh, India
Gascon Latin
Ge’ez Ethiopic Eritrea, Ethiopia
Georgian Georgian
German Latin
Gondi Devanagari, Telugu India
Greek Greek
Guarani Latin
Gujarati Gujarati
Garshuni Syriac
Hanunóo Latin, Hanunóo Philippines
Harauti Devanagari India
Hausa Latin, Arabic [3]
Hawaiian Latin
Hebrew Hebrew
Hindi Devanagari
Hmong Latin, Hmong [1]
Ho Devanagari Bangladesh, India
Hopi Latin
Hungarian Latin
Ibibio Latin
Icelandic Latin
Indonesian Arabic [3], Latin
Ingush Arabic, Latin
Inuktitut Canadian Aboriginal Syllabics, Latin Canada
Iñupiaq Latin Greenland
Irish Latin
Italian Latin
Japanese Han + Hiragana + Katakana
Javanese Latin, Javanese [1]
Judezmo Hebrew
Kabardian Cyrillic
Kachchi Devanagari India
Kalmyk Cyrillic
Kanauji Devanagari India
Kankan Devanagari India
Kannada Kannada India
Kanuri Latin
Khanty Cyrillic
Karachay Cyrillic
Karakalpak Cyrillic
Karelian Latin, Cyrillic
Kashmiri Devanagari, Arabic
Kazakh Cyrillic
Khakass Cyrillic
Khamti Myanmar India, Myanmar
Khasi Latin, Bengali Bangladesh, India
Khmer Khmer Cambodia
Kirghiz Arabic [3], Latin, Cyrillic
Komi Cyrillic, Latin
Konkan Devanagari
Korean Hangul + Han
Koryak Cyrillic
Kurdish Arabic, Cyrillic, Latin Iran, Iraq
Kuy Thai Cambodia, Laos, Thailand
Ladino Hebrew
Lak Cyrillic
Lambadi Telugu India
Lao Lao Laos
Lapp Latin (see Sami)
Latin Latin
Latvian Latin
Lawa, eastern Thai Thailand
Lawa, western Thai China, Thailand
Lepcha Lepcha [1] Bhutan, India, Nepal
Lezghian Cyrillic
Limbu Devanagari, Limbu [1] Bhutan, India, Nepal
Lisu Lisu (Fraser) [1], Latin China
Lithuanian Latin
Lushootseed Latin USA
Luxemburgish Latin (aka Luxembourgeois)
Macedonian Cyrillic
Malay Arabic [3], Latin Brunei, Indonesia, Malaysia
Malayalam Malayalam
Maldivian Thaana Maldives (See Dhivehi)
Maltese Latin
Manchu Mongolian China
Mansi Cyrillic
Marathi Devanagari India
Mari Cyrillic, Latin
Marwari Devanagari
Meitei Meetai Mayek [1], Bengali Bangladesh, India
Moldavian Cyrillic
Mon Myanmar Myanmar, Thailand
Mongolian Mongolian, Cyrillic China, Mongolia
Mordvin Cyrillic
Mundari Bengali, Devanagari Bangladesh, India, Nepal
Naga Latin, Bengali India
Nanai Cyrillic
Navajo Latin
Naxi Naxi [2] China
Nenets Cyrillic
Nepali Devanagari
Netets Cyrillic
Newari Devanagari, Ranjana, Parachalit
Nogai Cyrillic
Norwegian Latin
Oriya Oriya Bangladesh, India
Oromo Ethiopic Egypt, Ethiopia, Somalia
Ossetic Cyrillic
Pali Sinhala, Devanagari, Thai India, Myanmar, Sri Lanka
Panjabi Gurmukhi India (see Punjabi)
Parsi-dari Arabic Afghanistan, Iran
Pashto Arabic Afghanistan
Polish Latin
Portuguese Latin
Provençal Latin
Prussian Latin
Punjabi Gurmukhi India
Quechua Latin
Riang Bengali Bangladesh, China, India, Myanmar
Romanian Latin, Cyrillic [3] (aka Rumanian)
Romany Cyrillic, Latin
Russian Cyrillic
Sami Cyrillic, Latin
Samaritan Hebrew, Samaritan [1] Israel
Sanskrit Sinhala, Devanagari, etc. India
Santali Devanagari, Bengali, Oriya, Ol Cemet [1] India
Selkup Cyrillic
Serbian Cyrillic
Shan Myanmar China, Myanmar, Thailand
Sherpa Devanagari
Shona Latin
Shor Cyrillic
Sindhi Arabic
Sinhala Sinhala (aka Sinhalese) Sri Lanka
Slovak Latin
Slovenian Latin
Somali Latin
Spanish Latin
Swahili Latin
Swedish Latin
Sylhetti Siloti Nagri [1], Bengali Bangladesh
Syriac Syriac
Swadaya Syriac (see Syriac)
Tabasaran Cyrillic
Tagalog Latin, Tagalog
Tagbanwa Latin, Tagbanwa
Tahitian Latin
Tajik Arabic [3], Latin, Cyrillic (? Latin) (aka Tadzhik)
Tamazight Tifinagh [1], Latin
Tamil Tamil
Tat Cyrillic
Tatar Cyrillic
Telugu Telugu
Thai Thai
Tibetan Tibetan
Tigre Ethiopic Eritrea, Sudan
Tsalagi (see Cherokee)
Tulu Kannada India
Turkish Arabic [3], Latin
Turkmen Arabic [3], Latin, Cyrillic (? Latin)
Tuva Cyrillic
Turoyo Syriac (see Syriac)
Udekhe Cyrillic
Udmurt Cyrillic, Latin
Uighur Arabic, Latin, Cyrillic, Uighur [1]
Ukrainian Cyrillic
Urdu Arabic
Uzbek Cyrillic, Latin
Valencian Latin
Vietnamese Latin, Chu Nom
Yakut Cyrillic
Yi Yi, Latin
Yiddish Hebrew
Yoruba Latin
[1] = Not yet encoded in Unicode.
[2] = Has one or more extinct or minor native script(s), not yet encoded.
[3] = Formerly or historically used this script, now uses another.

Notice that most of these scripts fall into the seven broader script families (such as Latin, Hanzi and Indic) noted previously.

While more countries are adopting Unicode and sample results indicate increasing use, it is by no means prevalent. In general, Europe has been slow to embrace Unicode, with many legacy encodings still in use; Arabic sites have perhaps reached the 50% level; and Asian use is problematic.[11] Other samples suggest that UTF-8 encoding is limited to 8.35% of all Asian Web pages. Some countries, such as Nepal, Vietnam and Tajikistan, exceed 70% compliance, while others, such as Syria, Laos and Brunei, are below even 1%.[12] According to the Archive Pass project, which also used Basis Tech’s RLI for encoding detection, Chinese sites are dominated by GB-2312 and Big5 encodings, while Shift-JIS is most common for Japanese sites.[13]

Detecting and Communicating with Legacy Encodings

There are two primary problems when dealing with non-Unicode encodings: identifying what the encoding is, and converting that encoding to a Unicode string, usually UTF-8. Detecting the encoding is a difficult process; Basis Tech’s RLI does an excellent job of it. Converting the non-Unicode string to a Unicode string can be done easily using tools available in the Java JDK, or using Basis Tech’s RCLU library.
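To see why detection is the hard half of the problem, consider the naive approach of simply trying candidate decodings. The toy Python sketch below does exactly that, and its docstring notes why real detectors such as Basis Tech's RLI instead rely on statistical models (the candidate list here is my own illustrative choice, not anything from the original text):

```python
def guess_encoding(data, candidates=("utf-8", "windows-1251", "windows-1252")):
    """Return the first candidate encoding that decodes the bytes cleanly.

    Trial decoding is only a toy: single-byte encodings almost never raise
    errors, so they cannot be told apart this way. Real detectors use
    statistical models of byte patterns and language instead."""
    for enc in candidates:
        try:
            data.decode(enc)
            return enc
        except UnicodeDecodeError:
            continue
    return None

# Valid UTF-8 is recognized first; bytes that are not valid UTF-8 fall
# through to the single-byte candidates.
assert guess_encoding("Привет".encode("utf-8")) == "utf-8"
assert guess_encoding("Привет".encode("windows-1251")) == "windows-1251"
```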

Basis Tech detects a combination of 96 language encoding pairs involving 40 different languages and 30 unique encoding types:



Albanian UTF-8, Windows-1252
Arabic UTF-8, Windows-1256, ISO-8859-6
Bahasa Indonesia UTF-8, Windows-1252
Bahasa Malay UTF-8, Windows-1252
Bulgarian UTF-8, Windows-1251, ISO-8859-5, KOI8-R
Catalan UTF-8, Windows-1252
Chinese UTF-8, GB-2312, HZ-GB-2312, ISO-2022-CN
Chinese UTF-8, Big5
Croatian UTF-8, Windows-1250
Czech UTF-8, Windows-1250
Danish UTF-8, Windows-1252
Dutch UTF-8, Windows-1252
English UTF-8, Windows-1252
Estonian UTF-8, Windows-1257
Farsi UTF-8, Windows-1256
Finnish UTF-8, Windows-1252
French UTF-8, Windows-1252
German UTF-8, Windows-1252
Greek UTF-8, Windows-1253
Hebrew UTF-8, Windows-1255
Hungarian UTF-8, Windows-1250
Icelandic UTF-8, Windows-1252
Italian UTF-8, Windows-1252
Japanese UTF-8, EUC-JP, ISO-2022-JP, Shift-JIS
Korean UTF-8, EUC-KR, ISO-2022-KR
Latvian UTF-8, Windows-1257
Lithuanian UTF-8, Windows-1257
Norwegian UTF-8, Windows-1252
Polish UTF-8, Windows-1250
Portuguese UTF-8, Windows-1252
Romanian UTF-8, Windows-1250
Russian UTF-8, Windows-1251, ISO-8859-5, IBM-866, KOI8-R, x-Mac-Cyrillic
Slovak UTF-8, Windows-1250
Slovenian UTF-8, Windows-1250
Spanish UTF-8, Windows-1252
Swedish UTF-8, Windows-1252
Tagalog UTF-8, Windows-1252
Thai UTF-8, Windows-874
Turkish UTF-8, Windows-1254

Java SDK encoding/decoding supports 22 basic European forms and 125 other international forms (mostly non-European), for 147 total. If an encoded form is not on this list, and is not already Unicode, software cannot communicate with the site without special converters or adapters. See
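Whether a given charset name is on a runtime's supported list can be checked programmatically. The Java SDK exposes this through `Charset.isSupported`; the equivalent check in Python, shown here as a stand-in, is a `codecs.lookup` that either succeeds or raises:

```python
import codecs

def has_converter(name):
    """True if the runtime ships a converter for this charset name."""
    try:
        codecs.lookup(name)
        return True
    except LookupError:
        return False

assert has_converter("shift_jis")
assert has_converter("koi8-r")
assert not has_converter("no-such-encoding")
```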

Of course, to avoid the classic “garbage in, garbage out” (GIGO) problem, the source’s encoding type must be detected accurately, a converter must exist from that type into a canonical internal form (such as UTF-8), and another converter must exist for converting that canonical form back to the source’s original encoding. The combination of the existing Basis Tech RLI and the Java SDK produces 89 valid language/encoding pairs (invalid combinations were shown in bold red in the original table).
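The two-converter requirement amounts to a lossless round trip: source bytes into the canonical form and back out unchanged. A minimal Python sketch of that round trip, using Shift-JIS as the example legacy encoding:

```python
def to_canonical(raw, src_encoding):
    """Source bytes -> canonical internal UTF-8."""
    return raw.decode(src_encoding).encode("utf-8")

def from_canonical(utf8_bytes, dst_encoding):
    """Canonical UTF-8 -> bytes in the source's original encoding."""
    return utf8_bytes.decode("utf-8").encode(dst_encoding)

original = "こんにちは".encode("shift_jis")
canonical = to_canonical(original, "shift_jis")

# The round trip is lossless, so nothing turns to garbage on the way out.
assert from_canonical(canonical, "shift_jis") == original
```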

Fortunately, existing valid combinations appear to cover all prevalent languages and encoding types. Should gaps exist, specialized detectors and converters may be required. As events move forward, the family of Indic languages may be the most problematic for expansion with standard tools.

Actual Language Processing

Encoding detection, and the resulting proper storage and language identification, is but the first essential step in actual language processing. Additional tools for morphological analysis or machine translation may then need to be applied to address actual analyst needs. Those tools are beyond the scope of this Tutorial.

The key point, however, is that all foreign language processing and analysis begins with accurate encoding detection and communicating with the host site in its original encoding. These steps are the sine qua non of language processing.

Exemplar Methodology for Internet Foreign Language Support

We can now take the information in this Tutorial and present what might be termed an exemplar methodology for initial language detection and processing. A schematic of this methodology is provided in the following diagram:

This diagram shows that the actual encoding of an original Web document or search form must be detected and converted into a standard “canonical” form for internal storage, but that the source must be addressed in its native encoding when searching it. Encoding detection software and utilities within the Java SDK can greatly aid this process.
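The methodology in the diagram can be condensed into a small sketch: ingest in the detected native encoding, store canonically, and speak the site's own encoding when querying it. The `ForeignSite` class and the EUC-KR example below are illustrative inventions, not part of the original methodology description:

```python
class ForeignSite:
    """Toy model of the methodology: ingest in the detected native
    encoding, store canonically as UTF-8, query in the native encoding."""

    def __init__(self, native_encoding):
        self.native_encoding = native_encoding
        self.store = []                      # canonical UTF-8 documents

    def ingest(self, raw_bytes):
        text = raw_bytes.decode(self.native_encoding)
        self.store.append(text.encode("utf-8"))

    def build_query(self, query_text):
        # Outgoing searches must speak the site's own encoding.
        return query_text.encode(self.native_encoding)

site = ForeignSite("euc-kr")
site.ingest("안녕하세요".encode("euc-kr"))
assert site.store[0].decode("utf-8") == "안녕하세요"
assert site.build_query("검색").decode("euc-kr") == "검색"
```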

And, as the proliferation of languages and legacy forms grows, we can expect such utilities to embrace an ever-widening set of encodings.

[1] Yoshiki Mikami, “Language Observatory: Scanning Cyberspace for Languages,” from The Second Language Observatory Workshop, February 21-25, 2005, 41 pp. See This is a generally useful reference on Internet and language. Please note some of the figures have been updated with more recent data.

[2] See

[3] See Also, for useful specific notes by country as well as original references, see

[4] Another interesting language source, with an emphasis on Latin-family languages, is FUNREDES’ 2005 study of languages and cultures. See

[5] John Paolillo, Daniel Pimienta, Daniel Prado, et al. Measuring Linguistic Diversity on the Internet, a UNESCO Publications for the World Summit on the Information Society 2005, 113 pp. See

[6] John Paolillo, “Language Diversity on the Internet,” pp. 43-89, in John Paolillo, Daniel Pimienta, Daniel Prado, et al., Measuring Linguistic Diversity on the Internet, UNESCO Publications for the World Summit on the Information Society 2005, 113 pp. See

[7] Information Sciences Institute press release, “USC Researchers Build Machine Translation System  –  and More — for Hindi in Less Than a Month,” June 30, 2003. See


[9] The actual values were calculated from Jukka “Yucca” Korpela’s informative Web site at

[10] See

[11] Pers. Comm., B. Margulies, Basis Technology, Inc., Feb. 27, 2006.

[12] Yoshiki Mikami et al., “Language Diversity on the Internet: An Asian View,” pp. 91-103, in John Paolillo, Daniel Pimienta, Daniel Prado, et al., Measuring Linguistic Diversity on the Internet, UNESCO Publications for the World Summit on the Information Society 2005, 113 pp. See

[13] Archive Pass Project; see

Posted:March 22, 2006

The ePrécis Web site showcases technology that creates abstracts from any text document. In its Web search, sites relevant to your query are analyzed by ePrécis and results are returned in a typical search format.

Richard MacManus provides a background description of this technology in ZDNet, with more focus on its comparison to Google as a search engine and on its relation to OWL semantic Web approaches.

According to the ePrécis white paper by James Matthewson:

“ePrécis is not a program per se, but a C++ language application programmer interface (API) that can be embedded in any number of applications to return relevant outputs given a wide variety of natural language inputs. In addition to plugging into Web browsers or search engines, it could plug into word processing programs to automatically provide abstracts, executive summaries, back-of-the-book indexes, and writing or translation support.”

You can get this white paper from the ePrécis site or download a macro to embed within MS Word to create your own abstracts and indexes.  (You will also need the Microsoft SOAP 3.0 package installed.)  Check it out; it’s kinda fun, and generally pretty impressive in creating useful abstracts.  You should also try the searches from the ePrécis Web site.  Hint: for best performance, use long or technical queries (more context).

Posted by AI3's author, Mike Bergman Posted on March 22, 2006 at 10:32 am in Information Automation, Searching, Semantic Web | Comments (0)