Posted: July 8, 2014

Ye Olde Fax Technology has Advanced, but the Experience is Much the Same

My youngest, Zak, turned 25 today. In med school, being taught by his mom, and in love, he is in a pretty sweet spot. Zak has never known me not working from home. For that matter, Zak’s older sister, Erin, also in med school, is also too young to remember me as anything but a dad working from home. This week also marks my 25th anniversary of working from home. Pretty amazing!

Over that period I have had five companies, lived in four states, worked either alone or run companies with up to 35 employees, raised millions in financing, and mostly continued to move in (I hope) a forward direction. In the early years, I literally commuted between Montana and Washington, DC. Now, I rarely travel. My current company, Structured Dynamics, is pretty simple with low overhead — by design.

In the beginning, it was really not that easy being an independent teleworker. Technology, culture, and practices were very different from today’s. Early on, I was actually asked to speak to groups about what it was like to be a teleworker from home. The fax machine was the key enabler for us early teleworkers. But the fax was merely the first expression of sending content digitally over phone lines, the same basic model that applies today, albeit with many advances. Fortunately, it has been at least ten years since technology was a real challenge to working from home.

Running a company of more than a few individuals (remote or local) is still perhaps a stupid approach when working from home. The times that I did have remote workforces — true for two of my companies — I tried to compensate with much travel and presence. But, honestly, I don’t think a remote boss can ever really work that well, for either the boss or the employees. Key workers or partners can work well from home, but not someone charged with leading and setting the broader culture of an organization with more than a few employees.

Some Notable Events

In 1989 I had been working in downtown Washington, DC, for nearly ten years when I decided to head out on my own. We were getting close to having our second child and we had realized the metro DC area was not the place we wanted to raise our children. My wife, now a biomedical professor, had just embarked on her own career that had limited choices as to institution and location. Working on my own and being my own boss seemed to provide the flexibility that we would need to best meet the needs of two professionals and our family. Though I started my initial energy consulting business in DC right as Zak was born, we shortly thereafter moved to Hamilton, MT, for the second of my wife’s postdoctoral positions. We designed our new home with explicit attention to my office and work situation.

The nexus of my original energy consulting was DC (and other national locations), which caused me to travel up to 150,000 air miles annually during the early years. I literally went through a case of thermal fax paper each month to keep current with my clients and partners. In rural Montana I was quite the anomaly; most of my fellow air commuters were celebrities like Huey Lewis or Andie MacDowell, frequent co-travelers on my flights out of Missoula. Local civic groups often asked me to speak on what it was like to be a telework pioneer. I continued this for nearly five years, until I decided that software trumped straight consulting. By the time of our next move to Vermillion, SD, my transition to software was complete.

Vermillion was the locus of my first multi-employee company, VisualMetrics. All of that company’s employees worked from home. Staff meetings were conducted in a converted basement meeting room at my house. Our company sweatshirts celebrated the beginnings in the basement.

That effort was the springboard to starting up another company, BrightPlanet, which extended our employee reach to Sioux Falls, SD. Soon after that founding, my wife was recruited to a bigger medical school at the University of Iowa, which caused us to move to Iowa City, IA, fifteen years ago. We have been here ever since. Both of my kids graduated from the local West High School and then went on to college and their incipient careers.

For half of this period I kept my relationship with BrightPlanet, frequently making the seven-hour drive to Sioux Falls. That was probably the most difficult period of my teleworking tenure, since the company came to have many employees and the drive was pretty exhausting.

By 2006 I had returned to consulting; I was then recruited to Zitgist, and later formed Structured Dynamics with Fred Giasson. With the start of SD, travel began again. One of the more notable memories of that period was trying to get to a meeting in NYC after the Great Flood of 2008. I earlier wrote about that waterlogged event. I find a pig floating across the Iowa countryside to be an apt image.

Some Observations About Telework

Teleworking has both confirmed (and exceeded) certain expectations I had going in, while also delivering some surprises. Some of this I earlier documented in a twenty-year retrospective.

On the expectations front, I knew that I would gain more useful time during the day by avoiding the commute, which totaled 90 minutes for me in DC. What surprised me, however, was the additional time I gained by not needing to keep home and office systems synchronized. Early on, I found I had gained about two hours of time each day by avoiding commuting and coordination activities! Not bad; not bad at all.

My initial experience in DC working from a converted bedroom also convinced me that it was important to have a space separate from the actual living space. Though I also used a converted bedroom in Vermillion, in my other two locations I have designed specific office spaces, somewhat physically removed from the general living space in the home. Two of the other pictures in this piece show my current office in Iowa.

Planning office space in advance means you can tailor the space to your work habits. For me, I want lots of natural light, a view from the windows, and lots of desk and whiteboard space. I also needed room for office equipment (copiers in the early years, fax, printers and the like) and file cabinets. When in Montana, I designed and had built my own office furniture suite; I have kept that furniture with me ever since.

Teaching myself and the kids that office time and office space were fairly sacrosanct was important, too. Sure, it was helpful to be around for the kids for boo-boos and emergencies and dedicated kids’ time, and to be there for home repairs and the like. But, for the most part, I have tried to treat my office as a separate space and to have the kids do so, too, though this is no longer applicable now that the kids are on their own. While they were growing up, I think this separation of space became natural to the kids and the family, and my office has always been viewed as mostly separate, though the door has never been closed or locked.

I never eat in my office and do not have a television or other distractions. I try to keep regular hours. I treat my office as such, and keep family and home activities distinct. Through the years the biggest challenge has been to not allow the ease of “going into the office” to take over my life. Frankly, I have not always kept this balance, since I have always loved my work and it is also one of my main hobbies and sources of intellectual stimulus.

The biggest surprise over my tenure of working from home has been the Internet and what it has brought to make telework easier. It took a few years from the mid-1990s for this potential to show itself, but it is now so evident as to be unremarkable. All of the advantages brought by cloud computing have given home work nearly the same advantages as a standard work office. Teleworkers can share, collaborate, create, obtain and research equivalently to what a standard office worker may do.

Another pleasant surprise is the decline in meeting demands. Meetings, which are only occasionally essential, require either travel or greater advance planning in a telework circumstance. Otherwise, it is great to avoid the all-too-frequent office meetings, most of which are tremendous time suckers and black holes for productivity.

Because so much can be done digitally, my general office expenses have continued to decline. Faxes and copies are now rarely required. A single Internet line provides email, computing, streaming media, and VOIP benefits. I no longer need to replace my office computer every 2-3 years, and what little office equipment I need (multi-function, etc.) has also dropped greatly in cost.

Another major surprise has been the orders-of-magnitude reductions in what it takes to start up a new business, especially one based on software. Legal forms and incorporation are now almost commodities. Open source software means new applications and platforms may be assembled much more quickly and at much reduced cost. In my early efforts, it was next to impossible to start up a software venture without substantial cash reserves in the bank or some form of external financing, even if just from angels. These efforts used to take tremendous time and effort, and represented a significant overhead cost to actually getting product produced. These impediments have now been largely swept away. (Though external financing still may be useful for marketing and scale-up.)

Wondering About Where Work is Heading

Of course, telework is not for everyone. It applies mostly to knowledge work, and where constant interaction or collaboration is not the norm. Individuals who need much social interaction or who are not disciplined self-starters would find working from home difficult and perhaps unfulfilling.

But the same trends that are making telework easier are also changing the nature of the standard knowledge workforce. One wonders if the heyday of the corporation, made infamous in the 1950s by the man in the gray flannel suit, is not an institutional phenomenon in slow decline. Even larger corporations are moving towards mobile workforces and virtual or temporary offices for their knowledge workers (despite Yahoo’s banning of telework).

It used to be (and largely still is) the case that the attraction of the “city” was in pulling together smart and innovative people in close proximity. Though some of these attractions will never go away, they come at a cost: higher living costs, congestion, crime and sensory assault. The Internet is enabling virtual and specialized communities to form, some of which, due to their niche attractions, may not even be easily sustainable in cities.

We sometimes laugh that we have retreated to an Ozzie and Harriet-like circumstance of safe neighborhoods, great public schools, and a sense of balance and local community. We live in a bucolic spot, with nature and greenery all around us. It may be flyover country, but it is one that has sheltered us from much that has become ugly and challenging in the modern urban environment. Our kids left their hometown for spells of their own to see the big city and touch the elephant. But, they have also gravitated back to a more accommodating and easy pace of living.

I hope that an interconnected world of knowledge will better allow all of us to live in circumstances of our own choosing, ones that help nurture ourselves and our families. For my family, achieving that vision in part has been helped by my being able to work from home. Now, after trying it out for a quarter of a century, I’m convinced I would not have it any other way.

Posted by AI3's author, Mike Bergman, on July 8, 2014 at 5:37 pm in Site-related
The URI link reference to this post is: https://www.mkbergman.com/1744/twenty-five-years-of-working-from-home/
Posted: June 30, 2014

Structured Dynamics Moves to Integrate Key Initiatives

Structured Dynamics is pleased to announce its new UMBEL Web site and set of Web services.

Our first release of the UMBEL site occurred in 2007 while UMBEL was still under development. That site used its own homegrown HTML. The release was followed in 2008 by the addition of our own Web services. The Web services were well received, which caused Structured Dynamics to develop the more general structWSF Web services framework (most recently updated as the OSF Web Services). We subsequently migrated the earlier UMBEL Web services to this more general framework, and also adopted Drupal as the standard content management and Web site component for OSF.

For most purposes, including all client work to date, our OSF framework (Web services + Drupal 7) has been performant and has met client site needs. However, the operation of the UMBEL Web services was often problematic after moving to the Drupal (full OSF) version. Unfortunately, we have seen both performance and stability problems, though calculations over a full 28,000-node graph are a challenge in any environment.

Since the UMBEL structure was an order of magnitude larger than our client work to date, we frankly adopted a posture of occasional monitoring and reboots to keep the UMBEL Web site up. This posture did not limit use of UMBEL for general browsing purposes, but it did limit its usefulness as a working API.

Because the cobbler’s son is often the last to get shoes, we have let the UMBEL Web site chill to a degree in the background. But now, with other imperatives underway and some dedicated time to examine the performance of larger-scale ontologies, we have looked at these items anew. The report card on our current evaluations is contained in a newly released UMBEL Web site with services, which I summarize and provide context for below. What emerges is an interesting story of discovery and growth.

Basis of the New Site

The new UMBEL site and its underlying 28,000-concept graph is consistent with the OSF layered architecture. However, the Web services are now written in Clojure and the Web site framework uses Bootstrap and plain ol’ HTML. These structural and foundational changes have been championed by Fred Giasson, SD’s chief technology officer, who is also putting forth a blog series on Clojure in particular. He also has a current post on the technical basis of these UMBEL site and service changes.

In essence, we have learned two important things about our prior practice with respect to making UMBEL Web services broadly available. First, for UMBEL, we do not need or want our standard configuration of having a Drupal front-end as the interface into OSF. Access to a knowledge graph does not need a complicated interface standing atop a large-scale concept model, and is ill-served by one. APIs and Web services are the most important interaction points with the UMBEL knowledge graph, not a user-oriented Web site.
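To give a flavor of what API-first access means in practice, here is a minimal Clojure sketch of a client call using the clj-http library. The endpoint path and parameter names are hypothetical placeholders for illustration, not the actual UMBEL service signatures:

(require '[clj-http.client :as client])

;; Query a hypothetical concept-search endpoint and parse the JSON reply.
(def resp (client/get "http://umbel.org/ws/search"
                      {:query-params {"q" "mammal"}
                       :as :json}))

(:status resp)   ; HTTP status code, e.g., 200
(:body resp)     ; parsed JSON payload listing the matching concepts

A thin, scriptable interface like this is all a knowledge graph needs to expose; any richer user experience can be layered on top by the consumer.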

Second, in the various phases of our work, we had come to embrace the idea of ontology-driven applications (what we have termed ODapps). The compelling vision behind such structures is to place the emphasis on knowledge structures and data, rather than on more software. Once one begins to unpack that vision, it can also become clear that programming languages that treat “code as data” might be one way to stay consistent with that vision.

Seeking a Sense of Harmony

For years I have been writing about data integration and interoperability and our company has been devoted to the topic. I have written extensively about the importance of RDF and description logics to how we organize and represent data. We were also some of the first to supplement RDF with a faceted text-search engine (Solr) to provide the most responsive query environment across structured to unstructured data. We have also adopted ontologies and the OWL 2 (plus SKOS) languages as standards to both foster and enable interoperability. We have explored native data structs to understand how wild forms of information can be efficiently pipelined into interoperable RDF and text forms.

All of this points to the ideal of the democratization of the information function in the enterprise. In other words, to the idea that how data structures get organized and represented (the ontology side of things) is something that knowledge workers can do themselves, rather than accepting the bottleneck of IT and programmers.

This is well and good except there is a critical “last mile” between data representation and data usefulness. This “last mile” deals with how actual data gets manipulated and then organized and presented (visualized). Query responses, reports, analysis and maps continue to be the choke points between knowledge workers and their IT support. And one need not frame this entirely from an enterprise perspective: these same challenges exist for the individual researcher or the small organization.

So, while one can focus on data and its organization and representation, until we address this “last mile” problem, we still are not likely addressing the largest source of frustration and lost opportunities in the knowledge function.

The reason that simple data struct forms and tools like spreadsheets continue to be popular is that they are empirically the best tools for the “last mile”. Web forms and services are increasingly showing their strengths in this realm.

Once one steps back and looks at the entire cycle from basic datum to actionable knowledge, it is clear that the question of data model is but one portion of the challenge. The remaining challenge is how (now) accessible information can be placed into context and acted upon. Further, if one premise is the democratization of the information function, then the challenge should also be how to provide productive capabilities for the last mile to the knowledge worker. Productivity is enhanced when there are the fewest channels and distortions between problem (signal) and solution.

Fred, in his investigation of functional languages, clearly saw that bringing the languages of code (programming) into the language of data (knowledge workers as expressed in our RDF world view) was one means to reduce the number and lossiness of the channels between problem (signal) and solution. A world view premised on the efficient representation and interoperability of data must logically support the idea of a coding (instructional or language) base aligned as well to problems. Moreover, since software guides the actual computer operations, a form of the software that supports the nature of the data should also provide a more performant framework for moving forward. In technical terms, this is known as homoiconicity.

Whether one looks to the intellectual foundations of Charles S. Peirce or Claude Shannon (both of whom we do), one can see that the idea of signs and information theory means finding both data representations and code that minimize communication losses and promote the accurate transfer of the message. Lossless data transmission is one contributor to that vision, but so too is a functional representation for how the information is to be processed and transformed that aligns most closely with the information at hand.

Ergo, a better model for data is not enough. A better model of how to manipulate that data (that is, software) is also needed that aligns with the idea of coherence and structure in the underlying information. For our purposes, we have chosen Clojure as the functional language basis for these new UMBEL Web services. Not only is it performant, but it aligns well with the creation of domain-specific languages (DSLs) that also promise to democratize the computing function for the knowledge worker.
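To make the “code as data” point concrete, here is a minimal Clojure sketch; the query form and concept name are hypothetical illustrations, not part of the actual UMBEL services:

;; Homoiconicity: a Clojure expression is an ordinary list that can be
;; built and inspected like any other data structure.
(def expr '(+ 1 2 3))   ; a list holding a symbol and three numbers
(first expr)            ;=> +   (inspected as data)
(eval expr)             ;=> 6   (evaluated as code)

;; The same property underlies small DSLs: domain terms stay plain data
;; until a program chooses to interpret them.
(def concept-query '(narrower :umbel/Mammal))   ; hypothetical DSL form

(defmulti run-query first)   ; dispatch on the query's leading symbol
(defmethod run-query 'narrower [[_ concept]]
  ;; stub interpreter; a real service would consult the concept graph
  (str "narrower concepts of " concept))

(run-query concept-query)   ;=> "narrower concepts of :umbel/Mammal"

Because the query is just data, it can be stored, transformed or validated against an ontology before it is ever executed, which is precisely the alignment of code and knowledge structure argued for above.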

Bringing the Pieces Together

Fred and I founded Structured Dynamics a bit more than five years ago. But, we had worked together much earlier on UMBEL and Zitgist. For nearly ten years now, we have episodically emphasized a few different initiatives and passions.

One of those passions has been the structure of data and information. It is this perspective that brought us to RDF and data structs (and our irON efforts) at various times. The idea of structure is a basis for our company name, and represents the belief that structure can be brought to unstructured forms (via tagging, for example). Structure is perhaps the most common notion or concept in my own writings for a decade.

Another need has been the idea of making semantic technologies operational. We have been keen researchers of the tools space and algorithms and such since the beginning. We observed early on that many innovative and open source semantic programs existed, but most were the result of EU grants or academic efforts elsewhere. Thousands of tools existed, but very few had either been evaluated or stress-tested. By bringing together and integrating the best-of-class tools, we could begin to provide a useful semantic platform for enterprises. This motivation was the genesis for the Open Semantic Framework, and has been the major source of our client support since SD was founded. We have finally created an enterprise-capable platform and have done much to transfer its technology. But, these concepts are difficult, and much remains to be done before semantic technologies are a standard option for enterprises.

In still another vein, our first love and interest has been knowledge bases. We first identified the need for UMBEL years ago when we perceived that an organizing vocabulary would become an essential glue on the Web. We pursued and studied Wikipedia and how it is informing knowledge bases. How instance data is represented is a passion, and is central to how these knowledge bases (KBs) get leveraged going forward.

As a smaller consulting and development boutique, we have needed to be opportunistic about when and where we devoted efforts to these pieces. So, over the months and years, we have at various times devoted ourselves to data models and ontologies (structure), the Open Semantic Framework (platform), or UMBEL and Wikipedia (KBs, knowledge bases). Depending on funding and priorities, each of these threads received episodic attention and focus. But, truth is, each of these pieces was developed in (project-level) isolation from the whole. Such piecemeal development was essential until each component achieved an appropriate degree of maturity.

I could claim we foresaw some years back that all of these pieces would eventually reinforce and bolster one another. Though there is a small bit of truth in that statement, the way things have actually unfolded shows, as experience and sophistication have been gained, that there is a synergy that comes in the interplay of these various pieces. The goodness is that Structured Dynamics’ efforts (and those of its predecessors) were building inexorably to the possible cross-fertilization of these efforts.

Once this kind of realization takes place — that data, code and semantics move hand-in-hand — it then becomes logical to look at the entire knowledge ecosystem. For example, it is not surprising that artificial intelligence, now in the informed guise of KB-backed systems, has again come to the fore. It is also not surprising that what software and programming languages we bring to bear also directly interact with these concerns. Just as Hadoop and non-relational database systems have become prominent, we should also investigate what kind of programming languages and constructs may best fit into this brave new information world.

What we have seen from that investigation is that functional languages (with their DSL offspring) somehow fit into the overall equation moving forward. SD has moved from a single-focus endeavor to one explicitly looking at integration and interoperability issues. What we had earlier seen as (largely) independent pieces we now see as fitting into a broader equation of related emphases:

Structure + Platform + KBs + Functional Language = Knowledge Worker-based Interoperability

We are seeing artificial intelligence moving in these directions. As a subset of AI, I suspect the semantic Web will move in the same direction.

We clearly now have the theory, the data, the understanding of semantics, and the languages and data representations that can make this democratized interoperability real. This new UMBEL Web site is the first expression of how these pieces can begin to work together into a compelling, accessible whole.

We welcome you to visit and to take advantage of UMBEL’s fully accessible APIs.

Posted: June 15, 2014

Unpacking the Growth-producing Factors of Production

In my last article on artificial intelligence, I made the statement that “. . . innovation is the source of wealth creation.” I made that unquestioned statement as part of my reflexive world view. But, when I re-read the article after its posting, I asked myself: What are the actual arguments and evidence for this innovation-to-wealth assertion? Surprisingly, there is not nearly the evidential basis for this assertion that I would have assumed.

Since Adam Smith, the signal focus of economics has been its attempt to explain the basis of growth. This is not surprising since the birth of the field of economics also corresponded to an historically unprecedented inflection point in economic growth (see below). Smith ascribed this source to productivity resulting from the division of labor, using his famous example of the pin factory. But it is really only within the past fifty years or so that economists have begun unpacking the growth function from the other factors of production.

Growth is a percent increase from a prior state. In economic terms, growth compounded over a period of time has the virtuous reward of resulting in increased wealth. Economic growth is often measured through such means as revenues (for the individual firm) or GDP (for regions or countries). Net worth (for the firm) or GDP per capita or net worth (for individuals) measure the wealth associated with the current stock of economic goods at any given point in time. And, of course, wealth alone also masks the importance of changes in comfort, convenience, freedom, choice, leisure, mobility and other values that may accompany growth and transcend the material. Too, some “externalities” of economic growth may be negative, such as congestion or pollution, but it is also true that wealthier societies tend to regulate against these effects.

Not only have we seen discontinuities in growth (and then wealth) throughout history, but we see them today between individuals, firms, industries, cities, regions and nations. Unpacking the economic factors of production that lead to growth thus has immense importance across the entire economic spectrum — from individuals to nations. Explicating and then managing these factors are intuitively a basis for improving the welfare of any economic actor. Unlocking the nature of growth, or better understanding that nature, should aid in helping to promote still further growth and wealth. Though questions of distribution and fairness may remain, a rising tide lifts all boats.

Thus, understanding the basis of growth, sustained over time, leading to greater wealth for individuals or nations, is the central question facing economics. And, as we see below, that understanding in turn is intimately related to the importance of information and innovation.

The Common Sense Argument

If we toil, year by year, doing the same activity, like growing wheat, and we gain the same harvest for the same labor and land and inputs, that is what we expect. Yet sometimes, the weather or rainfall patterns may differ, or we may have more children helping us in the fields, or a mule to help plow. Money helps us buy more of the important inputs, maybe more land, more mules or the comfort to have more children. These are the traditional factors of production: that is, land, capital and labor.

If we add more of these factors to the mix, we still understand we have merely tweaked the standard basis of our wheat production. Differences in the amount of these factors of production, throughout most of human history, are what accounted for the differences between rich and poor, landlord and serf. If, by virtue of having more land or children, we are now able to feed more people, we are by first definitions more wealthy, and if we can accumulate more of this wealth, we can leverage these standard factors even more. When we can keep more of what we produce we become more wealthy. Control and exploitation have been logical paths to much wealth creation.

These factors are pretty easy to observe and track. We intuitively understand that more inputs of labor, land or capital can themselves result in growth, but a growth that feels and appears rather fixed based on the change in these inputs. This kind of growth has a more-or-less trending return based on changes in these inputs. These types of inputs may also be subject to diminishing returns, wherein adding more of a given factor produces diminishing or negative payoff. For example, adding more fertilizer to the wheat crop produces less per unit output yield after some optimum, and then can actually reduce yields by burning the crop. Or, while a computer increases the productivity of an individual worker, giving her more computers may actually degrade her overall performance.

But there is also clearly a different kind of growth that is not constrained to a fixed or declining return based on inputs. Perhaps we have a neighbor that raises more wheat, possibly on drier, more marginal land, or with less water or fertilizer. His yield exceeds our own. These differences occur because our neighbor is doing something different and is producing more given his inputs.

Innovation is an individual affair in its discovery, but a communal one in its application (at which point it is known as information). Better ways of planting or spacing the wheat, perhaps using a plow, or selecting certain wheat strains for next year’s plantings, or irrigating the land, or providing harnesses to the mules, or dividing and specializing the responsibilities amongst the children, can result in real differences in how much gets harvested for a “similar” set of inputs. And, what I initially innovate becomes information for the next farmer to emulate. Some of these innovations are new devices, such as harnesses or plows. Some of these innovations are new practices, such as tilling or irrigation methods or specializations in tasks or labor. And, of course, not every farmer must innovate on his own. Copying and imitation diffuse these changes across farms and workers.

Truly, for millennia, this is how human progress took place. Some innovations, such as fire, the wheel, iron and bronze, the arch, alphabets, the plow and the yoke, had material benefits to all who encountered them. These innovations were fundamental and diffused at the pace of human movement. But, one could argue, each was understood to be a flash of insight, and not a product of systemic information and process. Further, innovations tended to diffuse slowly, along the pace and concentration of trade routes. The innovative event was quite rare, and most practices had been stable for centuries. It is not at all surprising that early economic ideas tended to focus on the traditional factors of production of land, labor and capital. These had been the steady constants for what had been very slow growth for centuries.

But then a real discontinuity in economic growth compared to all previous recorded history occurred in the early 1800s. Historically flat income averages skyrocketed, as this famous figure showing global changes in per capita (person) GDP from Angus Maddison illustrates [1].

William Nordhaus has captured a similar discontinuity looking at the price of light, normalized according to the labor effort needed to obtain 1000 lumens of light. It, too, shows an exponential decrease in the price of lighting beginning about 1800 [2]:

These comparatively abrupt changes in growth rates, and concomitant changes in wealth, that were orders of magnitude higher than what had been experienced before in human history, garnered the attention of economists and economic historians as never before [3].

From the beginning, this difference in growth rates was largely attributed to “technological change”, but the specific causes of this change have been ascribed to many things. The close concurrency to the Enlightenment suggested some fundamental change in thinking. Similarly, the concurrence with the Industrial Revolution suggested the importance of machines, prime movers and the harnessing of energy. Cultural and religious factors have been posited to explain why Britain and then the United States were the initial centers of growth. The invisible hand of the market and division of labor and specialization were advocated by Adam Smith. I have argued the importance of the mechanical printing press and pulp paper in bringing information to a broader swathe of society [3]. Education and support for basic and applied research have their advocates. Financial and banking innovations, and the rule of law and patents and other intellectual property rights, have also been cited as causes.

Common sense tells us that all of these factors, and perhaps more, can all work as force multipliers to the traditional inputs to the economic function.

But, until the mid-1950s, the broad sense of “technological change” and vague causative factors were more often than not argued in an anecdotal, literary way. Empirical datasets to test hypotheses were few and far between, and quantitative means of reasoning over economic problems were only just emerging. Economic growth theory was only beginning to be a discipline in its own right.

The Theoretical Arguments

Joseph Schumpeter, in The Theory of Economic Development, first published in 1911, argued that innovation was central to economic growth and constantly disrupted the general equilibrium of market exchange [4]. Innovation granted the firm a temporary monopoly status in which to charge higher rents, thereby providing an incentive for further innovation. Schumpeter’s emphasis on entrepreneurship and his popularization of “creative destruction” recognized that new innovative market entrants may cause older firms to become obsolete. He tied these ideas into his basic views on business cycles, also driven by technological change. Innovation was central to Schumpeter’s economic world view.

But the theoretical story really begins in earnest after World War II when the hidden X factor of technological change — in what came to be expressed as total factor productivity — came to the fore to complete the economic growth equation [5].

The Exogenous Model

Robert Solow is an American economist particularly known for his work on the theory of economic growth; the exogenous growth model is named after his work. Solow took courses from Schumpeter at Harvard and was influenced by his views on innovation and technological change [6], though Solow was also part of the generation of economists embracing the new discipline of mathematical or quantitative economics, which was foreign to Schumpeter.

As noted, economic growth was known to go beyond the typical factors of production. Solow’s insight in two papers in 1956 and 1957, for which he won a Nobel prize, was that technological change, what he called “technological progress,” must be the “residual” left over from empirical growth once the traditional inputs of labor and capital are removed [7].  Using his model, Solow calculated that about 87.5% of the growth in US output per worker was attributable to technical progress [8]. A substitute term is total-factor productivity (TFP), the “residual” in total output not credited to the traditional inputs of labor and capital. By definition, TFP cannot be measured directly.

We can express this mathematically by showing total output (Y) as a function of total-factor productivity (A), capital input (K), labor input (L), and the two inputs’ respective shares of output (α and β are the output shares for K and L, respectively):

Y = A \times K^\alpha \times L^\beta
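Taking logarithms and time derivatives of this production function yields the standard growth-accounting identity, which is how the “residual” is computed in practice (the numbers below are purely illustrative):

g_Y = g_A + \alpha g_K + \beta g_L \quad \Rightarrow \quad g_A = g_Y - \alpha g_K - \beta g_L

For example, if output grows 3% per year, capital 2%, and labor 1%, with \alpha = 0.3 and \beta = 0.7, then g_A = 3\% - (0.3)(2\%) - (0.7)(1\%) = 1.7\%. That 1.7 percentage points per year is the TFP residual attributed to technical progress.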

These considerations make the exogenous growth model one of the neo-classical growth models, wherein the long-run rate of growth is exogenously supplied, apart from the internal growths of labor and capital. Within this camp, one explanation is based on the savings rate (the Harrod–Domar model); the other, as shown herein, is the rate of technological progress (Solow-Swan model [7]). By definition, in either of these so-called neo-classical models, the savings rate or the rate of technological progress remains unexplained. They are abstract external forces that are just “out there.”

The TFP approach remains strong as a basis for estimating total non-traditional inputs to the production function. It also provides a specific target within quantitative economics to begin addressing explicitly a placeholder for innovation, technological change, information, or other non-traditional considerations for what constitutes the overall production function. But, frankly, TFP still is a blob that needs to be unpacked and teased apart.

The Endogenous Model

A seminal 1962 paper by Kenneth Arrow introduced the concept and evidence for what he called “learning by doing,” now more formally understood and accepted as the learning curve. Unlike a specific innovation, the idea of the learning curve captured that experience and practice led to efficiencies and productivity. In other words, more output could be gotten from fewer inputs as we learned better how to do things.

By the 1960s and 1970s it was becoming clear that developed economies were becoming information economies, increasingly staffed by knowledge workers, and these forces needed to be made explicit within quantitative models. Robert Lucas, now a Nobel laureate from the University of Chicago, probed the questions of rational expectations and internal factors promoting growth. By the mid-1980s, a group of growth theorists had become increasingly dissatisfied with common accounts of exogenous factors determining long-run growth. The focus shifted to the need for quantitative models that made these “technological” or “information” factors explicit. In other words, these “X” factors needed to be moved from a lumped, external consideration to an internal one within the models, with their own multipliers and feedbacks. In short, these new growth factors needed to be made endogenous (internal), not exogenous (external).

A book by David Warsh in 2007, Knowledge and the Wealth of Nations: A Story of Economic Discovery, is a comprehensive explanation of this transition, with a focus on Paul Romer, then of Stanford University, but earlier a colleague of Lucas, pivoting on his seminal paper, “Endogenous Technological Change” [9]. By bringing the consideration internal to the model, it could be probed, inspected and broken into parts.

Besides this essential change in focus, this and related Romer papers also brought two further key insights. First, information and its artifacts are also products and outputs of the economic function. And, second, once produced, many information or knowledge assets may be produced or distributed at essentially zero marginal cost. A new dimension in “rival” and “non-rival” goods had been added to the growth theory lexicon. Information and knowledge themselves were becoming both inputs and outputs to the economic function. This understanding required still further unpacking.
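In stylized textbook form, the heart of Romer’s endogenous model is an ideas-production equation in which the existing stock of knowledge is itself an input to creating new knowledge (a simplified rendering, not a full statement of the model):

\dot{A} = \delta H_A A

Here A is the stock of ideas, H_A is the human capital devoted to research, and \delta is a research productivity parameter. Because A appears on the right-hand side, the growth of knowledge feeds on itself inside the model, rather than arriving as an unexplained external force.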

Refining Inputs and Parameters

As a non-economist, I find it a bit perplexing how long it took the discipline to start explicating and unpacking the factors of economic growth [10]. To be fair, most every domain of human inquiry has suffered from lacking essential test datasets and statistics upon which to probe and test assumptions. There is perhaps no better poster child for this lack of reference datasets than what has been necessary to test and probe the questions related to economic growth. Yet, as our intro suggested, there is also perhaps no more important area of human inquiry than understanding these non-traditional factors of economic growth. Better understanding of these factors will impact all economic actors, from individuals to firms to nations.

Our first approximation must be to get to common units and denominators that enable calculation and comparison. Things like GDP, for example, need to be re-expressed as per capita figures to take out general population growth; money terms need to be expressed in real dollars (or whatever currency), perhaps even further adjusted to account for differences in assumed deflators and inflators across metrics. We’re getting smart enough about this stuff that we can now apply best practices for common data comparisons.

Even the traditional factors of production need further attention. Let’s first take the concept of labor. Labor is ubiquitous in virtually all economic calculations.

Most economic datasets compare items across space and time. A simple labor adjustment to per capita or hours worked can mask these underlying structural changes: life expectancy of the workers; male-female participation in the workforce; hours worked per week; holidays and vacation time; changes in retirement ages; general population and cohort growth; and, then and only then, labor productivity. Of course, the reasons for labor productivity itself come back to innovation and information: the use of better machines, practices and methods by which we do our tasks.

Similarly, the idea of “human capital” has also become predominant in the economic growth literature. Is human capital a subset of general capital? Of labor? Does human capital include education, training, experience, intellectual capabilities, etc.? And, if so, how can these be measured and made consistent for comparison or decision purposes?

We also see that the nature of innovation, information, knowledge, intellectual property (IP), practices, information artifacts, and the like, lacks any consistency as to definitions and boundaries. How can nebulous concepts be compared to still other nebulous concepts in order to draw meaningful conclusions? How can test datasets be created to refine these questions if the basic concepts and definitions remain ambiguous?

We see, for example, that knowledge and its role in economic growth may vary as to whether the knowledge is propositional (the ‘sciences’), prescriptive (‘recipes’), a discovery, or an invention [10]. These may not be the best splits, but clearly we must be able to distinguish at minimum innovative ‘aha!s’ from the tech transfer of best practices. These are fundamentally different notions of information. And, of course, none of this discussion directly addresses the internal controversy within the economics community over information vs. knowledge.

Once we normalize our traditional inputs to the economic function to appropriate per unit bases expressed in constant, real dollars, the residual “total factor productivity” is all due to innovation and information. Innovation is the spark that brings us new methods and devices for doing things, as eventually disseminated throughout the economy via the diffusion of information. Since innovation is itself based on information, we can truly say that information is the fount from which all per capita growth and wealth ultimately derives.

The Empirical Argument

In a recent paper on total factor productivity going back 150 years to the Civil War, researchers from the Congressional Budget Office have calculated that private-sector nonfarm TFP in the United States grew at an average rate of roughly 1.6 percent to 1.8 percent annually, but has experienced several surges occurring in varying parts of the economy [11].

On a different basis, I have used Robert Shiller’s published data on per capita GDP going back to 1900 to show a similar growth trend [12]. The trendline from this data series shows an annual compounded growth rate of about 1.84% per year:

These kinds of growth rates imply a doubling of wealth roughly every 38 to 44 years.
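The doubling time follows directly from compound growth:

t_{double} = \frac{\ln 2}{\ln (1 + g)}

so a 1.6% annual rate doubles wealth in about 44 years, while 1.84% does so in about 38 years.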

When TFP was first being formulated, Solow calculated that 87.5% of the growth in US output per worker was attributable to technical progress [8]. In 1954, Solomon Fabricant estimated 90% of growth was due to technological factors [13]. But, as we have seen, these were “lumpy” measures and factors like the changing size and composition of the work force (especially the growth of women and two-earner families) also masked other changes.

A different way to approximate the role of technological progress is to look at the market valuation of US firms. Again using Shiller’s CAPE data [12], but now also adjusted to a per capita basis (for the US, [14]), we see the following trends since 1900:

Nominally, labor is removed from this equation because it has been accounted for as an expense on the firm’s books. Similarly, the return due to capital has been accounted for via the payout of dividends. Under these bases, we see that the growth in value of large US firms — despite the severe oscillations due to market cycles — has been a bit more than 1 percent per annum compounded. This would suggest that the combination of innovation and information accounted for about 55 percent of the overall per capita GDP growth rate noted earlier.

But this proxy is itself flawed in many ways. First, the S&P index is for only the 500 largest US firms, which are certainly not representative. Also, comparing GDP and S&P figures hides the fact that much of the growth and productivity of US firms occurs via foreign subsidiaries. Also, of course, labor and capital productivity — themselves the result of innovation and information — are also taken out of the S&P estimates. The discrepancy between TFP estimates as a source of growth and intrinsic S&P valuation growth is in part explained by this different accounting metric. But the real issue in all of these proxies is that we are not yet fully unpacking the various sources of information and innovation as the drivers of underlying growth.

Only within the last few years have we begun to assemble the right datasets and account for the right factors in this unpacking of growth factors. For example, between 2000 and 2005, estimates at the industry level indicate that almost half of aggregate productivity was due to productivity growth originating from information technology [15], though the IT industries themselves only accounted for a little over 3% of nominal aggregate value [16].

These findings are from a more detailed analysis of productivity and growth by Jorgenson, Ho and Samuels [16]. Their analysis attempted to explicitly separate out innovation from the diffusion of prior innovations due to information. In the authors’ words:

“We show that the great preponderance of economic growth in the US since 1947 involves the replication of existing technologies through investment in equipment, structures, and software and expansion of the labor force. Contrary to the well-known views of Robert Solow (1957) and Simon Kuznets (1971), innovation accounts for only about twenty percent of US economic growth. This is the most important empirical finding from the recent research on productivity measurement surveyed by Jorgenson (2009). “

I think some of these differences are due to semantics and terminology. Remember, early residual and TFP discussions were centered around the concept of “technological progress”. What Schumpeter referred to as “innovation” is now understood to be too broad; innovation is but a part of the overall growth effect due to information. What is helpful from these more recent studies is the separation of innovation from information dissemination. The next step, for which we have not yet developed useful datasets, would be to unpack the ideas of innovation and information into the categories from Mokyr [10]. Namely, these are discoveries and inventions (innovation), and propositional and prescriptive information (akin to the tacit knowledge first distinguished by Michael Polanyi).

The aphorism that we cannot understand what we cannot measure applies here. To take our understanding of these empirical factors to the next level we will need to refine our concepts and gather defensible data for estimating them. A proper accounting for growth should also likely distinguish transformative innovations (such as the printing press, electricity and computing) from other discoveries and inventions.

The Beautiful Synergy of Innovation and Information

By 2009, Romer and Jones were able to claim that the endogenous growth model had been proven, and they put forward six research questions to pursue over the coming 25 years, including the role of human capital, differential growth rates between countries, and accelerated growth [17]. Innovation had finally assumed its central, internal role in understanding growth.

Innovation is the root source of new devices, new technologies, new practices, new methods and new theories. Innovation, in turn, is based upon the foundational substrate of information. As new innovations occur, new information is added to this substrate, all in a virtuous circle.

Markets will rise and fall, and business cycles will gyrate. New businesses and business models will emerge while others are destroyed or wither away. These reflections of animal spirits and uneven (imperfect or wrong) information can never be smoothed. But, the trajectory of growth, fueled by the beautiful synergy of innovation and information, points to an optimistic future.

To be sure, I am not positing a near-term upward trend in the stock markets. In fact, my own personal view is that markets are temporarily overbought, with a higher near-term probability of declines than rises. These oscillations are part and parcel of market cycles. My longer-term optimism reflects more fundamental trends.

We are all aware of the explosion of information and content. Today, like the broadening base of information and literacy that I have elsewhere posited as a major factor in the first upward inflection of economic growth in the 1800s [3], we are in the midst of a still newer — and optimistic — inflection point. Digital content and the Internet are bringing information to nearly every human on earth. Assistive technologies are bringing this information to those previously shut out due to disabilities in sight, hearing or mobility. Non-rivalrous goods can be duplicated at essentially zero cost, and open source and broad access mean new ventures can be assembled and tested in the marketplace with unprecedented speed and at unprecedented low cost. Innovation is no longer the remit solely of an educated elite, but is available to every thinking person on earth.

These are all harbingers of continued growth and increases in wealth. Sure, ignorance, despotism, fanaticism and prejudice will cause some periods and pockets to be shut off from these trends, but the broad sweep of information and history looks assured.

Innovation, as Schumpeter first posited a century ago, grants the firm a temporary monopolistic advantage. In a time of openness, information growth, and universal access to that information, the winning competitive formula for firms and knowledge workers alike is constant innovation. Though a commitment to innovation leads to a bumpy path, it is an upward one, and most assuredly the path that is on the right side of history.


[1] The historical data were originally developed in three books by Angus Maddison: Monitoring the World Economy 1820-1992, OECD, Paris 1995; The World Economy: A Millennial Perspective, OECD Development Centre, Paris 2001; and The World Economy: Historical Statistics, OECD Development Centre, Paris 2003. All these contain detailed source notes. Figures for 1820 onwards are annual, wherever possible.
For earlier years, benchmark figures are shown for 1 AD, 1000 AD, 1500, 1600 and 1700. These figures have been updated to 2003 and may be downloaded by spreadsheet from the Groningen Growth and Development Centre (GGDC), a research group of economists and economic historians at the Economics Department of the University of Groningen, headed by Maddison before his passing in 2010. See http://www.ggdc.net/.
[2] William D. Nordhaus, 1996. “Do Real-Output and Real-Wage Measures Capture Reality? The History of Lighting Suggests Not,” in Timothy F. Bresnahan and Robert J. Gordon, eds., The Economics of New Goods, University of Chicago Press, ISBN: 0-226-07415-3, January 1996, pp. 27 – 70. See http://www.nber.org/chapters/c6064.
[3] I have addressed these broad topics firstly in, “Information is the Basis for Economic Growth” (Adaptive Information blog, AI3, August 23, 2007), and in some book reviews, notably “Knowledge: Unravelling the X Factor in Growth and Wealth” (Adaptive Information blog, AI3, June 21, 2006) and “Historical Origins of the Knowledge Economy” (Adaptive Information blog, AI3, July 6, 2006).
[4] William Lazonick, 2013. “The Theory of Innovative Enterprise: A Foundation of Economic Analysis,” AIR Working Paper, #13-05/01, 36 pp., May 2013. See http://www.theairnet.org/files/research/WorkingPapers/Lazonick_InnovativeEnterprise_AIR-WP13.0501.pdf.
[5] “The growth of growth theory,” from The Economist, May 18th 2006.
[6] “Robert Solow on Joseph Schumpeter,” in Economist’s View, Thursday, May 17, 2007. Retrieved on June 11, 2014.
[7] Solow’s exogenous model of economic growth, also known as the Solow-Swan neo-classical growth model because it was independently discovered by Trevor W. Swan and published in The Economic Record in 1956, allows the determinants of economic growth to be separated into increases in inputs (labor and capital) and technical progress.
[8] Robert M. Solow, 1957. “Technical Change and the Aggregate Production Function”. Review of Economics and Statistics (The MIT Press) 39 (3): 312–320. doi:10.2307/1926047. JSTOR 1926047.
[9] Published in the Journal of Political Economy in 1990.
[10] In 2002 Joel Mokyr, an economic historian from Northwestern University, wrote a book that should be read by anyone interested in knowledge and its role in economic growth. The Gifts of Athena: Historical Origins of the Knowledge Economy is a sweeping and comprehensive account of the period from 1760 (in what Mokyr calls the “Industrial Enlightenment”) through the Industrial Revolution beginning roughly in 1820 and then continuing through the end of the 19th century.
[11] Robert Shackleton, 2013. “Total Factor Productivity Growth in Historical Perspective,” Working Paper Series, Congressional Budget Office, 21 pp., March 2013. See http://www.cbo.gov/sites/default/files/cbofiles/attachments/44002_TFP_Growth_03-18-2013.pdf.
[12] Stock market and cyclically-adjusted price earnings (CAPE) ratio data from Robert J. Shiller, 2000. Irrational Exuberance, Princeton University Press. Data as periodically updated and available from http://www.econ.yale.edu/~shiller/data/ie_data.xls.
[13] Solomon Fabricant, 1954. “Economic Progress and Economic Change,” part of the 34th annual report of the National Bureau of Economic Research, New York.
[14] CAPE per capita adjustment from http://www.multpl.com/united-states-population/table?f=m.
[15] Steven Rosenthal, Matthew Russell, Jon D. Samuels, Erich H. Strassner, and Lisa Usher, 2014. “Integrated Industry – Level Production Account for the United States: Intellectual Property Products and the 2007 NAICS,” May 15, 2014 (preliminary), 24 pp. See http://scholar.harvard.edu/files/jorgenson/files/jorgenson_ho_samuels_worldklems_2014_0519.pdf.
[16] Dale W. Jorgenson, Mun S. Ho, and Jon D. Samuels, 2014. “Long-term Estimates of U.S. Productivity and Growth,” prepared for presentation at the Third World KLEMS Conference, Growth and Stagnation in the World Economy, Tokyo, May 19-20, 2014. See http://www.worldklems.net/conferences/worldklems2014/worldklems2014_Ho.pdf.
[17] Charles I. Jones and Paul M. Romer, 2009. “The New Kaldor Facts: Ideas, Institutions, Population, and Human Capital,” Working Paper 15094, National Bureau of Economic Research, 31 pp., June 2009. See http://www.nber.org/papers/w15094.

Posted by AI3's author, Mike Bergman, on June 15, 2014 at 11:24 pm in Adaptive Information, Adaptive Innovation
The URI link reference to this post is: https://www.mkbergman.com/1736/innovation-information-growth-and-wealth/
Posted:June 2, 2014

Dawn of Artificial Intelligence: Eight Massive Trends are Waking AI from Its Dark Winters

When I inaugurated this AI3 blog in 2005 I made this statement in the about section to clarify that the “three AIs” stood for adaptive information, adaptive innovation, and adaptive infrastructure, and not the AI of artificial intelligence:

. . . I personally believe artificial intelligence to be a lot of hooey and hype at best, and a misnomer and misdirection at worst. . . . ‘Artificial intelligence’ is a misdirection of attention and energy.

Gulp. OK. Time to take my medicine.

I am today formally retracting those statements — probably should have done so some time ago — and want to explain why. As much as anything, it has to do with the changing understanding of what artificial intelligence is, recently affirmed by global-scale applications and technologies working effectively right now.

Many Winters within AI

Though the idea of automatons and intelligent agents standing in for humans is about as old as human storytelling, the real basic ideas around artificial intelligence became current as part of the World War II effort and were finally given a name at a famous 1956 conference at Dartmouth. Initial namers and advocates of artificial intelligence included such founders as John McCarthy, Herbert Simon, Claude Shannon and Marvin Minsky. Money to support early interest in artificial intelligence came from the part of the US military that eventually became ARPA (now DARPA), with the funding going to individual researchers to use as they wished, as opposed to specific projects. Along with many futuristic visions of the 1950s to 1970s, the promises for artificial intelligence were bold, including being able to capture and automate most basic human capabilities.

Popular movies and books promoted the ideas of autonomous robots that we could speak with and command and that would anticipate our needs and wishes so as to act as simulacrum agents lessening our burdens and adding to our leisure and capabilities [1]. Algorithms would be discovered and codified that would mimic the basis of human thought and intelligence. The idea of the Turing machine established a defensible basis for foreseeing that any problem of mathematical logic could be captured and taken on by computers.

The predictable failure of this vision to deliver caused a backlash, sufficient that the US Congress prohibited further open-ended funding via the Mansfield Amendments in 1969 and 1973, such that by 1974 AI funding in the US had largely dried up. Similar restrictions were applied to the British research community. This backlash brought on the first of what would prove to be many “winters” of funding and acceptance for AI.

Roughly a decade later, in response to the perceived Japanese threat of “fifth-generation” computing in the mid-1980s, a number of AI programs were again funded. While hardware developments proceeded apace, efforts around McCarthy’s AI-oriented language Lisp and common sense logic frameworks (what are now called ontologies or knowledge graphs) such as Cyc began to receive sponsorship again. This was also the heyday of “expert systems,” populated by knowledge engineers charged with interviewing internal subject matter experts (SMEs) to codify their knowledge for later reuse. These efforts, too, disappointed in the practical benefits they delivered. More AI winters ensued.

AI (“artificial intelligence”) thus again lost its credibility. Some researchers moved into specific algorithmic disciplines — Bayesian statistics and neural networks predominant — while others shifted into such areas as “hyperlinks” and what became the semantic Web. Today, one could argue that the lost mojo of AI has affected those in the semantic Web in an almost dialectic way. First, there are those who embrace the idea of intelligent agents and global knowledge structures, more-or-less in keeping with some vision of artificial intelligence. Second, there are those who have seen the failures of the past, do not want to repeat them, and are more inclined to support “loosely bounded” structure focused on bottom-up assertions. OWL modelers and ontologists tend to occupy the first camp; linked data advocates, the second.

The natural community for knowledge representation and management has thus tended to bifurcate a bit: global, “visionary” AI types, with history to overcome and challenged by the sheer scale of what emerged from the Internet; and incrementalists, happy to accept a bit of RDF structured data in the hopes of an ongoing evolution to more structure and interoperability.

Ten years ago, when I made the conscious decision to reject the AI of artificial intelligence as a label for this blog, an algorithmic vision of AI seemed “wrong” and not in keeping with the general trends of the Web. That was the basis and justification for my then-statements on AI. But a funny thing happened on the way to a cogent forecast: a massive disruption called the Internet came about that — while it took a decade to gestate — changed the whole underlying substrate over which AI could take place. Like so much of history, innovation presented us with an entirely different reality upon which to “understand” and develop artificial intelligence. It is those changes — plus the fruits flowing from them — that are defining AI in a new light.

Eight AI Megatrends

There are, by my reckoning, at least eight major trends that have been improving AI’s prospects, especially over the past decade. (Trends #3 to #7 below are quite specific to AI; the other three are general.) Some of the proven wonders we now see in use, such as speech recognition, speech synthesis, language translation, entity recognition, image and facial recognition, computer vision, question answering, autocompletion and spell correction, recommendation systems, sentiment analysis, information extraction, document categorization, natural language processing, machine learning, reasoning, optical character recognition, word sense disambiguation, search and information retrieval, and text generation and summarization, with their many additional categories and sub-categories, are proof these trends are making a difference. None individually constitutes what may be called “AI”, but, in combination, they show compellingly that much of AI’s initial vision is indeed being fulfilled, to some degree and in some specific aspect, today.

Nearly all of these applications correspond to the Grand Challenges for symbolic computing identified in the 1980s. Until a decade ago, very few of them save search and initial NLP were producing results with sufficient quality and accuracy. Now, all are.

In the past ten years, most evident in the past five, tremendous breakthroughs have occurred across the entire spectrum of artificial intelligence applications. We can point to at least the eight following megatrends enabling these breakthroughs.

#1 Computer Power

A constant river of innovation has fueled exponential power improvements in computers since the first transistor. Moore’s law has led to massive improvements in hardware cost, numbers of computation cycles, and amounts of bits stored. Networking capabilities are now truly global, and the number of interconnected devices runs into the billions. Computer software innovations lead to faster and better procedures and methods; as a category, software innovation likely exceeds hardware improvement as a source of computing productivity. What today fits in the palm of a hand would have required entire rooms thirty years ago, and those rooms could not do one billionth of what can be done today.
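
To give a feel for that compounding, here is a quick back-of-the-envelope calculation (my own illustrative assumptions, not figures from this post):

```python
# Moore's-law-style compounding (illustrative assumptions only):
# transistor density doubling roughly every 24 months for 30 years.
doublings = 30 * 12 / 24
print(f"density factor: {2 ** doublings:,.0f}x")  # ~32,768x from density alone

# Gains in clock speed, storage, cost, and parallelism compound
# multiplicatively on top of density, so aggregate capability factors
# grow far larger than density alone suggests.
```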

The rich savanna of computing has itself encouraged a bloom of innovations, many of which contribute to artificial intelligence prospects.

#2 The Internet (and Web)

Though clearly related to the general improvements in computing and hardware, the advent of the Internet, and its more relevant offspring the Web, has had, I believe, the most fundamental impact on the change in prospects for artificial intelligence. The sheer scale of the Web network has made available crowdsourced innovations like Wikipedia and other crowdsourced data and knowledge bases. More broadly, global content across the entire Web, accessible via a common HTTP protocol, multiplied every individual’s access to information — pay close attention — by a factor of a billion or more.

Because the entire Web is interconnected, the sheer raw grist of connected data available to analyze such things as relatedness or similarity is game-changing. Manual constructs and derived relations from years past can now be multiplied and magnified at Web scale. Any relationship test or validation can be accomplished nearly instantaneously and at (essentially) zero cost. Phenomenal!
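
As a toy sketch of how cheap such relatedness tests have become, consider scoring term similarity from simple co-occurrence counts (the mini-corpus and code are my own illustration, not anything from this post):

```python
from collections import Counter
from math import sqrt

# Tiny stand-in for Web-scale text (illustrative only)
documents = [
    "semantic web data uses linked data and ontologies",
    "machine learning learns patterns from web data",
    "ontologies structure knowledge for the semantic web",
]

# Build co-occurrence vectors: term -> counts of terms sharing a document
vectors = {}
for doc in documents:
    tokens = doc.split()
    for t in tokens:
        vectors.setdefault(t, Counter()).update(w for w in tokens if w != t)

def cosine(a, b):
    """Cosine similarity between two sparse count vectors."""
    dot = sum(a[k] * b[k] for k in a if k in b)
    norm = sqrt(sum(v * v for v in a.values())) * sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

print(cosine(vectors["semantic"], vectors["ontologies"]))  # relatively high
print(cosine(vectors["semantic"], vectors["learns"]))      # relatively low
```

At Web scale the vectors come from billions of pages rather than three sentences, but the arithmetic is the same.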

#3 Expectations

The discrediting of AI and its holdover smell has itself been a factor working in its favor. By being discredited, it has been possible for multiple AI components, many listed herein, to be developed and attended to in relative isolation. Each of today’s pieceparts of AI could be focused upon on its own, without taint from the broader “AI” brush. Because the constituents were recognizable and justifiable on their own, they did not need to fulfill the past overblown visions and expectations for “AI” writ large. The pieceparts could develop in peace.

This observation, if true, means that grand visions like “artificial intelligence” are perhaps rarely (ever?) the result of a grand top-down plan. Rather, like a good stew, it is individual components that need to mature and become available to create the final meal. Since these ingredients need to stand or contribute on their own for their own purposes, the actual resulting stew may vary as to its ultimate ingredients. If one ingredient is not ripe or available, we vary our recipe according to what is available. There is no one single recipe leading to a tasty stew.

Put another way, AI has been flying under the radar for at least the last ten to fifteen years. Portions of the older AI agenda have benefited from specific attention. Better still, the newly emerging idea of artificial intelligence is also more toned down and practical. Artificial intelligence is now, I believe, understood to be part of a process and not some autonomous embodiment. Human interaction and communication are themselves imprecise and subject to error. Why should artificial means to boost those same human capabilities be held to a different standard?

From the standpoint of expectations, artificial intelligence has moved from science fiction to essentially zero awareness, meanwhile delivering, on a broad scale, focused wonders such as (nearly) instantaneous translation across 60 leading human languages.

#4 Global Knowledge Bases

How can a system promise useful suggestions or alternatives if it is bereft of information?

At the local or personal level we well understand that we need to describe ourselves via attributes, the more the merrier for a complete description. A pretty good record for me would include such things as physical description, image, work and economic description, family and life description, education description, text narratives from the fun to the historical, etc. A more complete description of me requires many sources, many attributes and many perspectives. But, of course, I do not live alone in the world. To describe my world, which constantly changes, I need to describe the thousands of other entities I encounter daily. Each of these, too, has many attributes and relationships to other entities. Each of these entities also changes over time (has histories) and place. So, context becomes another critical dimension.
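
A minimal sketch of such an entity record, with the attribute, relation and context dimensions made explicit (the structure and all field values here are invented for illustration):

```python
from dataclasses import dataclass, field

@dataclass
class Entity:
    """Toy entity record: attributes, relations to other entities, and context."""
    name: str
    attributes: dict = field(default_factory=dict)  # descriptive properties
    relations: dict = field(default_factory=dict)   # links to other entities
    as_of: str = ""                                 # temporal context of the record

me = Entity(
    name="Mike",
    attributes={"occupation": "semantic technologist", "works_from": "home"},
    relations={"colleague_of": ["Fred Giasson"]},
    as_of="2014-06",
)
print(me)
```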

The growth of the Web at scale has resulted in some tremendous knowledge bases of entities and concepts. Freebase and Wikipedia are two of the best known, but virtually every domain has its own sources and richness. These knowledge bases, in turn, are often open for use by others. Text mining and digital data mean these data can be combined and made to interoperate. That process is only just beginning.

Though early efforts in artificial intelligence understood that capturing and modeling common sense was both an essential and surprisingly difficult task — the impetus, for example, behind the thirty-year effort of the Cyc knowledge base — what is new in today’s circumstance is how these massive knowledge bases can inform and guide symbolic computing. The literally thousands of research papers regarding use of Wikipedia data alone [2] show how these bases provide the base knowledge around which AI algorithms can work.

The abiding impression is that the availability of these data sources has fundamentally changed how AI is done. Unlike the early years of mostly algorithms and rules, AI has now evolved to explicitly embrace Web-scale content and data and the statistics that may be derived from global corpora.

#5 Deep Learning

Machine learning is a core AI concept used to determine discriminative characteristics or patterns within source input data. It has been a constant emphasis of AI since the beginning.

Various machine learning algorithms — such as Markov chains, neural networks, conditional random fields, Bayesian statistics, and many other options — can be characterized along many dimensions. Some are supervised, meaning they need to be trained against a standard corpus in order to estimate parameters; others require little or no training, but may be less accurate as a result. Some are statistically based; others are based on pattern matching of various forms.

A more recent trend has been to combine multiple techniques in what is known as deep learning, where the problem set is modeled as a layered hierarchy of distributed representations, with each layer (often) using neural network techniques for unsupervised learning, followed by supervised feedback (often termed “back-propagation”) to fine-tune parameters. While computationally slower than other techniques, this approach has the advantage of automating the supervised learning phase and is proving generally most effective across a range of AI applications.
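
A minimal sketch of that two-stage pattern, using a tiny numpy network and invented toy data (this illustrates the general idea only, not any production deep learning system):

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Toy data: random 8-bit vectors; label = whether the first two bits differ
X = rng.integers(0, 2, size=(200, 8)).astype(float)
y = (X[:, 0] != X[:, 1]).astype(float).reshape(-1, 1)
n, lr = len(X), 0.5

# Stage 1: unsupervised pre-training of a hidden layer as an autoencoder,
# learning to reconstruct the inputs without using any labels.
W1, b1 = rng.normal(0, 0.1, (8, 16)), np.zeros(16)
Wd, bd = rng.normal(0, 0.1, (16, 8)), np.zeros(8)
for _ in range(500):
    H = sigmoid(X @ W1 + b1)   # encode
    R = sigmoid(H @ Wd + bd)   # decode (reconstruct the input)
    dR = R - X                 # cross-entropy gradient at the reconstruction
    dH = (dR @ Wd.T) * H * (1 - H)
    Wd -= lr * H.T @ dR / n
    bd -= lr * dR.mean(axis=0)
    W1 -= lr * X.T @ dH / n
    b1 -= lr * dH.mean(axis=0)

# Stage 2: supervised fine-tuning by back-propagation; the label error
# flows back through, and adjusts, the pre-trained layer.
W2, b2 = rng.normal(0, 0.1, (16, 1)), np.zeros(1)
for _ in range(2000):
    H = sigmoid(X @ W1 + b1)
    p = sigmoid(H @ W2 + b2)
    dP = p - y                 # cross-entropy gradient at the prediction
    dH = (dP @ W2.T) * H * (1 - H)
    W2 -= lr * H.T @ dP / n
    b2 -= lr * dP.mean(axis=0)
    W1 -= lr * X.T @ dH / n
    b1 -= lr * dH.mean(axis=0)

print("training accuracy:", float(((p > 0.5) == y).mean()))
```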

More fundamentally, there is a virtuous circle of feedback occurring between AI machine learning algorithms and reference knowledge and statistical bases (see next). This can extend the accuracy, completeness and efficiency of supervised methods. Some notable academic departments have relied on Web-scale corpora (University of Washington and Carnegie Mellon University are two prominent examples in the US). The most dominant player in this realm, however, has been Google (though all of the major search engine and social networking companies have smaller initiatives of similar character).

#6 Big Statistical Data

Using both statistical techniques and results from machine learning, massive datasets of entities, relationships and facts are being extracted from the Web. Some of these efforts, such as the academic NELL (CMU) or KnowItAll and Open IE (UWash), involve extractions from the open Web. Others, such as the terabyte (TB) n-gram listings from Google, are derived from Web-scale pages or Google Books. These examples are but a sampling of the various datasets and corpora available.
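
The idea behind such n-gram listings is simple counting over a corpus, just done at staggering scale; here is a toy sketch of my own with an invented two-sentence corpus:

```python
from collections import Counter

corpus = [
    "the quick brown fox jumps over the lazy dog",
    "the quick brown cat naps in the sun",
]

def ngrams(tokens, n):
    """Yield successive n-token windows from a token list."""
    return zip(*(tokens[i:] for i in range(n)))

bigrams = Counter()
for sentence in corpus:
    bigrams.update(ngrams(sentence.split(), 2))

total = sum(bigrams.values())
print(bigrams[("the", "quick")], "of", total, "bigrams")  # 2 of 15
```

The Web-scale listings run the same tally over trillions of tokens, which is what turns humble counts into usable statistics for translation and disambiguation.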

These various statistical datasets may be used directly for research on their own, or may contribute to bootstrapping still further-refined AI techniques. Similar datasets are aiding advertising placements, search term disambiguation and machine (language) translation. In some cases, while the full datasets may not be available, open APIs may be offered for areas such as entity identification or tabular data.

What is important about these trends is that data, statistics and algorithms are all now being combined in various ways with the aim of achieving acceptable AI-backed results at Web scale. It is really via the combination of these techniques that we are seeing the most impressive AI results.

#7 Big Structure

A more nascent area, really in just its first stages of effectiveness, is the application of “big structure” to all of this information. By “big structure” I mean the application of domain and knowledge graphs to help arrange and place the concepts and entities at hand.

At Web scale, the early Yahoo! directory and the Open Directory were the first examples of structuring domains. Wikipedia next became the most widely used category structure; Freebase, for example, used Wikipedia to initially bootstrap its own structure. A portion of Freebase is now what is used for Google’s own Knowledge Graph. DBpedia also created its own ontology out of the infobox structure of Wikipedia. The major search engines have also put forward the schema.org structure as a means of (mostly) organizing entity and attribute information and structured data. schema.org is putatively an input to the Google Knowledge Graph, but the exact mechanism and the ability to trace the results are pretty opaque.
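
To make the notion of “big structure” concrete at the smallest possible scale, here is a sketch using the Python rdflib library with schema.org terms to type and relate two invented entities (a minimal sketch of the idea, not how Google or the search engines actually ingest such data):

```python
from rdflib import Graph, Literal, Namespace
from rdflib.namespace import RDF, RDFS

SCHEMA = Namespace("http://schema.org/")
EX = Namespace("http://example.org/")

g = Graph()
g.bind("schema", SCHEMA)

# Type two invented entities with schema.org classes, then relate them
g.add((EX.acme, RDF.type, SCHEMA.Organization))
g.add((EX.acme, RDFS.label, Literal("Acme Semantics")))
g.add((EX.jane, RDF.type, SCHEMA.Person))
g.add((EX.jane, SCHEMA.worksFor, EX.acme))

print(g.serialize(format="turtle"))
```

The graph supplies the “where things fit” scaffolding that raw entity and attribute data lack on their own.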

The need for big structure is rapidly emerging as one of the key challenges for Web-scale AI. The Web and crowdsourcing appears well suited to being able to generate entity and attribute data. What remains unclear is how this information can be coherently organized at the scale of the Web. This problem is becoming acute, because the success of “big data” on the Web needs to ultimately find an organized, coherent expression in the aggregate. This is one major AI challenge that remains distinctly unsolved, though promising first steps exist.

#8 Open Source and Content

The major theme of these AI breakthroughs comes from leveraging the global content of the Web. And this enabler, in turn, has been critically dependent on the open source nature of AI algorithms, software code and code infrastructure and architecture, and open content and (generally) open APIs. Open code, algorithms, datasets and knowledge have expanded the pool of human intelligence that can be brought to bear on the question of artificial intelligence. The positive feedbacks greased through open channels of information, code and data have been absolutely essential to the amazing AI progress of the past few years.

To be sure, open does not mean a level playing field. (See the discussion of Google, next.) But without open source and open content and data, progress would not, I think, have been anywhere near as rapid as it has been. The synergy arising from open source and content has thus been another essential factor in the recent and rapid progress of AI.

The Race to Intelligence

Since innovation is the source of wealth creation, it is no surprise that the megatrends surrounding AI have drawn significant investment interest. This interest takes the form of a race to acquire the most innovative AI startups and the best human capital (expertise) in AI. Since Google has been my common touchstone in this piece — and because Google is the biggest gorilla in the room — we can use it to illustrate the scope and pace of this race. (Though Amazon, Facebook, Microsoft and IBM are also clearly entrants.)

A number of recent articles, notably ones in the Washington Post and The Economist, have highlighted the total dollars at stake in this AI race. Over the past few years, there have been perhaps more than $20 billion in AI-related company acquisitions, with Nest Labs (Google, $3.2 B), Kiva Systems (Amazon, $775 M), and DeepMind (Google, $660 M) among the largest.

Within Google alone, there has been a buying spree in search improvements (~ $1.4 B total), robotics ($80 M), machine synthesis and recognition ($250 M), machine learning ($700 M), smart devices ($3.6 B), compression technologies ($200 M), natural language processing ($80 M), and a smattering of others ($50 M), not to mention its internal efforts in self-driving cars. I don’t monitor Google on a constant basis and likely missed some major and relevant acquisitions, but it does appear that Google has perhaps spent over $6 billion over the past five years or so for AI-related acquisitions [3].

As important as start-up acquisitions has been Google’s commitment to hire and partner with many of the leading AI researchers in the world. Besides the strong partnerships Google maintains with such institutions as the University of Washington, Carnegie Mellon University, MIT, Stanford, UC Berkeley and others, it has also staffed its research ranks with prominent names from those institutions and elsewhere.

Peter Norvig, one of the early advocates for combining algorithmic and statistical AI, joined Google in 2001 and is now its Director of Research. Most recently and notably, Ray Kurzweil joined Google as Director of Engineering in 2012. Other notable AI researchers at Google include Alon Halevy (Fusion Tables), Ramanathan Guha (schema.org), Geoffrey Hinton (deep learning), Evgeniy Gabrilovich (search and machine learning), and many others whose research I know less well. There is probably more AI talent assembled at Google today than at any single institution before.

With IBM’s Watson getting its own division, Facebook funding an AI center to the tune of $10 B, and Apple making a similar commitment to robotic manufacturing, it is clear that all of the major players in the computing space are making big bets on AI.

AI is Itself But One Beneficiary of These Trends

Since the early winters in artificial intelligence, a phenomenon has developed called the “AI effect”. It has really meant two different things.

First, AI researchers have tended to call their research anything but artificial intelligence. One of the broader and trendier substitutes is known as cognitive computing. Many of the domains and disciplines I noted above got their names and prominent use as substitutes for what used to be labeled AI. In any case, we can see that AI is indeed a big tent with many components and thrusts.

Second, the “AI effect” also refers to the fact that once an AI technique is embedded in some everyday use, it is no longer perceived as something AI and is taken as a given. Douglas Hofstadter expressed the AI effect concisely by quoting Tesler‘s Theorem: “AI is whatever hasn’t been done yet.”

I was perhaps right to initially reject the algorithm-centric view of AI from the early years. But now, when matched with big data, big statistics and big structure, all embedded in phenomenal advances in computing power, it is also clear that a new age of AI is dawning. One need only look at the wondrous progress of the past five years on many of what had seemed to be impossible Grand Challenges to gain an appreciation of the pace and breadth of new developments to come.

These developments will reify and foster similar emphases in semantic technologies, graph structures and analysis, and functional programming and homoiconicity (“data as code, code as data”) that my colleague, Fred Giasson, is now actively exploring. We will find that representational paradigms and the basis of how our tools and algorithms work will increasingly align. There appear to be natural underpinnings to these phenomena, including the pivot of language and meaning, that are closely aligned with the thoughts and writings of that great American pragmatist and logician, Charles S. Peirce. We will increasingly come to see that the wondrous innovations of self-driving cars, talking smartphones, warehouses of fulfillment robots, and computer vision systems can trace their roots back to basic truths of how to see and understand our world.

Understanding these forces will itself help to formulate guidelines and ideas that can foster further innovation. So, in the end, while I still don’t like the term “artificial” intelligence, it is merely a sign or a term. Adaptive innovations expressed by machines are simply part of the intelligence and structure embodied in the universe, which we are now gaining the tools and understanding to exploit.


[1] Douglas Adams’ Hyperland is a great exposition on this vision; my 2007 blog post points to the online video.
[2] Wikipedia maintains its own page of research that relies on Wikipedia; I have earlier captured about 250 selected sources called SWEETpedia that relate specifically to semantic technologies and AI.
[3] These are merely estimates, and likely quite wrong in many specifics. The estimates were compiled by reviewing a listing of Google acquisitions (since 2009), supplemented by individual company searches when the acquisition amounts were not listed, followed by analysis of Google’s SEC Edgar filings in a manner similar to this analysis (which was also used for the robotics estimate).
Posted:April 24, 2014
Open Semantic Framework

Another Expansion in Documentation for the Open Semantic Framework

The Open Semantic Framework is a complete foundation for bringing semantic technology capabilities to the enterprise. OSF has applications ranging from enterprise information integration to collaboration networks and open government. It has been under development since 2009, leveraging a set of robust open source engines and connecting Web services and architecture, and is now in its third major version. OSF is fully integrated as a semantic technology extension to the Drupal content management system.

Structured Dynamics, the developer of OSF, with the generous support of SD clients, has been committed to providing excellent documentation and tech transfer support for OSF since its inception. For example, OSF now has a technical support library of nearly 500 documents, plus many automated means for installing and testing the OSF stack.

Yet, as we all know, written documentation is not always discovered or read. The paradigm for technology transfer is shifting to online tutorials and screencasts.

In keeping with that trend, SD has committed itself to developing a (hopefully) complete suite of online screencasts and tutorials geared to the nuts-and-bolts of how to install, configure, test, manage and use an OSF installation. Our intent is to help users bring semantics into the enterprise without the need for external support or cost.

We call this curriculum of tech transfer screencasts and video tutorials the OSF Academy.

Over the past week we have been releasing the first dozen screencasts in this series. With this foundation, it is now time to make a broader announcement of the OSF Academy.

So, On With it Now

Welcome!

We are on pace to release many dozens of specific screencasts on all use and management aspects of OSF. Please stay tuned over the coming weeks.

You can always see the complete contents of the YouTube channel at the Open Semantic Framework Academy.

Also, as basic grounding, know that the OSF Wiki section on screencasts is another central access point to this content.

The Series Begins

Most of the screencasts are quite specific to particular aspects of using the Open Semantic Framework. However, tutorial #1 is a useful overview of OSF and the series:

The Next Ten Screencasts

SD’s CTO, Fred Giasson, is the key demo jockey for most of the OSF Academy screencasts. Many of these screencasts are technical, and all are specific and focused. Access each screencast by number below. There is also a blog post associated with each screencast that provides useful background information and links.


Where Next?

We have nearly four dozen additional screencasts planned to round out the introductory material for OSF. Please monitor our OSF channel on YouTube to stay on top of these releases.

Posted by AI3's author, Mike Bergman Posted on April 24, 2014 at 1:16 am in Open Semantic Framework, Structured Dynamics, Videos | Comments (0)
The URI link reference to this post is: https://www.mkbergman.com/1725/osf-academy-inaugurates-with-eleven-screencasts/
The URI to trackback this post is: https://www.mkbergman.com/1725/osf-academy-inaugurates-with-eleven-screencasts/trackback/