— DENNIS THOMAS, SANJEEV MISHRA


Executive Summary

The future Web is expected to evolve into a situation-aware Web that surfaces the hidden knowledge of enterprises and individuals and enables far more powerful paradigms. Actualizing these requirements demands a semantic technology that is orders of magnitude greater in efficiency and performance than conventional layered technologies.

Introduction

The semantic world has begun to acknowledge the wisdom of going beyond triples for knowledge representation. The Mark 3 Knowledge Platform, built on an N-ary architecture, is applying its technology to solving real-world problems. By encoding knowledge at Shannon limits, Mark 3 balances scale and complexity to optimally achieve its objectives.

Mark 3 provides situation-aware, autonomous knowledge stores that actualize the vision of the future Web, using an N-ary architecture that scales in proportion to content.

Users across the globe are excited about Web 2.0 portals. These community-of-interest websites, such as Wikipedia, YouTube, Facebook, Twitter and many others, have opened the door for the average Internet user to publicly post any kind of content they wish to publish. Wikipedia has successfully pioneered the frontier of online wikis; YouTube and Facebook have become essential posting grounds for visual and audio media; and Twitter, an interest-driven interactive sharing site, continually opens doors of greater value as its users share information and experiences related to events, travel, projects and other personal undertakings.

Of concern, however, is that the technologies that support these Internet phenomena, successful as they may be at this early stage of Web development, are confronted with enormous complexity problems that threaten not only their continued quality and growth but also the future Knowledge Web. The reason for this concern is rooted in what is known as the complexity problem: quite simply, too many users, too many transactions, and ever more content are being piled onto already stressed infrastructures, to the point where the load can barely be managed.

The workaround for the complexity problem has been to expand the capacity of datacenters by adding more blades, memory, storage, and layers of software to manage ever smaller slices of data. There is now a trend toward "cloud computing," a new paradigm of Internet infrastructure "in which information is permanently stored in servers on the Internet and cached temporarily on clients that include desktops, entertainment centers, tablet computers, notebooks, wall computers, handhelds, sensors, monitors, etc.," as defined in an article from Wikipedia. Other worthy efforts to work beyond the complexity problem are the so-called "semantic grids," extensions of the Internet and cloud-computing infrastructure in which domain-specific information and services are given well-defined meaning so that computers and people can work together more efficiently and cooperatively.

Conventional datacenters, cloud computing, and semantic grids represent viable solutions for the glut of data and information that must be transported throughout the Internet infrastructure, but these technologies fail to meet all the requirements of the Knowledge Web: namely, the need to represent, understand, and reason with the conceptual knowledge of civilizations current and past. It appears that conventional technology forms will continue to be shaped by the complexity barrier, and that their orientations and designs are best suited to the distribution of data and information, not knowledge. The world is ready for a new knowledge paradigm of computing.

The next version of the Internet, the Knowledge Web, poses a unique problem. The technologies that support the Knowledge Web must be specifically designed to surface the "deep web" knowledge that is stored in hundreds of thousands of corporate and government databases. They must rapidly and explicitly deliver answers to users' "who," "what," "when," "where," and "how much" data questions and, more importantly, their "how," "why," and "what-if" knowledge questions. As this level of delivery is actualized, the Internet will shift from information-based computing as we know it today to knowledge-based computing. Once achieved, this computing paradigm should endure for the rest of the 21st century.

The technical demands of acquiring and managing conceptual "how," "why," and "what-if" knowledge, as opposed to text and object-oriented information, are substantial. The success of Knowledge Web technologies will be measured not by their capacity to search and deliver data, but by their capacity to faithfully represent human thought and to elevate the human-to-computer experience to that of a co-creative partnership in which the user experience is interconnected and symbiotic in every way. It is expected that Knowledge Web machines will KNOW and REASON like people, autonomously understanding and responding to each user's use of language and experience level, and that the user experience will become as seamless and second nature as the use of cell phones is today. Through versioning and pattern recognition, these technologies will "intuit" the desires, needs and requirements of users, and respond with explicit interactive feedback to make the human-to-machine interaction more complete and satisfying. The Future Knowledge Web will be a network of intelligence through which knowledge is freely exchanged.

Achieving this ideal is easier said than done. First, the complexity barrier must be overcome. The problem is that present IT infrastructures are composed of layers of isolated software components that cannot scale to incorporate the range of functions required for the software to fully perform its intended purposes. Scaling limitations result primarily from the use of schemas, indexing and other programming conventions that cap the number of data relationships required to completely represent human thought. When these limits are exceeded, software components grow explosively to the point where further development exceeds acceptable cost/performance ratios. As a workaround, additional layers of software are developed and integrated into component groups, which then perform the intended functions with varying degrees of success. Once software layers have been developed, they must be integrated with the other software components in the function group, which requires that each component's unique protocols and Application Programming Interfaces (APIs) be emulated by the interfacing components before they will work together. This becomes increasingly complex and costly as the layers increase in number, size and functionality, and it creates an immense number of execution paths that may contain errors and security loopholes. The impact on the infrastructure is measured in development and operational cost, lost efficiency and performance, and security risk.
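
To make this scaling pattern concrete, the short Python sketch below (the component names are hypothetical, not drawn from any system discussed here) simply counts the point-to-point adapters implied by pairwise API emulation; the count grows roughly with the square of the number of components, which is one face of the complexity barrier.

    # Illustrative sketch: if every pair of components with different APIs needs
    # its own adapter, the number of integrations grows roughly quadratically.
    from itertools import combinations

    components = ["CRM", "ERP", "DataWarehouse", "Portal", "Search"]  # hypothetical

    adapters_needed = list(combinations(components, 2))  # one adapter per pair
    print(f"{len(components)} components -> {len(adapters_needed)} pairwise adapters")
    # 5 components -> 10 pairwise adapters; 20 components would need 190.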

Recent semantic methodologies and technologies such as formal ontology construction, RDF and OWL have attempted to solve some of these problems, but these approaches use layered architectures as well, embodying the same set of complexity problems that semantic technologies were intended to overcome. After several years of research and development, there remains little evidence to suggest that a robust, large-scale solution to the problems of scalability, complexity and interoperability will emerge from these conventional approaches.
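
Part of the difficulty is visible in the triple model itself. The sketch below (illustrative resource names only, not taken from any particular ontology) shows the standard way an RDF-style store must break a single multi-participant fact across an intermediate node and several triples, with indexing then needed to reassemble the fact at query time.

    # One n-ary fact ("Acme sold 100 widgets to Foo Corp on 2009-06-01") has no
    # direct triple form, so it is split across an intermediate node and several
    # subject-predicate-object statements.
    sale_as_triples = [
        ("_:sale42", "rdf:type",    "ex:Sale"),      # intermediate (blank) node
        ("_:sale42", "ex:seller",   "ex:Acme"),
        ("_:sale42", "ex:buyer",    "ex:FooCorp"),
        ("_:sale42", "ex:item",     "ex:Widget"),
        ("_:sale42", "ex:quantity", 100),
        ("_:sale42", "ex:date",     "2009-06-01"),
    ]
    # Every additional participant adds another triple, plus the index entries
    # required to find and reassemble the original fact.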

Fortunately, there are numerous unconventional efforts being made by software scientists to solve the complexity problem. These approaches include both hardware (computer chips) and revolutionary software architectures that are designed to embrace complexity rather than buckle under its weight. One such breakthrough architecture, invented by Dr. Richard L. Ballard, is referred to as an "n-ary architecture." Unlike conventional architectures that are constructed with predefined structures and indexing, Ballard's n-ary architecture self-builds and organizes ontologies (machine-understandable concepts): independent concepts are first represented as uniquely encoded, language-independent representations of thought, and are then refined in meaning by establishing as many concurrent associative relationships to other independent concepts as necessary, until every possible idea or pattern of thought in any knowledge domain is faithfully represented. Through this process, Ballard's n-ary architecture is expected to represent every metaphysical concept, and every possible physical instance, within the machine environment.
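
Ballard's actual encoding is proprietary, but the general n-ary idea can be sketched in a few lines of Python (the class and labels below are illustrative assumptions, not the Mark 3 implementation): each concept receives a unique, language-independent code the first time it appears, and a relation of any arity is a single record over those codes.

    # Minimal sketch of an n-ary concept store: concepts are encoded once,
    # and one relation can associate any number of concepts in a single record.
    class ConceptStore:
        def __init__(self):
            self._codes = {}     # label -> concept code
            self._labels = []    # concept code -> label (each concept stored once)
            self.relations = []  # each relation is a tuple of concept codes

        def concept(self, label):
            """Return the unique code for a concept, creating it only once."""
            if label not in self._codes:
                self._codes[label] = len(self._labels)
                self._labels.append(label)
            return self._codes[label]

        def relate(self, *labels):
            """Record one n-ary association among any number of concepts."""
            self.relations.append(tuple(self.concept(l) for l in labels))

    store = ConceptStore()
    store.relate("Sale", "Acme", "Foo Corp", "Widget", "100", "2009-06-01")
    print(store.relations)  # one record: [(0, 1, 2, 3, 4, 5)]

The six-participant fact that required six triples in the earlier sketch is held here as a single record, and repeating any of its concepts in later facts adds no new concept entries.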

Of equal importance is that Ballard's n-ary architecture naturally organizes knowledge content through a means called "theory-based semantics." In a 2006 presentation entitled "Systems that Know – Ultimate Innovations," Ballard states, "To capture every form of knowledge is not about dealing with infinite complexities, nature is everywhere finite. Successful representations of knowledge seek to find nature's most elegant limit simplicities." Ballard takes the position that the binding element of human thought is "theory," which constrains the very meaning of concepts, ideas and thought patterns within our own brains. For this reason, it makes sense that concept representations within the machine environment are most efficiently organized according to well-justified theory (lessons-learned relationships established through science, business, finance, manufacturing, social relations, law and other established disciplines) rather than through structure and indexing. Likewise, theory-based semantics embraces axiological values such as time, states of being, and orientations, and even beliefs, if a knowledgebase product warrants such representations.

This is significant to the success of the Knowledge Web because n-ary architectures represent a decisive move away from complex database artifacts composed of text and objects that people must read and interpret, toward machine-internalized knowledge stores composed of precise metaphysical concepts and physical-instance representations that machines can understand and use on their own. Theory-based semantic representations do not just represent meaning; they precisely embody the very knowledge they are intended to represent: they KNOW.

Of further importance, since n-ary architectures are by nature "concept machines," they hold the promise that problems of content integration and virtual interoperability can be overcome, because every structure, syntax, protocol and indexed relationship can be conceptually emulated within a virtual n-ary environment. For this reason, the problem of too many moving parts (the integration of cross-functional, cross-departmental disciplines based on different mindsets, objectives, processes, languages and systems of operation) now has a viable solution that does not require adding more hardware or dicing and slicing yet finer sets of data. This advantage also applies to legacy infrastructures that are plagued with multiple versions of software and hardware accumulated piecemeal over time. N-ary concept machines provide a viable option for virtual integration to solve these problems as well.

Another advantage of the N-ary architecture is that its declarative store dramatically decreases content volume while providing speed-of-thought delivery of knowledge content. Content volume is decreased because concepts are never duplicated and all content is stored at "Shannon/Ballard bit-limits." This means there is no need for compression, because all content is already stored in the most minimal configuration possible; bit-limit storage thereby also eliminates the costly compression/decompression processes required to data-mine legacy storage systems. The N-ary architecture makes its greatest structural-efficiency gains, however, by eliminating conventional indexing requirements. This is important to the overall functionality of N-ary knowledge products because indexing overhead can represent a substantial percentage of content overhead and, in some cases, exceed the very data it organizes; it has been determined, for example, that the indexing cost of triples can be as high as 66% of a triple store. Ballard's N-ary architecture eliminates indexing overhead because it relies on real-time "situation awareness," rather than 100% procedure-based processing, to deliver content to a user's screen. M. R. Endsley (1988) describes situation awareness as comprising three levels: level 1, perceiving the elements in the environment; level 2, comprehending what those elements mean; and level 3, using that understanding to project future states. Moray (2004) likewise considers SA the state of knowing and understanding what is going on around you and predicting how things will change, or, in other words, "being coupled to the dynamics of your environment."
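
As a back-of-envelope reading of the bit-limit claim (a general information-theoretic bound, not the platform's measured figures), a reference to one of N distinct concepts cannot be encoded in fewer than log2(N) bits, so storing every concept once and referring to it by a fixed-width code approaches that floor without any separate compression or index layer:

    # Hypothetical sizing: references into a store of one million distinct
    # concepts need about 20 bits each, versus roughly 160 bits to repeat a
    # 20-character ASCII label every time the concept appears, before any
    # index entries are added on top.
    import math

    distinct_concepts = 1_000_000                 # assumed knowledgebase size
    bits_per_reference = math.ceil(math.log2(distinct_concepts))
    print(bits_per_reference)                     # 20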

Finally, security is an essential requirement of the Knowledge Web. Ballard's N-ary architecture supports manual security implementation through knowledge engineering as well as automatic securing of every knowledgebase during its development. This includes security traps throughout each knowledgebase product, with tamper-proof component-to-component, component-to-server, network, desktop, laptop and PDA connections as required to lock down content, as well as standard security features such as password-protected "need to know" and "need to access" controls. Since knowledge content is uniquely coded and language independent, and there are no predefined structures or indexes, there is also no way for hackers to access and decode content stores.

In conclusion, the gap between the requirements of the Social Web and those of the Knowledge Web is substantial. The Social Web is linguistic; the Knowledge Web is semantic. The leap across "the linguistic/semantic gap," as Ballard calls it, is considerable in both mindset and practicality. Inventions such as Ballard's N-ary architecture, and other advances that are just now appearing, will bridge the chasm between the Information Age and the Knowledge Age. The Future Knowledge Web is not just about technology; it is about the evolution of human understanding.