Posts Tagged ‘Case Study’
Arnaud Le Hors and Steve Speicher of IBM recently published a case study: “Open Services Lifecycle Collaboration framework based on Linked Data.” In the introduction they write, “The Rational group in IBM has for several years been employing a read/write usage of Linked Data as an architectural style for integrating a suite of applications, and we have shipped commercial products using this technology. We have found that this read/write usage of Linked Data has helped us solve several perennial problems that we had been unable to successfully solve with other application integration architectural styles that we have explored in the past.” Read more
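The read/write Linked Data style IBM describes can be sketched in miniature. In a real deployment the applications would exchange RDF resources over HTTP GET/PUT (as in OSLC); in this illustrative sketch an in-memory store stands in for the network, and all URIs, properties, and values are invented for the example.

```python
# Sketch of the read/write Linked Data integration pattern: shared
# resources identified by URIs, read and replaced wholesale rather
# than wired together through point-to-point APIs. All identifiers
# here are hypothetical.

DCTERMS = "http://purl.org/dc/terms/"

class LinkedDataStore:
    """Maps resource URIs to sets of (predicate, object) pairs."""
    def __init__(self):
        self.resources = {}

    def get(self, uri):
        # GET: read the current state of a resource.
        return set(self.resources.get(uri, set()))

    def put(self, uri, pairs):
        # PUT: replace the resource's state wholesale.
        self.resources[uri] = set(pairs)

# One application publishes a change request as a Linked Data resource...
store = LinkedDataStore()
bug_uri = "http://example.org/bugs/42"
store.put(bug_uri, {(DCTERMS + "title", "Crash on startup"),
                    (DCTERMS + "status", "open")})

# ...and another application integrates by reading the resource,
# updating one property, and writing the whole state back.
state = store.get(bug_uri)
state = {(p, o) for (p, o) in state if p != DCTERMS + "status"}
state.add((DCTERMS + "status", "resolved"))
store.put(bug_uri, state)

print(sorted(store.get(bug_uri)))
```

Because every resource is just addressable state, any number of tools can participate in the lifecycle without knowing each other's internal APIs, which is the integration property the IBM study emphasizes.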
Ivan Herman recently pointed out a new semantic web case study: Enriching and sharing cultural heritage data in Europeana. According to the article, “Europeana provides access to millions of objects gathered from hundreds of libraries, archives, museums and other cultural institutions throughout Europe. To do so, it gathers descriptive metadata and links to Web resources from all of these institutions. The result is a set of highly heterogeneous metadata. This metadata has hitherto been processed by converting it to a very simple, flat, common-denominator format. This solution gets in the way of putting our data where users and application builders can benefit from it—or use it to build better services.”
It continues, “In order to enhance the way Europeana harvests, manages and publishes metadata, we have developed a new, Semantic Web-inspired approach: the Europeana Data Model (EDM). This community-developed model re-uses existing Semantic Web vocabularies (ontologies) such as Dublin Core, SKOS, and OAI-ORE, and adapts them to the Europeana context.” Read more
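The vocabulary re-use the EDM authors describe can be illustrated with a toy record. The item, concept URIs, and values below are all invented; the point is that Dublin Core and SKOS terms let a subject be a followable link to a concept rather than a flat common-denominator string.

```python
# Hypothetical EDM-style description: a museum object described with
# Dublin Core properties and a SKOS concept for its subject. All URIs
# and values are made up for illustration.

DC = "http://purl.org/dc/elements/1.1/"
SKOS = "http://www.w3.org/2004/02/skos/core#"

painting = "http://example.org/europeana/item/123"
concept = "http://example.org/concepts/portraiture"

triples = [
    (painting, DC + "title", "Portrait of a Lady"),
    (painting, DC + "creator", "Unknown Flemish master"),
    (painting, DC + "subject", concept),  # a link, not a flat string
    (concept, SKOS + "prefLabel", "Portraiture"),
    (concept, SKOS + "broader", "http://example.org/concepts/painting"),
]

# Unlike a flattened record, linked concepts can be followed:
# resolve the subject link to its human-readable label.
subject = next(o for s, p, o in triples if s == painting and p == DC + "subject")
label = next(o for s, p, o in triples if s == subject and p == SKOS + "prefLabel")
print(label)  # -> Portraiture
```

A service built over such data can also walk `skos:broader` links to group heterogeneous items under shared concepts, which is exactly what a flat format prevents.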
SemTechBiz San Francisco is set to begin this Sunday, June 3. The conference will feature over 130 presentations and over 160 speakers covering such topics as linked data, social networks, content management, open government, semantic wikis, and much more. The already full agenda has been made even better with the late addition of a new presentation, Tangible Semantics, a case study to be presented by Christian Doegel, Founder and CEO of Uma Information Technology GmbH. Read more
On Tuesday the E&P Information Management Association (EPIM) launched EPIM ReportingHub (ERH), an interesting semantic technology project in the field of oil and gas. According to the project website, ERH is “a very flexible knowledgebase for receiving, validating (using NPD’s Fact Pages and PCA RDL), storing, analysing, and transmitting reports. The operators shall send XML schemas for DDR, DPR and MPR to ERH and ERH sends DDR and MPR as XML schemas to the NPD/PSA and all three reports as PDF to EPIM’s License2Share (L2S). The partners may download all three reports and/or any data from one or more reports through flexible queries. Some parts of ERH will be in operation already in November 2011 and the rest as soon as the authorities and the industry are ready for it. ERH is owned and operated by EPIM.” Read more
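The ERH workflow above centres on receiving and validating XML reports before storing and forwarding them. The real DDR/DPR/MPR schemas and the NPD/PCA reference-data checks are not reproduced here; this is a minimal standard-library sketch of the validation step against an invented daily-drilling-report structure.

```python
# Hypothetical validation of a minimal "DDR"-style XML report.
# Element names and the sample data are invented for illustration;
# the actual ERH reports follow industry XML schemas and are further
# validated against NPD Fact Pages and the PCA RDL.
import xml.etree.ElementTree as ET

REQUIRED = ["wellbore", "reportDate", "operator"]

def validate_report(xml_text):
    """Return a list of missing required elements (empty = valid)."""
    root = ET.fromstring(xml_text)
    return [tag for tag in REQUIRED if root.find(tag) is None]

sample = """<ddr>
  <wellbore>15/9-F-12</wellbore>
  <reportDate>2011-11-01</reportDate>
  <operator>Example Operator AS</operator>
</ddr>"""

print(validate_report(sample))  # -> []
print(validate_report("<ddr><wellbore>x</wellbore></ddr>"))
```

In a pipeline like ERH's, a report that passes validation would then be stored in the knowledgebase and forwarded to the authorities; one that fails would be returned to the operator with the list of problems.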
The W3C recently interviewed Jeanne Holm, Chief Knowledge Architect at the Jet Propulsion Laboratory at Caltech. Holm also leads the Knowledge Management team at NASA. In the interview, Holm stated, “The goal of our project was to make it easy to find expertise within an organization, or, as you’ll see, across organizational boundaries. The project is called POPS for ‘People, Organizations, Projects, and Skills.’ The acronym does not include E for Expert for a good reason: we tried three times to create a system with data specifically about expertise, but failed each time for different social reasons. Each attempt relied on self-generated lists of expertise. In the first attempt, people over- or under-inflated their expertise, sometimes to bolster their resumes. The second attempt prompted labor unions to get overly involved because greater expertise could be tied to higher pay. The third approach involved profiles verified by management, and that led to a number of human resources grievances when there was a disagreement. In all cases, the data became suspect.” Read more
The upcoming Semantic Technology Conference this June in San Francisco will feature a number of case studies that highlight real-world semantic technology applications. Here are just a few (Click session titles to view details):
Details on how the BBC sport site currently uses embedded Linked Data identifiers, ontologies and associated inference plus RDF semantics to improve navigation, content re-use, re-purposing, and search engine rankings.
How a team of developers using semantic technology and an expressive business language made a significant breakthrough to help business users create, extend and alter high level business concepts and create natural language rules. We recently had a webcast with Craig Hanson from Amdocs, the speaker on this session. Read more
In today’s connected online world, optimizing a customer-oriented business requires real-time, contextual customer knowledge across all business channels and relevant social and competitive forces. Read more
This webcast presents a case study from Amdocs, the market leader in customer experience systems, and Franz, Inc., a leader in Semantic Technology implementations.
LIVE WEBCAST *
Date: Thursday, December 16, 2010
Time: 2:00pm ET / 11:00am PT
* The webcast will be recorded and archived here at SemanticWeb.com
In today’s connected online world, optimizing a customer-oriented business requires real-time, contextual customer knowledge across all business channels and relevant social and competitive forces. This knowledge must be used to control the intended outcome of each business transaction. In complex, heavily customer-centric businesses such as telecommunications, health care, and financial services, the optimal business must understand how each action of the business and the individual customer relates to profitability and customer satisfaction. This is possible only if systems holistically see what is going on in real time, determine the meaning of these activities, and decide in-stream on the optimal action that maximizes profit and customer stickiness. Every business function should be coordinated and driven through a complete awareness of the business theatre. Read more
Joseph C. Wicentowski, U.S. Department of State
Dan McCreary, Dan McCreary and Associates
The U.S. State Department’s Office of the Historian has embarked on an ambitious effort to migrate its diplomatic history document archive from paper to enriched electronic media for online consumption. Because of Congressional mandates, we hold extremely high standards for semantic precision and accuracy, which make this unique resource useful to a broad audience of scholars, government officials, and the general public. Furthermore, the new format allows us to repurpose our content and integrate it with "mashup" applications such as timelines and geographical map views.
This case study reviews the U.S. State Department’s requirements and the decision process that led us to adopt high-precision semantic markup standards supported by our tools and our vendors. We will show concrete examples of how precise identifiers for people, locations, and events allow us to enrich the display of our documents online.
We will also review the full document lifecycle and the need for automated but high quality entity extraction tools to minimize document conversion costs. This case study will discuss some of the tradeoffs others may face when advanced technology decisions have both risks and rewards for the digital historian.
In this presentation we will:
- Review business requirements for a high precision entity extraction application
- Describe our semantic approach
- Demonstrate entity extraction
- Demonstrate timeline and other mashups
- Summarize project benefits
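The entity-extraction step outlined in this session can be sketched as a simple gazetteer lookup: surface forms in a document mapped to stable identifiers. The identifiers, the `frus:` prefix, and the sample text below are invented for illustration; a production system would combine such lookups with trained extraction models and editorial review to meet the Office's precision standards.

```python
# Hypothetical gazetteer-based entity extraction: known names are
# matched in the text and resolved to (invented) stable identifiers,
# so that people, places, and events carry precise references.
import re

GAZETTEER = {
    "Dean Acheson": "frus:person/acheson-dean",   # hypothetical IDs
    "Moscow": "frus:place/moscow",
}

def extract_entities(text):
    """Return (surface form, identifier, offset) for each known entity."""
    hits = []
    for name, ident in GAZETTEER.items():
        for m in re.finditer(re.escape(name), text):
            hits.append((name, ident, m.start()))
    return sorted(hits, key=lambda h: h[2])

doc = "Dean Acheson cabled the embassy in Moscow on Tuesday."
for surface, ident, offset in extract_entities(doc):
    print(f"{offset:3d}  {surface!r} -> {ident}")
```

Once every mention carries an identifier rather than a bare string, the same documents can feed the timeline and map mashups described above without re-extracting anything.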
Attachment: High Precision Entity Extraction – A US State Department Case Study.mp3 (54.54 MB)
After completing a Fulbright grant in Asia for his doctoral research and receiving his Ph.D. in History from Harvard University, Joseph C. Wicentowski joined the U.S. Department of State’s Office of the Historian. As a digital historian, he has taken a leadership role in developing new digital formats for the Department’s archive of U.S. diplomatic and foreign affairs documents, which reach back to the founding of the historian’s office in 1861. He has led development of a new website for these documents, based on a native XML database, and is working to bring the benefits of data visualization, metadata management, and other digital history applications to the federal government and the public. He has particular interests in XML, XQuery, and U.S. and Chinese history.
Dan is an enterprise data architect/strategist living in Minneapolis. He has worked for organizations such as Bell Labs and Steve Jobs’s NeXT Computer, and founded his own consulting firm of over 75 people. He has a background in object-oriented programming and declarative XML languages (XSLT, XML Schema design, XForms, XQuery, RDF, and OWL). He has published articles on various technology topics including the Semantic Web, metadata registries, enterprise integration strategies, XForms, and XQuery. He is the author of the XForms Tutorial and Cookbook.