
FIBO Summit Opening Remarks by EDMC Managing Director Mike Atkin

[Editor's Note: As our own Jennifer Zaino recently reported, the Enterprise Data Management (EDM) Council, a not-for-profit trade association dedicated to addressing the practical business strategies and technical implementation realities of enterprise data management, held a two-day FIBO Technology Summit in conjunction with MediaBistro’s Semantic Technology & Business (SemTechBiz) Conference, June 7th and 8th in San Francisco, California. SemTechBiz was chosen for the summit because of its proximity to the leading minds of Silicon Valley.
 
In morning and afternoon sessions, led by distinguished academic and industry leaders, 60 top developers discussed four key technology challenges and developed plans that will lead to solutions critical to simultaneously lowering the cost of operations in financial institutions and ensuring the transparency required by regulations put in place since the beginning of the 2008 financial crisis.
 
Michael Atkin, EDM Council Managing Director, began the deliberations with the following charge to the assembled experts.]

I spent the majority of my professional life as the scribe, analyst, advocate, facilitator and therapist for the information industry. I started with the traditional publishers and then moved on to my engagement in the financial information industry. I watched the business of information evolve through lots of IT revolutions … from microfiche to Boolean search to CD-ROM to videotext to client-server architecture to the Internet and beyond.

At the baseline of everything was the concept of data tagging – as the key to search, retrieval and data value. I watched the evolution begin with SGML (which gave rise to the database industry). I witnessed the separation of content from form with the development of HTML. And now we are standing at the forefront of capturing meaning with formal ontologies and using inference-based processing to perform complex analysis.
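To make the idea of tagging meaning concrete, here is a minimal sketch in plain Python. The class names and facts are illustrative assumptions (not drawn from the FIBO ontology itself): knowledge is stored as subject–predicate–object triples, and a tiny loop applies two RDFS-style rules of the kind a real inference engine generalizes.

```python
# Facts as (subject, predicate, object) triples -- the "tagging of meaning".
# The vocabulary below is a made-up toy, not actual FIBO classes.
triples = {
    ("InterestRateSwap", "subClassOf", "Derivative"),
    ("Derivative", "subClassOf", "FinancialInstrument"),
    ("trade42", "type", "InterestRateSwap"),
}

def infer(facts):
    """Apply two RDFS-style entailment rules until a fixed point:
    1. subClassOf is transitive;
    2. an instance of a subclass is an instance of the superclass."""
    facts = set(facts)
    while True:
        new = set()
        for s, p, o in facts:
            for s2, p2, o2 in facts:
                if p == "subClassOf" and p2 == "subClassOf" and o == s2:
                    new.add((s, "subClassOf", o2))
                if p == "type" and p2 == "subClassOf" and o == s2:
                    new.add((s, "type", o2))
        if new <= facts:
            return facts
        facts |= new

inferred = infer(triples)
# No one ever asserted that trade42 is a FinancialInstrument;
# the engine derived it from the class hierarchy.
print(("trade42", "type", "FinancialInstrument") in inferred)  # True
```

The point is that once meaning is tagged explicitly, conclusions like "this trade is a financial instrument" fall out of generic rules rather than hand-written application logic – which is what makes the approach attractive for complex analytical processing.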

I have been both a witness to (and an organizer of) the information industry for the better part of 30 years. It is my clear opinion that this development – and by that I mean the tagging of meaning and semantic processing – is the most important development I have witnessed. It is about the representation of knowledge. It is about complex analytical processing. It is about the science of meaning. It is about the next phase of innovation for the information industry.

Let me see if I can put all of this into perspective for you.  Because my goal is to enlist you into our journey.  Read more

UX Design Considerations for Semantic Technology

Incorporating semantic technologies into applications is more practical than ever, and so the uses – and range of users – have become very broad. As leaders in this industry find ways to incorporate these advances into new products, we have to be as innovative with the human element… the user experience.

How and when do you incorporate user experience design activities into your process? What questions are answered, and what value does it bring?

It helps to reflect on the role of UX through an example where it is second nature. We often use the metaphor of a dinner party to explore the intersections between user research, experience design, data design, and technology development.

Know Who’s Coming (User Research)

A successful dinner party is one that cares about the people who are there, and plans an experience that meets and exceeds their expectations. Are the people compatible socially? Do you know their interests and concerns? Have you learned if they have any allergies or dietary preferences, so you don’t give them something they find unpleasant or harmful?

Read more

Big Data Means More Than Volume

[NOTE: This guest post is by Peter Haase, Lead Architect for Research and Development, fluid Operations.]

Industry engineers waste a significant amount of time searching for data that they require for their core tasks. When informed about potential problems, diagnosis engineers at Siemens Energy Services, an integrated business unit which runs service centers for power plants, need to access several terabytes of time-stamped sensor data and several gigabytes of event data, including both raw and processed data. These engineers have to respond to about 1,000 service requests per center per year, and end up spending 80% of their time on data gathering alone. What makes this problem even worse is that their data grows at a rate of 30 gigabytes per day. Similarly, at Statoil Exploration, geology and geographic experts spend between 30 and 70% of their time looking for and assessing the quality of some 1,000 terabytes of relational data using diverse schemata and spread over 2,000 tables and multiple individual databases [1]. In such scenarios, it may take several days to formulate the queries that satisfy the information needs of the experts, typically involving the assistance of experienced IT experts who have been working with the database schemata for years.

Siemens and Statoil Exploration are hardly the only companies faced with time-wasting Big Data issues, but the root of these issues is not simply the “big” aspect of their data. The real challenge is finding a way to efficiently and effectively mine data for value and insight, regardless of its volume.

Read more