Suzanne Kattau of Silicon Angle reports, “IBM and the United Services Automobile Association (USAA), a financial services provider for the military community, today announced they have teamed up to offer IBM’s Watson Engagement Advisor in a pilot program to assist USAA members. USAA provides insurance, banking, investments, retirement products and advice to 10.4 million current and former members of the U.S. military and their families. Named after IBM founder Thomas J. Watson, IBM Watson uses natural language processing and analytics, and can process information similar to the way people think. This helps organizations to quickly analyze, understand and respond to vast amounts of Big Data. IBM’s Watson Engagement Advisor analyzed USAA’s business data and now understands more than 3,000 documents on topics exclusive to military transitions.” Read more
Digital Reasoning’s Synthesys machine learning platform (which The Semantic Web Blog initially covered here) should see its Version 3.9 release this summer. The update builds on the 3.8 release, which, with its Glance user interface, delivered the discovery and investigative capabilities that help information analysts in finance, intelligence, and other compliance- and security-sensitive sectors react to findings in user profiles of interest and their associated relationships, activities and risks. Version 3.9 takes on the proactive part of the equation — early risk detection — via its Scout user interface.
Last year, the company homed in on compliance use cases ranging from insider trading to money laundering with Version 3.7 of Synthesys (covered here). There, the technology for discovering the meaning in unstructured data at scale, highlighting important entities in context, was applied to email communications for organizations such as financial institutions that have to be on the lookout for conversations that cross compliance boundaries.
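To make the general idea concrete (this is a toy illustration, not Digital Reasoning's actual technology), a compliance scan over email text boils down to surfacing suspect language together with its surrounding context so an analyst can judge it. The watch phrases below are hypothetical examples:

```python
# Toy sketch: flag email text that may cross compliance boundaries,
# returning each suspect phrase with a window of surrounding context.
import re

# Hypothetical watch list; real systems model meaning, not fixed strings.
WATCH_PHRASES = ["material nonpublic", "keep this between us", "off the books"]

def flag_email(text, window=30):
    """Return (phrase, surrounding context) pairs for suspect language."""
    hits = []
    lowered = text.lower()
    for phrase in WATCH_PHRASES:
        for m in re.finditer(re.escape(phrase), lowered):
            start, end = max(0, m.start() - window), m.end() + window
            hits.append((phrase, text[start:end]))
    return hits

email = "Let's keep this between us until the merger announcement."
for phrase, context in flag_email(email):
    print(phrase, "->", context)
```

A real platform replaces the fixed phrase list with entity extraction and semantic models, but the output shape — a finding plus its context — is the same thing analysts review.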
AlphaSense’s Advanced Linguistics Search Engine Could Buy Back Time For Financial Analysts To Do More In-Depth Research
When Raj Neervannan, CTO and co-founder of financial search engine company AlphaSense, thinks about search, he thinks about it “as a killer app that is only growing… People want answers, not noise. They want to ask more intelligent questions and get to the next level of computer-aided intelligence.”
For AlphaSense’s customers – analysts at large investment firms and banks, in other industries, and at one-person shops – that means search needs to free them from ferreting through piles of research documents for the nuggets of information they really need. Neervannan knows the pain of trying to interpret a CEO’s commentary to understand what he or she really meant when saying, for example, that inventory turns were going down. (Jack Kokko, a former analyst at Morgan Stanley, is AlphaSense’s other co-founder.)
“You are essentially digging through sets of documents [using keyword search], finding locations of terms, pulling them in piece by piece and constructing a case as to what the company’s inventory turn was really like – what other companies’ similar information was, how that matches up. You have to do quantitative analysis and benchmarks, and it can take weeks,” he says.
Marty Loughlin of Wall Street & Technology recently noted that in this era of “massive business and IT transformation,” organizations in the financial industry “will need to change how they track, manage, and consume data. For many organizations, this data is not easily accessible — it is distributed across the organization, often trapped in local business units, applications, data warehouses, spreadsheets, and documents. Traditional technologies are struggling to address this challenge and many believe a new approach is required. Some of the new big-data solutions do help. They are good at liberating and colocating data. However, they often struggle to make it usable. Creating a ‘data lake’ where rigid structure is not required can result in yet another silo of unusable data where context, meaning, and sources are lost. Many organizations are turning to semantic technology for the answer.” Read more
A new report from the Securities Technology Analysis Center (STAC), Big Data Cases in Banking and Securities, looks to understand big data challenges specific to banking by studying 16 projects at 10 of the top global investment and retail banks.
According to the report, about half the cases involved a petabyte or more of data. That includes both natural language text and highly structured formats that themselves presented a great deal of variety (such as different departments using the same field for a different purpose, or for the same purpose but with a different vocabulary) and therefore a challenge for integration in some cases. The analytic complexity of the workloads studied, the Intel-sponsored report notes, covered everything from basic transformations at the low end to machine learning at the high end.
Larry Hardesty of the MIT News Office reports, “By now, most people feel comfortable conducting financial transactions on the Web. The cryptographic schemes that protect online banking and credit card purchases have proven their reliability over decades. As more of our data moves online, a more pressing concern may be its inadvertent misuse by people authorized to access it. Every month seems to bring another story of private information accidentally leaked by governmental agencies or vendors of digital products or services. At the same time, tighter restrictions on access could undermine the whole point of sharing data. Coordination across agencies and providers could be the key to quality medical care; you may want your family to be able to share the pictures you post on a social-networking site.” Read more
The Aite Group, which provides research and consulting services to the international financial services market, spends its fair share of time exploring the data and analytics challenges the industry faces. Senior analyst Virginie O’Shea commented on many of them during a webinar this week sponsored by enterprise NoSQL vendor MarkLogic.
Dealing with multiple data feeds from a variety of systems; feeding information to hundreds of end users with different priorities about what they need to see and how they need to see it; the lack of a common internal taxonomy across the organization that would enable a single identifier for particular data items; the toll that ETL, cleansing, and reconciliation can take on agile data delivery; and the limitations in cross-referencing and linking instruments and data to other data, which exact a price on data governance and quality – they all factor into the picture she sketched out.
Jake Thomases of Waters Technology reports that AlphaSense has won the Best Sell-Side Analytics Product award at the Sell-Side Technology Awards. In his profile on the company, Thomases writes, “Searching through endless regulatory filings, company presentations, earnings call transcripts, news releases, and other research is an interminable and eye-glazing process. Electronifying those documents has allowed analysts to perform keyword searches, although they still had to search each document for every keyword variation individually.” Read more
Amir Halfon of Marklogic recently discussed the ways that semantic technologies can create value in the financial sector, among other industries. One such way is through data provenance: “Due to the increased focus on data governance and regulatory compliance in recent years, there’s a growing need to capture the provenance and lineage of data as it goes through its various transformation and changes throughout its lifecycle. Semantic triples provide an excellent mechanism for capturing this information right along with the data it describes. A record representing a trade for instance, can be ‘decorated’ with information about the source of the different elements within it (e.g.: Cash Flow -> wasAttributedTo -> System 123). And this information can be continuously updated as the trade record changes over time, again without the constraints of a schema, which would have made this impossible.” Read more
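Halfon's trade-record example can be sketched in a few lines. The following is a minimal, schema-less triple store in plain Python, using predicate names in the spirit of the W3C PROV vocabulary (`prov:wasAttributedTo`, `prov:wasDerivedFrom`); the trade and system identifiers are hypothetical:

```python
# Minimal sketch: capturing data lineage as subject-predicate-object triples.
# Because the store is schema-less, new provenance facts can be added at any
# time without a schema migration.
triples = set()

def decorate(subject, predicate, obj):
    """Attach a provenance triple to the store."""
    triples.add((subject, predicate, obj))

# Annotate elements of a trade record with the systems they came from.
decorate("trade:42/cashFlow", "prov:wasAttributedTo", "system:123")
decorate("trade:42/counterparty", "prov:wasAttributedTo", "system:456")

# As the record changes over time, lineage facts are simply appended.
decorate("trade:42/cashFlow", "prov:wasDerivedFrom", "trade:42/v1/cashFlow")

def sources_of(subject):
    """Query: which systems contributed this element?"""
    return {o for s, p, o in triples
            if s == subject and p == "prov:wasAttributedTo"}

print(sources_of("trade:42/cashFlow"))
```

A production system would use an RDF store and real URIs, but the point Halfon makes survives even in this sketch: the provenance rides along with the data itself, and no fixed schema constrains what can be recorded.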
Thinknum is a startup with a mission: disrupting financial analysis.
In his work as a quantitative strategist at Goldman Sachs, Thinknum co-founder Gregory Ugwi saw firsthand the trials and tribulations financial analysts went through to digest companies’ financial reports and then build their own research reports about their expectations for future performance based on past numbers. The U.S. SEC’s mandate that companies disclose their financial data using XBRL (eXtensible Business Reporting Language) was supposed to help them, as well as investors of all stripes and sizes that want to better understand what’s going on at the companies they’re interested in.
“The SEC has mandated that all companies have to release their numbers in a machine-readable format, and that’s XBRL (eXtensible Business Reporting Language),” says Ugwi. The positive side of that is that anyone can now get the stats on companies from Google to Wal-Mart, but the downside is that by and large, they can’t do it in a user-friendly way.
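Because XBRL instances are XML, the machine-readable facts Ugwi describes can be pulled out with standard tooling. Below is a minimal sketch using Python's standard library; the instance snippet is simplified and the revenue figure is hypothetical, whereas real filings carry full us-gaap taxonomies and much richer contexts:

```python
# Minimal sketch: extracting a fact from a simplified XBRL instance.
import xml.etree.ElementTree as ET

INSTANCE = """<xbrl xmlns="http://www.xbrl.org/2003/instance"
      xmlns:us-gaap="http://fasb.org/us-gaap/2013-01-31">
  <us-gaap:Revenues contextRef="FY2013" unitRef="USD">59825000000</us-gaap:Revenues>
</xbrl>"""

root = ET.fromstring(INSTANCE)
ns = {"us-gaap": "http://fasb.org/us-gaap/2013-01-31"}

# Each fact element names the concept; its attributes give reporting
# period (contextRef) and currency (unitRef).
for fact in root.findall("us-gaap:Revenues", ns):
    print(fact.get("contextRef"), fact.get("unitRef"), int(fact.text))
```

That the parsing itself is this easy underlines Ugwi's point: the data is accessible in principle, but turning thousands of such raw facts into something user-friendly is where the real work lies.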