Financial Services

AlphaSense’s Advanced Linguistics Search Engine Could Buy Back Time For Financial Analysts To Do More In-Depth Research

When Raj Neervannan, CTO and co-founder of financial search engine company AlphaSense, thinks about search, he thinks about it “as a killer app that is only growing…. People want answers, not noise. They want to ask more intelligent questions and get to the next level of computer-aided intelligence.”

For AlphaSense’s customers – analysts at large investment firms, banks, and other industries, as well as one-person shops – that means search needs to get them out of ferreting through piles of research documents for the nuggets of information they really need. A former financial industry analyst himself, Neervannan knows the pain of trying to interpret a CEO’s commentary to understand what he or she was really saying, for instance when noting that the numbers were going down in a discussion of inventory turns. (Jack Kokko, a former financial industry analyst at Morgan Stanley, is AlphaSense’s other co-founder.)

“You are essentially digging through sets of documents [using keyword search], finding locations of terms, pulling them in piece by piece and constructing a case as to what the company’s inventory turn was really like – what other companies’ similar information was, how that matches up. You have to do quantitative analysis and benchmarks, and it can take weeks,” he says.

Read more

How Semantic Technology is Improving the Financial Service Industry

Marty Loughlin of Wall Street & Technology recently noted that in this era of “massive business and IT transformation,” organizations in the financial industry “will need to change how they track, manage, and consume data. For many organizations, this data is not easily accessible — it is distributed across the organization, often trapped in local business units, applications, data warehouses, spreadsheets, and documents. Traditional technologies are struggling to address this challenge and many believe a new approach is required. Some of the new big-data solutions do help. They are good at liberating and colocating data. However, they often struggle to make it usable. Creating a ‘data lake’ where rigid structure is not required can result in yet another silo of unusable data where context, meaning, and sources are lost. Many organizations are turning to semantic technology for the answer.” Read more

Big Data Challenges In Banking And Securities

Photo courtesy: Johan Hansson, https://www.flickr.com/photos/plastanka/

A new report from the Securities Technology Analysis Center (STAC), “Big Data Cases in Banking and Securities,” examines big data challenges specific to banking by studying 16 projects at 10 of the top global investment and retail banks.

According to the report, about half the cases involved a petabyte or more of data. That includes both natural language text and highly structured formats that themselves presented a great deal of variety (such as different departments using the same field for different purposes, or for the same purpose but with a different vocabulary) and therefore a challenge for integration in some cases. The analytic complexity of the workloads studied, the Intel-sponsored report notes, covered everything from basic transformations at the low end to machine learning at the high end.

Read more

HTTPA Will Let You Track How Your Private Data is Used

Larry Hardesty of the MIT News Office reports, “By now, most people feel comfortable conducting financial transactions on the Web. The cryptographic schemes that protect online banking and credit card purchases have proven their reliability over decades. As more of our data moves online, a more pressing concern may be its inadvertent misuse by people authorized to access it. Every month seems to bring another story of private information accidentally leaked by governmental agencies or vendors of digital products or services. At the same time, tighter restrictions on access could undermine the whole point of sharing data. Coordination across agencies and providers could be the key to quality medical care; you may want your family to be able to share the pictures you post on a social-networking site.” Read more

Financial Execs Worry About Data Lineage; Triple Stores Can Calm Fears

Photo courtesy: Flickr/ FilterForge

The Aite Group, which provides research and consulting services to the international financial services market, spends its fair share of time exploring the data and analytics challenges the industry faces. Senior analyst Virginie O’Shea commented on many of them during a webinar this week sponsored by enterprise NoSQL vendor MarkLogic.

Dealing with multiple data feeds from a variety of systems; feeding information to hundreds of end users with different priorities about what they need to see and how they need to see it; the lack of a common internal taxonomy that would enable a single identifier for a given data item across the organization; the toll that ETL, cleansing, and reconciliation take on agile data delivery; and the limits on cross-referencing and linking instruments to other data, which exact a price on data governance and quality – they all factor into the picture she sketched out.
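The “single identifier” problem O’Shea describes can be pictured with a small sketch: a cross-reference table that maps each vendor’s or system’s code for an instrument onto one internal ID, so feeds can be reconciled. The vendor names and codes below are invented for illustration, not any firm’s actual taxonomy.

```python
# Illustrative cross-reference table: two vendors' codes for the same
# instrument resolve to a single internal identifier.
XREF = {
    ("vendorA", "IBM.N"): "INSTR-000123",          # exchange-style ticker
    ("vendorB", "US4592001014"): "INSTR-000123",   # ISIN-style code, same instrument
}

def internal_id(vendor, code):
    """Resolve a (vendor, code) pair to the firm's internal instrument ID."""
    try:
        return XREF[(vendor, code)]
    except KeyError:
        raise KeyError(f"unmapped identifier {code!r} from vendor {vendor!r}")

# Two different feeds now reconcile to the same record:
assert internal_id("vendorA", "IBM.N") == internal_id("vendorB", "US4592001014")
```

Without such a mapping, each downstream system cross-references instruments its own way, which is exactly the governance and data-quality cost O’Shea points to.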

Read more

AlphaSense Wins Best Sell-Side Analytics Product 2014

Jake Thomases of Waters Technology reports that AlphaSense has won the Best Sell-Side Analytics Product award at the Sell-Side Technology Awards. In his profile on the company, Thomases writes, “Searching through endless regulatory filings, company presentations, earnings call transcripts, news releases, and other research is an interminable and eye-glazing process. Electronifying those documents has allowed analysts to perform keyword searches, although they still had to search each document for every keyword variation individually.” Read more
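The pain Thomases describes – running one search per keyword variation – is what a variant-expanding search removes: expand the query term once, then scan each document a single time. A minimal sketch, with an invented variant table (not AlphaSense’s actual linguistics engine):

```python
# Hypothetical variant table: each search term maps to the phrasings
# an analyst would otherwise have to query one by one.
VARIATIONS = {
    "inventory turns": ["inventory turns", "inventory turnover", "stock rotation"],
}

def search(documents, term):
    """Return IDs of documents matching the term or any of its variants."""
    variants = VARIATIONS.get(term, [term])
    hits = []
    for doc_id, text in documents.items():
        lowered = text.lower()
        if any(v in lowered for v in variants):
            hits.append(doc_id)
    return hits

docs = {
    "10-K": "Management noted inventory turnover improved in Q3.",
    "call": "No comment on margins.",
}
print(search(docs, "inventory turns"))  # -> ['10-K']
```

One query for “inventory turns” now finds the filing that only says “inventory turnover” – the single-pass behavior the article contrasts with per-variation keyword search.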

Making the Case for Semantic Tech in the Financial Sector

Wall Street

Amir Halfon of MarkLogic recently discussed the ways that semantic technologies can create value in the financial sector, among other industries. One such way is through data provenance: “Due to the increased focus on data governance and regulatory compliance in recent years, there’s a growing need to capture the provenance and lineage of data as it goes through its various transformation and changes throughout its lifecycle. Semantic triples provide an excellent mechanism for capturing this information right along with the data it describes. A record representing a trade for instance, can be ‘decorated’ with information about the source of the different elements within it (e.g.: Cash Flow -> wasAttributedTo -> System 123). And this information can be continuously updated as the trade record changes over time, again without the constraints of a schema, which would have made this impossible.” Read more
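Halfon’s schema-free “decoration” can be sketched in a few lines. Here triples are plain Python tuples, the predicates loosely follow the W3C PROV vocabulary, and the trade, system, and feed names are invented for illustration:

```python
# Provenance triples stored alongside the data they describe.
# Each triple is (subject, predicate, object).
trade_triples = [
    ("trade:42", "hasCashFlow", "cash flow element"),
    ("trade:42/cashFlow", "prov:wasAttributedTo", "system:System123"),
]

def provenance_of(triples, subject):
    """Return all provenance assertions (PROV-prefixed predicates) about a subject."""
    return [(p, o) for s, p, o in triples if s == subject and p.startswith("prov:")]

# As the trade record changes, lineage is appended -- no schema migration needed:
trade_triples.append(("trade:42/cashFlow", "prov:wasDerivedFrom", "feed:EODFeed"))

print(provenance_of(trade_triples, "trade:42/cashFlow"))
```

The point of the triple model is the last line: adding a new kind of lineage fact is just another triple, which is the schema flexibility Halfon contrasts with relational designs.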

Thinknum Sees Financial Analysis In A New Light

Thinknum is a startup with a mission: disrupting financial analysis.

In his work as a quantitative strategist at Goldman Sachs, Thinknum co-founder Gregory Ugwi saw firsthand the trials and tribulations financial analysts went through to digest companies’ financial reports and then build their own research reports about their expectations for future performance based on past numbers. The U.S. SEC’s mandate that companies disclose their financial data using XBRL (eXtensible Business Reporting Language) was supposed to help them, as well as investors of all stripes and sizes who want to better understand what’s going on at the companies they’re interested in.

“The SEC has mandated that all companies have to release their numbers in a machine-readable format, and that’s XBRL (eXtensible Business Reporting Language),” says Ugwi. The positive side is that anyone can now get the stats on companies from Google to Wal-Mart; the downside is that, by and large, they can’t do so in a user-friendly way.
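Because XBRL is XML, even the standard library is enough to pull facts out of a filing fragment. A minimal sketch – the tag names, context reference, and figures below are made up for illustration, not taken from a real SEC filing:

```python
import xml.etree.ElementTree as ET

# An invented XBRL-style fragment: two facts tagged with a US-GAAP-like
# namespace, a reporting context, and a currency unit.
FRAGMENT = """
<xbrl xmlns:us-gaap="http://example.org/us-gaap">
  <us-gaap:Revenues contextRef="FY2023" unitRef="usd">5000000</us-gaap:Revenues>
  <us-gaap:NetIncomeLoss contextRef="FY2023" unitRef="usd">1200000</us-gaap:NetIncomeLoss>
</xbrl>
"""

def facts(xml_text):
    """Extract tagged facts as {concept_name: numeric_value}."""
    root = ET.fromstring(xml_text)
    out = {}
    for el in root:
        # ElementTree prefixes tags with "{namespace}"; keep the local name.
        name = el.tag.split("}")[-1]
        out[name] = int(el.text)
    return out

print(facts(FRAGMENT))  # -> {'Revenues': 5000000, 'NetIncomeLoss': 1200000}
```

Getting the numbers out is the easy part; Ugwi’s point is that turning them into a user-friendly analysis tool is where the work remains.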

Read more

The Office of Financial Research To Look Hard At FIBO For Financial Instrument Reference Database

Image Courtesy: Flickr/ .reid

Ontologies are getting a thumbs up to serve as the basis for the Office of Financial Research’s Instruments database. Last week, the Data & Technology Subcommittee of the OFR Financial Research Advisory Committee (FRAC) recommended that the OFR “adopt the goal of developing and validating a comprehensive ontology for financial instruments as part of its overall effort to meet its statutory requirement to ‘prepare and publish’ a financial instrument reference database.”

The Instruments database will define the official meaning of financial instruments for the financial system — derivatives, securities, and so on. The recommendation by the subcommittee is that the OFR conduct its own evaluation of private sector initiatives in this area, including the Financial Industry Business Ontology (FIBO), to assess whether and how an ontology can support transparency and financial stability analysis.

FIBO, which The Semantic Web Blog discussed in detail most recently here, is designed to improve visibility for the financial industry and the regulatory community by standardizing the language used to precisely define the terms, conditions, and characteristics of financial instruments; the legal and relationship structure of business entities; the content and time dimensions of market data; and more. The effort is spearheaded by the Object Management Group and the Enterprise Data Management (EDM) Council.

Read more

Text Analytics Can Cut A Broad Swath in Capital Markets

What’s next for the capital markets arena when it comes to unstructured content? According to research and consulting firm TABB Group, which specializes in the stock, bond, and money markets, it’s time to turn text analytics toward internally generated and disseminated unstructured data, which holds high value for customized intelligence.

In new research, “Inner Voices: Harvesting Text Analytics from Proprietary Data,” research analyst Valerie Bogard and senior analyst Paul Rowady argue that text analytics tools have more use cases than those first pursued. “Although ultra-low latency trading strategies were an early use case in this space, text analytics is no longer limited to just that,” Bogard said in an email to The Semantic Web Blog. “The use of machine readable news has been widely adopted and all major market data providers incorporate market moving news content into their feeds.”

Read more
