Instagram. Tumblr. Pinterest. The web in 2012 is a tremendously visual place, and yet “visual media is still as dumb today as it was 20 years ago,” says Todd Carter, founder and CEO of Tagasauris.

It doesn’t have to be that way, and Tagasauris is betting on changing it.

Why is dumb visual media a problem, especially at the enterprise level? Visual media, in its highly un-optimized state, hasn’t received the treatment companies give other forms of data, where adding meaning and structure can reshape business processes. A computer’s ability to assess image color, pattern and texture isn’t highly useful in the marketplace, and as a result visual media has “just been outside the realm of normal publishing processes, normal workflow processes,” Carter says. So the image collections that so many organizations – big media companies, photo agencies, and so on – rightly regard as treasure troves don’t yield anywhere near the economic value they could.

“The blue-sky what-if,” Carter says, “is what if you re-label the web and describe everything with semantics, and it’s all part of this cool big graph. You’d have this computational layer on top of visual media that would make it discoverable, and connected, and engaging in ways people don’t fully understand yet.”

But the first steps are being taken, with the help of companies like Tagasauris, which leverages both human and machine intelligence to build up the data layer that fills in meaning and context for visual media. (Temporal context, too, when you’re talking about videos.) The engine automates the discovery and generation of semantically linked metadata.

For example, it can look at an image and scavenge for embedded metadata about the asset, such as camera-recorded latitude and longitude that it can then reverse-map to named geographic places using GeoNames. All the tags the engine generates hook back into the graph, and those hooks form the basis for its reasoning operations, Carter says.
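To make that concrete, here is a minimal sketch of what such a step could look like – not Tagasauris’s actual code – written in Python against the Pillow and requests libraries, with GeoNames’ public findNearbyPlaceName web service doing the reverse mapping (the “demo” account name is a placeholder; GeoNames requires a registered username):

```python
# A rough sketch (not Tagasauris's actual pipeline) of the step described
# above: read embedded GPS EXIF data from a photo, then reverse-map the
# coordinates to a named place via GeoNames' findNearbyPlaceName service.
import requests
from PIL import Image
from PIL.ExifTags import GPSTAGS, TAGS

def _to_degrees(dms, ref):
    """Convert EXIF degree/minute/second rationals to a signed decimal."""
    deg = float(dms[0]) + float(dms[1]) / 60.0 + float(dms[2]) / 3600.0
    return -deg if ref in ("S", "W") else deg

def gps_coords(path):
    """Return (lat, lng) from a JPEG's EXIF block, or None if absent."""
    exif = Image.open(path)._getexif() or {}
    gps_tag = next(k for k, v in TAGS.items() if v == "GPSInfo")
    raw = exif.get(gps_tag)
    if not raw:
        return None
    gps = {GPSTAGS.get(k, k): v for k, v in raw.items()}
    return (_to_degrees(gps["GPSLatitude"], gps["GPSLatitudeRef"]),
            _to_degrees(gps["GPSLongitude"], gps["GPSLongitudeRef"]))

def nearest_place(lat, lng, username="demo"):  # "demo" is a placeholder
    """Ask GeoNames for the closest named place to a coordinate pair."""
    resp = requests.get("http://api.geonames.org/findNearbyPlaceNameJSON",
                        params={"lat": lat, "lng": lng, "username": username},
                        timeout=10)
    places = resp.json().get("geonames", [])
    return places[0]["name"] if places else None
```

A place name recovered this way isn’t just a string; it becomes a hook into the graph.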

“Once we know that something is New York City, we know it is the Big Apple, the City That Never Sleeps, and so on.” Or, if an image is tagged as having, say, a horse in it, Tagasauris knows that a horse is an animal, and with that its entire taxonomic hierarchy – its adaptations, its behaviors. Humans can check the output and train or re-train the machines so that accuracy improves, with the crowd’s help via micro-outsourcing services like Amazon’s Mechanical Turk. Humans and machines “work together cooperatively to do what neither can do in isolation,” he says.
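Here is a toy illustration of those two inferences – nowhere near Tagasauris’s actual engine – built with the rdflib library and a hypothetical example.org namespace: alias lookup for an entity, and a SPARQL property path that walks a taxonomy transitively:

```python
# A toy graph (hypothetical example.org namespace, not Tagasauris's engine)
# showing nickname lookup for an entity and transitive traversal of a
# taxonomic hierarchy.
from rdflib import Graph, Literal, Namespace
from rdflib.namespace import RDFS, SKOS

EX = Namespace("http://example.org/")
g = Graph()

# Once something is New York City, its aliases come along for free.
nyc = EX.New_York_City
g.add((nyc, SKOS.altLabel, Literal("the Big Apple")))
g.add((nyc, SKOS.altLabel, Literal("the City That Never Sleeps")))

# A horse is an equine, an equine is a mammal, a mammal is an animal.
for child, parent in [(EX.Horse, EX.Equine),
                      (EX.Equine, EX.Mammal),
                      (EX.Mammal, EX.Animal)]:
    g.add((child, RDFS.subClassOf, parent))

print([str(o) for o in g.objects(nyc, SKOS.altLabel)])

# A SPARQL property path (subClassOf+) walks the whole hierarchy at once.
q = """PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#>
       SELECT ?ancestor WHERE { ?cls rdfs:subClassOf+ ?ancestor }"""
for row in g.query(q, initBindings={"cls": EX.Horse}):
    print(row.ancestor)
```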

An example of how companies are dipping their toes into these semantic image waters is a deployment Tagasauris did with Magnum Photos. If the editors and others who use the photographic cooperative’s online service can’t find what they want quickly, they move on to another option. That was the risk at Magnum, which depended on internal staff to manually tag images with keywords – and whose output was far outstripped by the volume of incoming submissions. Tagasauris came in a couple of years ago to help by re-labeling the archive and bringing assets without metadata – about a quarter-million of Magnum’s 500,000 digital assets – up to industry standards. Things went so well that the engagement expanded to tagging all of Magnum’s new production images the same way, Carter says.

Since the start, the number of Magnum images in Google’s search index has risen to more than 1.8 million, Carter says, and search traffic from unique visitors has increased by 5,400 percent. Magnum’s digital asset management system is connected to Tagasauris at the API level, and every 15 minutes image data is pulled in or pushed back after running through an annotation workflow. “This lets them respond to business opportunities in totally new ways,” says Carter. That includes leveraging crowdsourcing for special projects, like one to find all images with non-identifiable people, so Magnum could offer photos that could be licensed without model releases. “We had a whole crowd process where thousands of people worked on that and did it in 48 hours,” he says.
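The article doesn’t detail the integration, but schematically that kind of 15-minute sync loop might look like the following sketch, with entirely hypothetical endpoint paths and field names standing in for the real DAM and annotation APIs:

```python
# A schematic sketch of an API-level, 15-minute sync loop like the one
# described above. All endpoints and fields here are hypothetical.
import time
import requests

DAM_API = "https://dam.example.com/api"      # hypothetical DAM endpoint
TAG_API = "https://tagging.example.com/api"  # hypothetical annotation service

def sync_once():
    # Pull images the DAM has flagged as needing annotation...
    pending = requests.get(f"{DAM_API}/images",
                           params={"status": "untagged"}, timeout=30).json()
    for image in pending:
        requests.post(f"{TAG_API}/jobs",
                      json={"asset_id": image["id"], "url": image["url"]},
                      timeout=30)
    # ...and push back any annotations the workflow has finished.
    done = requests.get(f"{TAG_API}/jobs",
                        params={"status": "complete"}, timeout=30).json()
    for job in done:
        requests.patch(f"{DAM_API}/images/{job['asset_id']}",
                       json={"keywords": job["tags"]}, timeout=30)

while True:
    sync_once()
    time.sleep(15 * 60)  # the article describes a 15-minute cadence
```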

More recently, Tagasauris has embarked on a project with Tribune Media Services (TMS) around its syndicated electronic program guides, which provide information about shows, movies and other content. TMS came to the conclusion, Carter says, “that their customers to whom they syndicate content [like TiVo] want rich interactive visual experiences. They are starting to think of the future of TV – over-the-top TV or second-screen TV. The electronic program guide should be the entry point, and somehow you should be able to derive recommendations from it and discover related shows and content,” he says. “So at Tribune Media Services it’s about tying all movies, TV shows and personalities in those electronic program guides back into the Knowledge Graph to make TV more about discovery, connection and engagement.”

Today Tagasauris is building out the ability for TMS to manage and maintain all the keys to authoritative sources about, say, an actor in a show featured in the guide – the URI for his Wikipedia page or his Twitter account, for example. The idea going forward also involves identifying every place that actor appears within a particular video and linking to those appearances. “What you really create with this data layer is a switchboard for content,” Carter says. “Tribune Media Services now wants to enrich their electronic program guide and our interest is in building the data layer and creating some apps that let people start to access that data layer.”
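One record in such a “switchboard” might look something like the sketch below; the field names and values are purely illustrative, not TMS’s actual schema:

```python
# An illustrative "switchboard" record: one entity, many keys into
# authoritative sources. All names and values here are hypothetical.
actor_record = {
    "name": "Example Actor",
    "links": {
        "wikipedia": "https://en.wikipedia.org/wiki/Example_Actor",
        "twitter":   "https://twitter.com/example_actor",
        "dbpedia":   "http://dbpedia.org/resource/Example_Actor",
    },
    # Going forward: where the entity appears inside a specific video,
    # so a guide entry can deep-link to those moments.
    "appearances": [
        {"video_id": "show-s01e03", "start_sec": 132.0, "end_sec": 171.5},
    ],
}
```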

This, Carter says, “is a brave new world. We’re still at the point in our trajectory where we are learning every day.”