If you’re not heavily involved in the data world, you may not have heard of semantic technology, but it might be time to give the category some attention. It’s one of those areas of tech that’s becoming more important as organizations of all kinds contend with streams of information that arrive in multiple structures (or with no structure at all) and move at speeds that border on mind-boggling.
If you follow the news, you can watch the technology’s spread through a variety of industries and products. Ford, for example, recently led a seed funding round for California startup Civil Maps, which converts raw 3D data into machine-readable maps to help fully autonomous vehicles navigate any road. And health IT experts say the day is coming when “data silos and lack of semantic interoperability will not be tolerated.” As Irene Polikoff, CEO of semantic data integration company TopQuadrant in Raleigh-Durham, N.C., told Dice Insights: “Pretty much every major bank has a group working in this area.”
But what is semantic technology? In Business Cloud News, Jarred McGinnis, UK managing consultant at Bulgaria-based developer Ontotext, defined semantic technologies as encoding “meaning into content and data to enable a computer system to possess human-like understanding and reasoning.” Polikoff describes it as “a standard for sharing data and metadata so it can be easily distributed using a standard protocol,” allowing you to “use the same data model to capture both data and metadata.”
Why is that important? It allows for significant improvements in the way data is categorized, stored and retrieved.
“Data has become so heterogeneous,” Polikoff observed. “You used to be able to design a data model and stick with it, but now it’s becoming so varied it needs more flexibility” to process and use effectively. With semantic technology, data and metadata can be stored in the same standard and space. Among other things, that means the data warehouse doesn’t have to be designed in advance.
“Call it schema-last,” Polikoff said. “Add data first, then structure it.” That’s a handy capability to have when your company struggles to understand disparate data sets, where they’re coming from and what they represent. “It allows identification of non-standardized data in a multitude of formats,” Polikoff explained.
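Polikoff’s “schema-last” idea can be sketched in a few lines of plain Python. The class and identifiers below (`TripleStore`, the `ex:` names) are illustrative assumptions rather than TopQuadrant’s implementation — real semantic stacks use RDF triple stores queried with SPARQL — but the principle she describes, data and metadata as triples in one store with structure added after the fact, looks roughly like this:

```python
# Minimal schema-last sketch: facts are (subject, predicate, object)
# triples, so data and metadata live in the same store and structure
# can be layered on later. All names here are illustrative.

class TripleStore:
    """Stores (subject, predicate, object) facts with no fixed schema."""
    def __init__(self):
        self.triples = set()

    def add(self, s, p, o):
        self.triples.add((s, p, o))

    def query(self, s=None, p=None, o=None):
        """Return triples matching the pattern (None = wildcard)."""
        return [t for t in self.triples
                if (s is None or t[0] == s)
                and (p is None or t[1] == p)
                and (o is None or t[2] == o)]

store = TripleStore()

# "Add data first" -- heterogeneous facts from different sources,
# with no agreed-upon model between them.
store.add("ex:acct-42", "balance", 1200)
store.add("ex:acct-42", "owner", "ex:alice")
store.add("ex:alice", "email", "alice@example.com")

# "...then structure it" -- schema/metadata statements are just more
# triples in the same store, added after the data arrived.
store.add("ex:acct-42", "type", "ex:Account")
store.add("ex:alice", "type", "ex:Person")
store.add("ex:Account", "label", "Bank account")

# Data and metadata are queried through the same mechanism.
people = store.query(p="type", o="ex:Person")
print(people)  # [('ex:alice', 'type', 'ex:Person')]
```

Because the store never commits to a table layout up front, non-standardized data in a multitude of formats can be loaded first and reconciled later — the property Polikoff contrasts with designing a data warehouse in advance.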
Many Opportunities, Not So Many People
As in so many other areas of tech, the demand for people with skills in semantic technology outstrips the number of tech pros who understand how to put it to use. Those people fill “a whole range” of roles, Polikoff said, including data architects and those who work with data models, as well as application architects and developers who know the relevant standards and languages and can build applications against semantically modeled data.
With so much activity taking place across so many industries, it doesn’t appear as if demand will let up any time soon. To help address the shortage, companies such as Polikoff’s TopQuadrant, Oracle, Ontotext (which has offices in New York) and Oakland-based Franz Inc. provide training programs; some colleges, and a handful of MOOCs, also offer classes.
As Luca Scagliarini notes in Networks Asia, “The opportunities presented by ‘making sense’ of our data and information come with new requirements for comprehension, context and connection.” In cognitive computing, information management, and data analytics, semantic technology is becoming an important part of the data-scientist toolkit.