The Need for Big Data Expertise is Only Going to Grow

Big data is poised to become a key basis of competition, underpinning new waves of productivity growth, innovation and consumer surplus. A report by McKinsey identifies five domains that can reap huge benefits by harnessing big data: healthcare and retail in the U.S., the European public sector, and manufacturing and personal data industries around the world.

There’s a problem, though: Gartner says that fewer than 30 percent of the world’s companies are ready to manage big data. By 2018, the U.S. alone could face a shortage of 140,000 to 190,000 professionals with the deep analytical skills needed, as well as 1.5 million managers and analysts with the expertise to use the analysis to make effective decisions. In fact, the need for talent has become so dire that the Heritage Provider Network, a California healthcare firm, is offering a $3 million prize to anyone who can create an algorithm to predict when people are likely to be sent to the hospital.

The skills needed here are pattern-based analytics, knowledge discovery, social collaboration, advanced business insights and innovation, say Gartner bloggers John Roberts and Lily Mok. They believe the talent gap will be filled through business partnerships with academic communities, though some companies are attempting to build an inventory of experts by cross-training existing workers.

If you want to get a jump on all this, IBM is offering 1,200 free bootcamps. You might uncover additional resources for free education and training by joining one of the 100 big data meetup groups around the globe.

Photo: Wikipedia

Responses to “The Need for Big Data Expertise is Only Going to Grow”

    • I think there are multiple views of “Big Data”. First, many see that “Big Data” = “Big Money”. Second, that “Big Data” will solve all the business world’s ills – that it is the ultimate panacea, or, from an IBM standpoint, “pixie dust.”

      For those of us who have been wrestling with what is now known as “Big Data” for many years, it means being able to ingest and process very large volumes of data – so large that traditional databases cannot contain it – and to use analytical/statistical techniques to derive information, in near real time, on which you can base decisions.

      The poster child for “Big Data” is Hadoop and MapReduce, along with their associated complement of tools. Traditional databases DO NOT factor into the overall problem, which is why you may hear a lot about NoSQL (I suggest you do your own search).

      The problem set of “Big Data” comes out of the High Performance Technical Computing space, applied to more IT- and business-related problem sets.

      So, have I been dealing with “Big Data”? Yes – since before it was called “Big Data.” Funnily enough, I first used these HPTC techniques at LLNL in the early ’90s on a project called Fusions. It was to simulate nuclear explosion experiments… so in a sense I used “Big Data” techniques to simulate “Big Explosions.”

      Later in the ’90s I used these same techniques for data mining the Human Genome. More recently, over the past few years, these same techniques have been used for predictive analytics on people and companies. Today, with enough information, we can actually predict (with a certain degree of confidence) certain types of human behavior – like purchasing habits, or what certain companies will do – based on statistical models and what-if scenarios.

      We are only scratching the surface of “Big Data” today, and only because folks can see “Big Money.” Some of my own experimentation has shown that information derived from “Big Data” can also be used to change human behavior (from a purchasing standpoint) – and this is where some see the “Big Money.”

      I hope that helps…
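The MapReduce pattern the commenter describes – map over records, group the results by key, then reduce each group – can be sketched in a few lines of plain Python. This is a toy word count over a small hypothetical in-memory corpus, not a Hadoop job; on a real cluster the same three phases run distributed across many machines:

```python
from collections import defaultdict
from itertools import chain

# Hypothetical mini corpus standing in for a dataset too large for one machine.
documents = [
    "big data needs big tools",
    "map and reduce split the work",
    "big tools process big data",
]

def map_phase(doc):
    """Map: emit a (word, 1) pair for every word in one document."""
    return [(word, 1) for word in doc.split()]

def shuffle_phase(pairs):
    """Shuffle: group all emitted values by their key (the word)."""
    grouped = defaultdict(list)
    for key, value in pairs:
        grouped[key].append(value)
    return grouped

def reduce_phase(grouped):
    """Reduce: collapse each word's list of counts into a total."""
    return {word: sum(counts) for word, counts in grouped.items()}

mapped = chain.from_iterable(map_phase(d) for d in documents)
counts = reduce_phase(shuffle_phase(mapped))
print(counts["big"])  # 4
```

The shuffle step is the part that makes the pattern scale: because mappers and reducers only ever see independent keys, each phase can be fanned out across as many workers as the data demands.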