Turning Big Data Into Actionable Insights
Big Data is a powerful concept. In the hands of the right expert, it can be used to glean insights and draw conclusions that are often completely unexpected. In the hands of the average person, however, Big Data is dangerous: there are too many variables, too much volume, too much to process. Most of us lack the expertise to understand, manipulate, and extract valid insight from a big data set. In fact, without the right expertise, it is easy to draw inaccurate conclusions from accurate Big Data.
On the other hand, small data can seem limiting, like data with training wheels. But it is precisely those training wheels that make the data accessible to the non-expert.
We use small data to find specific insights and answer predetermined questions. The data-analysis expertise has been ‘baked’ into the selected slices of data to be shown, into the parameters that scope down the volume of data, and into the carefully chosen method of visually representing it. As a result, small data can be understood, interpreted, and acted upon without much mathematical or statistical expertise.
It allows the average person to observe trends in predetermined areas of inquiry, to spot anomalies, and to react to them. It is streamlined to inform day-to-day decisions.
As much as we love to talk and think about big data, most of the data we consume is small. We rely on tools, processes, and data experts to convert our big data into small data so that we can understand it and use it to inform our decisions.
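As a minimal sketch of this big-to-small conversion, consider reducing a raw event stream to a small summary that answers one predetermined question. The event log and page names here are hypothetical, standing in for what would in practice be millions of continuously arriving records:

```python
from collections import Counter
from datetime import date

# Hypothetical "big" data: a stream of raw click events.
# In practice this would be millions of rows arriving continuously.
events = [
    {"day": date(2024, 1, 1), "page": "/home"},
    {"day": date(2024, 1, 1), "page": "/pricing"},
    {"day": date(2024, 1, 2), "page": "/home"},
    {"day": date(2024, 1, 2), "page": "/home"},
]

def daily_page_views(events, page):
    """Reduce the raw stream to a small, predetermined slice:
    views per day for a single page of interest."""
    return Counter(e["day"] for e in events if e["page"] == page)

# The "small data" answer to one predetermined question:
# how many /home views occurred on each day?
summary = daily_page_views(events, "/home")
```

The expertise lives in choosing the slice (one page), the parameter (per-day grouping), and the representation (a simple count), so the consumer of `summary` needs no statistical background to act on it.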
Let’s look at this big-to-small transition more closely. Three main characteristics tend to define data as either ‘big’ or ‘small’:
- The volume of data
- The variety of types of data
- The velocity at which it is accumulated / processed
In other words, big data is characterized by large amounts of diverse data elements arriving in the organization at high speed, whereas small data consists of smaller amounts of more homogeneous elements, arriving at a slower rate and in digestible chunks.
There is no hard limit on the amount of data that draws the line between small and big data. In fact, most collections of big data can contain a multitude of small data sets.
Instead of trying to find a hard size limit that distinguishes small from big data, the better question to ask is what kinds of insights we are after. That question does more to separate the two than any measure of volume.