Big Data: 4 Myths and 4 Methods to Improve It

The excitement over big data is beginning to settle. Technologies like Hadoop and the cloud have driven some incredible developments in the field, but a blind pursuit of ‘big’ is no longer the answer. A lot of money is still being invested in infrastructure to process and organize gigantic databases, yet much of the human and infrastructure cost of scaling up big data activity can be avoided for good, because the time has come to shift focus from ‘big data’ to ‘deep data’. It is about time we became more thoughtful and judicious about data collection: instead of chasing quantity and volume, we need to seek out quality and variety. That shift yields several long-term benefits.


Big Myths of Big Data

To understand why the transition from ‘big’ to ‘deep’ is essential, let us look into some misconceptions about big data:

  1. All data must be collected and preserved
  2. Better predictive models come from more data
  3. Storing more data doesn’t incur higher cost
  4. More data doesn’t mean higher computational costs

Now the real picture:

  1. The volume of data from web traffic and IoT has outgrown our ability to capture it all. Our approach therefore needs to be smarter: data must be triaged based on value, and some of it dropped at the point of ingestion.
  2. The same kind of example repeated a hundred times does not make a predictive model more accurate; it adds volume without adding information.
  3. The cost of storing more data does not end with the extra dollars per terabyte charged by a provider such as Amazon Web Services. It also includes the cost of handling multiple data sources simultaneously and the ‘virtual weight’ of every employee who must work with that data. These costs can exceed the storage and computation bills themselves.
  4. The computational resources demanded by AI algorithms can easily outgrow even an elastic cloud infrastructure. Cloud capacity scales roughly linearly with spend, but computational needs can grow exponentially with data size, especially when not managed with expertise.
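The triage idea in point 1 can be sketched as a simple ingestion filter. This is a toy illustration only: the `high_value` flag and the 5% keep rate are hypothetical stand-ins for whatever value rule a real pipeline would apply.

```python
import random

def triage(records, keep_prob_low_value=0.05, seed=0):
    """Triage records at ingestion: keep every high-value record,
    keep only a small random sample of low-value ones."""
    rng = random.Random(seed)
    kept = []
    for rec in records:
        if rec.get("high_value"):
            kept.append(rec)              # always retain high-value events
        elif rng.random() < keep_prob_low_value:
            kept.append(rec)              # downsample routine events
    return kept

# Simulated stream: 1 in 100 events is high-value.
stream = [{"id": i, "high_value": i % 100 == 0} for i in range(10_000)]
kept = triage(stream)
print(len(kept), "of", len(stream), "records retained")
```

The point is not the specific rule but where it runs: by deciding value at ingestion, the 95% of routine events never reach storage, so neither the per-terabyte bill nor the downstream ‘virtual weight’ ever accrues for them.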

When it comes to big data, people tend to believe that ‘more is better’.

Here are three main problems with that notion:

  1. Getting more of the same isn’t always useful: Variety in training examples matters greatly when building ML models, because the model is trying to learn concept boundaries. For example, if a model is learning to define a ‘retired worker’ from age and occupation, repeated examples of 35-year-old certified accountants do it little good, not least because none of these people are retired. It is far more useful to include examples at the concept boundary, such as workers around 60, to identify how retirement depends on age and occupation.
  2. Models suffer from noisy data: If newly added data contains errors, it only blurs the concept boundaries the model is trying to learn. Poor-quality data can actively diminish model accuracy.
  3. Big data takes away speed: Training a model on a terabyte of data usually takes about a thousand times longer than training the same model on a gigabyte, and after all that time the model might still fail. Since data science is largely about fast experimentation, it is smarter to fail fast and move forward. Instead of chasing obscure data from the far corners of a data lake, it is often better to build a model that is slightly less accurate but nimble and immediately valuable to the business.
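The ‘retired worker’ example in point 1 can be made concrete with a toy rule learner (all ages and counts are illustrative). For the rule ‘retired if age ≥ t’, the data only pins the threshold t down to the interval between the oldest non-retired and the youngest retired example; a hundred duplicates far from the boundary leave that interval wide, while two examples near age 60 shrink it.

```python
def threshold_interval(examples):
    """Given (age, retired) pairs, return the interval of thresholds t
    consistent with the data for the rule 'retired if age >= t'."""
    not_retired = [age for age, retired in examples if not retired]
    retired = [age for age, retired in examples if retired]
    return (max(not_retired), min(retired))  # any t in this range fits

# Many duplicates far from the boundary: huge ambiguity.
far = [(35, False)] * 100 + [(70, True)] * 100
print(threshold_interval(far))    # wide: anywhere from 35 to 70

# Adding just two examples near the boundary pins it down.
near = far + [(59, False), (61, True)]
print(threshold_interval(near))   # narrow: between 59 and 61
```

Two hundred duplicated examples left a 35-year-wide uncertainty; two well-chosen boundary examples reduced it to 2 years. That is the variety-over-volume argument in miniature.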

How to Improve:

There are a number of things that can be done to move towards a deep data approach:

  1. Compromise between accuracy and execution: Building the most accurate model isn’t always the end goal. Make the ROI expectations explicit and strike a balance between speed and accuracy.
  2. Use random samples for building models: It is advisable to work with small samples first and build the final model on the entire dataset only afterwards. With a sound random sampling procedure, small samples let you reliably estimate the accuracy the full model will achieve.
  3. Drop some data: It is natural to feel obliged to keep everything arriving from IoT devices, but dropping some (or a lot of) low-value data prevents it from muddying things up in later stages.
  4. Seek fresh data sources: Constantly search for new data opportunities. The large text, video, audio and image datasets that are commonplace today were nonexistent two decades ago, and they have enabled notable breakthroughs in AI.
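Point 2 rests on a standard statistical fact: a statistic computed on a simple random sample closely tracks the same statistic on the full dataset. A minimal sketch, where the one-million-row dataset and its 30% positive rate are made up for illustration:

```python
import random

random.seed(0)
# Hypothetical labeled dataset: 1,000,000 rows, ~30% positive class.
full = [1 if random.random() < 0.3 else 0 for _ in range(1_000_000)]

# A 1% simple random sample.
sample = random.sample(full, 10_000)

full_rate = sum(full) / len(full)        # computed over all rows
sample_rate = sum(sample) / len(sample)  # computed over the sample
print(f"full: {full_rate:.3f}  sample: {sample_rate:.3f}")
```

The two rates agree to within roughly half a percentage point, yet the sample is a hundred times cheaper to process. The same logic lets you benchmark a candidate model on a small random sample before committing compute to the full dataset.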

What gets better:

  • Everything runs faster
  • Infrastructure costs drop
  • More complicated problems can be tackled
  • Data scientists are happier!

Big data, together with its technological advances, has genuinely sharpened the decision-making process at many companies. But what is needed now is a deep data culture. To make the best of powerful tools like AI, we need to be clearer about our data needs.

For more trending news on big data, follow DexLab Analytics, the premier big data Hadoop institute in Delhi. Data science knowledge is becoming a necessary survival skill in our data-driven society. From the basics to advanced topics, learn everything through this excellent big data Hadoop training in Delhi.


Interested in a career as a Data Analyst?

To learn more about Data Analyst with Advanced excel course – Enrol Now.
To learn more about Data Analyst with R Course – Enrol Now.
To learn more about Big Data Course – Enrol Now.

To learn more about Machine Learning Using Python and Spark – Enrol Now.
To learn more about Data Analyst with SAS Course – Enrol Now.
To learn more about Data Analyst with Apache Spark Course – Enrol Now.
To learn more about Data Analyst with Market Risk Analytics and Modelling Course – Enrol Now.

January 11, 2019 6:00 pm


