Deep learning on large images: challenges and CNNs

Applying deep learning to large images is always a challenge, but convolution offers a solution. First, let's briefly understand where the challenge lies: one of the problems in computer vision is that the input becomes very big as we increase the image size.

OK. Consider the example of a basic image classification problem, e.g., cat detection.

Let's take a 64×64 input image of a cat and try to figure out whether it is a cat or not. The input will have 64 × 64 × 3 values (where 3 is the number of RGB channels), so the input feature vector x has 12,288 dimensions. That is not a very large number, but considering that a 64×64 image is very small, there are already a lot of input features to deal with.

Now say we take a 1000×1000 image, which is a decent size; it will have 1000 × 1000 × 3 (where 3 is the RGB channels) ≈ 3 million input features.

So if you feed this into a deep network, x will be 3 million-dimensional. If the first hidden layer has 1,000 hidden units, then for a fully connected network the weight matrix will have dimensions (1000, 3M), which means 3 billion parameters. That is very, very large: with that many parameters it is difficult to avoid overfitting, and the computational requirements to train 3 billion parameters are not feasible.
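A quick back-of-the-envelope check of the counts above (a minimal sketch; the image and layer sizes are just the ones from the example):

```python
def input_features(height, width, channels=3):
    # Number of raw input values in an RGB image.
    return height * width * channels

def fc_weight_count(n_inputs, n_hidden):
    # Weight matrix alone, biases excluded: one weight per (input, hidden) pair.
    return n_inputs * n_hidden

print(input_features(64, 64))          # 12288 features for a 64x64 image
print(input_features(1000, 1000))      # 3000000 features for a 1000x1000 image
print(fc_weight_count(input_features(1000, 1000), 1000))  # 3000000000 weights
```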

This is just for a 1000×1000 image, but in computer vision you don't want to be stuck using tiny images. Using bigger images, however, leads to huge input feature vectors and overfitting. This is where the convolution operation comes in: the basic building block of the Convolutional Neural Network (CNN).
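To see why convolution helps with the parameter blow-up, here is a small sketch: a convolutional layer's parameter count depends only on the filter size, the input channels, and the number of filters, not on the image's height and width. (The 3×3 filter size and 64-filter count below are illustrative choices, not from the post.)

```python
def conv_param_count(filter_size, in_channels, n_filters):
    # Each filter holds filter_size x filter_size x in_channels weights plus one bias.
    return (filter_size * filter_size * in_channels + 1) * n_filters

# 3x3 filters over an RGB input, 64 filters: the count is the same
# whether the image is 64x64 or 1000x1000.
print(conv_param_count(3, 3, 64))  # 1792 parameters
```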


More on CNN in the next post.


Inspiration & source of understanding: Andrew Ng, Deep Learning

Prognostic Analytics for Predictive Maintenance, a case study

First, let's try to understand the difference between prognostic analysis and predictive analysis. Predictive analysis tells you that something is going to fail in the future, whereas prognostic analysis tells you that something is going to fail in, say, the next few days/weeks/months. So there is always a time dimension in prognostic analysis. Prognostic analysis can help in planning things in advance, before the system actually fails, which saves resources and time. Let's elaborate further with a case study.

Case study: Let's take a case where we want to use prognostic analytics for predictive maintenance in IoT-based systems in large plants, e.g., aviation, oil & gas, or big manufacturing. Running a prognostic model can help find out which control system's performance is degrading by analyzing key sensor data from the past. This can give an early sign that the system may go down, and taking precautions at that point can result in big savings.

The control systems (including the sensor infrastructure) used in heavy industries generate tons of data continuously, and much of it is decades old: companies store the data, but it never gets used. So there is a huge opportunity for heavy manufacturers to use past data to get good insight into the different parts of a system. Technically, an ML team can build a pipeline that streams the data coming out of the control system, as it works, to a cloud like AWS/Google/Azure or a private cloud; ML models can then be run to check for abnormalities, and preventive maintenance can be planned.

For example, if you run a power plant and some crucial part fails, someone from the supplier has to rush over, take a plane, deliver the part, and install it, which costs a lot of money. But if we do prognostic analysis, we can get an early sign of which parts might fail and procure them in advance. To summarise, we can push a system's historical and new data to the cloud and run preventive/prognostic analysis, which can save money, resources, and time.
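As a toy illustration of the "check for abnormalities" step in such a pipeline, here is a minimal rolling z-score detector over a stream of sensor readings. This is a sketch under my own assumptions, not the model an actual plant would run; the window size and threshold are arbitrary illustrative choices.

```python
from collections import deque
from statistics import mean, stdev

def detect_anomalies(readings, window=20, threshold=3.0):
    """Flag readings more than `threshold` standard deviations away
    from the rolling mean of the previous `window` readings."""
    history = deque(maxlen=window)
    anomalies = []
    for i, value in enumerate(readings):
        if len(history) == window:
            mu, sigma = mean(history), stdev(history)
            if sigma > 0 and abs(value - mu) / sigma > threshold:
                anomalies.append(i)
        history.append(value)
    return anomalies

# A steady vibration signal with one sudden spike at index 30:
signal = [10.0 + 0.1 * (i % 5) for i in range(50)]
signal[30] = 25.0
print(detect_anomalies(signal))  # [30]
```

In a real deployment the readings would arrive from the streaming pipeline rather than a list, and the flagged indices would feed a maintenance-planning system instead of being printed.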

References/Thanks: Dr. Harpreet Singh. I heard him on a podcast and was highly impressed by his vision of data science and its different use cases.

Debugging: a crucial skill, but very rarely taught

Listening to the Software Engineering Radio podcast, I heard Diomidis Spinellis mention how important debugging is in software development, yet we still don't teach this skill in much depth in our universities. I, and I believe every other programmer, will agree that debugging tools are a key arsenal for fixing bugs and even for understanding a system.

Whether you use modern tools or just basic print/printf statements doesn't matter. Students should learn these key skills, and professors should emphasize teaching them. And not only in universities: even in an industry setup, when a new developer joins, there should be good exposure to debugging so that they can dissect the code base and become productive fast.

Worth considering I think …

What do you think? Please share in comments.

Outliers with Pankaj Mishra: podcast on entrepreneurship

Found this interesting podcast focusing on the Indian IT landscape, start-ups, journeys, and entrepreneurship in general. The bio says: "A podcast about the ones who chose to take the road not taken often. It's about the crazy and the curious. Those that dared to stand out, and stand-alone. It's about their journey through hope and disillusionment, failures and pitfalls, joy and success, pain and bliss. It's a candid exploration of experiences and ideas that have driven some of the shining stars, told as is."

So far I have listened to a couple of episodes and found the conversations enriching.

Happy listening!

Date print formats Perl Date::Manip

Perl's Date::Manip is one of the modules I use a lot. It's a wonderful library with a very clean API and great documentation. Below is a quick look at the Date::Manip printf format options, which sometimes come in very handy. For detailed interpretation and other options, I encourage you to go through Date::Manip on CPAN.


use Date::Manip::Date;
# Parse the current date and format it, e.g. "2025-01-15 09:30:00"
my $present_date_obj = Date::Manip::Date->new("today");
my $present_date = $present_date_obj->printf("%Y-%m-%d %H:%M:%S");

Happy Coding!

Also, you can check my previous article on generating date patterns using Date::Manip.

Books recommendation series: Essentialism: The Disciplined Pursuit of Less


Essentialism: The Disciplined Pursuit of Less, by Greg McKeown


The theme of the book is to find out what is truly essential, focus on that, and eliminate everything else. Greg McKeown defines Essentialism as "less, but better".


The key takeaway is that only once you give yourself permission to stop trying to do it all, and to stop saying yes to everyone, can you make your highest contribution towards the things that really matter. We say "yes" to everything so often that there is too much going on in our lives for anything to ever get our true focus and attention. How can we give our best in any area when we're being pulled in so many different directions? We can't. We need to identify what is truly important, learn to say no to things that don't fit into the "essential" category, and simplify our lives. This book is an essential read for anyone who feels overcommitted, overloaded, or overworked.


Very good read. Highly recommended.

Note: My friend Rishi has done a detailed review of this book. Check it out here:

Happy reading!

git push up to a certain commit

This is a quick share on git.
Scenario: I want to push my local changes to git, but I have a few commits that I don't want to push yet. In other words, I just want to push changes up to a certain commit.
$ git push <remotename> <commit SHA>:<remote_branch_name>

To elaborate:
First, fetch the SHA of the commit you want to push:
$ git log

Copy the SHA and use the command below (make sure you replace the SHA in the example with the SHA of your commit):

$ git push origin 7120f221660dad58d41b9ac729a22f08572b109:master

You are good to go. Your local commits up to the given SHA are now pushed to the server's master branch, while your remaining local commits stay local.

Please share in the comments if you know any other alternate way.


Algorithm performance: big O notation: simplified short notes

Big O notation is used to analyze runtime complexity. It provides an abstract measurement by which we can judge the performance of algorithms without using mathematical proofs. Some of the most common big O classes are:

  • O(1): constant: the operation doesn't depend on the size of its input, e.g. adding a node to the tail of a linked list where we always maintain a pointer to the tail node.
  • O(n): linear: the run time is proportional to the size of n, e.g. a linear scan through an array.
  • O(log n): logarithmic: normally associated with algorithms that cut the problem size down (often in half) on each invocation, e.g. searching a binary search tree.
  • O(n log n): log-linear: usually associated with algorithms that break the problem into smaller chunks per invocation and then stitch the results of these smaller chunks back together, e.g. quicksort.
  • O(n²): quadratic: e.g. bubble sort.
  • O(n³): cubic: very rare.
  • O(2ⁿ): exponential: incredibly rare.
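To make the growth rates above concrete, here is a small sketch comparing the worst-case step counts of a linear scan (O(n)) with repeated halving, as in binary search (O(log n)):

```python
def linear_search_steps(n):
    # Worst case for a linear scan: examine every element.
    return n

def binary_search_steps(n):
    # Worst case for repeated halving: shrink the range until one element remains.
    steps = 0
    while n > 1:
        n //= 2
        steps += 1
    return steps

for n in (16, 1024, 1_000_000):
    print(n, linear_search_steps(n), binary_search_steps(n))
```

For a million elements, the linear scan needs a million steps while halving needs only about twenty, which is exactly the gap big O notation is designed to expose.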

Brief explanation:
Cubic and exponential algorithms should only ever be used for very small problems (if ever!); avoid them if at all possible. If you encounter one, that is really a signal to review the design of your algorithm: always look for optimizations, particularly in loops and recursive calls.

The biggest asset big O notation gives us is that it allows us to essentially discard things like hardware: if you have two sorting algorithms, one with quadratic run time and the other with logarithmic run time, the logarithmic algorithm will always be faster once the data set becomes suitably large. This applies even if the quadratic one runs on a machine that is far faster. Why?

Because big O notation isolates a key factor in algorithm analysis: growth. An algorithm with quadratic run time grows faster than one with logarithmic run time.

Note: The above notes are for quick reference. Understanding algorithmic performance is a complex but interesting field. I would recommend picking a good book to understand the nitty-gritty of big O and other notations.