Wednesday, March 14, 2018

ML DL

We use these short little acronyms in government (Cyberia) all the time.  ML = Machine Learning.  DL = Deep Learning.  If you don't know what these mean, you're likely a left-behind politician with too little time for engineering to really govern (steer).

As a newbie in the circle of ML / DL teachers, I'm very humble.  I sit at the feet of my favorite teachers and sponge it up, tasking my own neural net to re-weight and re-bias as necessary.  Get to the bottom of all these meanings.  Investigate.  Don't assume, coming in, that your namespace is well-tempered (well-tuned).

My approach is two-track, with a couple of tooling decisions made up front.  First, I've somewhat abandoned doing everything in Sphinx, not because I have any issues with Sphinx, but because of my own weaknesses and shortcomings with regard to GitHub.  There's a final step wherein documentation might "go live" in world-readable (open source) space, but I'm not taking it.  Second, I'm staying with Python.

Track One:  manual skills.  As with gardening, you need to know how to use a spade, trowel, shovel, bucket, weed whacker and so on.

Track Two:  conceptual grasp.  The latter comes slowly, or at least at its own rate, less under conscious control, whereas practicing with the matplotlib, numpy, pandas and scikit-learn APIs is eminently doable of one's own volition.
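
To show what I mean by Track One practicing, here's a minimal finger exercise touching all four of those APIs in one drill.  The iris dataset and logistic regression are just stand-ins I picked for illustration; any toy dataset and estimator would serve as well:

    # A Track One rep: numpy + pandas + scikit-learn + matplotlib in one drill.
    # Dataset and model here are illustrative choices, not recommendations.
    import numpy as np
    import pandas as pd
    import matplotlib.pyplot as plt
    from sklearn.datasets import load_iris
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import train_test_split

    iris = load_iris()
    # A numpy rep: count how many samples fall in each class.
    print("class counts:", np.bincount(iris.target))

    # Wrap the raw arrays in a DataFrame to practice pandas idioms.
    df = pd.DataFrame(iris.data, columns=iris.feature_names)
    df["species"] = iris.target

    # The basic scikit-learn rhythm: split, fit, score.
    X_train, X_test, y_train, y_test = train_test_split(
        iris.data, iris.target, test_size=0.25, random_state=42)
    model = LogisticRegression(max_iter=200)
    model.fit(X_train, y_train)
    print("test accuracy:", model.score(X_test, y_test))

    # One matplotlib rep: scatter two features, colored by species.
    plt.scatter(df.iloc[:, 0], df.iloc[:, 2], c=df["species"])
    plt.xlabel(iris.feature_names[0])
    plt.ylabel(iris.feature_names[2])
    plt.show()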

My focus is on polishing Track One manual skills and remaining patient with the "slow dawning" that is the gradual emergence (surfacing) of any knowledge domain.  I can't rush Track Two, whereas on Track One, if I burn the candle at both ends, I can practice the way athletes practice:  you keep at it.

Keeping these tracks separate has one big advantage:  I don't have to apologize for taking the ten-thousand-foot view and going for broke on Track Two, all out of proportion to what my manual skills yet allow.  I'm barely able to dig a trench, yet I'm already studying the intricacies of orchid raising, or beekeeping (not usually considered part of gardening, but then really everything is).

My humility does not translate into refraining from actually studying the magic.  I just have to admit I haven't practiced enough, nor re-tuned my model enough, to fully minimize the error function (cost function).  I'm still getting to the bottom of ML / DL (gradient descent).
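
For the record, gradient descent isn't only my metaphor; it's a simple recipe.  Here's a minimal numpy sketch that fits a line by repeatedly stepping downhill on a squared-error cost function.  The synthetic data, learning rate, and step count are all toy choices of mine, just enough to watch the cost shrink:

    # Gradient descent in miniature: fit y = w*x + b by nudging w and b
    # downhill on the mean-squared-error cost function.
    # Synthetic data, learning rate, and step count are toy choices.
    import numpy as np

    np.random.seed(0)
    x = np.random.uniform(-1, 1, 100)
    y = 3.0 * x + 2.0 + np.random.normal(0, 0.1, 100)  # true w=3, b=2, plus noise

    w, b = 0.0, 0.0   # initial guesses
    lr = 0.1          # learning rate (step size)
    for step in range(500):
        error = (w * x + b) - y
        grad_w = 2 * np.mean(error * x)  # d(cost)/dw
        grad_b = 2 * np.mean(error)      # d(cost)/db
        w -= lr * grad_w  # re-weight
        b -= lr * grad_b  # re-bias

    cost = np.mean((w * x + b - y) ** 2)
    print(f"w={w:.3f}  b={b:.3f}  cost={cost:.5f}")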