Deep Learning Specialization notes

A couple of months back I completed the Deep Learning Specialization taught by AI guru Andrew Ng. During the learning process I made personal notes from all five courses. The notes are based on the lecture videos, the supplementary material provided, and my own understanding of the topics. I have used lots of diagrams and code snippets which I made from the course videos and slides. I am fully complying with the Honor Code: no programming assignments or solutions are published on GitHub or any other site.

Please note that in most places I am not using exact mathematical symbols and other notation, instead using plain-English names; this is just to save some time. Also note that this is a personal diary made during the course, it is a bit long, and a few places are not very well organized, so it in no way replaces the content and learning process one follows during the course, which includes quizzes, programming assignments, projects, etc. This is a great course, so I encourage you to enroll.

What you will learn at the end of the specialization:

Neural Networks and Deep Learning: This course gives the foundations of neural networks and deep learning, and shows how to build and train them. By the end of this course we'll be in a position to recognize cats, so we'll make a cat recognizer.  [PDF]

Improving Deep Neural Networks – Hyperparameter Tuning, Regularization and Optimization: In this course we'll learn about the practical aspects of neural networks. Once you have built a deep network, the focus shifts to making it perform well: we'll work on hyperparameter tuning, regularization, and optimization algorithms like RMSProp and Adam. This course helps greatly in making a model perform well.  [PDF]

Structuring your Machine Learning Project: In this course we'll learn how to structure machine learning projects. The strategy for machine learning projects has changed a lot in the deep learning era. For example, the way you divide data into train/dev/test sets has changed, as has the question of whether the train and test data come from the same distribution. We'll also learn about end-to-end deep learning. The material in this course is relatively unique.  [PDF]

Convolutional Neural Networks (CNN): CNNs are most often applied to images, mainly in computer vision problems. In this course we'll learn how to build such models using CNNs.  [PDF]

Natural Language Processing – Building Sequence Models: In this course we'll learn about algorithms like Recurrent Neural Networks (RNNs) and LSTMs (Long Short-Term Memory), and how to apply them to sequence data in areas like natural language processing, speech recognition, music generation, etc.  [PDF]

 

Happy learning!

Please drop a note in case of any feedback.

 

References:

Deep Learning Specialization:
https://www.deeplearning.ai/

GitHub (source code and diagrams used in the notes):
https://github.com/ppant/deeplearning.ai-notes

Deep Learning Specialization completion certificate:
https://www.coursera.org/account/accomplishments/specialization/WVPVCUMH94YS

 

Choropleth Maps in Python

Choropleth maps are a great way to represent geographical data. I have done a basic implementation for two different data sets, using a Jupyter notebook to show the plots.

World Power Consumption 2014

First, do the Plotly imports:

import plotly.graph_objs as go
from plotly.offline import init_notebook_mode, iplot
init_notebook_mode(connected=True)  # enable offline plotting inside the notebook

The next step is to fetch the dataset; we'll use the pandas library to read the CSV file:

import pandas as pd
df = pd.read_csv('2014_World_Power_Consumption')

Next, we need to create a data variable containing a dict that defines the choropleth:

data = dict(type = 'choropleth',
            locations = df['Country'],
            locationmode = 'country names',
            z = df['Power Consumption KWH'],
            text = df['Country'],
            colorbar = {'title': 'Power Consumption KWH'},
            colorscale = 'Viridis',
            reversescale = True)

Let's make a layout:

layout = dict(title = '2014 World Power Consumption',
              geo = dict(showframe = False,
                         # note: newer Plotly versions expect the lowercase 'mercator'
                         projection = {'type': 'Mercator'}))

Pass the data and layout into a Figure and plot it using iplot:

choromap = go.Figure(data = [data], layout = layout)
iplot(choromap, validate = False)

The output is an interactive world choropleth shaded by each country's power consumption. Check GitHub for the full code.

In the next post I will try to make a choropleth for a different data set.

References:

https://www.udemy.com/python-for-data-science-and-machine-learning-bootcamp
https://plot.ly/python/choropleth-maps/

Getting and Cleaning Data using R – project notes

Brief notes on my learning from the course project of the Getting and Cleaning Data course from Johns Hopkins University.

The purpose of this project is to demonstrate the ability to collect, work with, and clean a data set. The final goal is to prepare tidy data that can be used for later analysis.

One of the most exciting areas in all of data science right now is wearable computing: companies like Fitbit, Nike, TomTom, and Garmin are racing to develop the most advanced algorithms to attract new users. In this case study, the data is collected from the accelerometers of the Samsung Galaxy S smartphone. A full description is available at the site where the data was obtained:

http://archive.ics.uci.edu/ml/datasets/Human+Activity+Recognition+Using+Smartphones

Here is the dataset for the project:

https://d396qusza40orc.cloudfront.net/getdata%2Fprojectfiles%2FUCI%20HAR%20Dataset.zip
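
To collect the data from R, the zip can be downloaded and unpacked once. A minimal sketch (the local file name uci_har.zip is my own choice, not part of the course material):

## Download and unpack the dataset (run once); creates the "UCI HAR Dataset" folder
url <- "https://d396qusza40orc.cloudfront.net/getdata%2Fprojectfiles%2FUCI%20HAR%20Dataset.zip"
if (!file.exists("uci_har.zip")) {
  download.file(url, destfile = "uci_har.zip", mode = "wb")
  unzip("uci_har.zip")
}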

I have created an R script called run_analysis.R which does the following (a minimal sketch of the script follows the list):

  • Merges the training and the test sets to create one data set.
  • Extracts only the measurements on the mean and standard deviation for each measurement.
  • Uses descriptive activity names to name the activities in the data set.
  • Appropriately labels the data set with descriptive variable names.
  • Finally, creates a second, independent tidy data set with the average of each variable for each activity and each subject.
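
My actual run_analysis.R lives in the repo linked below; purely as an illustration, here is a minimal sketch of those five steps in base R. The file paths assume the unpacked "UCI HAR Dataset" folder from the download step above; the helper read_set and the output file name are my own, not from the real script.

## Sketch of run_analysis.R (illustrative, not the full script)
features <- read.table("UCI HAR Dataset/features.txt", col.names = c("index", "name"))
activities <- read.table("UCI HAR Dataset/activity_labels.txt", col.names = c("id", "activity"))

## 1. Merge the training and the test sets into one data set
read_set <- function(set) {
  path <- function(f) file.path("UCI HAR Dataset", set, paste0(f, "_", set, ".txt"))
  cbind(read.table(path("subject"), col.names = "subject"),
        read.table(path("y"), col.names = "activity"),
        read.table(path("X")))
}
har <- rbind(read_set("train"), read_set("test"))

## 2. Keep only the mean() and std() measurements
wanted <- grepl("mean\\(\\)|std\\(\\)", features$name)
har <- har[, c(TRUE, TRUE, wanted)]   # first two columns are subject and activity

## 3. Replace activity ids with descriptive activity names
har$activity <- activities$activity[har$activity]

## 4. Label the data set with descriptive variable names
names(har) <- c("subject", "activity", make.names(features$name[wanted], unique = TRUE))

## 5. Tidy data set: average of each variable per activity and subject
tidy <- aggregate(. ~ subject + activity, data = har, FUN = mean)
write.table(tidy, "tidy_dataset.txt", row.names = FALSE)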

References:

http://archive.ics.uci.edu/ml/datasets/Human+Activity+Recognition+Using+Smartphones
https://www.coursera.org/learn/data-cleaning
https://github.com/ppant/getting-and-cleaning-data-project-coursera

 

For the working code and the tidy dataset, please check my GitHub repo.

 

Accessing the GitHub API with OAuth – an example using R

Modern APIs provided by Google, Twitter, Facebook, GitHub, etc. use OAuth for authentication and authorization. In this example I am using the GitHub API: we get a JSON response which can be used to fetch specific information. The code uses my own GitHub account and is written in the R programming language.

Here are the steps:
1. Find the OAuth settings for GitHub
2. Create an application on GitHub
3. Add/modify the secret keys
4. Get the OAuth credentials
5. Finally, use the API and parse the JSON data to show the response

## Load the required packages (httr for requests, httpuv for the
## local OAuth callback, jsonlite for JSON handling)
library(httr)
library(httpuv)
library(jsonlite)

# 1. Find the OAuth settings for GitHub:
# http://developer.github.com/v3/oauth/
oauth_endpoints("github")

# 2. To make your own application, register at
# https://github.com/settings/applications.
## https://github.com/settings/applications/321837
## Use any URL for the homepage URL (http://github.com is fine)
## and http://localhost:1410 as the callback URL. You will need httpuv.

# 3. Add the secret keys
## The key and secret come from the application's settings page on
## GitHub; use your own application's values here
myapp <- oauth_app("github",
                   key = "7cd28c82639b7cf76fcc",
                   secret = "d1c90e32e12baa81dabec79cd1ea7d8edfd6bf53")

# 4. Get the OAuth credentials
github_token <- oauth2.0_token(oauth_endpoints("github"), myapp)
## Authentication is handled automatically via the browser

# 5. Use the API
gtoken <- config(token = github_token)
req <- GET("https://api.github.com/users/ppant/repos", gtoken)
stop_for_status(req)
output <- content(req)  # content() parses the JSON response into a list
## Fetch the required info: the name and creation date of the repo
## ProgrammingAssignment3 (the 30th repo in the response)
out <- list(output[[30]]$name, output[[30]]$created_at)

## Either of the two requests below can also be used to fetch the same info:
BROWSE("https://api.github.com/users/ppant/repos",
       authenticate("Access Token", "x-oauth-basic", "basic"))
# OR:
req <- with_config(gtoken, GET("https://api.github.com/users/ppant/repos"))
stop_for_status(req)
content(req)
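
Since content() already parses the JSON, the jsonlite package loaded at the top is optional here, but it can flatten the raw response into a data frame. A small sketch, assuming GitHub's standard repo fields name and created_at:

## Flatten the JSON response into a data frame with jsonlite
repos <- jsonlite::fromJSON(content(req, as = "text", encoding = "UTF-8"))
repos[repos$name == "ProgrammingAssignment3", c("name", "created_at")]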


For the updated code, please check GitHub.