Implementing Data Structures and Algorithms in Python: Problems and Solutions (Contd.)

Continued from the last post.

Array Pair Sum

Given an integer array, output all the unique pairs that sum up to a specific value k. So the input:

sum_arr_uniq_pairs([1,2,2,3,4,1,1,3,2,1,3,1,2,2,4,0],5) 

would return 2 pairs:
(2,3)  (1,4)
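
A minimal sketch of one way to do this, using a set of already-seen values (the function name matches the example above; the exact output format is my assumption):

def sum_arr_uniq_pairs(arr, k):
    """Print all unique pairs in arr that sum to k."""
    seen = set()
    pairs = set()
    for num in arr:
        target = k - num
        if target in seen:
            # Store pairs in sorted order so (2,3) and (3,2) count only once
            pairs.add((min(num, target), max(num, target)))
        seen.add(num)
    print(len(pairs), "pairs:", sorted(pairs))

sum_arr_uniq_pairs([1,2,2,3,4,1,1,3,2,1,3,1,2,2,4,0], 5)
# 2 pairs: [(1, 4), (2, 3)]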

Find a missing element in an array/list

Consider an array of non-negative integers. A second array is formed by shuffling the elements of the first array and deleting a random element. Given these two arrays, find which element is missing in the second array. In the example below, the first array is shuffled and the number 5 is removed to construct the second array.
Input:

 find_missing_ele([1,2,3,4,5,6,7],[3,7,2,1,4,6]) 

Output: 5 is the missing number
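
A minimal sketch of one way to do this (it relies on the elements being numbers, so the difference of the two sums is the missing element; the function name is taken from the example):

def find_missing_ele(arr1, arr2):
    """Return the element of arr1 that is missing from arr2."""
    # The second array has every element of the first except one,
    # so the difference of the sums is the missing element.
    return sum(arr1) - sum(arr2)

print(find_missing_ele([1,2,3,4,5,6,7], [3,7,2,1,4,6]))  # 5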

Stack class implementation

Implement basic stack operations (LIFO):

push() – Push an element onto the stack
pop() – Pop an element from the top of the stack
peek() – Just peek at the top element of the stack (don't remove it)
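
A minimal sketch of such a class, backed by a Python list (the method names match the list above; anything else is my assumption):

class Stack:
    def __init__(self):
        self.items = []

    def push(self, item):
        # Add an item on top of the stack
        self.items.append(item)

    def pop(self):
        # Remove and return the top item
        return self.items.pop()

    def peek(self):
        # Look at the top item without removing it
        return self.items[-1]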

Queue class implementation

Implement basic Queue operations (FIFO):

enqueue – adding an element to the queue
dequeue – removing an element from the queue
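
A minimal sketch backed by a Python list (inserting at index 0 keeps the front of the queue at the end of the list; this is one of several reasonable layouts):

class Queue:
    def __init__(self):
        self.items = []

    def enqueue(self, item):
        # Add an item at the rear of the queue
        self.items.insert(0, item)

    def dequeue(self):
        # Remove and return the item at the front of the queue
        return self.items.pop()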

Deque (DECK) class implementation

Implement basic deque operations (add and remove elements at both the front and the rear):

addFront() – add an element at the front
addRear() – add an element at the rear
removeFront() – remove an element from the front
removeRear() – remove an element from the rear
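
A minimal sketch backed by a Python list, with the rear of the deque at index 0 (the method names match the list above; the internal layout is my choice):

class Deque:
    def __init__(self):
        self.items = []

    def addFront(self, item):
        self.items.append(item)

    def addRear(self, item):
        self.items.insert(0, item)

    def removeFront(self):
        return self.items.pop()

    def removeRear(self):
        return self.items.pop(0)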

Balance parentheses using stack/list

Given a string of opening and closing parentheses, check whether it is balanced. There are 3 types of parentheses: round brackets (), square brackets [] and curly brackets {}. Assume that the string doesn't contain any other characters than these: no spaces, words or numbers. As a reminder, balanced parentheses require every opening parenthesis to be closed in the reverse order it was opened. For example '([])' is balanced but '([)]' is not. The algorithm will take a string as input and return a boolean (TRUE/FALSE). Examples:

print (check_parentheses_match('([])'))
print (check_parentheses_match('[](){([[[]]])'))
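
A minimal sketch of one way to implement the check with a list used as a stack (the function name matches the examples; the first call above prints True, the second False):

def check_parentheses_match(s):
    opening = set('([{')
    matches = {')': '(', ']': '[', '}': '{'}
    stack = []
    for char in s:
        if char in opening:
            stack.append(char)
        # A closing bracket must match the most recently opened bracket
        elif not stack or stack.pop() != matches[char]:
            return False
    # Balanced only if nothing is left open
    return len(stack) == 0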


Queue with 2 stacks implementation

This is a classic problem. We use the basic characteristic of a stack (elements pop out in reverse order) to build a queue. Example:

# Create an object of the class
qObj = QueueWith2Stack()
# Add an element
qObj.enqueue(1)
# Add another element
qObj.enqueue(2)
# Add another element
qObj.enqueue(4)
# Add another element
qObj.enqueue(8)
# Remove item
print (qObj.dequeue())
# Remove item 
print (qObj.dequeue())
# Remove item
print (qObj.dequeue())
# Remove item 
print (qObj.dequeue())
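
A minimal sketch of how such a class might look (the class name matches the example above; the two-list layout is my assumption). With the calls above, the dequeues print 1, 2, 4 and 8 in that order:

class QueueWith2Stack:
    def __init__(self):
        self.in_stack = []
        self.out_stack = []

    def enqueue(self, item):
        # New items always go onto the "in" stack
        self.in_stack.append(item)

    def dequeue(self):
        # Refill the "out" stack only when it is empty; popping the
        # reversed elements gives FIFO order
        if not self.out_stack:
            while self.in_stack:
                self.out_stack.append(self.in_stack.pop())
        return self.out_stack.pop()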


Singly Linked List class implementation

Implement a basic skeleton for a Singly Linked List. Example:

# Create the nodes
a = LinkedListNode(1)
b = LinkedListNode(2)
c = LinkedListNode(3)
# Set the pointers
a.nextnode = b
b.nextnode = c
print (a.value)
print (b.value)
print (c.value)
# Print using class 
print (a.nextnode.value)
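
The node class this example assumes could be as small as this (attribute names taken from the usage above):

class LinkedListNode:
    def __init__(self, value=None):
        self.value = value
        self.nextnode = None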

Doubly Linked List class implementation

Implement a basic skeleton for a Doubly Linked List. Example:


# Create the nodes
a = DoublyLinkedListNode(1)
b = DoublyLinkedListNode(2)
c = DoublyLinkedListNode(3)        
# Set the pointers
# setting b after a (a before b)
b.prev_node = a
a.next_node = b
# Setting c after b
b.next_node = c
c.prev_node = b
print (a.value)
print (b.value)
print (c.value)
# Print using class 
print (a.next_node.value)
print (b.next_node.value)
print (b.prev_node.value)
print (c.prev_node.value)
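
Again, the node class this example assumes could be as small as this (attribute names taken from the usage above):

class DoublyLinkedListNode:
    def __init__(self, value=None):
        self.value = value
        self.prev_node = None
        self.next_node = None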


Reverse a linked list implementation

The aim is to write a function to reverse a Linked List in place. The function will take in the head of the list as input and return the new head of the list. Example:


# Create a Linked List 
a = LinkedListNode(1)
b = LinkedListNode(2)
c = LinkedListNode(3)
d = LinkedListNode(4)
a.nextnode = b
b.nextnode = c
c.nextnode = d
print (a.nextnode.value)
print (b.nextnode.value)
print (c.nextnode.value)
# Call the reverse()
LinkedListNode().reverse(a)
print (d.nextnode.value)
print (c.nextnode.value)
print (b.nextnode.value)
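
The example calls reverse() through the node class from the GitHub code; a standalone sketch of the same in-place idea looks like this:

def reverse(head):
    # Walk the list once, flipping each node's nextnode pointer
    current = head
    previous = None
    while current:
        nextnode = current.nextnode   # remember the rest of the list
        current.nextnode = previous   # point the node backwards
        previous = current
        current = nextnode
    return previous  # the old tail is the new head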


Linked list Nth to the last node

The aim is to write a function that takes a head node and an integer value n, and returns the nth-to-last node in the linked list. Example:


# Create a Linked List 
a = LinkedListNode(1)
b = LinkedListNode(2)
c = LinkedListNode(3)
d = LinkedListNode(4)
e = LinkedListNode(5)

a.nextnode = b
b.nextnode = c
c.nextnode = d
d.nextnode = e

print (a.nextnode.value)
print (b.nextnode.value)
print (c.nextnode.value)
print (d.nextnode.value)

# This would return the node d with a value of 4, because it is the 2nd-to-last node.
target_node = LinkedListNode().nth_to_last_node(2, a) 
print (target_node.value)
# Ans: d=4
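
The example calls nth_to_last_node() through the node class from the GitHub code; a standalone sketch using two pointers n nodes apart might look like this:

def nth_to_last_node(n, head):
    # Move a lead pointer n nodes ahead of a trailing pointer;
    # when the lead pointer walks off the end, the trailing pointer
    # is at the nth-to-last node.
    lead = head
    for _ in range(n):
        if lead is None:
            raise LookupError('n is larger than the linked list')
        lead = lead.nextnode
    target = head
    while lead:
        lead = lead.nextnode
        target = target.nextnode
    return target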


Check GitHub for the full working code.

I will keep adding more problems/solutions.

Stay tuned!

Ref: The inspiration for implementing DS in Python came from this course

Implementing Data Structures and Algorithms in Python: Problems and Solutions

Recently I have started using Python in a lot of places, including writing algorithms for ML/data science, so I thought I would try implementing some common programming problems using data structures in Python, as I have mostly implemented them in C/C++ and Perl before.

Let’s get started with a very basic problem.

Anagram algorithm

The algorithm will take two strings and check whether they are anagrams. An anagram is when two strings can be written using the exact same letters; in other words, rearranging the letters of a word or phrase produces a new word or phrase, using all the original letters exactly once.

Some examples of anagram:
“dormitory” is an anagram of “dirty room”
“a perfectionist” is an anagram of “I often practise.”
“action man” is an anagram of “cannot aim”

Our anagram check algorithm will take two strings and return a boolean (TRUE/FALSE) depending on whether they are anagrams.
I have used two approaches to solve the problem. The first is to use the sorted() function and compare the two strings after removing white spaces and converting to lower case. This is straightforward:


def anagram(str1,str2):
    # First we'll remove white spaces and also convert string to lower case letters
    str1 = str1.replace(' ','').lower()
    str2 = str2.replace(' ','').lower()
    # We'll show output in the form of boolean TRUE/FALSE for the sorted match hence return
    return sorted(str1) == sorted(str2)

The second approach is to do things manually, mainly to learn more about building the checking logic. In this approach, I have used a counting mechanism and a Python dictionary to store the letter counts. Though one could use the built-in Python collections, the idea is to learn a bit about hash tables.
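
A sketch of that second approach (the function name anagram2 and the plain-dict counting are my choices; collections.Counter would do the same job):

def anagram2(str1, str2):
    str1 = str1.replace(' ', '').lower()
    str2 = str2.replace(' ', '').lower()
    if len(str1) != len(str2):
        return False
    # Count every letter of the first string, then subtract the counts
    # while walking the second; an anagram leaves every count at zero.
    count = {}
    for letter in str1:
        count[letter] = count.get(letter, 0) + 1
    for letter in str2:
        count[letter] = count.get(letter, 0) - 1
    return all(v == 0 for v in count.values())

print(anagram2('dormitory', 'dirty room'))  # True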

Check GitHub for the full working code.

I will keep adding more problems/solutions.

Stay tuned!

Ref: The inspiration for implementing DS in Python came from this course

Choropleth Maps in Python

Choropleth maps are a great way to represent geographical data. I have done a basic implementation with two different data sets, using a Jupyter notebook to show the plots.

World Power Consumption 2014

First, do the Plotly imports:

import plotly.graph_objs as go
from plotly.offline import init_notebook_mode,iplot
init_notebook_mode(connected=True)

The next step is to fetch the dataset; we'll use the Python pandas library to read the CSV file:

import pandas as pd
df = pd.read_csv('2014_World_Power_Consumption')

Next, we need to create the data and layout variables, each of which is a dict:

data = dict(type = 'choropleth',
            locations = df['Country'],
            locationmode = 'country names',
            z = df['Power Consumption KWH'],
            text = df['Country'],
            colorbar = {'title': 'Power Consumption KWH'},
            colorscale = 'Viridis',
            reversescale = True)

Let’s make a layout

layout = dict(title='2014 World Power Consumption',
geo = dict(showframe=False,projection={'type':'Mercator'}))

Pass the data and layout and plot using iplot

choromap = go.Figure(data = [data],layout = layout)
iplot(choromap,validate=False)

The output will look like the map below:

Check GitHub for the full code.

In the next post I will try to make a choropleth for a different data set.

References:

https://www.udemy.com/python-for-data-science-and-machine-learning-bootcamp
https://plot.ly/python/choropleth-maps/

Stanford Machine learning class slides

Andrew Ng's Machine Learning class is the best class I have taken online so far.

Apart from the course videos, the lecture slides are sometimes also important for quick reference. For quite some time I was looking for them, as they are not available on the course home page.

All the lecture slides are available at:
https://d396qusza40orc.cloudfront.net/ml/docs/slides/Lecture1.pdf

Lecture2.pdf

Lecture3.pdf

Lecture4.pdf

and so on…


In my own experience, the slides only make sense if you go through the full video course. The professor is an amazing teacher.


Enjoy learning.


Getting and cleaning data using R programming project notes

Brief notes on my learning from the course project of the Getting and Cleaning Data course from Johns Hopkins University.

The purpose of this project is to demonstrate the ability to collect, work with, and clean a data set. The final goal is to prepare tidy data that can be used for later analysis.

One of the most exciting areas in all of data science right now is wearable computing: companies like Fitbit, Nike, TomTom and Garmin are racing to develop the most advanced algorithms to attract new users. In this case study, the data is collected from the accelerometers of the Samsung Galaxy S smartphone. A full description is available at the site where the data was obtained:

http://archive.ics.uci.edu/ml/datasets/Human+Activity+Recognition+Using+Smartphones

Here is the dataset for the project:

https://d396qusza40orc.cloudfront.net/getdata%2Fprojectfiles%2FUCI%20HAR%20Dataset.zip

I have created an R script called run_analysis.R which does the following.

  • Merges the training and the test sets to create one data set.
  • Extracts only the measurements on the mean and standard deviation for each measurement.
  • Uses descriptive activity names to name the activities in the data set.
  • Appropriately labels the data set with descriptive variable names.
  • Finally, creates a second, independent tidy data set with the average of each variable for each activity and each subject.

References:

http://archive.ics.uci.edu/ml/datasets/Human+Activity+Recognition+Using+Smartphones
https://www.coursera.org/learn/data-cleaning
https://github.com/ppant/getting-and-cleaning-data-project-coursera


For working code and tidy dataset please check my Github repo.


Accessing Github API with OAuth example using R

Modern APIs provided by Google, Twitter, Facebook, GitHub, etc. use OAuth for authentication and authorization. In this example I am using the GitHub API. We get a JSON response which can be used to fetch specific information. In this code I have used my own GitHub account. The code is written in the R programming language.

Here are the steps:
1. Find the OAuth settings for GitHub
2. Create an application in GitHub
3. Add/modify secret keys
4. Get OAuth credentials
5. Finally, use the API and parse the JSON data to show the response

## Load required modules
library(httr)
library(httpuv)
require(jsonlite)

# 1. Find OAuth settings for github:
# http://developer.github.com/v3/oauth/
oauth_endpoints("github")

# 2. To make your own application, register at
# https://github.com/settings/applications.
## https://github.com/settings/applications/321837
## Use any URL for the homepage URL
# (http://github.com is fine) and http://localhost:1410 as the callback URL. You will need httpuv.

## Add secret keys
## Secret keys can be obtained from the GitHub developer settings
myapp <- oauth_app("github",
key = "7cd28c82639b7cf76fcc",
secret = "d1c90e32e12baa81dabec79cd1ea7d8edfd6bf53")

# 3. Get OAuth credentials
github_token <- oauth2.0_token(oauth_endpoints("github"), myapp)
## Authentication will be done automatically

# 4. Use API
gtoken <- config(token = github_token)
req <- GET("https://api.github.com/users/ppant/repos", gtoken)
stop_for_status(req)
##content(req)
output <- content(req)
## Either of the two can be used to fetch the required info, name and date created of repo ProgrammingAssignment3
out<-list(output[[30]]$name, output[[30]]$created_at)

BROWSE("https://api.github.com/users/ppant/repos",authenticate("Access Token","x-oauth-basic","basic"))
# OR:
req <- with_config(gtoken, GET("https://api.github.com/users/ppant/repos"))
stop_for_status(req)
content(req)


For the updated code, please check GitHub.

Creating recurring date patterns using Perl

This program will be helpful if someone wants to create recurring date patterns based on criteria (yearly, monthly, weekly and daily). The program is written in Perl using an old version of the Date::Manip CPAN module.

#!/usr/local/bin/perl -w
# Script to calculate recurrence dates based on given criteria using the Perl Date::Manip module.
# All the input dates here are hard coded; they could instead be passed in from an external program.
use strict;
use Date::Manip;
use Data::Dumper;
# calculate the dates for yearly patterns.
&yearly();
&monthly();
&weekly();
&daily();
sub yearly {
	my $base = "2015-10-29";
	my $start_date = "2015-10-29";
	my $end_date = "2018-01-01";
	my $yearly_recur_every ="1";
	my $yearly_on_month = "10";
	my $yearly_on_week = "0";
	my $yearly_on_day = "29";
	my $yearly_on_the_month = "10";
	my $yearly_on_the_week = "1";
	my $yearly_on_the_day = "1";
	my $frequency = "";
	my $frequency_pattern_yearly_on = "$yearly_recur_every*$yearly_on_month:$yearly_on_week:$yearly_on_day:0:0:0";
	my $frequency_pattern_yearly_on_the = "$yearly_recur_every*$yearly_on_the_month:$yearly_on_the_week:$yearly_on_the_day:0:0:0";
	my @yearly_dates_on = ParseRecur($frequency_pattern_yearly_on,$base,$start_date,$end_date); # On a certain day of a month
	my @yearly_dates_on_the = ParseRecur($frequency_pattern_yearly_on_the,$base,$start_date,$end_date); # First Monday of Oct 
	print "\n";
	print "******************************************************************************\n";
	print "**************************** YEARLY *******************************************\n";
	print "*******************************************************************************\n";
print "Start date :". $start_date."\n";
print "End date :". $end_date."\n";
print "\n";
print "******************************************************************************\n";
print "Temporal expression: every 1 year on October 29\n";
print "Rule: ".$frequency_pattern_yearly_on;
print "\n";
print "Dates:\n";
print Dumper (\@yearly_dates_on);
print "\n";
print "Temporal expression: every 1 year on the first Monday of October\n";
print "Rule: ".$frequency_pattern_yearly_on_the;
print "\n";
print "Dates:\n";
print Dumper (\@yearly_dates_on_the);
print "\n";
}
# Monthly
sub monthly () {
	my $base = "2015-10-29";
	my $start_date = "2016-01-22";
	my $end_date = "2017-06-01";
	my $monthly_recur_every ="1";
	my $monthly_day_of = "29";
	my $monthly_the_day = "1";
	my $monthly_the_week = "1";
	my $frequency = "";
	my $frequency_pattern_monthly_day = "0:$monthly_recur_every*0:$monthly_day_of:0:0:0";
	my $frequency_pattern_monthly_the_day ="0:1*-2:5:0:0:0"; # Every month on the 2nd last Friday
	my @monthly_dates_day = ParseRecur($frequency_pattern_monthly_day,$base,$start_date,$end_date); # On a certain day of a month
	my @monthly_dates_the_day = ParseRecur($frequency_pattern_monthly_the_day,$base,$start_date,$end_date); # The second last Friday of every month
	
print "\n";
print "******************************************************************************\n";
print "**************************** MONTHLY *******************************************\n";
print "*******************************************************************************\n";
print "Start date :". $start_date."\n";
print "End date :". $end_date."\n";
print "\n";
print "******************************************************************************\n";
print "Temporal expression: Day 29 of every 1 month\n";
print "Rule: ".$frequency_pattern_monthly_day;
print "\n";
print "Dates:\n";
print Dumper (\@monthly_dates_day);
print "\n";
print "Temporal expression: The first monday of every month\n";
print "Rule: ".$frequency_pattern_monthly_the_day;
print "\n";
print "Dates:\n";
print Dumper (\@monthly_dates_the_day);
print "\n";
}
# Weekly
sub weekly () {
	my $base = "2015-10-29";
	my $start_date = "2016-01-22";
	my $end_date = "2016-03-01";
	my $weekly_recur_every ="1";
	# We need to add comma on the value we are getting from UI .. if the field is not selected means no value then 
	# no comma will be added
	my $first_day_of_the_week = ""; # Monday
	my $second_day_of_the_week = "2,"; # Tuesday
	my $third_day_of_the_week = ""; # Wednesday
	my $fourth_day_of_the_week = "4,"; # Thursday
	my $fifth_day_of_the_week = ""; # Friday
	my $sixth_day_of_the_week = ""; # Saturday
	my $seventh_day_of_the_week = ""; # Sunday
	# my $weekly_the_day = "1";
	# my $weekly_the_week = "1";
	my $frequency = "";
	my $frequency_pattern_weekly_day = "0:0:$weekly_recur_every*$first_day_of_the_week$second_day_of_the_week$third_day_of_the_week$fourth_day_of_the_week$fifth_day_of_the_week$sixth_day_of_the_week$seventh_day_of_the_week:0:0:0";
	my @weekly_dates_day = ParseRecur($frequency_pattern_weekly_day,$base,$start_date,$end_date); # On the selected weekdays of every week
	
print "\n";
print "******************************************************************************\n";
print "**************************** WEEKLY *******************************************\n";
print "*******************************************************************************\n";
print "Start date :". $start_date."\n";
print "End date :". $end_date."\n";
print "\n";
print "Temporal expression: Every every week on Tuesday and Thrusday\n";
print "Rule: ".$frequency_pattern_weekly_day;
print "\n";
print "Dates:\n";
print Dumper (\@weekly_dates_day);
print "\n";
}
# Daily
sub daily () {
	my $base = "2015-10-29";
	my $start_date = "2016-01-22";
	my $end_date = "2016-02-05";
	my $daily_recur_everyday ="1";
	# We need to add comma on the value we are getting from UI .. if the field is not selected means no value then 
	# no comma will be added
	my $first_day_of_the_weekday = "1,"; # Monday
	my $second_day_of_the_weekday = "2,"; # Tuesday
	my $third_day_of_the_weekday = "3,"; # Wednesday
	my $fourth_day_of_the_weekday = "4,"; # Thursday
	my $fifth_day_of_the_weekday = "5"; # Friday
	my $daily_start_time = "8:00"; # 8AM
	my $frequency = "";
	my $frequency_pattern_daily_everyday = "0:0:0:$daily_recur_everyday*0:0:0";
	# 0:1*1-5:$dow:0:0:0";
	# "0:0:0:$n*0:0:0";  # Every nth day
	my $frequency_pattern_daily_every_weekday = "0:0:$daily_recur_everyday*$first_day_of_the_weekday$second_day_of_the_weekday$third_day_of_the_weekday$fourth_day_of_the_weekday$fifth_day_of_the_weekday:0:0:0";
	my @daily_dates_everyday = ParseRecur($frequency_pattern_daily_everyday,$base,$start_date,$end_date); # Every day
	my @daily_dates_every_weekday = ParseRecur($frequency_pattern_daily_every_weekday,$base,$start_date,$end_date); # Every weekday
	print "\n";
	print "******************************************************************************\n";
	print "**************************** DAILY *******************************************\n";
	print "******************************************************************************\n";
	print "Start date: ". $start_date."\n";
	print "End date: ". $end_date."\n";
	print "\n";
	print "Temporal expression: Everyday\n";
	print "Rule: ".$frequency_pattern_daily_everyday;
	print "\n";
	print "Dates:".@daily_dates_everyday."\n";
	print Dumper (\@daily_dates_everyday);
	print "\n";
	print "Temporal expression: Every weekday\n";
	print "Rule: ".$frequency_pattern_daily_every_weekday;
	print "\n";
	print "Dates:\n";
	print Dumper (\@daily_dates_every_weekday);
	print "\n";
	}
	
	# End of script

Full working code is available on GitHub with documentation.

Enjoy,

Apache Lucy search examples

Investigating search engines, this time Apache Lucy 0.4.2. I am showing a basic indexer and a small search application. See the code below for the indexer (it takes documents one by one and indexes them). The search module takes a search term as an argument and shows the search results.

This is a pure command-line utility, just to show how basic indexing and searching work with Apache Lucy.

indexer.pl

#!/usr/local/bin/perl

use strict;
use warnings;
use Lucy::Simple;

#
# Ensure the index directory is both available and empty.
#
my $index = "/ppant/LucyTest/index";
system( "rm", "-rf", $index );
system( "mkdir", "-p", $index );
# Create the helper...a new Lucy::Simple object
my $lucy = Lucy::Simple->new( path => $index, language => 'en' );

# Add the first "document". (We are mainly adding meta data of the document)
my %one = ( title => "This is a title of first article", body => "some text inside the body we need to test the implementation of lucy", id => 1 );
$lucy->add_doc( \%one );

# Add the second "document".
my %two = ( title => "This is another article", body => "I am putting some basic content, using some words which are also in first document like implementation", id => 2 );
$lucy->add_doc( \%two );

# Both the documents are now indexed in path

Once indexing of the documents is done, we'll make a small search script.

search.cgi

#!/usr/local/bin/perl

use strict;
use warnings;

use Lucy::Search::IndexSearcher;

my $term = shift || die "Usage: $0 search-term";

my $searcher = Lucy::Search::IndexSearcher->new( index => '/ppant/LucyTest/index' );
# A basic command-line search which looks for the term in the indexed items and shows
# in which documents the query string is found, along with the number of hits
my $hits = $searcher->hits( query => $term );
while ( my $hit = $hits->next ) {
    print "Title: $hit->{title} - ID: $hit->{id}\n";
}
# End of search.cgi


***********************************************************************

If you want to explore more, check the full code on GitHub.

Implementing mapper attachments in Elasticsearch

Create a new index

curl -X PUT "192.168.0.37:9200/test" -d '{
"settings" : { "index" : { "number_of_shards" : 1, "number_of_replicas" : 0 }}
}'

Mapping the attachment type

curl -X PUT "192.168.0.37:9200/test/attachment/_mapping" -d '{
"attachment" : {
"properties" : {
"file" : {
"type" : "attachment",
"fields" : {
"title" : { "store" : "yes" },
"file" : { "term_vector":"with_positions_offsets", "store":"yes" }
} } } } }'

Shell script to convert the content to base64 encoding and index it

#!/bin/sh

coded=`cat TestPDF.pdf | perl -MMIME::Base64 -ne 'print encode_base64($_)'`
json="{\"file\":\"${coded}\"}"
echo "$json" &gt; json.file
curl -X POST "192.168.0.37:9200/test/attachment/" -d @json.file


Query (the search results will be highlighted):

curl "192.168.0.37:9200/_search?pretty=true" -d '{
"fields" : ["title"],
"query" : {
"query_string" : {
"query" : "Cycling tips"
}
},
"highlight" : {
"fields" : {
"file" : {}
} } }'

***********************************************************************

If you want to explore more, check the full code on GitHub.