Integrating Smartsheet with Spotfire

It’s been over three months at my new job and I have already learnt a variety of new things. Working with a brilliant team of developers, I have been exploring the nitty-gritty of Spotfire and unraveling new functionality every week. While there is a significant difference between the functionality available in the client and web versions, Spotfire tops my list of visualization tools for delivering business insights. But what I have found most exciting recently is integrating it with Smartsheet.

Image: Wikimedia Commons

Smartsheet, as the name suggests, is a smart sheet – essentially an Excel-meets-Trello-meets-Tableau platform. Tying it up with Spotfire gives you a pretty neat solution where on one end, you can create efficient workflows for your team to work directly on the data source and on the other, pull the data in to create great data visuals. To give you a simple example – you can set an automation in Smartsheet to send out email reminders to different members of your team to populate cells assigned to them and this data can then be pulled into the corresponding Spotfire dashboard. The only catch here is – you would need a Smartsheet business license to connect Smartsheet to Spotfire using the Live Data Connector.
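For a sense of what the data-pull side looks like outside the Live Data Connector, here is a minimal sketch that reads a sheet over Smartsheet’s REST API using Python’s requests library. The token and sheet ID are placeholders you would generate in your own Smartsheet account; the output could then be staged as a file or information link for a Spotfire dashboard.

```python
# Minimal sketch: pull Smartsheet rows over the REST API with requests.
# API_TOKEN and SHEET_ID are hypothetical placeholders.
import requests

API_TOKEN = "your-smartsheet-api-token"
SHEET_ID = "your-sheet-id"

resp = requests.get(
    f"https://api.smartsheet.com/2.0/sheets/{SHEET_ID}",
    headers={"Authorization": f"Bearer {API_TOKEN}"},
)
resp.raise_for_status()
sheet = resp.json()

# Map column IDs to titles, then flatten each row into a dict
columns = {col["id"]: col["title"] for col in sheet["columns"]}
records = [
    {columns[cell["columnId"]]: cell.get("value") for cell in row["cells"]}
    for row in sheet["rows"]
]
print(records[:5])  # ready to hand off to Spotfire as a flat table
```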


As I continue to keep track of what’s new in upcoming versions of Spotfire by following Neil Kanungo’s enlightening Dr. Spotfire sessions, I plan to keep an eye out for other such integrations to help my team build efficient processes and deliver fast, accurate business insights.

Using Python and Natural Language Processing for Mental Health Data Analysis

In an attempt to practice my analytics coding skills, I thought I’d put them to work on a topic that interests and affects many people across the globe. So, I used Python, NLP (natural language processing), matplotlib, seaborn, WordCloud, and Tweepy to perform some basic analysis, followed by a round of sentiment analysis, on data extracted from recent tweets.


Through hands-on implementation of pandas, natural language processing, and matplotlib, I learnt a great deal during this project (a short code sketch follows this list), including –

  • how to install and use WordCloud
  • how to create and use a Twitter developer account
  • how to install and use Tweepy
  • how to perform sentiment analysis on extracted data
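Here’s that sketch – a minimal, hedged version of the extraction and sentiment steps, assuming Tweepy 3.x credentials from a Twitter developer account and using TextBlob as one simple polarity scorer (the full pipeline lives in the article and repo linked below):

```python
# Minimal sketch: pull recent tweets with Tweepy (3.x API) and score
# sentiment with TextBlob. All four credential strings come from a
# Twitter developer account and are placeholders here.
import tweepy
from textblob import TextBlob
from wordcloud import WordCloud

auth = tweepy.OAuthHandler("CONSUMER_KEY", "CONSUMER_SECRET")
auth.set_access_token("ACCESS_TOKEN", "ACCESS_TOKEN_SECRET")
api = tweepy.API(auth, wait_on_rate_limit=True)

tweets = [t.full_text for t in
          api.search(q="mental health -filter:retweets",
                     lang="en", count=100, tweet_mode="extended")]

# Polarity ranges over [-1, 1], from negative to positive sentiment
polarities = [TextBlob(text).sentiment.polarity for text in tweets]
print(f"mean polarity: {sum(polarities) / len(polarities):.3f}")

# Word cloud of all tweet text, saved as an image
WordCloud(width=800, height=400).generate(" ".join(tweets)).to_file("wc.png")
```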


While the project is not everything I wanted it to be, it provided some good practice in essential data science tools and techniques. I wrote a detailed description of this project in this article on LinkedIn. All the code for this project is posted on my GitHub page.

The realization that I am lacking in so many aspects of data science is sometimes disheartening. However, I am determined to keep moving forward. The day I find myself to be an excellent data analyst cannot be too far, right?

Up For Some Tableau in The Office? That’s What She Said!

Last summer I worked with a Game of Thrones dataset for a visualization project. I was planning to revisit that dataset to unravel some more mysteries when it occurred to me that I should look for something similar with my current favorite – The Office.

I found this wonderful dataset of lines from the show. It has dimensions like Speaker and Season, making it a tempting dataset for a Tableau exercise. The first thing that came to mind was to get into Michael’s business – That’s What She Said!

Nothing surprising here – Michael obviously stands out! I was also interested in looking at the lines from a sentiment analysis point of view. It turns out that not many people laugh in the show (at least that’s what the script says). An analysis of the lines revealed some unusual observations –

  • Angela talks more than Oscar, and Toby talks more than Stanley
  • Dwight laughs more than Pam, and Toby more than Oscar

Looking at both these dashboards together, you can see that –

  • Season 4 has the most “That’s what she said”s but the fewest lines with characters laughing.

You can find the dashboard on my GitHub page. I wanted to explore this further, but then I came across this amazing Tableau Public workbook and this brilliant article, where the author goes into data mining with R, word frequencies, and character correlations. These are great inspirations for me to explore other datasets and come up with interesting insights and dashboards.

Staying in the Loop with Python, the Queen of Data Science

My on-and-off relationship with Python began a few months before I started my Master’s degree. When I knew that I was going to turn towards IT, it was a no-brainer that I had to raise my coding game. I had learned C programming during my engineering days, but that was almost a decade ago. So, to go back to my roots, I took a weekend course in object-oriented programming with Java. While it was a lot of fun, it became clear to me that Java, though brilliant, was more of a mobile app development tool (no offense, Java lovers!). There was another language that reigned over the data science kingdom, and for any chance of success as a data analyst, I had to woo her.

I started learning Python with the MIT course on edX, Introduction to Computational Programming with Python, to understand the basic data structures and some beginner-level programs. While I got through the basics, I could not complete this course because, after a point, I found it a bit dry. And that was that. At UTD, I was already making good progress in my analytics learning trajectory thanks to my work with R programming, so there was no need to hurry things along with another language. However, as things progressed with my club Travelytics and I came across competitions online, I couldn’t delay getting my hands dirty with Python any longer.

So, I dived right in with Kaggle Learn‘s wonderful data science track, which started with 7 hours of Python covering all the basics, from variables, lists, loops, and functions to important libraries and elementary programs. This was followed by my internship at iCode, where I worked on Python projects and also trained over 50 students in the foundations of Python and machine learning. The hands-on exercises and projects at iCode, like building a movie recommender system, were of great help in laying down the foundations of Python for data science in my brain.
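The iCode recommender itself isn’t reproduced here, but a minimal item-similarity sketch in pandas conveys the idea; the ratings.csv layout and the seed title are made up for illustration.

```python
# Minimal item-similarity recommender sketch (not the iCode project itself).
# Assumes a hypothetical ratings file with columns: userId, title, rating.
import pandas as pd

ratings = pd.read_csv("ratings.csv")
matrix = ratings.pivot_table(index="userId", columns="title", values="rating")

# Movies whose rating patterns correlate with a seed title (hypothetical)
seed = matrix["Toy Story"]
similar = matrix.corrwith(seed).dropna().sort_values(ascending=False)
print(similar.head(10))
```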

Joseph Kim in one of his Python sessions at UTD

Back at UTD, it helped that my friend Joseph Kim, who was the President of the data science club, conducted some amazing hands-on sessions for people to learn Python basics. Attending these sessions helped me, and many others, stay in the loop (pun totally intended). Then came my own Python research for my facial recognition project to address crime tourism, at the end of which I had adapted three simple Python programs that detect and recognize faces in real time. This was my most memorable time spent with Python programming, as I was able to see tangible results generated by code I had written myself.

In the last few months, I have been following the extraordinary free YouTube lessons of Krish Naik. His Machine Learning playlist is the most valuable resource I have found online, helping me practise everything from impressive data science libraries like NumPy, pandas, and scikit-learn to data visualization exercises with matplotlib and seaborn. He is also an excellent coach in analytics concepts like entropy and Gini impurity, and machine learning algorithms like regression, k-means clustering, k-nearest neighbors, decision trees, and ensemble methods.

We are truly fortunate to live in a world and time where so many resources are available to anyone with an Internet connection and the will to learn. I am currently working my way through Kirill Eremenko’s well-acclaimed Udemy course on Python for data science. While all these wonderful online resources have their charm, nothing comes close to in-class training. This became evident in my object-oriented programming class with Dr. Nassim Sohaee. Her diligent classwork and challenging assignments, which I am still working on, have been excellent tools for understanding the nuts and bolts of object-oriented design and the anatomy of Python programming. I have worked on various projects dealing with loops, functions, classes, inheritance, and exception handling. In addition to all the data science exercises, this class has helped me gain more confidence in leveraging Python as a powerful programming language in the time to come.
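As a toy illustration of those building blocks – classes, inheritance, and exception handling – here is a small sketch of my own (not one of the class assignments):

```python
# Toy sketch of the OOP building blocks from class: a base class, a
# subclass overriding behavior, and exception handling around bad input.
class Account:
    def __init__(self, owner, balance=0.0):
        self.owner = owner
        self.balance = balance

    def withdraw(self, amount):
        if amount > self.balance:
            raise ValueError("insufficient funds")
        self.balance -= amount
        return self.balance

class SavingsAccount(Account):          # inheritance
    def withdraw(self, amount):         # override: add a flat fee
        return super().withdraw(amount + 1.0)

acct = SavingsAccount("Pam", 50.0)
try:
    acct.withdraw(100.0)
except ValueError as err:               # exception handling
    print(f"blocked: {err}")
```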


This is the fifth post of my #10DaysToGraduate series where I share 10 key lessons from my Master’s degree in the form of a countdown to May 8, my graduation date.

Balancing Up with SQL and Database Management

I had understood very early on while learning the basics of data science that the three pillars of a sturdy analytics structure are statistics, a programming language, and database management. So, after covering the first two in my previous posts, it’s natural that I move to database foundations.

During Fall 2018, I started learning the basics of databases in Dr. James Scott’s class. The man is a gifted speaker and entertainer. His class was full of marvelous impressions, anecdotes from his variety of experiences, and exciting PowerPoint presentations. It was here that I understood the concept of data modeling through topics like primary and foreign keys, Entity Relationship Diagrams (ERDs), schemas and sub-schemas, weak and strong relationships, and normalization. However, the most important part of this class was that it got me started in one of THE MOST IN-DEMAND tools asked for in every job role I desire – SQL!

Photo by Tobias Fischer on Unsplash

As my friend Ankita loves saying – SELECT is written in our star(*)s. It was a delight to work on class assignments that tested our knowledge of dependencies, NULL values, SQL functions, relational operators, joins, sub-queries, and views. We also got into the basics of transaction management using SQL. And since we had worked extensively with relational databases for most of the class, Dr. Scott spent the last leg of our semester teaching us the basics of NoSQL and MongoDB. It formed a great runway for my future big data endeavors.
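To keep those joins and sub-queries fresh, here is a self-contained sketch using Python’s built-in sqlite3 as a stand-in for the class DBMS; the schema and rows are invented for illustration.

```python
# Self-contained join + sub-query sketch with Python's built-in sqlite3.
# The schema and data are made up for illustration.
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE dept (dept_id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE emp  (emp_id  INTEGER PRIMARY KEY, name TEXT,
                       salary REAL, dept_id INTEGER REFERENCES dept);
    INSERT INTO dept VALUES (1, 'Sales'), (2, 'Accounting');
    INSERT INTO emp  VALUES (1, 'Jim', 52000, 1), (2, 'Dwight', 55000, 1),
                            (3, 'Oscar', 61000, 2), (4, 'Angela', 58000, 2);
""")

# Join + aggregate: average salary per department, filtered by a
# sub-query against the overall company average
rows = con.execute("""
    SELECT d.name, AVG(e.salary) AS avg_salary
    FROM emp e JOIN dept d ON e.dept_id = d.dept_id
    GROUP BY d.name
    HAVING AVG(e.salary) > (SELECT AVG(salary) FROM emp)
""").fetchall()
print(rows)   # [('Accounting', 59500.0)]
```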

My SQL and database learning during this semester culminated in a project where I got my hands dirty with some data munging, database modeling, and even regression using SQL and R. Just cleaning the data before we could perform any kind of retrieval was a task in itself. Thanks to this class, I find myself proficient in creating ERDs and in working with various SQL joins and clauses to retrieve simple as well as aggregated data from complex data sets.


This is the third post of my #10DaysToGraduate series where I share 10 key lessons from my Master’s degree in the form of a countdown to May 8, my graduation date.

Diving Deep into Business Analytics with R Programming

When a class is named after your graduation major, and one of the most popular disciplines in the present world, you know it’s going to be pivotal in your learning path. BA with R proved to be just that. The brilliant Dr. Sourav Chatterjee made it clear right at the beginning that R programming would be used just as a tool (which it is) to understand and master the nuances of business analytics. Having said that, his course material left no stone unturned in taking us through every aspect of R programming needed for data science.

I had worked a bit with Java and PHP, but this was my first experience with the R programming language. I started with an introductory course on DataCamp to quickly learn the very basics of R, like vectors, matrices, and data frames. Then, in class, Dr. Chatterjee proved to be a dedicated and patient professor as he started with basic manipulations and sample generation in R and then quickly moved on to the foundations of data analytics. We got familiar with libraries like tidyverse, forecast, and gplots, and toyed with data visualization using ggplot on some interesting data sets. We created several plots, graphs, charts, and heatmaps before scaling up to larger data sets.

This was followed by some of the most important things a business analyst or data scientist learns in their career. So far, everything had looked pretty straightforward to me, but now was the time to push boundaries and actually dive deep into analytics. I was introduced to dimension reduction, correlation matrices, and the all-important analytics task of principal component analysis (PCA). I learnt how to evaluate model performance, create lift and decile charts, and assess classification results with the help of a confusion matrix – all with just a few lines of code. As Dr. Chatterjee explained time and again, it was never about the code. It was about knowing when and how to use it and what to do with the result.
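The class worked in R, but the “few lines of code” point translates directly to Python; here is a hedged scikit-learn sketch of PCA followed by a confusion-matrix evaluation on a toy dataset:

```python
# PCA for dimension reduction, then a confusion matrix to evaluate a
# simple classifier - sketched in scikit-learn on the iris toy dataset.
from sklearn.datasets import load_iris
from sklearn.decomposition import PCA
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import confusion_matrix
from sklearn.model_selection import train_test_split

X, y = load_iris(return_X_y=True)
X = PCA(n_components=2).fit_transform(X)   # 4 features -> 2 components

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=42)
model = LogisticRegression(max_iter=200).fit(X_tr, y_tr)
print(confusion_matrix(y_te, model.predict(X_te)))
```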

Dr. Sourav Chatterjee’s BA with R class

We then followed the natural analytics progression with linear and multiple regression, where I learned about partitioning data and generating predictions. This was followed by a thorough understanding of the KNN model and how and when to run it. By now, I was beginning to get the hang of problem statements and the approach to take to solve them, thanks to class assignments on real-world scenarios like employee performance and spam detection. Through the examples done in class, it was easy to grasp the concepts of R-squared and p-values and the roles they play in model evaluation. It was in this class that I understood logistic regression, discriminant analysis, and association rules for the first time, and I have been working with them ever since, in every data science course or project that I have taken up.
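In the same spirit, here is a minimal Python sketch of that partition-train-predict workflow with KNN, on synthetic data rather than the class’s real-world sets:

```python
# Partition, fit KNN, predict: the workflow from class, sketched in
# scikit-learn on a made-up spam-detection-style dataset.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier

X, y = make_classification(n_samples=500, n_features=10, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

knn = KNeighborsClassifier(n_neighbors=5).fit(X_tr, y_tr)
print(f"holdout accuracy: {knn.score(X_te, y_te):.3f}")
```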

All of this knowledge and Dr. Chatterjee’s guidelines were put to use in the final project where I worked with a group led by the talented Abhishek Pandey on London cabs data. After rigorous work on large data sets downloaded/extracted from various sources, we trained a model to predict arrival times for cabs by comparing RMSE across random forests, logistic regression, and SVMs. It was a great way to put into practice everything we had learned over four months.

And with that, I had laid a robust foundation in data analytics and was ready to build on it in the time to come. By January 2019, I was confident enough to dive into analytics projects and work on complex data sets to generate prediction models using the tools taught by Dr. Sourav Chatterjee.


This is the second post of my #10DaysToGraduate series where I share 10 key lessons from my Master’s degree in the form of a countdown to May 8, my graduation date.

Saying “Hello, old friend” to Statistics and Analytics

There’s a reason I chose Statistics to be no. 10 and the first one in this countdown. When you want to enter the world of data science, you realize very quickly that you can do nothing without the concepts of statistics being clear in your head. The University of Texas at Dallas obviously understood this and made Statistics and Analytics a core course. So, when I started my Master’s program in Fall 2018, I enrolled for this course with Dr. Avanti Sethi in my very first semester. Dr. Sethi proved to be an excellent teacher, and I am honored to have had the pleasure of knowing and working with him during the past two years.

Photo by Luke Chesser on Unsplash

Thanks to his well-designed lectures and assignments, I was able to build a strong statistical foundation with good practice of basic concepts like measures of central tendency (mean, median, mode) and measures of statistical dispersion (variance, standard deviation, IQR). The course then went on to cover concepts like population, sampling, estimation, z-score, t-score, Normal distribution, hypothesis testing, p-value, chi-square tests, ANOVA tests and regression. Dr. Sethi, who is an Excel ninja, also conducted a separate hands-on session for students interested in learning Advanced Excel and taught us how to build macros. The problem statements in his assignments covered real-life scenarios ranging from sports team performances and automobile dealerships to Halloween sales and manufacturing plant obstacles.
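Those foundations still come up in code all the time. As a small illustration (using scipy here, rather than the Excel we used in class), a two-sample t-test and its p-value on made-up data:

```python
# Two-sample t-test sketch: do two made-up manufacturing lines differ
# in mean output? (scipy shown here; the class itself used Excel.)
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
line_a = rng.normal(loc=100, scale=5, size=30)   # made-up samples
line_b = rng.normal(loc=103, scale=5, size=30)

t_stat, p_value = stats.ttest_ind(line_a, line_b)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
# Reject H0 (equal means) at alpha = 0.05 if p < 0.05
```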

Dr. Sethi’s class in Sep 2018

And just like that, right in the very first semester, Statistics and Analytics had set the ball rolling on my data science journey. I have been going back to Dr. Sethi’s assignments every few months, to make sure I don’t forget the very foundations of everything that I have learned in analytics so far. It was a memorable semester thanks to this wonderful class, and left me with a lot of confidence to move forward.

This is the first post of my #10DaysToGraduate series where I share 10 key lessons from my Master’s degree in the form of a countdown to May 8, my graduation date.

Facial Recognition with Python, OpenCV and Raspberry Pi

Everybody Loves Recognition! Technically, recognition is the identification of someone or something from previous encounters or knowledge. But how can it be used to solve real-world problems? This was the premise of a facial recognition project I built using Python and OpenCV on a Raspberry Pi. All the code for this project is available on my GitHub page.

The Problem

Crime tourism, which is very different from ‘crime against tourists’, refers to organized gangs that enter countries on tourist visas with the sole intention of committing crime or making a quick buck. Residing in their destination countries for just a few weeks, they seek to inflict maximum damage on locals before returning to their home countries. It’s something that has been picking up all over the world, especially in Canada, the US, and Australia. Here’s an excerpt from a Canadian report:

“Over the weekend, we got a notification that there were at least three people arrested,” he said. “And there were two detained yesterday in a different city. It’s just a growing problem.” When police in Australia broke up a Chilean gang in December, they thanked Canadian police for tipping them off. Three suspects who’d fled Ontario and returned to Chile turned up in Sydney, Australia. The tip from Halton Regional Police led to eight arrests and the recovery of more than $1 million worth of stolen goods.

While the tip came in handy, it would be much more effective to have portable facial recognition devices at airports and tourist spots to identify such criminals and stop them before they can strike in a new destination.

The Solution

I used crime tourism as an example problem to demonstrate the use of facial recognition as a solution. It started with buying a Raspberry Pi 3 ($35) and a 5 MP 1080p mini Pi camera module ($9) and configuring them.

Then, using Adrian Rosebrock’s brilliant tutorial, I embarked on a 10-hour journey (full of mistakes made on my part) to compile OpenCV on my Raspberry Pi! Here are some important things to remember from this compilation expedition:

  • You need to expand your file system to be able to use the Pi’s entire 32 GB of storage
  • You need to create a Python 3 virtual environment and always make sure that you’re working inside that environment
  • Before you begin the compile process, increase the swap space from 100 MB to 2048 MB so that you can compile OpenCV with all four cores of the Raspberry Pi (and without the compile hanging due to memory exhaustion)
  • After installing NumPy and completing your OpenCV compilation, set the swap space back to 100 MB

Python Code for Facial Recognition

I then followed MjRobot’s tutorial to write three simple Python programs for the actual facial recognition using OpenCV. Object detection is performed using Haar feature-based cascade classifiers, an effective object detection method proposed by Paul Viola and Michael Jones in their 2001 paper, “Rapid Object Detection using a Boosted Cascade of Simple Features”. It is a machine-learning-based approach where a cascade function is trained on a large number of positive and negative images; the trained classifier is then used to detect objects in other images. The Haar cascades directory is readily available on the OpenCV GitHub page.
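As a taste of the detection step, here is a minimal Haar-cascade sketch. It assumes the cascade XML path exposed by the opencv-python package; the full capture/train/recognize trio is on my GitHub page.

```python
# Minimal Haar-cascade face detection sketch: grab one frame from the
# camera and box detected faces in green. The trained cascade XML
# ships with OpenCV.
import cv2

cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

cam = cv2.VideoCapture(0)                      # Pi camera or webcam
ok, frame = cam.read()
if ok:
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = cascade.detectMultiScale(gray, scaleFactor=1.3, minNeighbors=5)
    for (x, y, w, h) in faces:
        cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
    cv2.imwrite("detected.jpg", frame)
cam.release()
```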

Demonstration

I presented this project on my last day as the President of the UTD club Travelytics. There, I conducted a live demonstration of the Pi cam capturing my face with the first Python program, training the model with the second, and performing real-time facial recognition with the third. Here’s a glimpse:

This project proved to be an excellent route for me to learn the basics of Python, OpenCV, computer vision, and the Raspberry Pi, and to see how a low-budget, effective facial recognition solution can be applied to complex problems.

Grasping at Straws


As I am all set to enter the final semester of my Master’s degree, I am feeling extremely anxious. While most people are concerned about finding a full-time job in a state or company of their preference, for me that thought is still miles away. My immediate concern is how much I actually know as a data engineer/analyst. 18 months ago, I made the switch from product manager/actor/writer to Business Analytics student. The goal was to become proficient in the concepts of data mining and analysis, since it was a promising sector and every industry seemed to be moving toward a heavy reliance on data science. Now, as I get closer to my graduation date, I keep questioning the extent of my knowledge. And to my disappointment, I keep coming across questions I do not know the answers to.

I need to fix this situation, and quickly. I have 117 days to go until my graduation date (May 8, 2020). So, I am taking a start-from-scratch approach for now. The idea is to revise everything I have learnt at UTD as part of my course, followed by a couple of online courses and certifications. This includes the basics of statistics (p-values, hypothesis testing), database foundations, SQL and NoSQL, mining concepts like principal component analysis, regression techniques, clustering, time series, big data (Hadoop, Spark, Hive), language basics in Python and R, and data visualization techniques.

To devise a plan for this, I am contacting some students I look up to and asking for their advice on the best approach to ensure maximum retention. I am also hoping to audit some classes this final semester. I have just one class left to fulfill my graduation requirements, but there is so much more I wish to learn. Natural Language Processing, Applied Machine Learning, and Business Data Warehousing are my top picks. I have written to the professors asking for their permission to let me sit in on their lectures.


Finally, this will also be my last semester as the president of Travelytics – a club I conceived and founded with the help of some of my friends. After one final project presentation (Computer Vision with Python, OpenCV and Raspberry Pi), it will be time to hand over the reins of this organization to the next batch of students.

117 days to go. Time for a final sprint!

Travelytics presents BIG DATA IN TRAVEL with Dr. Rick Seaney

When we kicked off our first Travelytics event in 2018, Prof. Kevin Short at UTD was kind enough to grace us with his presence and speak on the use of data in the airline industry. And now, thanks to him, we have a travel domain stalwart visiting UTD and conducting a special lecture for Travelytics. The topic is an exciting one – BIG DATA in the TRAVEL INDUSTRY. We look forward to an exciting session with Dr. Seaney and a bunch of enthusiastic data science students.

Dr. Rick Seaney - Big Data in Travel