
How do you import data into Python?

Python is growing in popularity thanks to the large number of packages that cater to the needs of the data scientist. Importing data into Python is therefore the starting point for any data science project you will undertake. This guide gives you a comprehensive introduction to the world of importing data into Python.

There are a number of file formats available that offer you a source of structured and unstructured data.

The various sources of structured data are:

  • .CSV files
  • .TXT files
  • Excel files
  • SAS and STATA files
  • HDF5 files
  • Matlab files

The various sources of unstructured data are:

  • Data from the web in the form of HTML pages

This guide will teach you the fundamentals of importing data from all these sources straight into your Python workspace of choice with minimal effort. So let’s get coding!

1) CSV files

CSV files usually contain mixed data types, so it’s best to import them as a DataFrame using the pandas package in Python. You can do this with the code snippet shown below:
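
Here is a minimal sketch of that snippet (the file name ‘data.csv’ is just a placeholder):

    # Import the pandas package
    import pandas as pd

    # Store the file of interest in a variable
    filename = 'data.csv'

    # Read the file into Python as a DataFrame
    data = pd.read_csv(filename)

    # Display the first 5 rows along with the column names
    print(data.head())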

We first import the pandas package and then store the name of the file of interest in a variable called ‘filename’. We then use the pd.read_csv() function to read the file into Python and save the result in a variable called ‘data’. The data.head() method displays the first 5 rows of the dataset along with the column names.

2) TXT files

The next type of file that we might encounter on our quest to becoming a master data scientist is the .TXT file. Importing these files into Python is as easy as importing the CSV file and can be done with the code snippet shown below:
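
A minimal sketch, assuming a file named ‘file.txt’:

    # Open 'file.txt' as a read-only document using the 'r' argument
    with open('file.txt', 'r') as myfile:
        # Read the entire file and print its contents
        print(myfile.read())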

The above code that uses ‘with’ is called a context manager in Python. The open() function opens the file ‘file.txt’ as a read-only document using the argument ‘r’. We then read the file using the myfile.read() method and print out its contents. If you want to write to the .txt file that you just imported, you would use the ‘w’ argument with the open() function instead of ‘r’.

3) Excel Files

Excel files are a huge part of any business operation, and it becomes imperative that you learn exactly how to import these into Python for data analysis as a pro data scientist. In order to do this we can use the code snippet shown below:
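
A minimal sketch, with ‘data.xlsx’ as a placeholder file name:

    import pandas as pd

    # Store the Excel file's name in a variable
    file = 'data.xlsx'

    # Import the file into Python
    excel_file = pd.ExcelFile(file)

    # Print the sheet names present in the Excel file
    print(excel_file.sheet_names)

    # Extract the contents of the first sheet as a DataFrame
    df = excel_file.parse(excel_file.sheet_names[0])
    print(df.head())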

In the above code we first imported pandas. We then stored the Excel file’s name in a variable called ‘file’, after which we imported the file into Python using the pd.ExcelFile() function. Using the .sheet_names attribute we printed out the sheet names present in the Excel file. We then extracted the contents of the first sheet as a DataFrame using the .parse() method.

4) SAS and STATA files

Statistical analysis software is widespread in the business analytics space and deserves due attention. Let’s take a close look at how we can get these files into Python for further analysis.

Importing SAS files requires the sas7bdat package, while importing STATA files requires only the pandas package.
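
A minimal sketch of both imports (‘data.sas7bdat’ and ‘data.dta’ are placeholder file names):

    # SAS: the sas7bdat package reads .sas7bdat files
    from sas7bdat import SAS7BDAT

    with SAS7BDAT('data.sas7bdat') as file:
        df_sas = file.to_data_frame()

    # STATA: pandas reads .dta files directly
    import pandas as pd

    df_stata = pd.read_stata('data.dta')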

5) HDF5

The HDF5 file format stands for Hierarchical Data Format, version 5. HDF5 is very popular for storing large quantities of numerical data, ranging from a few gigabytes to exabytes, and it’s widely used in the scientific community for storing experimental data. Fortunately for us, we can import these files quite easily into Python using the code snippet shown below:
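
A minimal sketch, with ‘data.hdf5’ as a placeholder file name:

    import h5py

    # Open the HDF5 file in read-only ('r') mode; 'w' opens it for writing
    filename = 'data.hdf5'
    data = h5py.File(filename, 'r')

    # An HDF5 file is organised like a dictionary of groups and datasets
    for key in data.keys():
        print(key)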

In the above code snippet the package that we are using to import the HDF5 file is the h5py package. The h5py.File() function can open the file in both read-only (‘r’) and write (‘w’) modes.

6) Matlab files

MATLAB files are used quite extensively by electronics and computer engineers for designing various electrical and electronic systems. MATLAB is built around linear algebra and can store a lot of numerical data that we could use for analysis. In order to import a MATLAB file we can use the code snippet illustrated below:
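
A minimal sketch, with ‘data.mat’ as a placeholder file name:

    import scipy.io

    # Load the MATLAB file; the result is a dictionary
    mat = scipy.io.loadmat('data.mat')

    # The keys are your MATLAB variable names
    print(mat.keys())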

MATLAB files can be imported using the scipy.io package and the scipy.io.loadmat() function that comes with it. When we import MATLAB files into Python, they arrive as a dictionary whose key:value pairs hold your data from MATLAB.

7) Data from the web

Data from the web is usually unstructured, with no order or linearity to it. However, we can find structured data on some websites like Kaggle and the UCI Machine Learning Repository. Such files can be downloaded directly into Python from the web using the code snippet below:
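
A minimal sketch (the URL is a placeholder; substitute the address of the CSV file you want):

    from urllib.request import urlretrieve
    import pandas as pd

    # Download the CSV file and save a local copy
    url = 'https://example.com/data.csv'
    urlretrieve(url, 'data.csv')

    # Read the local copy into a DataFrame
    df = pd.read_csv('data.csv')
    print(df.head())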

In the code above we have used the urlretrieve() function from urllib.request in order to download a CSV file from my website. We then read the local copy into a DataFrame using the pandas package.

In order to import HTML pages into Python we can make use of the ‘requests’ package and the couple of lines of code shown below:
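
A minimal sketch with a placeholder URL:

    import requests

    # Send a request to the server for the webpage
    url = 'https://example.com'
    file = requests.get(url)

    # The .text attribute holds the page's HTML as a string
    text = file.text
    print(text[:500])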

The requests.get() function sends a request to the server for the webpage, while the response’s .text attribute gives you the webpage as a string of text.

Most of the time, data from webpages doesn’t really make a lot of sense. It’s usually jumbled-up text and a lot of markup that does not resonate well with anybody. In order to make sense of the data that we import from the web, we can make use of the BeautifulSoup package available for Python.
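
A minimal sketch, again with a placeholder URL:

    import requests
    from bs4 import BeautifulSoup

    # Fetch the page and parse its HTML
    url = 'https://example.com'
    html = requests.get(url).text
    soup = BeautifulSoup(html, 'html.parser')

    # Display the HTML in a structured, indented fashion
    print(soup.prettify())

    # Print the title of the HTML page
    print(soup.title)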

The .prettify() method displays your HTML in a structured, indented fashion, while the .title attribute gives you the title of your HTML page. For more information about the various functions and the in-depth documentation of the BeautifulSoup package, please visit: https://www.crummy.com/software/BeautifulSoup/bs4/doc/

Now that you have a pretty good idea about how you can import data into Python, you can finally start your next big Hackathon or Kaggle competition! Be sure to keep exploring the various kinds of data you can work with and all the packages documented on the Python documentation pages found on the web. There’s no end to the knowledge you can acquire.

Happy coding!


Getting started with text mining in R – a complete guide

Text-based data is all around us – we find text on blogs, in reviews, articles, social media, social networks like LinkedIn, e-mails and surveys. It is therefore critical that companies and firms use this data to their advantage to gain valuable insights. This article provides you with a comprehensive guide that will help you get started with text mining using R.

Before heading into the technical details that encompass the world of text mining, let’s try and understand what your workflow should look like when it comes to text mining.

The package that makes text mining possible in R is the qdap package. qdap (Quantitative Discourse Analysis Package) is an R package designed to assist in quantitative discourse analysis. The package stands as a bridge between qualitative transcripts of dialogue and statistical analysis & visualization. Below I will showcase all the techniques and tools that you can utilize for effective text mining using the qdap package.

The qdap package in R offers us a wide array of text mining tools. Assume we have a paragraph of text and we want to count the most frequently used words in it – we can use the qdap package as shown below:
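
A minimal sketch using qdap’s freq_terms() function, with a made-up sentence:

    library(qdap)

    # A paragraph of text to mine
    text <- "Text mining is fun and text mining is useful because text is everywhere"

    # Count the most frequently used words in the text
    frequent_terms <- freq_terms(text, top = 5)
    frequent_terms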


One of the most important parts of text mining is cleaning your messy text data. The “tm” package that comes with the “qdap” package in R lets you do just that. The tm package essentially allows R to interpret text elements in vectors or data frames as documents. It does this by first converting a text element into a source object using the VectorSource() or DataframeSource() functions and then converting these source objects into a corpus. Corpora can be manipulated and cleaned to our requirements. Let me illustrate how R does this with an example.
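
A minimal sketch of that conversion, with two made-up text elements:

    library(tm)

    # A character vector of text elements
    text <- c("Data science is fun", "Text mining needs clean data")

    # Convert the vector into a source object, then into a corpus
    text_source <- VectorSource(text)
    text_corpus <- VCorpus(text_source)

    # Inspect the corpus
    inspect(text_corpus)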

Let’s consider the following dataset from Netflix:

We are going to isolate the ratingLevel column and use it for text mining.
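
A sketch of that step, assuming the dataset has been saved as ‘netflix.csv’ (a placeholder file name):

    library(tm)

    # Read the Netflix dataset into a data frame
    netflix <- read.csv("netflix.csv", stringsAsFactors = FALSE)

    # Isolate the ratingLevel column and build a corpus from it
    rating_source <- VectorSource(netflix$ratingLevel)
    rating_corpus <- VCorpus(rating_source)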

Once you have your corpus ready, you can proceed to pre-process your text data using the tm_map() function. Let’s illustrate how you can use the tm_map() function with an example:
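
A sketch using the corpus built above (content_transformer() wraps base R’s tolower so that tm can apply it to each document):

    # Make every word in the corpus lowercase
    clean_corpus <- tm_map(rating_corpus, content_transformer(tolower))

    # Compare a document before and after
    content(rating_corpus[[1]])
    content(clean_corpus[[1]])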

As you can see in the code executed above, passing “tolower” to the tm_map() function has made all the words lowercase.

The various pre-processing functions that you can use with tm_map() are given below:

  • removePunctuation() – removes all punctuation, like periods and exclamation marks
  • removeNumbers() – removes all numeric values from your text
  • removeWords() – removes words like “is” and “and” that are defined by you
  • stripWhitespace() – collapses runs of tabs and spaces in your text into single spaces
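
A sketch chaining these steps on the corpus from above:

    # Remove punctuation, numbers, chosen words and extra whitespace
    clean_corpus <- tm_map(clean_corpus, removePunctuation)
    clean_corpus <- tm_map(clean_corpus, removeNumbers)
    clean_corpus <- tm_map(clean_corpus, removeWords, c("is", "and"))
    clean_corpus <- tm_map(clean_corpus, stripWhitespace)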

Word stemming is another pre-processing technique that is used to find the common root shared by a pool of words. Assume you have 4 words – Ludacris, Ludabite, Ludarock and LudaMate. If you apply the stemDocument() function to these 4 words you would extract ‘Luda’ as the common root between them. This is illustrated by the code snippet shown below:
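
Since stemDocument() clips standard English suffixes, here is a small sketch with ordinary words:

    library(tm)

    # Stem related words down to their common root
    words <- c("complicated", "complication", "complicate")
    stemDocument(words)
    # all three reduce to the common stem "complic"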

The qdap package also offers other powerful cleaning functions such as:

  • bracketX() – removes all text in brackets – “Data (Science)” becomes “Data”
  • replace_number() – “10” becomes “ten”
  • replace_abbreviation() – “Dr” becomes “Doctor”
  • replace_symbol() – “%” becomes “percent”
  • replace_contraction() – “don’t” becomes “do not”
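
A quick sketch of these functions in action:

    library(qdap)

    bracketX("Data (Science)")         # "Data"
    replace_number("10")               # "ten"
    replace_abbreviation("Dr. Smith")  # "Doctor Smith"
    replace_symbol("50%")              # "50 percent"
    replace_contraction("don't")       # "do not"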

Sometimes you would want to remove very common words from text, such as “is”, “and”, “to” and “the”. You might also want to remove words that you think might not have any significant impact on your analysis. For example, if you downloaded a dataset titled “World Bank” it might be useful to remove the word “World” or “Bank”, as it is likely to be repeated many times with no significant impact. You can implement this in R using the stopwords() function together with removeWords(), as shown below:
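
A sketch matching the output described below (the sample sentence is made up):

    library(tm)

    # A sample rating description
    text <- "Parental guidance is suggested as some material may not be suitable"

    # Add our own word to the standard English stop word list
    words_gone <- c(stopwords("en"), "Parental")

    # Remove all of those words from the text
    words_gone_forever <- removeWords(text, words_gone)
    words_gone_forever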

Notice how the word “Parental”, which was added to the words_gone vector, has disappeared from words_gone_forever. stopwords(“en”) contains a list of stop words such as “not” and “be” that were also eliminated from the words_gone_forever vector.

There are two types of matrices that can tell you how many times a particular term occurs in a piece of text – the Term Document Matrix (TDM) and the Document Term Matrix (DTM). The structure of, and code needed to produce, these two types of matrices are illustrated below:
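
A sketch of both matrices, built from the cleaned corpus above:

    # Term Document Matrix: terms in rows, documents in columns
    tdm <- TermDocumentMatrix(clean_corpus)

    # Document Term Matrix: documents in rows, terms in columns
    dtm <- DocumentTermMatrix(clean_corpus)

    # Convert to plain matrices to inspect the counts
    tdm_m <- as.matrix(tdm)
    dtm_m <- as.matrix(dtm)
    dim(tdm_m)
    dim(dtm_m)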

In the TermDocumentMatrix we can see how the words are listed along the rows while each of the Ratings is in the columns.

In the DocumentTermMatrix we can see how the words are listed in the columns while the Ratings of shows are listed in the rows.

Now that you know how to clean your text-based data and pre-process it to your requirements, we need tools to visualize the clean text so that we can display our insights to the board, the CEO, a manager or your audience of interest. There are many tools that can be used to visualize your text-based data.

The first visualization tool that you would want to use is the bar plot. We can use the bar plot to show the most frequent words that occur in our text-based data, as shown below:
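
A sketch using the TDM built earlier:

    # Sum the counts for each word across all documents
    term_frequency <- rowSums(as.matrix(tdm))
    term_frequency <- sort(term_frequency, decreasing = TRUE)

    # Bar plot of the 10 most frequent words
    barplot(term_frequency[1:10], col = "tan", las = 2)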

The next visualization tool is the word cloud. Word clouds are super useful because they instantly showcase how frequently a word appears, or how significant it is, through its size in the cloud. We can implement word clouds in R using the code shown below.
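
A sketch using the word frequencies computed above:

    library(wordcloud)

    # Draw a word cloud of up to 50 words, sized by frequency
    wordcloud(names(term_frequency), term_frequency,
              max.words = 50, colors = "blue")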

The neat thing about word clouds is that you can use them to compare the words between two different text-based datasets, or to find the common words between them. Word clouds can be created using the wordcloud package in R.


Another useful tool for visualizing text is the word network, which shows you the relationships between a particular word and other words. Take a look at an example of a word network below:
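
One way to draw such a network is qdap’s word_network_plot() function, sketched here with two made-up sentences:

    library(qdap)

    # Two short sentences to relate words across
    text <- c("Data science is fun", "Text mining makes data useful")

    # Plot the network of word relationships
    word_network_plot(text)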

With the tools you learnt above, you are now ready to tackle your first text mining dataset. The world of text mining is huge, and there is a vast number of concepts and tools still left for you to explore.

Never stop learning and happy text mining!