Difference between revisions of "Leif-big-data"

From Earlham CS Department
Revision as of 12:51, 6 December 2011

  • Project title: Stories in Words
  • Project data set: Google Ngrams - 1gram (English)
Project Tasks
  1. Identifying and downloading the target data set
    • This project uses Google Ngrams - 1gram (English), which can be downloaded from Google Books at http://books.google.com/ngrams/datasets as CSV files 0 through 10.
  2. Data cleaning and pre-processing
    • The raw CSV file values are tab-separated, so I used a script to replace the tabs with commas: tr '\t' ',' < input_file.csv > output_file.csv
  3. Load the data into your Postgres instance
    • I used a script which, when piped into Postgres, drops any existing tables, creates the tables, copies the data in, and then indexes the tables.
  4. Develop queries to explore your ideas in the data
  5. Develop and document the model function you are exploring in the data
  6. Develop a visualization to show the model/patterns in the data
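The cleaning step (task 2) can be sketched as a short shell script. The sample rows below are made up (real 1-gram rows are tab-separated, roughly ngram, year, match_count, page_count, volume_count), and the file name is hypothetical; the tr invocation itself is the one described above.

```shell
set -e

# Tiny made-up sample standing in for one raw tab-separated 1-gram file.
printf 'awesome\t1950\t31\t28\t20\n' >  sample-1gram.csv
printf 'awesome\t1951\t44\t39\t27\n' >> sample-1gram.csv

# The conversion from task 2, applied to each raw file in turn.
for f in sample-1gram.csv; do
    tr '\t' ',' < "$f" > "${f%.csv}-clean.csv"
done

cat sample-1gram-clean.csv
```

Note that a plain tr swap assumes no 1-gram token itself contains a comma; tokens that do (punctuation 1-grams, for instance) would need quoting before loading.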
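The load script from task 3 might look something like the following sketch, which only writes the SQL to a file. The table name, column layout, and cleaned-file name are assumptions for illustration, not the author's actual script.

```shell
set -e

# Hypothetical drop/create/copy/index script for task 3.
cat > load_ngrams.sql <<'SQL'
DROP TABLE IF EXISTS one_grams;

CREATE TABLE one_grams (
    ngram        text,
    year         integer,
    match_count  bigint,
    page_count   bigint,
    volume_count bigint
);

-- One \copy per cleaned file; repeat for files 0 through 10.
-- (hypothetical cleaned-file name)
\copy one_grams FROM '1gram-0-clean.csv' WITH CSV

-- Index last, after the bulk load, so the copies stay fast.
CREATE INDEX one_grams_ngram_idx ON one_grams (ngram);
CREATE INDEX one_grams_year_idx  ON one_grams (year);
SQL
```

Piping it into Postgres (e.g. psql -d ngrams -f load_ngrams.sql) performs the drop/create/copy/index sequence described in task 3.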
Tech Details
  • Node: as6
  • Path to storage space: local machine
Results
  • The visualization(s)
  • The story
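One way the query and visualization stages (tasks 4 and 6) could feed the results above, assuming a hypothetical one_grams(ngram, year, match_count, ...) table and an example word; the author's actual queries and GNUPLOT script are not shown.

```shell
set -e

# Extract one word's yearly counts as CSV for plotting, e.g.:
#   psql -d ngrams -A -F, -t -f extract.sql > war_by_year.csv
cat > extract.sql <<'SQL'
SELECT year, SUM(match_count)
FROM one_grams
WHERE ngram = 'war'
GROUP BY year
ORDER BY year;
SQL

# A minimal GNUPLOT script over the extracted CSV, run as:
#   gnuplot plot.gp
cat > plot.gp <<'GP'
set datafile separator ","
set xlabel "Year"
set ylabel "Occurrences of 'war'"
set terminal png size 800,600
set output "war_by_year.png"
plot "war_by_year.csv" using 1:2 with lines title "war"
GP
```

Aggregating with SUM guards against a word appearing in more than one row for the same year; the resulting PNG is one candidate for the visualization bullet above.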