Go to mortgage-data-analysis/loading_and_modeling and run pip install requests==2.5.3.
Type python download_freddie_mac.py. Enter your credentials and the quarters to download when prompted. The script downloads a zip file for each quarter into the current folder.
Type python download_fannie_mae.py. Enter your credentials and the quarters to download when prompted. The script downloads a zip file for each quarter into the current folder.
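Taken together, the download steps above look like this at the command line (the prompts for credentials and quarters come from the scripts themselves):

```shell
cd mortgage-data-analysis/loading_and_modeling
pip install requests==2.5.3

# Each script prompts for credentials and the quarters to download,
# then writes one zip file per quarter into the current folder.
python download_freddie_mac.py
python download_fannie_mae.py
```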
Move the data into the HDFS directory
Start Hadoop, Postgres, and Hive on the EC2 instance.
If this is your first time running the pipeline, type . create_hdfs_dir.sh. This creates the necessary HDFS folders.
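A script like create_hdfs_dir.sh typically amounts to a few hdfs dfs -mkdir calls; a minimal sketch, using hypothetical directory names (the real script defines its own paths):

```shell
# Hypothetical HDFS layout -- the actual folder names come from create_hdfs_dir.sh.
hdfs dfs -mkdir -p /user/hadoop/fannie_mae
hdfs dfs -mkdir -p /user/hadoop/freddie_mac

# Confirm the directories exist.
hdfs dfs -ls /user/hadoop
```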
Type . unzip_to_HDFS.sh. This unzips the downloaded files into mortgage-data-analysis/temp_download, removes the zip files, loads the unzipped files into HDFS, and then removes the unzipped files. Note that this step can take 15-30 minutes depending on the number of quarters being loaded.
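The per-archive work that unzip_to_HDFS.sh performs can be sketched as the loop below; the temp directory matches the path named above, but the HDFS target path is an assumption, not taken from the real script:

```shell
#!/bin/sh
TMP_DIR=mortgage-data-analysis/temp_download
HDFS_DIR=/user/hadoop/mortgage   # hypothetical target; the real script sets its own

for zip in *.zip; do
  unzip -o "$zip" -d "$TMP_DIR"             # 1. unzip into temp_download
  rm -f "$zip"                              # 2. remove the zip file
  hdfs dfs -put "$TMP_DIR"/* "$HDFS_DIR"/   # 3. load the unzipped files into HDFS
  rm -f "$TMP_DIR"/*                        # 4. remove the local unzipped files
done
```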
Create Hive tables
Go to mortgage-data-analysis/transforming and type . create_hive_tables.sh. This creates Hive metadata for the base Fannie Mae and Freddie Mac data in HDFS and for the combined data sets. Note that this script can take several hours to run, depending on how many quarters of data are loaded (for 15 quarters, the acquisition data took about 10 minutes and the performance data took roughly 2 hours).
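The DDL that create_hive_tables.sh runs over the HDFS data is roughly of this shape; the table name, columns, and location below are illustrative only (the real schema covers the full loan-level layout):

```shell
hive -e "
CREATE EXTERNAL TABLE IF NOT EXISTS freddie_acquisition (  -- hypothetical name and columns
  loan_seq_number STRING,
  credit_score    INT,
  orig_upb        DECIMAL(12,2),
  orig_int_rate   DECIMAL(6,3)
)
ROW FORMAT DELIMITED FIELDS TERMINATED BY '|'              -- the source files are pipe-delimited
LOCATION '/user/hadoop/mortgage/freddie_acquisition';      -- hypothetical HDFS path
"
```

An external table only registers metadata over files already sitting in HDFS, which is why this step leaves the loaded data in place.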
Use Tableau to visualize data
Once the Hive tables are created, start HiveServer2 by typing hive --service hiveserver2 &.
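Once HiveServer2 is up (it listens on port 10000 by default), you can sanity-check it with Beeline before pointing Tableau at it:

```shell
hive --service hiveserver2 &

# Wait for the server to come up, then verify the tables are visible.
beeline -u jdbc:hive2://localhost:10000 -e 'SHOW TABLES;'
```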
Set up an ODBC connection to the server in Tableau and visualize the data as needed. A sample Tableau workbook, along with a CSV file extracted from one of the Hive tables, is available in the mortgage-data-analysis/serving folder.
About
An ETL process that downloads, transforms, and loads Freddie Mac and Fannie Mae mortgage data.