A refresher on typical Git & GitHub workflow at NN.
This content was presented to Nelson\Nygaard staff at a Lunch and Learn webinar on Thursday, January 26th, 2023, and is available as a recording.
These are the suggested options for running the R/Python Extract, Transform, Load (ETL) workflow locally:
Inputs: Our data typically come from clients or the web, and are transmitted to some folder for storage/archiving before we perform ETL steps on them. Suggested storage locations, with notes on each, are listed below. One critical point: we do not recommend storing raw data in GitHub – GitHub has a 100 MB file size limit and is built for storing code, not data.
SharePoint
Use the get_sharepoint_dir() function from {nntools} for easy access to your SharePoint parent folder in a way that will work across machines if someone else runs your code (see the sketch after this list).
Use the “sync” feature of SharePoint to synchronize necessary SharePoint folders to your local machine (via OneDrive).
It is important to sync to the same folder so that file paths are identical across machines – our recommendation is to sync at the highest folder level in SharePoint.
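Here is a minimal sketch of how that path-building looks in practice. Note that get_sharepoint_dir() is internal to {nntools}, so its exact arguments (assumed here to be none) and the project subfolders below are placeholders for illustration:

```r
# A minimal sketch of machine-independent SharePoint paths.
# get_sharepoint_dir() is from NN's internal {nntools} package; its exact
# arguments, and the subfolders below, are assumptions for illustration.
library(nntools)

sp_dir <- get_sharepoint_dir()  # local OneDrive-synced SharePoint parent folder

# Build paths relative to that parent so the code runs on any machine
# that has synced the same top-level SharePoint folder
raw_path <- file.path(sp_dir, "Example Project", "Data", "raw_counts.csv")
raw <- read.csv(raw_path)
```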
P Drive: PMs often store data in the Background or Analysis folders. There are no technical issues with this, but loading large data files from NN’s file server can be slow.
G Drive: Data can be stored here for use in combination with ArcGIS workflows. As with the P drive, there are no technical issues, but loading large data files from NN’s file server can be slow.
ETL processes (i.e., the code): This is the piece that should be version controlled with Git and stored on GitHub.
Clone the repository locally onto your machine (see the sketch below).
If you don’t want to use GitHub for some reason, the recommended alternative is SharePoint.
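Cloning is usually done from a terminal with git clone, but it can also be scripted from R with the {gert} package; the repository URL and destination path here are placeholders:

```r
# A sketch of cloning an ETL repository locally with {gert}
# (equivalent to running `git clone` in a terminal).
library(gert)

git_clone(
  url  = "https://github.com/nelsonnygaard/example-etl.git",  # placeholder repo
  path = "C:/Users/me/Documents/GitHub/example-etl"            # placeholder path
)
```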
Outputs: We typically send results either a) back to NN file storage locations or b) directly to some cloud-hosted destination, such as a Shiny application, an AWS bucket, a SQL database, a GitHub Pages website, or some combination thereof.
NN file storage locations
SharePoint: Outputs can be sent here and then shared via a weblink once they have synchronized to the cloud.
P Drive: Outputs might be sent to the Analysis or Graphics folders.
G Drive: Outputs might be sent to a number of different subfolders within a G drive folder, depending on whether it is a tabular output or a spatial file for mapping (see the sketch below).
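As a sketch of both cases, with placeholder drive paths and toy data standing in for real results:

```r
# A sketch of writing outputs back to NN file storage; the drive paths
# are placeholders, and the toy data stand in for real results.
library(sf)

summary_df <- data.frame(route = c("10", "20"), boardings = c(1200, 800))
stops_sf <- st_as_sf(
  data.frame(id = 1:2, lon = c(-122.68, -122.66), lat = c(45.52, 45.53)),
  coords = c("lon", "lat"), crs = 4326
)

# Tabular output to a P-drive Analysis folder
write.csv(summary_df, "P:/Example Project/Analysis/route_summary.csv",
          row.names = FALSE)

# Spatial output to a G-drive folder for use in ArcGIS
st_write(stops_sf, "G:/Example Project/shapefiles/stops.shp")
```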
Cloud destinations
Shinyapps.io: Shiny apps can be deployed here.
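A minimal deployment sketch with the {rsconnect} package; the account name, token, and secret are placeholders obtained from your shinyapps.io dashboard:

```r
# A sketch of deploying a Shiny app to shinyapps.io with {rsconnect};
# the account credentials are placeholders from the shinyapps.io dashboard.
library(rsconnect)

# One-time account setup per machine
setAccountInfo(name = "nn-example", token = "TOKEN", secret = "SECRET")

# Deploy the app in the current project folder
deployApp(appDir = ".", appName = "example-dashboard")
```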
AWS S3 bucket: Files of arbitrary structure can be stored here, typically for use by Shiny applications.
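A sketch of an upload with the {aws.s3} package; the bucket name and credentials are placeholders:

```r
# A sketch of uploading a results file to S3 with {aws.s3};
# the bucket name and credentials are placeholders.
library(aws.s3)

# Credentials are typically supplied via environment variables
Sys.setenv(
  "AWS_ACCESS_KEY_ID"     = "YOUR_KEY",
  "AWS_SECRET_ACCESS_KEY" = "YOUR_SECRET",
  "AWS_DEFAULT_REGION"    = "us-west-2"
)

# Upload a local file for a Shiny app to read later
put_object(file = "outputs/results.rds", object = "results.rds",
           bucket = "nn-example-bucket")
```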
PostgreSQL database: Data that can be formatted into a table (spatial or not) can be stored in a SQL database and queried by downstream R/Python scripts or Shiny applications.
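A sketch of that round trip with {DBI} and {RPostgres}; the connection details are placeholders:

```r
# A sketch of writing to and querying a PostgreSQL database with
# {DBI} + {RPostgres}; the connection details are placeholders.
library(DBI)

con <- dbConnect(
  RPostgres::Postgres(),
  host = "db.example.com", dbname = "nn_data",
  user = "analyst", password = Sys.getenv("PG_PASSWORD")
)

# Write a tabular result; sf::st_write(obj, dsn = con, ...) handles spatial tables
summary_df <- data.frame(route = c("10", "20"), boardings = c(1200, 800))
dbWriteTable(con, "route_summary", summary_df, overwrite = TRUE)

# Downstream R/Python scripts or Shiny apps can then query it
res <- dbGetQuery(con, "SELECT * FROM route_summary WHERE boardings > 1000")
dbDisconnect(con)
```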
GitHub Pages website: Rendered HTML and associated files can be sent to a docs folder within a GitHub repository to be hosted as a website via GitHub Pages.
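A sketch of that render-and-publish step, assuming an R Markdown report and a repository where GitHub Pages has been pointed at the docs/ folder; the file names are placeholders:

```r
# A sketch of rendering a report into docs/ for GitHub Pages;
# file names are placeholders, and GitHub Pages must be pointed at
# the docs/ folder in the repository settings.
library(rmarkdown)

render("report.Rmd", output_file = "index.html", output_dir = "docs")

# Commit and push the rendered site (here with {gert})
gert::git_add("docs")
gert::git_commit("Render report for GitHub Pages")
gert::git_push()
```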
Other Integrated Development Environments (IDEs), such as VSCode, also offer GUI options for working with Git/GitHub.