FrostyFriday Inc., your benevolent employer, has an S3 bucket filled with .csv data dumps that are needed for analysis. Your task is to create an external stage and load the CSV files directly from that stage into a table.
The S3 bucket’s URI is: s3://frostyfridaychallenges/challenge_1/
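A minimal sketch of one possible approach, assuming your role can create objects in the current database and schema; the names ff_csv, ff_week1_stage, and ff_week1 are hypothetical, and the single-column table shape is an assumption about the challenge files:

-- File format for the dumps (name and options here are assumptions).
CREATE OR REPLACE FILE FORMAT ff_csv
    TYPE = CSV
    SKIP_HEADER = 1;

-- External stage pointing at the public challenge bucket.
CREATE OR REPLACE STAGE ff_week1_stage
    URL = 's3://frostyfridaychallenges/challenge_1/'
    FILE_FORMAT = ff_csv;

-- Target table; the single STRING column is an assumed shape.
CREATE OR REPLACE TABLE ff_week1 (result STRING);

-- Load every .csv file from the stage into the table.
COPY INTO ff_week1
FROM @ff_week1_stage
PATTERN = '.*[.]csv';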
Remember if you want to participate:
- Sign up as a member of Frosty Friday. You can do this by clicking on the sidebar, and then going to “REGISTER” (note joining our mailing list does not give you a Frosty Friday account)
- Post your code to GitHub and make it publicly available (Check out our guide if you don’t know how to here)
- Post the URL in the comments of the challenge.
If you have any technical questions you’d like to pose to the community, you can ask here on our dedicated thread.
Started with this. Seems quite interesting!
Off to a good start!
Starting off the frosty challenges!
Seems really helpful for learning Snowflake. Thank you!
W1 done!
Had fun.
Nice foundational start to the challenges.
Solution URL: https://github.com/jameskalfox/frosty-friday-snowflake-challenges/tree/main/Week_1_Basic_External_Stages
Solution URL updated
My first challenge, and I still have a lot to learn.
https://github.com/HeinrichPreuss/Frosty_Fridays/blob/main/Challenge_1.sql
Basic? lol
New things learned (a quick sketch follows this list):
– the metadata$filename and metadata$file_row_number columns are handy for storing the row order in the source files
– adding NULL_IF to the file format allows for cleanup of unneeded string values (in this case ‘NULL’ and ‘totally_empty’)
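A hypothetical snippet combining both points (the stage and file format names are carried over from the sketch above and remain assumptions):

-- NULL_IF converts the listed literal strings to real NULLs during parsing.
CREATE OR REPLACE FILE FORMAT ff_csv
    TYPE = CSV
    SKIP_HEADER = 1
    NULL_IF = ('NULL', 'totally_empty');

-- The metadata$ columns capture which file and row each value came from.
SELECT metadata$filename        AS source_file,
       metadata$file_row_number AS source_row,
       t.$1                     AS result
FROM @ff_week1_stage (FILE_FORMAT => 'ff_csv') t;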
Done
Learned a lot! Referred to a blog.
Nice intro to some basic Snowflake setup!
First challenge. Love it so far!