Data Lakehouse with PySpark — Introduction & Agenda


This series is designed to help you understand and design Data Warehousing on a Data Lake, i.e. a Data Lakehouse, with the help of PySpark, Delta Lake, AWS, Docker, and more. It is a completely hands-on series, with code explained alongside its implementation. We will also try to apply industry-standard best practices throughout.
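To give a flavour of what the hands-on sessions will look like, below is a minimal sketch (not taken from the series itself) of the core stack the agenda covers: PySpark writing and reading a Delta Lake table. The `delta-spark` package, the local SparkSession configuration, and the table path are all assumptions made purely for illustration.

```python
from pyspark.sql import SparkSession
from delta import configure_spark_with_delta_pip

# Configure a local SparkSession with the Delta Lake extensions enabled
# (assumes the delta-spark pip package is installed)
builder = (
    SparkSession.builder.appName("lakehouse-sketch")
    .config("spark.sql.extensions", "io.delta.sql.DeltaSparkSessionExtension")
    .config(
        "spark.sql.catalog.spark_catalog",
        "org.apache.spark.sql.delta.catalog.DeltaCatalog",
    )
)
spark = configure_spark_with_delta_pip(builder).getOrCreate()

# Write a tiny DataFrame as a Delta table (hypothetical local path)
df = spark.createDataFrame([(1, "alpha"), (2, "beta")], ["id", "label"])
df.write.format("delta").mode("overwrite").save("/tmp/lakehouse/demo_table")

# Read it back — Delta Lake adds ACID transactions and time travel on top of plain files
spark.read.format("delta").load("/tmp/lakehouse/demo_table").show()
```

The series itself builds this out on AWS with Docker; the snippet above only hints at how PySpark and Delta Lake fit together.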

Check out the Introduction & Agenda on YouTube.

Make sure to like and subscribe.

Follow us on YouTube: https://youtube.com/@easewithdata

If you are new to Data Warehousing, check out — https://youtube.com/playlist?list=PL2IsFZBGM_IE-EvpN9gaZZukj-ysFudag

Buy me a Coffee

If you like my content and wish to buy me a coffee, click the link below or scan the QR code.
Buy Subham a Coffee
*All payments are secured through Stripe.

Scan the QR to Pay Securely

About the Author

Subham works as a Senior Data Engineer at a multinational Data Analytics and Artificial Intelligence organization.
Check out his portfolio: Subham Khandelwal