The Data (Error) Generating Process
Statisticians often approach probabilistic modeling by first understanding the conceptual data generating process. However, when validating messy real-world data, the technical aspects of the data generating process are largely ignored.
In this talk, I will argue the case for developing more semantically meaningful and well-curated data tests by incorporating both conceptual and technical aspects of “how the data gets made”.
To illustrate these concepts, we will explore the NYC subway rides open dataset and see how the simple act of reasoning about real-world events and their collection through ETL processes can help craft far more sensitive and expressive data quality checks. I will also show how to implement such checks using new features I recently contributed to the open-source dbt-utils package.
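
As a preview, here is a minimal sketch of what a "how the data gets made" check can look like in dbt, built from two existing dbt-utils generic tests (dbt_utils.recency and dbt_utils.expression_is_true); the model and column names are hypothetical placeholders, and the newly contributed features are covered in the talk itself:

```yaml
# models/schema.yml -- stg_subway_rides, observed_at, and ride_count are
# hypothetical names used only for illustration
version: 2

models:
  - name: stg_subway_rides
    tests:
      # Ridership data should land at least daily; a gap usually means an
      # upstream ETL failure rather than an empty station
      - dbt_utils.recency:
          datepart: day
          field: observed_at
          interval: 1
    columns:
      - name: ride_count
        tests:
          # Counts come from physical turnstile counters, so a negative value
          # signals a counter reset or parsing bug, not real ridership
          - dbt_utils.expression_is_true:
              expression: ">= 0"
```

Each test encodes a fact about how the data is produced and collected, not just a generic schema constraint, which is exactly the kind of reasoning the talk will develop.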
Audience members should leave this talk with a clear framework for ideating better tests for their own pipelines.