- Identify data quality issues at source
- Validate and assure data consistently
- Improve the transparency and standardization of data quality routines
- Reuse rules and algorithms during design and build of new data pipelines
- Reduce the cost of change and maintenance across multiple data flows
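The reuse point above can be sketched in code. The snippet below is a minimal, hypothetical illustration of defining a data-quality rule once and applying it across different pipelines; the names (`Rule`, `not_null`, `age_in_range`) are illustrative assumptions, not part of any specific MetadataWorks product or library.

```python
from dataclasses import dataclass
from typing import Any, Callable

# Hypothetical reusable rule: defined once, shared across pipelines.
@dataclass
class Rule:
    name: str
    check: Callable[[Any], bool]

    def validate(self, records: list, field: str) -> list:
        """Return the records whose `field` fails this rule."""
        return [r for r in records if not self.check(r.get(field))]

# Rules can be authored centrally and reused in any pipeline's build step.
not_null = Rule("not_null", lambda v: v is not None)
age_in_range = Rule("age_in_range", lambda v: v is not None and 0 <= v <= 120)

patients = [{"age": 34}, {"age": None}, {"age": 150}]
print(not_null.validate(patients, "age"))      # records with a missing age
print(age_in_range.validate(patients, "age"))  # records with a missing or implausible age
```

Because each rule is a self-contained object, the same validation logic can be attached to new pipelines at design time rather than re-implemented per flow, which is where the reduction in change and maintenance cost comes from.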
This post was written by MetadataWorks