When versioning a data schema in a pipeline framework, the exact steps and workflow vary with your tools and requirements. Start by defining the schema in the format your tools expect: SQL DDL scripts for relational databases, JSON or YAML files for NoSQL databases and APIs, or dbt YAML files for data warehouse models.

Store these schema files in a version control system such as Git, where branches, tags, and pull requests let you track, review, and manage changes over time. Because the schema lives in the repository, every change arrives as a reviewable diff rather than an undocumented edit to a live database.

To apply schema changes to your data sources, transformations, and destinations, use the tools that fit your pipeline framework. For instance, Flyway or Liquibase can run versioned migration scripts against relational databases, while dbt can build models in a data warehouse.

Finally, test and document your schema changes to ensure they work as expected and are easy to understand. dbt's built-in tests and docs, or pytest and Sphinx, are common choices for schema testing and documentation.
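As a sketch of the migration approach, a Flyway-style migration is just a SQL file whose version-prefixed filename (here `V2__...`, a hypothetical example) determines the order in which changes are applied:

```sql
-- db/migrations/V2__add_email_to_customers.sql
-- Hypothetical migration: the "V2" prefix orders it after V1,
-- and Flyway records it as applied so it runs exactly once.
ALTER TABLE customers ADD COLUMN email VARCHAR(255);
```

Because each migration is a plain file committed to Git, the database's schema history and the repository's commit history stay in sync.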
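On the dbt side, a single schema YAML file can both document a model and declare tests for it; a minimal sketch, with hypothetical model and column names, might look like this:

```yaml
# models/schema.yml -- model and column names are illustrative
version: 2

models:
  - name: customers
    description: "One row per customer, built from raw signup events."
    columns:
      - name: customer_id
        description: "Primary key for the customer."
        tests:
          - unique
          - not_null
      - name: email
        description: "Customer email address."
        tests:
          - not_null
```

Running `dbt test` then checks these constraints against the warehouse, and `dbt docs generate` turns the same descriptions into browsable documentation, so the schema definition, its tests, and its docs are versioned together in one file.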