Why do it?
Perhaps your project takes a database-first approach, so you need an instance of your database for debugging and testing your code.
Nowadays teams can be spread across regions with no way to share a single database instance (which would be bad practice anyway, as it introduces a dependency and accelerates drift).
If your production database has a moderate size and its schema does not evolve much, it is a good candidate for handling as an artefact.
Database artefacts are just right for:
- Reproducing bugs by deploying a certain version of a database.
- Testing migrations by redeploying a version multiple times.
A good case for making an artefact from a production database is when its schema is managed by some centralized software (which is not itself under your development) and you just have to adapt to database changes.
What do you need to minimise the burden?
To keep the burden low you need Docker, the VSTS CLI, sqlcmd, bcp and SqlPackage installed.
Instead of SqlPackage, you can use SQL Server Data Tools (SSDT), which requires Visual Studio, for manual extraction of a database schema.
Avoid ‘DOMAIN\User’ format for usernames
The thing that can surprise a developer deploying the database in a local environment is a SqlPackage error: the command
CREATE USER [DOMAIN\...] WITHOUT LOGIN;
for some reason (even on the Linux version of SQL Server) expects a Windows NT user or group…
So, don’t hesitate to fix a production database with commands like:
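For example, a fix can swap a Windows-authenticated database user for a contained user without login; every name below (the domain user, app_user, the role) is illustrative:

```sql
-- Illustrative only: replace a Windows (DOMAIN\User) database user
-- with a user without login, so the dacpac deploys anywhere.
DROP USER [DOMAIN\SomeUser];
CREATE USER [app_user] WITHOUT LOGIN;
ALTER ROLE db_datareader ADD MEMBER [app_user];
```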
Create feed, PAT and authorize
The database artefact is going to be a Universal Package, so you need a feed in your project (create it in Azure DevOps Artifacts).
You need to create an Azure DevOps Personal Access Token (PAT) for accessing the feed; the Packaging (Read & write) scope is enough. Copy the token value from DevOps, then paste it into the CLI command:
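With the Azure CLI (the successor of the VSTS CLI; the azure-devops extension provides the command), the login might look like this — org-name is a placeholder:

```shell
# The command prompts for a token; paste the PAT value at the prompt.
az devops login --organization "https://dev.azure.com/org-name"
```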
How to create an artefact from a production database?
Prepare
A local folder named artifacts/artifact-name/ in the root of your project is going to be used, so artifacts/ should be in .gitignore.
Scripts
./tools/make-db-artifact.ps1
The tool extracts the schema of the database and exports the data of selected tables from the production database to files in artifacts/artifact-name/. Replace artifact-name, server-address, database-name and the table names with actual values:
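A minimal sketch of such a script, assuming SqlPackage and bcp are on the PATH; the table list is illustrative:

```powershell
# make-db-artifact.ps1 -- sketch; replace the placeholders with real values.
$artifact = "artifacts/artifact-name"
New-Item -ItemType Directory -Force -Path $artifact | Out-Null

# Extract the schema (without data) into a .dacpac file.
SqlPackage /Action:Extract `
    /SourceServerName:"server-address" `
    /SourceDatabaseName:"database-name" `
    /TargetFile:"$artifact/database-name.dacpac"

# Export the data of selected tables in bcp native format
# (-T uses a trusted connection; use -U/-P for SQL authentication).
foreach ($table in @("dbo.Table1", "dbo.Table2")) {
    bcp $table out "$artifact/$table.bcp" `
        -S "server-address" -d "database-name" -T -n
}
```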
./tools/publish-db-artifact.ps1
Replace org-name, feed-name and artifact-name with actual values:
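A sketch of the publishing script, assuming the Azure CLI with the azure-devops extension; the version number and description are arbitrary:

```powershell
# publish-db-artifact.ps1 -- sketch; replace the placeholders with real values.
az artifacts universal publish `
    --organization "https://dev.azure.com/org-name" `
    --feed "feed-name" `
    --name "artifact-name" `
    --version "1.0.0" `
    --description "Production database snapshot" `
    --path "artifacts/artifact-name"
```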
Execute the “make” tool and then the “publish” tool provided above.
Manual approach
An extraction can also be done with SQL Server Data Tools:
I recommend extracting “schema only” and using the bcp tool for table data. If you opt for “schema and data” while your database has foreign keys, you probably have no choice but to export all the data, which can be huge.
You can upload a package version with the command:
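Assuming the same feed as above, the upload is a single command (all names and the version are placeholders):

```shell
az artifacts universal publish \
    --organization "https://dev.azure.com/org-name" \
    --feed "feed-name" --name "artifact-name" \
    --version "1.0.0" --path "artifacts/artifact-name"
```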
How to deploy a database in a development environment?
Scripts
docker-compose.yml
I hope your development environment is started with docker-compose, so there is a docker-compose.yml where you put an arbitrary sa password:
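A minimal compose file might look like this; the service name, image tag and password are assumptions:

```yaml
services:
  mssql:
    image: mcr.microsoft.com/mssql/server:2022-latest
    environment:
      ACCEPT_EULA: "Y"
      MSSQL_SA_PASSWORD: "yourStrong(!)Password"
    ports:
      - "11433:1433"
```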
In this example I map the dockerized SQL Server to port 11433.
./tools/get-db-artifact.ps1
To get an artefact from Azure DevOps Artifacts back into ./artifacts/artifact-name/, use a script like:
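A sketch mirroring the publish step, again with placeholder names and version:

```powershell
# get-db-artifact.ps1 -- sketch; replace the placeholders with real values.
az artifacts universal download `
    --organization "https://dev.azure.com/org-name" `
    --feed "feed-name" `
    --name "artifact-name" `
    --version "1.0.0" `
    --path "artifacts/artifact-name"
```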
./tools/deploy-db-artifact.ps1
The deployment script uses the sa password (from docker-compose) and looks like this:
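A sketch of the deployment script; the password matches the compose file above, and all server, database and table names are placeholders:

```powershell
# deploy-db-artifact.ps1 -- sketch; replace the placeholders with real values.
$server   = "localhost,11433"
$password = "yourStrong(!)Password"
$artifact = "artifacts/artifact-name"

# Drop and recreate the database.
sqlcmd -S $server -U sa -P $password -i "tools/sql/db-artifact-pre-deploy.sql"

# Publish the schema from the extracted .dacpac.
SqlPackage /Action:Publish `
    /SourceFile:"$artifact/database-name.dacpac" `
    /TargetServerName:$server `
    /TargetDatabaseName:"database-name" `
    /TargetUser:"sa" /TargetPassword:$password `
    /TargetTrustServerCertificate:True

# Import the previously exported table data.
foreach ($table in @("dbo.Table1", "dbo.Table2")) {
    bcp $table in "$artifact/$table.bcp" `
        -S $server -d "database-name" -U sa -P $password -n
}

# Post-deployment fixes (logins, sequences, etc.).
sqlcmd -S $server -U sa -P $password -i "tools/sql/db-artifact-post-deploy.sql"
```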
I have added ./tools/sql/db-artifact-pre-deploy.sql and ./tools/sql/db-artifact-post-deploy.sql scripts to the example, which are usually needed to:
- Check if the database exists, drop and recreate it (pre-deploy).
- Create a login and a corresponding user for the software you develop (post-deploy).
- Update sequences and other things that depend on the table data (post-deploy).
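Hypothetical versions of both scripts; every database, login, user and sequence name here is illustrative:

```sql
-- db-artifact-pre-deploy.sql: recreate the database from scratch.
IF DB_ID(N'database-name') IS NOT NULL
BEGIN
    ALTER DATABASE [database-name] SET SINGLE_USER WITH ROLLBACK IMMEDIATE;
    DROP DATABASE [database-name];
END;
CREATE DATABASE [database-name];
GO

-- db-artifact-post-deploy.sql: a login and user for the software you
-- develop, and a sequence fix-up matching the imported data.
CREATE LOGIN [app_login] WITH PASSWORD = N'yourStrong(!)Password';
GO
USE [database-name];
GO
CREATE USER [app_user] FOR LOGIN [app_login];
ALTER ROLE db_owner ADD MEMBER [app_user];
ALTER SEQUENCE dbo.OrderNumbers RESTART WITH 100000;
```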
Rollout step-by-step
On a developer’s machine there should be, again: Docker, VSTS CLI, sqlcmd, bcp and SqlPackage.
The developer ought to be authorized to Azure DevOps:
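Again with the Azure CLI, as a sketch (org-name is a placeholder; the PAT is pasted at the prompt):

```shell
az devops login --organization "https://dev.azure.com/org-name"
```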
To get an environment up, they need to:
- Start the containers.
- Get the artefact.
- Deploy the artefact.
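Assuming the scripts above, the whole rollout might look like:

```shell
docker-compose up -d                  # start the containers
pwsh ./tools/get-db-artifact.ps1      # get the artefact
pwsh ./tools/deploy-db-artifact.ps1   # deploy the artefact
```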