Welcome to Alembic’s documentation!¶
Alembic is a lightweight database migration tool for usage with the SQLAlchemy Database Toolkit for Python.
Front Matter¶
Information about the Alembic project.
Project Homepage¶
Alembic is hosted on Bitbucket - the lead project page is at https://bitbucket.org/zzzeek/alembic. Source code is tracked here using Git.
Changed in version 0.6: The source repository was moved from Mercurial to Git.
Releases and project status are available on PyPI at http://pypi.python.org/pypi/alembic.
The most recent published version of this documentation should be at http://alembic.readthedocs.org/.
Project Status¶
Alembic is currently in beta status and is expected to be fairly stable. Users should take care to report bugs and missing features (see Bugs) on an as-needed basis. It should be expected that the development version may be required for proper implementation of recently repaired issues in between releases; the latest master is always available at https://bitbucket.org/zzzeek/alembic/get/master.tar.gz.
Installation¶
Install released versions of Alembic from the Python package index with pip or a similar tool:
pip install alembic
Installation from the source distribution uses the setup.py script:
python setup.py install
The install will add the alembic command to the environment. All operations with Alembic then proceed through the usage of this command.
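As a quick sanity check after installation (assuming the console scripts directory is on the PATH), the command should respond with its usage text:

$ alembic --help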
Dependencies¶
Alembic’s install process will ensure that SQLAlchemy is installed, in addition to other dependencies. Alembic will work with SQLAlchemy as of version 0.7.3; however, more features are available with newer versions such as the 0.9 or 1.0 series.
Alembic supports Python versions 2.6 and above.
Community¶
Alembic is developed by Mike Bayer, and is loosely associated with the SQLAlchemy, Pylons, and OpenStack projects.
User issues and discussion of potential bugs and features should be posted to the Alembic Google Group at sqlalchemy-alembic.
Bugs¶
Bugs and feature enhancements to Alembic should be reported on the Bitbucket issue tracker.
Tutorial¶
Alembic provides for the creation, management, and invocation of change management scripts for a relational database, using SQLAlchemy as the underlying engine. This tutorial will provide a full introduction to the theory and usage of this tool.
To begin, make sure Alembic is installed as described at Installation.
The Migration Environment¶
Usage of Alembic starts with creation of the Migration Environment. This is a directory of scripts that is specific to a particular application. The migration environment is created just once, and is then maintained along with the application’s source code itself. The environment is created using the init command of Alembic, and is then customizable to suit the specific needs of the application.
The structure of this environment, including some generated migration scripts, looks like:
yourproject/
alembic/
env.py
README
script.py.mako
versions/
3512b954651e_add_account.py
2b1ae634e5cd_add_order_id.py
3adcc9a56557_rename_username_field.py
The directory includes these directories/files:

yourproject - this is the root of your application’s source code, or some directory within it.

alembic - this directory lives within your application’s source tree and is the home of the migration environment. It can be named anything, and a project that uses multiple databases may even have more than one.

env.py - this is a Python script that is run whenever the alembic migration tool is invoked. At the very least, it contains instructions to configure and generate a SQLAlchemy engine, procure a connection from that engine along with a transaction, and then invoke the migration engine, using the connection as a source of database connectivity.

The env.py script is part of the generated environment so that the way migrations run is entirely customizable. The exact specifics of how to connect are here, as well as the specifics of how the migration environment is invoked. The script can be modified so that multiple engines can be operated upon, custom arguments can be passed into the migration environment, and application-specific libraries and models can be loaded in and made available. Alembic includes a set of initialization templates which feature different varieties of env.py for different use cases.

README - included with the various environment templates, should have something informative.

script.py.mako - this is a Mako template file which is used to generate new migration scripts. Whatever is here is used to generate new files within versions/. This is scriptable so that the structure of each migration file can be controlled, including standard imports to be within each, as well as changes to the structure of the upgrade() and downgrade() functions. For example, the multidb environment allows for multiple functions to be generated using a naming scheme upgrade_engine1(), upgrade_engine2().

versions/ - this directory holds the individual version scripts. Users of other migration tools may notice that the files here don’t use ascending integers, and instead use a partial GUID approach. In Alembic, the ordering of version scripts is relative to directives within the scripts themselves, and it is theoretically possible to “splice” version files in between others, allowing migration sequences from different branches to be merged, albeit carefully by hand.
Creating an Environment¶
With a basic understanding of what the environment is, we can create one using alembic init. This will create an environment using the “generic” template:
$ cd yourproject
$ alembic init alembic
Where above, the init command was called to generate a migrations directory called alembic:
Creating directory /path/to/yourproject/alembic...done
Creating directory /path/to/yourproject/alembic/versions...done
Generating /path/to/yourproject/alembic.ini...done
Generating /path/to/yourproject/alembic/env.py...done
Generating /path/to/yourproject/alembic/README...done
Generating /path/to/yourproject/alembic/script.py.mako...done
Please edit configuration/connection/logging settings in
'/path/to/yourproject/alembic.ini' before proceeding.
Alembic also includes other environment templates. These can be listed out using the list_templates command:
$ alembic list_templates
Available templates:
generic - Generic single-database configuration.
multidb - Rudimentary multi-database configuration.
pylons - Configuration that reads from a Pylons project environment.
Templates are used via the 'init' command, e.g.:
alembic init --template pylons ./scripts
Editing the .ini File¶
Alembic placed a file alembic.ini into the current directory. This is a file that the alembic script looks for when invoked. This file can be anywhere, either in the same directory from which the alembic script will normally be invoked, or if in a different directory, can be specified by using the --config option to the alembic runner.
The file generated with the “generic” configuration looks like:
# A generic, single database configuration.
[alembic]
# path to migration scripts
script_location = alembic
# template used to generate migration files
# file_template = %%(rev)s_%%(slug)s
# max length of characters to apply to the
# "slug" field
#truncate_slug_length = 40
# set to 'true' to run the environment during
# the 'revision' command, regardless of autogenerate
# revision_environment = false
# set to 'true' to allow .pyc and .pyo files without
# a source .py file to be detected as revisions in the
# versions/ directory
# sourceless = false
# version location specification; this defaults
# to alembic/versions. When using multiple version
# directories, initial revisions must be specified with --version-path
# version_locations = %(here)s/bar %(here)s/bat alembic/versions
# the output encoding used when revision files
# are written from script.py.mako
# output_encoding = utf-8
sqlalchemy.url = driver://user:pass@localhost/dbname
# Logging configuration
[loggers]
keys = root,sqlalchemy,alembic
[handlers]
keys = console
[formatters]
keys = generic
[logger_root]
level = WARN
handlers = console
qualname =
[logger_sqlalchemy]
level = WARN
handlers =
qualname = sqlalchemy.engine
[logger_alembic]
level = INFO
handlers =
qualname = alembic
[handler_console]
class = StreamHandler
args = (sys.stderr,)
level = NOTSET
formatter = generic
[formatter_generic]
format = %(levelname)-5.5s [%(name)s] %(message)s
datefmt = %H:%M:%S
The file is read using Python’s ConfigParser.SafeConfigParser object. The %(here)s variable is provided as a substitution variable, which can be used to produce absolute pathnames to directories and files, as we do above with the path to the Alembic script location.
This file contains the following features:

[alembic] - this is the section read by Alembic to determine configuration. Alembic itself does not directly read any other areas of the file.

script_location - this is the location of the Alembic environment. It is normally specified as a filesystem location, either relative or absolute. If the location is a relative path, it’s interpreted as relative to the current directory.

This is the only key required by Alembic in all cases. The generation of the .ini file by the command alembic init alembic automatically placed the directory name alembic here. The special variable %(here)s can also be used, as in %(here)s/alembic.

For support of applications that package themselves into .egg files, the value can also be specified as a package resource, in which case resource_filename() is used to find the file (new in 0.2.2). Any non-absolute URI which contains colons is interpreted here as a resource name, rather than a straight filename.

file_template - this is the naming scheme used to generate new migration files. The value present is the default, so is commented out. Tokens available include:

%%(rev)s - revision id
%%(slug)s - a truncated string derived from the revision message
%%(year)d, %%(month).2d, %%(day).2d, %%(hour).2d, %%(minute).2d, %%(second).2d - components of the create date as returned by datetime.datetime.now()

truncate_slug_length - defaults to 40, the max number of characters to include in the “slug” field.

New in version 0.6.1: added truncate_slug_length configuration.

sqlalchemy.url - a URL to connect to the database via SQLAlchemy. This key is in fact only referenced within the env.py file that is specific to the “generic” configuration; a file that can be customized by the developer. A multiple database configuration may respond to multiple keys here, or may reference other sections of the file.

revision_environment - this is a flag which, when set to the value ‘true’, will indicate that the migration environment script env.py should be run unconditionally when generating new revision files.

sourceless - when set to ‘true’, revision files that only exist as .pyc or .pyo files in the versions directory will be used as versions, allowing “sourceless” versioning folders. When left at the default of ‘false’, only .py files are consumed as version files.

New in version 0.6.4.

version_locations - an optional list of revision file locations, to allow revisions to exist in multiple directories simultaneously. See Working with Multiple Bases for examples.

New in version 0.7.0.

output_encoding - the encoding to use when Alembic writes the script.py.mako file into a new migration file. Defaults to 'utf-8'.

New in version 0.7.0.

[loggers], [handlers], [formatters], [logger_*], [handler_*], [formatter_*] - these sections are all part of Python’s standard logging configuration, the mechanics of which are documented at Configuration File Format. As is the case with the database connection, these directives are used directly as the result of the logging.config.fileConfig() call present in the env.py script, which you’re free to modify.
For starting up with just a single database and the generic configuration, setting up the SQLAlchemy URL is all that’s needed:
sqlalchemy.url = postgresql://scott:tiger@localhost/test
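Within env.py, this URL can be read back through the Config object; a minimal sketch using the standard Alembic configuration API:

from alembic import context

config = context.config

# the value of sqlalchemy.url from the [alembic] section
url = config.get_main_option("sqlalchemy.url")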
Create a Migration Script¶
With the environment in place we can create a new revision, using alembic revision:
$ alembic revision -m "create account table"
Generating /path/to/yourproject/alembic/versions/1975ea83b712_create_accoun
t_table.py...done
A new file 1975ea83b712_create_account_table.py is generated. Looking inside the file:
"""create account table
Revision ID: 1975ea83b712
Revises:
Create Date: 2011-11-08 11:40:27.089406
"""
# revision identifiers, used by Alembic.
revision = '1975ea83b712'
down_revision = None
branch_labels = None
from alembic import op
import sqlalchemy as sa
def upgrade():
pass
def downgrade():
pass
The file contains some header information, identifiers for the current revision and a “downgrade” revision, an import of basic Alembic directives, and empty upgrade() and downgrade() functions. Our job here is to populate the upgrade() and downgrade() functions with directives that will apply a set of changes to our database. Typically, upgrade() is required while downgrade() is only needed if down-revision capability is desired, though it’s probably a good idea.
Another thing to notice is the down_revision variable. This is how Alembic knows the correct order in which to apply migrations. When we create the next revision, the new file’s down_revision identifier would point to this one:
# revision identifiers, used by Alembic.
revision = 'ae1027a6acf'
down_revision = '1975ea83b712'
Every time Alembic runs an operation against the versions/ directory, it reads all the files in, and composes a list based on how the down_revision identifiers link together, with the down_revision of None representing the first file. In theory, if a migration environment had thousands of migrations, this could begin to add some latency to startup, but in practice a project should probably prune old migrations anyway (see the section Building an Up to Date Database from Scratch for a description on how to do this, while maintaining the ability to build the current database fully).
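As a conceptual illustration only (not Alembic’s actual implementation), the linkage can be thought of as a walk from the base revision upward through the down_revision pointers:

# map each revision to its down_revision (None marks the base)
revisions = {
    '1975ea83b712': None,
    'ae1027a6acf': '1975ea83b712',
}

# invert the mapping and walk from the base to the head
by_parent = {parent: rev for rev, parent in revisions.items()}
order = []
rev = by_parent.get(None)
while rev is not None:
    order.append(rev)
    rev = by_parent.get(rev)

print(order)  # ['1975ea83b712', 'ae1027a6acf']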
We can then add some directives to our script; suppose we are adding a new table account:
def upgrade():
op.create_table(
'account',
sa.Column('id', sa.Integer, primary_key=True),
sa.Column('name', sa.String(50), nullable=False),
sa.Column('description', sa.Unicode(200)),
)
def downgrade():
op.drop_table('account')
create_table() and drop_table() are Alembic directives. Alembic provides all the basic database migration operations via these directives, which are designed to be as simple and minimalistic as possible; there’s no reliance upon existing table metadata for most of these directives. They draw upon a global “context” that indicates how to get at a database connection (if any; migrations can dump SQL/DDL directives to files as well) in order to invoke the command. This global context is set up, like everything else, in the env.py script.
An overview of all Alembic directives is at Operation Reference.
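Directives can also manipulate data. As an illustration (a sketch with hypothetical row contents; bulk_insert is a standard Alembic directive), seed rows can be inserted right after the table is created:

from alembic import op
from sqlalchemy.sql import table, column
import sqlalchemy as sa

# a lightweight table construct, sufficient for the INSERT
account = table('account',
    column('id', sa.Integer),
    column('name', sa.String),
)

def upgrade():
    op.bulk_insert(account, [
        {'id': 1, 'name': 'default account'},
    ])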
Running our First Migration¶
We now want to run our migration. Assuming our database is totally clean, it’s as yet unversioned. The alembic upgrade command will run upgrade operations, proceeding from the current database revision, in this example None, to the given target revision. We can specify 1975ea83b712 as the revision we’d like to upgrade to, but it’s easier in most cases just to tell it “the most recent”, in this case head:
$ alembic upgrade head
INFO [alembic.context] Context class PostgresqlContext.
INFO [alembic.context] Will assume transactional DDL.
INFO [alembic.context] Running upgrade None -> 1975ea83b712
Wow that rocked! Note that the information we see on the screen is the result of the logging configuration set up in alembic.ini - logging the alembic stream to the console (standard error, specifically).

Here, Alembic first checked if the database had a table called alembic_version, and created it if not. It then looked in this table for the current version, if any, and calculated the path from this version to the version requested, in this case head, which is known to be 1975ea83b712. Finally, it invoked the upgrade() method in each file along that path to get to the target revision.
Running our Second Migration¶
Let’s do another one so we have some things to play with. We again create a revision file:
$ alembic revision -m "Add a column"
Generating /path/to/yourapp/alembic/versions/ae1027a6acf_add_a_column.py...
done
Let’s edit this file and add a new column to the account table:
"""Add a column
Revision ID: ae1027a6acf
Revises: 1975ea83b712
Create Date: 2011-11-08 12:37:36.714947
"""
# revision identifiers, used by Alembic.
revision = 'ae1027a6acf'
down_revision = '1975ea83b712'
from alembic import op
import sqlalchemy as sa
def upgrade():
op.add_column('account', sa.Column('last_transaction_date', sa.DateTime))
def downgrade():
op.drop_column('account', 'last_transaction_date')
Running again to head:
$ alembic upgrade head
INFO [alembic.context] Context class PostgresqlContext.
INFO [alembic.context] Will assume transactional DDL.
INFO [alembic.context] Running upgrade 1975ea83b712 -> ae1027a6acf
We’ve now added the last_transaction_date column to the database.
Partial Revision Identifiers¶
Any time we need to refer to a revision number explicitly, we have the option to use a partial number. As long as this number uniquely identifies the version, it may be used in any command in any place that version numbers are accepted:
$ alembic upgrade ae1
Above, we use ae1 to refer to revision ae1027a6acf. Alembic will stop and let you know if more than one version starts with that prefix.
Relative Migration Identifiers¶
Relative upgrades/downgrades are also supported. To move two versions from the current, a decimal value “+N” can be supplied:
$ alembic upgrade +2
Negative values are accepted for downgrades:
$ alembic downgrade -1
Relative identifiers may also be in terms of a specific revision. For example, to upgrade to revision ae1027a6acf plus two additional steps:
$ alembic upgrade ae10+2
New in version 0.7.0: Support for relative migrations in terms of a specific revision.
Getting Information¶
With a few revisions present we can get some information about the state of things.
First we can view the current revision:
$ alembic current
INFO [alembic.context] Context class PostgresqlContext.
INFO [alembic.context] Will assume transactional DDL.
Current revision for postgresql://scott:XXXXX@localhost/test: 1975ea83b712 -> ae1027a6acf (head), Add a column
head is displayed only if the revision identifier for this database matches the head revision.
We can also view history with alembic history; the --verbose option (accepted by several commands, including history, current, heads and branches) will show us full information about each revision:
$ alembic history --verbose
Rev: ae1027a6acf (head)
Parent: 1975ea83b712
Path: /path/to/yourproject/alembic/versions/ae1027a6acf_add_a_column.py
add a column
Revision ID: ae1027a6acf
Revises: 1975ea83b712
Create Date: 2014-11-20 13:02:54.849677
Rev: 1975ea83b712
Parent: <base>
Path: /path/to/yourproject/alembic/versions/1975ea83b712_add_account_table.py
create account table
Revision ID: 1975ea83b712
Revises:
Create Date: 2014-11-20 13:02:46.257104
Viewing History Ranges¶
Using the -r option to alembic history, we can also view various slices of history. The -r argument accepts an argument [start]:[end], where either may be a revision number; a symbol like head, heads or base; or current to specify the current revision(s). Negative relative ranges are also accepted for [start], and positive relative ranges for [end]:
$ alembic history -r1975ea:ae1027
A relative range starting from three revs ago up to current migration, which will invoke the migration environment against the database to get the current migration:
$ alembic history -r-3:current
View all revisions from 1975 to the head:
$ alembic history -r1975ea:
New in version 0.6.0: alembic history now accepts the -r argument to specify specific ranges based on version numbers, symbols, or relative deltas.
Downgrading¶
We can illustrate a downgrade back to nothing, by calling alembic downgrade back to the beginning, which in Alembic is called base:
$ alembic downgrade base
INFO [alembic.context] Context class PostgresqlContext.
INFO [alembic.context] Will assume transactional DDL.
INFO [alembic.context] Running downgrade ae1027a6acf -> 1975ea83b712
INFO [alembic.context] Running downgrade 1975ea83b712 -> None
Back to nothing - and up again:
$ alembic upgrade head
INFO [alembic.context] Context class PostgresqlContext.
INFO [alembic.context] Will assume transactional DDL.
INFO [alembic.context] Running upgrade None -> 1975ea83b712
INFO [alembic.context] Running upgrade 1975ea83b712 -> ae1027a6acf
Next Steps¶
The vast majority of Alembic environments make heavy use of the “autogenerate” feature. Continue onto the next section, Auto Generating Migrations.
Auto Generating Migrations¶
Alembic can view the status of the database and compare against the table metadata in the application, generating the “obvious” migrations based on a comparison. This is achieved using the --autogenerate option to the alembic revision command, which places so-called candidate migrations into our new migrations file. We review and modify these by hand as needed, then proceed normally.
To use autogenerate, we first need to modify our env.py so that it gets access to a table metadata object that contains the target. Suppose our application has a declarative base in myapp.mymodel. This base contains a MetaData object which contains Table objects defining our database. We make sure this is loaded in env.py and then passed to EnvironmentContext.configure() via the target_metadata argument. The env.py sample script used in the generic template already has a variable declaration near the top for our convenience, where we replace None with our MetaData. Starting with:
# add your model's MetaData object here
# for 'autogenerate' support
# from myapp import mymodel
# target_metadata = mymodel.Base.metadata
target_metadata = None
we change to:
from myapp.mymodel import Base
target_metadata = Base.metadata
Note

The above example refers to the generic alembic env.py template, e.g. the one created by default when calling upon alembic init, and not the special-use templates such as multidb. Please consult the source code and comments within the env.py script directly for specific guidance on where and how the autogenerate metadata is established.
If we look later in the script, down in run_migrations_online(), we can see the directive passed to EnvironmentContext.configure():
def run_migrations_online():
engine = engine_from_config(
config.get_section(config.config_ini_section), prefix='sqlalchemy.')
with engine.connect() as connection:
context.configure(
connection=connection,
target_metadata=target_metadata
)
with context.begin_transaction():
context.run_migrations()
We can then use the alembic revision command in conjunction with the --autogenerate option. Suppose our MetaData contained a definition for the account table, and the database did not. We’d get output like:
$ alembic revision --autogenerate -m "Added account table"
INFO [alembic.context] Detected added table 'account'
Generating /path/to/foo/alembic/versions/27c6a30d7c24.py...done
We can then view our file 27c6a30d7c24.py and see that a rudimentary migration is already present:
"""empty message
Revision ID: 27c6a30d7c24
Revises: None
Create Date: 2011-11-08 11:40:27.089406
"""
# revision identifiers, used by Alembic.
revision = '27c6a30d7c24'
down_revision = None
from alembic import op
import sqlalchemy as sa
def upgrade():
### commands auto generated by Alembic - please adjust! ###
op.create_table(
'account',
sa.Column('id', sa.Integer()),
sa.Column('name', sa.String(length=50), nullable=False),
sa.Column('description', sa.VARCHAR(200)),
sa.Column('last_transaction_date', sa.DateTime()),
sa.PrimaryKeyConstraint('id')
)
### end Alembic commands ###
def downgrade():
### commands auto generated by Alembic - please adjust! ###
op.drop_table("account")
### end Alembic commands ###
The migration hasn’t actually run yet, of course. We do that via the usual upgrade command. We should also go into our migration file and alter it as needed, including adjustments to the directives as well as the addition of other directives which these may be dependent on - specifically data changes in between creates/alters/drops.
What does Autogenerate Detect (and what does it not detect?)¶
The vast majority of user issues with Alembic center on the topic of what kinds of changes autogenerate can and cannot detect reliably, as well as how it renders Python code for what it does detect. It is critical to note that autogenerate is not intended to be perfect. It is always necessary to manually review and correct the candidate migrations that autogenerate produces. The feature is getting more and more comprehensive and error-free as releases continue, but one should take note of the current limitations.
Autogenerate will detect:
- Table additions, removals.
- Column additions, removals.
- Change of nullable status on columns.
- Basic changes in indexes and explicitly-named unique constraints
New in version 0.6.1: Support for autogenerate of indexes and unique constraints.
- Basic changes in foreign key constraints
New in version 0.7.1: Support for autogenerate of foreign key constraints.
Autogenerate can optionally detect:
- Change of column type. This will occur if you set the EnvironmentContext.configure.compare_type parameter to True, or to a custom callable function. The feature works well in most cases, but is off by default so that it can be tested on the target schema first. It can also be customized by passing a callable here; see the section Comparing Types for details.
- Change of server default. This will occur if you set the EnvironmentContext.configure.compare_server_default parameter to True, or to a custom callable function. This feature works well for simple cases but cannot always produce accurate results. The Postgresql backend will actually invoke the “detected” and “metadata” values against the database to determine equivalence. The feature is off by default so that it can be tested on the target schema first. Like type comparison, it can also be customized by passing a callable; see the function’s documentation for details. A sketch of enabling both of these flags follows this list.
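The following is a minimal sketch of switching both comparisons on inside env.py’s configure call; both parameter names are standard Alembic options:

def run_migrations_online():
    # ...
    context.configure(
        connection=connection,
        target_metadata=target_metadata,
        compare_type=True,
        compare_server_default=True,
        # ...
    )
    # ...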
Autogenerate can not detect:
- Changes of table name. These will come out as an add/drop of two different tables, and should be hand-edited into a name change instead.
- Changes of column name. Like table name changes, these are detected as a column add/drop pair, which is not at all the same as a name change; a sketch of the hand-edit follows this list.
- Anonymously named constraints. Give your constraints a name, e.g. UniqueConstraint('col1', 'col2', name="my_name"). See the section The Importance of Naming Constraints for background on how to configure automatic naming schemes for constraints.
- Special SQLAlchemy types such as Enum when generated on a backend which doesn’t support ENUM directly - this is because the representation of such a type in the non-supporting database, i.e. a CHAR+CHECK constraint, could be any kind of CHAR+CHECK. For SQLAlchemy to determine that this is actually an ENUM would only be a guess, something that’s generally a bad idea. To implement your own “guessing” function here, use the sqlalchemy.events.DDLEvents.column_reflect() event to detect when a CHAR (or whatever the target type is) is reflected, and change it to an ENUM (or whatever type is desired) if it is known that that’s the intent of the type. The sqlalchemy.events.DDLEvents.after_parent_attach() event can be used within the autogenerate process to intercept and un-attach unwanted CHECK constraints.
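For example, if autogenerate emitted an add/drop pair for what is really a column rename, the candidate migration can be hand-edited into a direct rename (hypothetical table and column names):

from alembic import op

def upgrade():
    # replace the generated add_column()/drop_column() pair
    # with a single rename of the existing column
    op.alter_column('account', 'name', new_column_name='full_name')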
Autogenerate can’t currently, but will eventually detect:
- Some free-standing constraint additions and removals, like CHECK, PRIMARY KEY - these are not fully implemented.
- Sequence additions, removals - not yet implemented.
Comparing and Rendering Types¶
The area of autogenerate’s behavior of comparing and rendering Python-based type objects in migration scripts presents a challenge, in that there’s a very wide variety of types to be rendered in scripts, including those part of SQLAlchemy as well as user-defined types. A few options are given to help out with this task.
Controlling the Module Prefix¶
When types are rendered, they are generated with a module prefix, so that they are available based on a relatively small number of imports. The rules for what the prefix is are based on the kind of datatype as well as configurational settings. For example, when Alembic renders SQLAlchemy types, it will by default prefix the type name with the prefix sa.:
Column("my_column", sa.Integer())
The use of the sa. prefix is controllable by altering the value of EnvironmentContext.configure.sqlalchemy_module_prefix:
def run_migrations_online():
# ...
context.configure(
connection=connection,
target_metadata=target_metadata,
sqlalchemy_module_prefix="sqla.",
# ...
)
# ...
In either case, the sa. prefix, or whatever prefix is desired, should also be included in the imports section of script.py.mako; it also defaults to import sqlalchemy as sa.
For user-defined types, that is, any custom type that is not within the sqlalchemy. module namespace, by default Alembic will use the value of __module__ for the custom type:
Column("my_column", myapp.models.utils.types.MyCustomType())
The imports for the above type again must be made present within the migration, either manually, or by adding it to script.py.mako.
Changed in version 0.7.0: The default module prefix rendering for a user-defined type now makes use of the type’s __module__ attribute to retrieve the prefix, rather than using the value of sqlalchemy_module_prefix.
The above custom type has a long and cumbersome name based on the use of __module__ directly, which also implies that lots of imports would be needed in order to accommodate lots of types. For this reason, it is recommended that user-defined types used in migration scripts be made available from a single module. Suppose we call it myapp.migration_types:
# myapp/migration_types.py
from myapp.models.utils.types import MyCustomType
We can first add an import for migration_types to our script.py.mako:
from alembic import op
import sqlalchemy as sa
import myapp.migration_types
${imports if imports else ""}
We then override Alembic’s use of __module__ by providing a fixed prefix, using the EnvironmentContext.configure.user_module_prefix option:
def run_migrations_online():
# ...
context.configure(
connection=connection,
target_metadata=target_metadata,
user_module_prefix="myapp.migration_types.",
# ...
)
# ...
Above, we now would get a migration like:
Column("my_column", myapp.migration_types.MyCustomType())
Now, when we inevitably refactor our application to move MyCustomType somewhere else, we only need modify the myapp.migration_types module, instead of searching and replacing all instances within our migration scripts.
New in version 0.6.3: Added EnvironmentContext.configure.user_module_prefix.
Affecting the Rendering of Types Themselves¶
The methodology Alembic uses to generate SQLAlchemy and user-defined type constructs as Python code is plain old __repr__(). SQLAlchemy’s built-in types for the most part have a __repr__() that faithfully renders a Python-compatible constructor call, but there are some exceptions, particularly in those cases when a constructor accepts arguments that aren’t compatible with __repr__(), such as a pickling function.
When building a custom type that will be rendered into a migration script, it is often necessary to explicitly give the type a __repr__() that will faithfully reproduce the constructor for that type. This, in combination with EnvironmentContext.configure.user_module_prefix, is usually enough. However, if additional behaviors are needed, a more comprehensive hook is the EnvironmentContext.configure.render_item option.
This hook allows one to provide a callable function within env.py that will fully take over how a type is rendered, including its module prefix:
def render_item(type_, obj, autogen_context):
"""Apply custom rendering for selected items."""
if type_ == 'type' and isinstance(obj, MySpecialType):
return "mypackage.%r" % obj
# default rendering for other objects
return False
def run_migrations_online():
# ...
context.configure(
connection=connection,
target_metadata=target_metadata,
render_item=render_item,
# ...
)
# ...
In the above example, we’d ensure our MySpecialType includes an appropriate __repr__() method, which is invoked when we render it via the "%r" format.
The callable we use for EnvironmentContext.configure.render_item can also add imports to our migration script. The AutogenContext passed in contains a datamember called AutogenContext.imports, which is a Python set() to which we can add new imports. For example, if MySpecialType were in a module called mymodel.types, we can add the import for it as we encounter the type:
def render_item(type_, obj, autogen_context):
"""Apply custom rendering for selected items."""
if type_ == 'type' and isinstance(obj, MySpecialType):
# add import for this type
autogen_context.imports.add("from mymodel import types")
return "types.%r" % obj
# default rendering for other objects
return False
Changed in version 0.8: The autogen_context data member passed to the render_item callable is now an instance of AutogenContext.

Changed in version 0.8.3: The “imports” data member of the autogen context is restored to the new AutogenContext object as AutogenContext.imports.
The finished migration script will include our imports where the ${imports} expression is used, producing output such as:
from alembic import op
import sqlalchemy as sa
from mymodel import types
def upgrade():
op.add_column('sometable', Column('mycolumn', types.MySpecialType()))
Comparing Types¶
The default type comparison logic will work for SQLAlchemy built-in types as well as basic user-defined types. This logic is only enabled if the EnvironmentContext.configure.compare_type parameter is set to True:
context.configure(
# ...
compare_type = True
)
Alternatively, the EnvironmentContext.configure.compare_type parameter accepts a callable function which may be used to implement custom type comparison logic, for cases such as where special user-defined types are being used:
def my_compare_type(context, inspected_column,
metadata_column, inspected_type, metadata_type):
# return True if the types are different,
# False if not, or None to allow the default implementation
# to compare these types
return None
context.configure(
# ...
compare_type = my_compare_type
)
Above, inspected_column is a sqlalchemy.schema.Column as returned by sqlalchemy.engine.reflection.Inspector.reflecttable(), whereas metadata_column is a sqlalchemy.schema.Column from the local model environment. A return value of None indicates that default type comparison should proceed.
Additionally, custom types that are part of imported or third party packages which have special behaviors, such as per-dialect behavior, should implement a method called compare_against_backend() on their SQLAlchemy type. If this method is present, it will be called, and can return True or False to specify whether the types compare as equivalent; if it returns None, default type comparison logic will proceed:
class MySpecialType(TypeDecorator):
    # ...

    def compare_against_backend(self, dialect, conn_type):
        # return True if the types are equivalent,
        # False if not, or None to allow the default
        # comparison to proceed
        if dialect.name == 'postgresql':
            return isinstance(conn_type, postgresql.UUID)
        else:
            return isinstance(conn_type, String)
The order of precedence regarding the EnvironmentContext.configure.compare_type callable vs. the type itself implementing compare_against_backend is that the EnvironmentContext.configure.compare_type callable is favored first; if it returns None, then the compare_against_backend method will be used, if present on the metadata type. If that returns None, then a basic check for type equivalence is run.

New in version 0.7.6: added support for the compare_against_backend() method.
Generating SQL Scripts (a.k.a. “Offline Mode”)¶
A major capability of Alembic is to generate migrations as SQL scripts, instead of running them against the database - this is also referred to as offline mode. This is a critical feature when working in large organizations where access to DDL is restricted, and SQL scripts must be handed off to DBAs. Alembic makes this easy via the --sql option passed to any upgrade or downgrade command. We can, for example, generate a script that revises up to rev ae1027a6acf:
$ alembic upgrade ae1027a6acf --sql
INFO [alembic.context] Context class PostgresqlContext.
INFO [alembic.context] Will assume transactional DDL.
BEGIN;
CREATE TABLE alembic_version (
version_num VARCHAR(32) NOT NULL
);
INFO [alembic.context] Running upgrade None -> 1975ea83b712
CREATE TABLE account (
id SERIAL NOT NULL,
name VARCHAR(50) NOT NULL,
description VARCHAR(200),
PRIMARY KEY (id)
);
INFO [alembic.context] Running upgrade 1975ea83b712 -> ae1027a6acf
ALTER TABLE account ADD COLUMN last_transaction_date TIMESTAMP WITHOUT TIME ZONE;
INSERT INTO alembic_version (version_num) VALUES ('ae1027a6acf');
COMMIT;
While the log messages went to standard error, the actual script was dumped to standard output - so in the absence of further configuration (described later in this section), we’d at first be using output redirection to generate a script:
$ alembic upgrade ae1027a6acf --sql > migration.sql
Getting the Start Version¶
Notice that our migration script started at the base - this is the default when using offline mode, as no database connection is present and there’s no alembic_version table to read from. One way to provide a starting version in offline mode is to provide a range to the command line. This is accomplished by providing the “version” in start:end syntax:
$ alembic upgrade 1975ea83b712:ae1027a6acf --sql > migration.sql
The start:end syntax is only allowed in offline mode; in “online” mode, the alembic_version table is always used to get at the current version.
It’s also possible to have the env.py script retrieve the “last” version from the local environment, such as from a local file. A scheme like this would basically treat a local file in the same way alembic_version works:
import os

from alembic import context

# assumes "config" and "engine" have been set up earlier in env.py
if context.is_offline_mode():
    version_file = os.path.join(
        os.path.dirname(config.config_file_name), "version.txt")
    if os.path.exists(version_file):
        current_version = open(version_file).read()
    else:
        current_version = None
    context.configure(dialect_name=engine.name, starting_rev=current_version)
    context.run_migrations()
    end_version = context.get_revision_argument()
    if end_version and end_version != current_version:
        open(version_file, 'w').write(end_version)
Writing Migration Scripts to Support Script Generation¶
The challenge of SQL script generation is that the scripts we generate can’t rely upon any client/server database access. This means a migration script that pulls some rows into memory via a SELECT statement will not work in --sql mode. It’s also important that the Alembic directives, all of which are designed specifically to work in both “live execution” as well as “offline SQL generation” mode, are used.
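As an illustration (a sketch with a hypothetical table and values), a data change expressed as a SQL expression construct emits a plain SQL statement and therefore works in both modes, whereas fetching rows into Python first would fail offline:

from alembic import op
from sqlalchemy.sql import table, column
import sqlalchemy as sa

account = table('account', column('name', sa.String))

def upgrade():
    # an expression-based UPDATE renders as a single statement,
    # so it works in --sql mode as well as in live execution
    op.execute(
        account.update()
        .where(account.c.name == op.inline_literal('old name'))
        .values(name=op.inline_literal('new name'))
    )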
Customizing the Environment¶
Users of the --sql option are encouraged to hack their env.py files to suit their needs. The env.py script as provided is broken into two sections: run_migrations_online() and run_migrations_offline(). Which function is run is determined at the bottom of the script by reading EnvironmentContext.is_offline_mode(), which basically determines if the --sql flag was enabled.

For example, a multiple database configuration may want to run through each database and set the output of the migrations to different named files - the EnvironmentContext.configure() function accepts a parameter output_buffer for this purpose. Below we illustrate this within the run_migrations_offline() function:
from alembic import context
import myapp
import sys
db_1 = myapp.db_1
db_2 = myapp.db_2
def run_migrations_offline():
"""Run migrations *without* a SQL connection."""
for name, engine, file_ in [
("db1", db_1, "db1.sql"),
("db2", db_2, "db2.sql"),
]:
context.configure(
url=engine.url,
transactional_ddl=False,
output_buffer=open(file_, 'w'))
context.execute("-- running migrations for '%s'" % name)
context.run_migrations(name=name)
        sys.stderr.write("Wrote file '%s'\n" % file_)
def run_migrations_online():
    """Run migrations *with* a SQL connection."""
    for name, engine in [
        ("db1", db_1),
        ("db2", db_2),
    ]:
        connection = engine.connect()
        context.configure(connection=connection)
        # run each database's migrations in its own transaction
        trans = connection.begin()
        try:
            context.run_migrations(name=name)
            trans.commit()
        except:
            trans.rollback()
            raise
if context.is_offline_mode():
run_migrations_offline()
else:
run_migrations_online()
The Importance of Naming Constraints¶
An important topic worth mentioning is that of constraint naming conventions. As we’ve proceeded here, we’ve talked about adding tables and columns, and we’ve also hinted at lots of other operations listed in Operation Reference such as those which support adding or dropping constraints like foreign keys and unique constraints. The way these constraints are referred to in migration scripts is by name; however, these names are in most cases generated by the relational database in use when the constraint is created. For example, if you emitted two CREATE TABLE statements like this on Postgresql:
test=> CREATE TABLE user_account (id INTEGER PRIMARY KEY);
CREATE TABLE
test=> CREATE TABLE user_order (
test(> id INTEGER PRIMARY KEY,
test(> user_account_id INTEGER REFERENCES user_account(id));
CREATE TABLE
Suppose we wanted to DROP the REFERENCES that we just applied to the user_order.user_account_id column - how do we do that? At the prompt, we’d use ALTER TABLE <tablename> DROP CONSTRAINT <constraint_name>, or if using Alembic we’d be using Operations.drop_constraint(). But both of those functions need a name - what’s the name of this constraint?
It does have a name, which in this case we can figure out by looking at the Postgresql catalog tables:
test=> SELECT r.conname FROM
test-> pg_catalog.pg_class c JOIN pg_catalog.pg_namespace n ON n.oid = c.relnamespace
test-> JOIN pg_catalog.pg_constraint r ON c.oid = r.conrelid
test-> WHERE c.relname='user_order' AND r.contype = 'f'
test-> ;
conname
---------------------------------
user_order_user_account_id_fkey
(1 row)
The name above is not something that Alembic or SQLAlchemy created; user_order_user_account_id_fkey is a naming scheme used internally by Postgresql to name constraints that are otherwise not named.
This scheme doesn’t seem so complicated, and we might want to just use our knowledge of it so that we know what name to use for our Operations.drop_constraint() call. But is that a good idea? What if, for example, we needed our code to run on Oracle as well. OK, certainly Oracle uses this same scheme, right? Or if not, something similar. Let’s check:
Oracle Database 10g Express Edition Release 10.2.0.1.0 - Production
SQL> CREATE TABLE user_account (id INTEGER PRIMARY KEY);
Table created.
SQL> CREATE TABLE user_order (
2 id INTEGER PRIMARY KEY,
3 user_account_id INTEGER REFERENCES user_account(id));
Table created.
SQL> SELECT constraint_name FROM all_constraints WHERE
2 table_name='USER_ORDER' AND constraint_type in ('R');
CONSTRAINT_NAME
-----------------------------------------------------
SYS_C0029334
Oh, we can see that it is... much worse. Oracle’s names are entirely unpredictable alphanumeric codes, and this will make writing migrations quite tedious, as we’d need to look up all these names.
The solution to having to look up names is to make your own names. This is an easy, though tedious, thing to do manually. For example, creating our model in SQLAlchemy while ensuring we use names for foreign key constraints would look like:
from sqlalchemy import MetaData, Table, Column, Integer, ForeignKey
meta = MetaData()
user_account = Table('user_account', meta,
Column('id', Integer, primary_key=True)
)
user_order = Table('user_order', meta,
Column('id', Integer, primary_key=True),
Column('user_order_id', Integer,
ForeignKey('user_account.id', name='fk_user_order_id'))
)
Simple enough, though this has some disadvantages. The first is that it’s tedious; we need to remember to use a name for every ForeignKey object, not to mention every UniqueConstraint, CheckConstraint, Index, and maybe even PrimaryKeyConstraint as well if we wish to be able to alter those too, and beyond all that, all the names have to be globally unique. Even with all that effort, if we have a naming scheme in mind, it’s easy to get it wrong when doing it manually each time.
What’s worse is that manually naming constraints (and indexes) gets even more tedious in that we can no longer use convenience features such as the .unique=True or .index=True flag on Column:
user_account = Table('user_account', meta,
Column('id', Integer, primary_key=True),
Column('name', String(50), unique=True)
)
Above, the unique=True flag creates a UniqueConstraint, but again, it’s not named. If we want to name it, we have to manually forego the use of unique=True and type out the whole constraint:
user_account = Table('user_account', meta,
Column('id', Integer, primary_key=True),
Column('name', String(50)),
UniqueConstraint('name', name='uq_user_account_name')
)
There’s a solution to all this naming work, which is to use an automated naming convention. For some years, SQLAlchemy has encouraged the use of DDL Events in order to create naming schemes. The after_parent_attach() event in particular is the best place to intercept when Constraint and Index objects are being associated with a parent Table object, and to assign a .name to the constraint while making use of the name of the table and associated columns.

But there is also a better way to go, which is to make use of a feature introduced in SQLAlchemy 0.9.2, known as naming_convention, which makes use of these events behind the scenes. Here, we can create a new MetaData object while passing a dictionary referring to a naming scheme:
convention = {
"ix": 'ix_%(column_0_label)s',
"uq": "uq_%(table_name)s_%(column_0_name)s",
"ck": "ck_%(table_name)s_%(constraint_name)s",
"fk": "fk_%(table_name)s_%(column_0_name)s_%(referred_table_name)s",
"pk": "pk_%(table_name)s"
}
metadata = MetaData(naming_convention=convention)
If we define our models using a MetaData as above, the given naming convention dictionary will be used to provide names for all constraints and indexes.
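As a small sketch of the effect, using the metadata object created above (the constraint name shown follows the “uq” template in the convention dictionary), a unique flag on a column now yields a deterministically named constraint:

from sqlalchemy import Table, Column, Integer, String

user_account = Table(
    'user_account', metadata,
    Column('id', Integer, primary_key=True),
    # produces a UniqueConstraint named "uq_user_account_name"
    Column('name', String(50), unique=True),
)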
Integration of Naming Conventions into Operations, Autogenerate¶
As of Alembic 0.6.4, the naming convention feature is integrated into the Operations object, so that the convention takes effect for any constraint that is otherwise unnamed. The naming convention is passed to Operations using the EnvironmentContext.configure.target_metadata parameter in env.py, which is normally configured when autogenerate is used:
# in your application's model:
meta = MetaData(naming_convention={
"ix": 'ix_%(column_0_label)s',
"uq": "uq_%(table_name)s_%(column_0_name)s",
"ck": "ck_%(table_name)s_%(constraint_name)s",
"fk": "fk_%(table_name)s_%(column_0_name)s_%(referred_table_name)s",
"pk": "pk_%(table_name)s"
})
# .. in your Alembic env.py:
# add your model's MetaData object here
# for 'autogenerate' support
from myapp import mymodel
target_metadata = mymodel.Base.metadata
# ...
def run_migrations_online():
# ...
context.configure(
connection=connection,
target_metadata=target_metadata
)
Above, when we render a directive like the following:
op.add_column('sometable', Column('q', Boolean(name='q_bool')))
The Boolean type will render a CHECK constraint with the name "ck_sometable_q_bool", assuming the backend in use does not support native boolean types.
We can also use op directives with constraints and not give them a name at all, if the naming convention doesn’t require one. The value of None will be converted into a name that follows the appropriate naming conventions:
def upgrade():
op.create_unique_constraint(None, 'some_table', 'x')
When autogenerate renders constraints in a migration script, it renders them typically with their completed name. If using at least Alembic 0.6.4 as well as SQLAlchemy 0.9.4, these will be rendered with a special directive Operations.f() which denotes that the string has already been tokenized:
def upgrade():
op.create_unique_constraint(op.f('uq_const_x'), 'some_table', 'x')
For more detail on the naming convention feature, see Configuring Constraint Naming Conventions.
Running “Batch” Migrations for SQLite and Other Databases¶
Note
“Batch mode” for SQLite and other databases is a new and intricate feature within the 0.7.0 series of Alembic, and should be considered as “beta” for the next several releases.
New in version 0.7.0.
The SQLite database presents a challenge to migration tools in that it has almost no support for the ALTER statement upon which relational schema migrations rely. The rationale for this stems from philosophical and architectural concerns within SQLite, and they are unlikely to be changed.
Migration tools are instead expected to produce copies of SQLite tables that correspond to the new structure, transfer the data from the existing table to the new one, then drop the old table. For our purposes here we’ll call this “move and copy” workflow, and in order to accommodate it in a way that is reasonably predictable, while also remaining compatible with other databases, Alembic provides the batch operations context.
Within this context, a relational table is named, and then a series of mutation operations to that table alone are specified within the block. When the context is complete, a process begins whereby the “move and copy” procedure begins; the existing table structure is reflected from the database, a new version of this table is created with the given changes, data is copied from the old table to the new table using “INSERT from SELECT”, and finally the old table is dropped and the new one renamed to the original name.
The Operations.batch_alter_table() method provides the gateway to this process:
with op.batch_alter_table("some_table") as batch_op:
batch_op.add_column(Column('foo', Integer))
batch_op.drop_column('bar')
When the above directives are invoked within a migration script, on a SQLite backend we would see SQL like:
CREATE TABLE _alembic_batch_temp (
id INTEGER NOT NULL,
foo INTEGER,
PRIMARY KEY (id)
);
INSERT INTO _alembic_batch_temp (id) SELECT some_table.id FROM some_table;
DROP TABLE some_table;
ALTER TABLE _alembic_batch_temp RENAME TO some_table;
On other backends, we’d see the usual ALTER statements done as though there were no batch directive - the batch context by default only does the “move and copy” process if SQLite is in use, and if there are migration directives other than Operations.add_column() present, which is the one kind of column-level ALTER statement that SQLite supports. Operations.batch_alter_table() can be configured to run “move and copy” unconditionally in all cases, including on databases other than SQLite; more on this is below.
Controlling Table Reflection¶
The reflection of the Table object that occurs when “move and copy” proceeds uses the standard autoload=True approach. This call can be affected using the reflect_args and reflect_kwargs arguments. For example, to override a Column within the reflection process such that a Boolean object is reflected with the create_constraint flag set to False:
with op.batch_alter_table(
    "bar",
    reflect_args=[Column('flag', Boolean(create_constraint=False))]
) as batch_op:
    batch_op.alter_column(
        'flag', new_column_name='bflag', existing_type=Boolean)
Another use case is to add a listener to the Table as it is reflected, so that special logic can be applied to columns or types, using the column_reflect() event:
def listen_for_reflect(inspector, table, column_info):
"correct an ENUM type"
if column_info['name'] == 'my_enum':
column_info['type'] = Enum('a', 'b', 'c')
with op.batch_alter_table(
    "bar",
    reflect_kwargs=dict(
        listeners=[
            ('column_reflect', listen_for_reflect)
        ]
    )
) as batch_op:
    batch_op.alter_column(
        'flag', new_column_name='bflag', existing_type=Boolean)
The reflection process may also be bypassed entirely by sending a pre-fabricated Table object; see Working in Offline Mode for an example.

New in version 0.7.1: added Operations.batch_alter_table.reflect_args and Operations.batch_alter_table.reflect_kwargs options.
Dealing with Constraints¶
There are a variety of issues when using “batch” mode with constraints, such as FOREIGN KEY, CHECK and UNIQUE constraints. This section will attempt to detail many of these scenarios.
Dropping Unnamed or Named Foreign Key Constraints¶
SQLite, unlike any other database, allows constraints to exist in the database that have no identifying name. On all other backends, the target database will always generate some kind of name, if one is not given.
The first challenge this represents is that an unnamed constraint can’t by itself be targeted by the BatchOperations.drop_constraint() method. An unnamed FOREIGN KEY constraint is implicit whenever the ForeignKey or ForeignKeyConstraint objects are used without passing them a name. Only on SQLite will these constraints remain entirely unnamed when they are created on the target database; an automatically generated name will be assigned in the case of all other database backends.
A second issue is that SQLAlchemy itself has inconsistent behavior in dealing with SQLite constraints as far as names are concerned. Prior to version 1.0, SQLAlchemy omits the name of foreign key constraints when reflecting them against the SQLite backend. So even if the target application has gone through the steps to apply names to the constraints as stated in the database, they still aren’t targetable within the batch reflection process prior to SQLAlchemy 1.0.
Within the scope of batch mode, this presents the issue that the BatchOperations.drop_constraint() method requires a constraint name in order to target the correct constraint.

In order to overcome this, the Operations.batch_alter_table() method supports a naming_convention argument, so that all reflected constraints, including foreign keys that are unnamed, or were named but SQLAlchemy isn’t loading this name, may be given a name, as described in Integration of Naming Conventions into Operations, Autogenerate. Usage is as follows:
naming_convention = {
"fk":
"fk_%(table_name)s_%(column_0_name)s_%(referred_table_name)s",
}
with op.batch_alter_table(
        "bar", naming_convention=naming_convention) as batch_op:
    batch_op.drop_constraint(
        "fk_bar_foo_id_foo", type_="foreignkey")
Note that the naming convention feature requires at least SQLAlchemy 0.9.4 for support.
New in version 0.7.1: added naming_convention to Operations.batch_alter_table().
Including unnamed UNIQUE constraints¶
A similar, but frustratingly slightly different, issue is that in the case of UNIQUE constraints, we again have the issue that SQLite allows unnamed UNIQUE constraints to exist on the database, however in this case, SQLAlchemy prior to version 1.0 doesn’t reflect these constraints at all. It does properly reflect named unique constraints with their names, however.
So in this case, the workaround for foreign key names is still not sufficient prior to SQLAlchemy 1.0. If our table includes unnamed unique constraints, and we’d like them to be re-created along with the table, we need to include them directly, which can be done via the table_args argument:
with op.batch_alter_table(
        "bar", table_args=(UniqueConstraint('username'),)
) as batch_op:
    batch_op.add_column(Column('foo', Integer))
Including CHECK constraints¶
SQLAlchemy currently doesn’t reflect CHECK constraints on any backend. So again these must be stated explicitly if they are to be included in the recreated table:
with op.batch_alter_table("some_table", table_args=[
CheckConstraint('x > 5')
]) as batch_op:
batch_op.add_column(Column('foo', Integer))
batch_op.drop_column('bar')
Dealing with Referencing Foreign Keys¶
If the SQLite database is enforcing referential integrity with PRAGMA foreign_keys, this pragma may need to be disabled when the “move and copy” workflow proceeds, else remote constraints which refer to this table may prevent it from being dropped; additionally, for referential integrity to be re-enabled, it may be necessary to recreate the foreign keys on those remote tables to refer again to the new table (this is definitely the case on other databases, at least). SQLite is normally used without referential integrity enabled, so this won’t be a problem for most users.
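If referential integrity does need to be switched off for the duration of a batch migration, a minimal sketch would be to emit the pragma around the batch block. This is an illustration only; note that SQLite ignores this pragma while a transaction is in progress, so env.py may need to be arranged so that the migration runs outside of a transaction for it to take effect:
from alembic import op
from sqlalchemy import Column, Integer

def upgrade():
    # illustration only: disable FK enforcement around the
    # "move and copy" operation, then re-enable it afterwards
    op.execute("PRAGMA foreign_keys=OFF")
    with op.batch_alter_table("some_table") as batch_op:
        batch_op.add_column(Column('foo', Integer))
    op.execute("PRAGMA foreign_keys=ON")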
Working in Offline Mode¶
In the preceding sections, we’ve seen how heavily the “move and copy” process relies on reflection in order to know the structure of the table that is to be copied. This means that in the typical case, “online” mode is required, where a live database connection is present so that Operations.batch_alter_table() can reflect the table from the database; the --sql flag cannot be used without extra steps.
To support offline mode, the system must work without table reflection
present, which means the full table as it intends to be created must be
passed to Operations.batch_alter_table()
using
copy_from
:
from sqlalchemy import MetaData, Table, Column, Integer, String

meta = MetaData()

some_table = Table(
    'some_table', meta,
    Column('id', Integer, primary_key=True),
    Column('bar', String(50))
)
with op.batch_alter_table("some_table", copy_from=some_table) as batch_op:
batch_op.add_column(Column('foo', Integer))
batch_op.drop_column('bar')
The above use pattern is pretty tedious and quite far off from Alembic’s preferred style of working; however, if one needs to do SQLite-compatible “move and copy” migrations and needs them to generate flat SQL files in “offline” mode, there’s not much alternative.
New in version 0.7.6: Fully implemented the
copy_from
parameter.
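With a copy_from table stated in each batch block, the migration can then be generated as flat SQL using the standard offline invocation, e.g.:
$ alembic upgrade head --sql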
Batch mode with Autogenerate¶
The syntax of batch mode is essentially that Operations.batch_alter_table()
is used to enter a batch block, and the returned BatchOperations
context
works just like the regular Operations
context, except that
the “table name” and “schema name” arguments are omitted.
To support rendering of migration commands in batch mode for autogenerate,
configure the EnvironmentContext.configure.render_as_batch
flag in env.py
:
context.configure(
    connection=connection,
    target_metadata=target_metadata,
    render_as_batch=True
)
Autogenerate will now generate along the lines of:
def upgrade():
    ### commands auto generated by Alembic - please adjust! ###
    with op.batch_alter_table('address', schema=None) as batch_op:
        batch_op.add_column(sa.Column('street', sa.String(length=50), nullable=True))
This mode is safe to use in all cases, as the Operations.batch_alter_table() directive by default only takes place for SQLite; other backends will behave just as they normally do in the absence of the batch directives.
Note that autogenerate support does not include “offline” mode, where
the Operations.batch_alter_table.copy_from
parameter is used.
The table definition here would need to be entered into migration files
manually if this is needed.
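As a hedged sketch of what that manual step might look like, the autogenerated batch block can be edited to pass a hand-maintained Table via copy_from (the address table definition below is hypothetical):
import sqlalchemy as sa
from alembic import op

meta = sa.MetaData()

# hand-maintained snapshot of the table as it exists
# before this migration runs
address = sa.Table(
    'address', meta,
    sa.Column('id', sa.Integer, primary_key=True),
    sa.Column('email_address', sa.String(100))
)

def upgrade():
    with op.batch_alter_table('address', copy_from=address) as batch_op:
        batch_op.add_column(sa.Column('street', sa.String(length=50), nullable=True))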
Batch mode with databases other than SQLite¶
There’s an odd use case some shops have, where the “move and copy” style of migration is useful for databases that do already support ALTER. There are cases where an ALTER operation may block access to the table for a long time, which might not be acceptable. “Move and copy” can be made to work on other backends, though with a few extra caveats.
The batch mode directive will run the “recreate” system regardless of
backend if the flag recreate='always'
is passed:
with op.batch_alter_table("some_table", recreate='always') as batch_op:
batch_op.add_column(Column('foo', Integer))
The issues that arise in this mode are mostly to do with constraints. Databases such as Postgresql and MySQL with InnoDB will enforce referential integrity (e.g. via foreign keys) in all cases. Unlike SQLite, it’s not as simple to turn off referential integrity across the board (nor would it be desirable). Since a new table is replacing the old one, existing foreign key constraints which refer to the target table will need to be unconditionally dropped before the batch operation, and re-created to refer to the new table afterwards. Batch mode currently does not provide any automation for this.
The Postgresql database and possibly others also have the behavior such that when the new table is created, a naming conflict occurs with the named constraints of the new table, in that they match those of the old table, and on Postgresql, these names need to be unique across all tables. The Postgresql dialect will therefore emit a “DROP CONSTRAINT” directive for all constraints on the old table before the new one is created; this is “safe” in case of a failed operation because Postgresql also supports transactional DDL.
Note that, as is also the case with SQLite, CHECK constraints need to be moved over between old and new table manually using the Operations.batch_alter_table.table_args parameter.
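A hedged sketch combining the two, assuming a hypothetical some_table carrying a CHECK constraint that must be moved over:
from alembic import op
from sqlalchemy import Column, Integer, CheckConstraint

with op.batch_alter_table(
        "some_table", recreate='always',
        table_args=(CheckConstraint('x > 5'),)) as batch_op:
    batch_op.add_column(Column('foo', Integer))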
Working with Branches¶
Note
Alembic 0.7.0 features an all-new versioning model that fully
supports branch points, merge points, and long-lived, labeled branches,
including independent branches originating from multiple bases.
A great emphasis has been placed on there being almost no impact on the
existing Alembic workflow, including that all commands work pretty much
the same as they did before, the format of migration files doesn’t require
any change (though there are some changes that are recommended),
and even the structure of the alembic_version
table does not change at all. However, most alembic commands now offer
new features which will break out an Alembic environment into
“branch mode”, where things become a lot more intricate. Working in
“branch mode” should be considered as a “beta” feature, with many new
paradigms and use cases still to be stress tested in the wild.
Please tread lightly!
New in version 0.7.0.
A branch describes a point in a migration stream when two or more versions refer to the same parent migration as their ancestor. Branches occur naturally when two divergent source trees, both containing Alembic revision files created independently within those source trees, are merged together into one. When this occurs, the challenge of a branch is to merge the branches into a single series of changes, so that databases established from either source tree individually can be upgraded to reference the merged result equally. Another scenario where branches are present is when we create them directly; either at some point in the migration stream we’d like different series of migrations to be managed independently (e.g. we create a tree), or we’d like separate migration streams for different features starting at the root (e.g. a forest). We’ll illustrate all of these cases, starting with the most common, which is a source-merge-originated branch that we’ll merge.
Starting with the “account table” example we began in Create a Migration Script,
assume we have our basemost version 1975ea83b712
, which leads into
the second revision ae1027a6acf
, and the migration files for these
two revisions are checked into our source repository.
Consider if we merged into our source repository another code branch which contained
a revision for another table called shopping_cart
. This revision was made
against our first Alembic revision, the one that generated account
. After
loading the second source tree in, a new file
27c6a30d7c24_add_shopping_cart_table.py
exists within our versions
directory.
Both it, as well as ae1027a6acf_add_a_column.py
, reference
1975ea83b712_add_account_table.py
as the “downgrade” revision. To illustrate:
# main source tree:
1975ea83b712 (create account table) -> ae1027a6acf (add a column)
# branched source tree
1975ea83b712 (create account table) -> 27c6a30d7c24 (add shopping cart table)
Above, we can see 1975ea83b712 is our branch point; two distinct versions both refer to it as their parent. The Alembic command branches illustrates this fact:
$ alembic branches --verbose
Rev: 1975ea83b712 (branchpoint)
Parent: <base>
Branches into: 27c6a30d7c24, ae1027a6acf
Path: foo/versions/1975ea83b712_add_account_table.py
create account table
Revision ID: 1975ea83b712
Revises:
Create Date: 2014-11-20 13:02:46.257104
-> 27c6a30d7c24 (head), add shopping cart table
-> ae1027a6acf (head), add a column
History shows it too, illustrating two head
entries as well
as a branchpoint
:
$ alembic history
1975ea83b712 -> 27c6a30d7c24 (head), add shopping cart table
1975ea83b712 -> ae1027a6acf (head), add a column
<base> -> 1975ea83b712 (branchpoint), create account table
We can get a view of just the current heads using alembic heads
:
$ alembic heads --verbose
Rev: 27c6a30d7c24 (head)
Parent: 1975ea83b712
Path: foo/versions/27c6a30d7c24_add_shopping_cart_table.py
add shopping cart table
Revision ID: 27c6a30d7c24
Revises: 1975ea83b712
Create Date: 2014-11-20 13:03:11.436407
Rev: ae1027a6acf (head)
Parent: 1975ea83b712
Path: foo/versions/ae1027a6acf_add_a_column.py
add a column
Revision ID: ae1027a6acf
Revises: 1975ea83b712
Create Date: 2014-11-20 13:02:54.849677
If we try to run an upgrade
to the usual end target of head
, Alembic no
longer considers this to be an unambiguous command. As we have more than
one head
, the upgrade
command wants us to provide more information:
$ alembic upgrade head
FAILED: Multiple head revisions are present for given argument 'head'; please specify a specific
target revision, '<branchname>@head' to narrow to a specific head, or 'heads' for all heads
The upgrade
command gives us quite a few options in which we can proceed
with our upgrade, either giving it information on which head we’d like to upgrade
towards, or alternatively stating that we’d like all heads to be upgraded
towards at once. However, in the typical case of two source trees being
merged, we will want to pursue a third option, which is that we can merge these
branches.
Merging Branches¶
An Alembic merge is a migration file that joins two or more “head” files together. If the two branches we have right now can be said to be a “tree” structure, introducing this merge file will turn it into a “diamond” structure:
                            -- ae1027a6acf -->
                           /                   \
<base> --> 1975ea83b712 -->                     --> mergepoint
                           \                   /
                            -- 27c6a30d7c24 -->
We create the merge file using alembic merge; with this command, we can pass to it an argument such as heads, meaning we’d like to merge all heads. Or, we can pass it individual revision numbers sequentially:
$ alembic merge -m "merge ae1 and 27c" ae1027 27c6a
Generating /path/to/foo/versions/53fffde5ad5_merge_ae1_and_27c.py ... done
Looking inside the new file, we see it as a regular migration file, with the only new twist being that down_revision points to both revisions:
"""merge ae1 and 27c
Revision ID: 53fffde5ad5
Revises: ae1027a6acf, 27c6a30d7c24
Create Date: 2014-11-20 13:31:50.811663
"""
# revision identifiers, used by Alembic.
revision = '53fffde5ad5'
down_revision = ('ae1027a6acf', '27c6a30d7c24')
branch_labels = None
from alembic import op
import sqlalchemy as sa
def upgrade():
    pass

def downgrade():
    pass
This file is a regular migration file, and if we wish to, we may place Operations directives into the upgrade() and downgrade() functions like any other migration file, though it is probably best to limit the instructions placed here to those that deal with whatever reconciliation is needed between the two merged branches, if any.
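For example, if the two branches had each added a column serving the same purpose under different names, the merge file’s upgrade() might reconcile them; a hedged sketch, with hypothetical table and column names:
def upgrade():
    # hypothetical reconciliation: both branches added a "notes"-style
    # column under different names; keep one and drop the other
    op.drop_column('account', 'remarks')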
The heads
command now illustrates that the multiple heads in our
versions/
directory have been resolved into our new head:
$ alembic heads --verbose
Rev: 53fffde5ad5 (head) (mergepoint)
Merges: ae1027a6acf, 27c6a30d7c24
Path: foo/versions/53fffde5ad5_merge_ae1_and_27c.py
merge ae1 and 27c
Revision ID: 53fffde5ad5
Revises: ae1027a6acf, 27c6a30d7c24
Create Date: 2014-11-20 13:31:50.811663
History shows a similar result, as the mergepoint becomes our head:
$ alembic history
ae1027a6acf, 27c6a30d7c24 -> 53fffde5ad5 (head) (mergepoint), merge ae1 and 27c
1975ea83b712 -> ae1027a6acf, add a column
1975ea83b712 -> 27c6a30d7c24, add shopping cart table
<base> -> 1975ea83b712 (branchpoint), create account table
With a single head
target, a generic upgrade
can proceed:
$ alembic upgrade head
INFO [alembic.migration] Context impl PostgresqlImpl.
INFO [alembic.migration] Will assume transactional DDL.
INFO [alembic.migration] Running upgrade -> 1975ea83b712, create account table
INFO [alembic.migration] Running upgrade 1975ea83b712 -> 27c6a30d7c24, add shopping cart table
INFO [alembic.migration] Running upgrade 1975ea83b712 -> ae1027a6acf, add a column
INFO [alembic.migration] Running upgrade ae1027a6acf, 27c6a30d7c24 -> 53fffde5ad5, merge ae1 and 27c
merge mechanics
The upgrade process traverses through all of our migration files using a topological sorting algorithm, treating the list of migration files not as a linked list, but as a directed acyclic graph. The starting points of this traversal are the current heads within our database, and the end point is the “head” revision or revisions specified.
When a migration proceeds across a point at which there are multiple heads,
the alembic_version
table will at that point store multiple rows,
one for each head. Our migration process above will emit SQL against
alembic_version
along these lines:
-- Running upgrade -> 1975ea83b712, create account table
INSERT INTO alembic_version (version_num) VALUES ('1975ea83b712')

-- Running upgrade 1975ea83b712 -> 27c6a30d7c24, add shopping cart table
UPDATE alembic_version SET version_num='27c6a30d7c24'
WHERE alembic_version.version_num = '1975ea83b712'

-- Running upgrade 1975ea83b712 -> ae1027a6acf, add a column
INSERT INTO alembic_version (version_num) VALUES ('ae1027a6acf')

-- Running upgrade ae1027a6acf, 27c6a30d7c24 -> 53fffde5ad5, merge ae1 and 27c
DELETE FROM alembic_version WHERE alembic_version.version_num = 'ae1027a6acf'
UPDATE alembic_version SET version_num='53fffde5ad5'
WHERE alembic_version.version_num = '27c6a30d7c24'
At the point at which both 27c6a30d7c24
and ae1027a6acf
exist within our
database, both values are present in alembic_version
, which now has
two rows. If we upgrade to these two versions alone, then stop and
run alembic current
, we will see this:
$ alembic current --verbose
Current revision(s) for postgresql://scott:XXXXX@localhost/test:
Rev: ae1027a6acf
Parent: 1975ea83b712
Path: foo/versions/ae1027a6acf_add_a_column.py
add a column
Revision ID: ae1027a6acf
Revises: 1975ea83b712
Create Date: 2014-11-20 13:02:54.849677
Rev: 27c6a30d7c24
Parent: 1975ea83b712
Path: foo/versions/27c6a30d7c24_add_shopping_cart_table.py
add shopping cart table
Revision ID: 27c6a30d7c24
Revises: 1975ea83b712
Create Date: 2014-11-20 13:03:11.436407
A key advantage to the merge
process is that it will
run equally well on databases that were present on version ae1027a6acf
alone, versus databases that were present on version 27c6a30d7c24
alone;
whichever version was not yet applied will be applied before the merge point
can be crossed. This brings forth a way of thinking about a merge file,
as well as about any Alembic revision file. As they are considered to
be “nodes” within a set that is subject to topological sorting, each
“node” is a point that cannot be crossed until all of its dependencies
are satisfied.
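As a rough illustration of this idea (not Alembic’s actual implementation), a minimal depth-first traversal over down_revision links shows why a node can’t be crossed before its parents:
# minimal sketch of topological ordering over a revision DAG;
# not Alembic's actual implementation
revisions = {
    '1975ea83b712': (),                               # base
    'ae1027a6acf': ('1975ea83b712',),
    '27c6a30d7c24': ('1975ea83b712',),
    '53fffde5ad5': ('ae1027a6acf', '27c6a30d7c24'),   # mergepoint
}

def topo_order(revisions):
    seen = set()
    order = []

    def visit(rev):
        if rev in seen:
            return
        for parent in revisions[rev]:
            visit(parent)
        seen.add(rev)
        order.append(rev)

    for rev in revisions:
        visit(rev)
    return order

# every revision appears only after all of its parents
print(topo_order(revisions))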
Prior to Alembic’s support of merge points, the use case of databases sitting on different heads was basically impossible to reconcile; having to manually splice the head files together invariably meant that one migration would occur before the other, thus being incompatible with databases that were present on the other migration.
Working with Explicit Branches¶
The alembic upgrade
command hinted at other options besides merging when
dealing with multiple heads. Let’s back up and assume we’re back where
we have as our heads just ae1027a6acf
and 27c6a30d7c24
:
$ alembic heads
27c6a30d7c24
ae1027a6acf
Earlier, when we did alembic upgrade head
, it gave us an error which
suggested please specify a specific target revision, '<branchname>@head' to
narrow to a specific head, or 'heads' for all heads
in order to proceed
without merging. Let’s cover those cases.
Referring to all heads at once¶
The heads
identifier is a lot like head
, except it explicitly refers
to all heads at once. That is, it’s like telling Alembic to do the operation
for both ae1027a6acf
and 27c6a30d7c24
simultaneously. If we started
from a fresh database and ran upgrade heads
we’d see:
$ alembic upgrade heads
INFO [alembic.migration] Context impl PostgresqlImpl.
INFO [alembic.migration] Will assume transactional DDL.
INFO [alembic.migration] Running upgrade -> 1975ea83b712, create account table
INFO [alembic.migration] Running upgrade 1975ea83b712 -> ae1027a6acf, add a column
INFO [alembic.migration] Running upgrade 1975ea83b712 -> 27c6a30d7c24, add shopping cart table
Since we’ve upgraded to heads
, and we do in fact have more than one head,
that means these two distinct heads are now in our alembic_version
table.
We can see this if we run alembic current
:
$ alembic current
ae1027a6acf (head)
27c6a30d7c24 (head)
That means there are two rows in alembic_version
right now. If we downgrade
one step at a time, Alembic will delete from the alembic_version
table
each branch that’s closed out, until only one branch remains; then it will
continue updating the single value down to the previous versions:
$ alembic downgrade -1
INFO [alembic.migration] Running downgrade ae1027a6acf -> 1975ea83b712, add a column
$ alembic current
27c6a30d7c24 (head)
$ alembic downgrade -1
INFO [alembic.migration] Running downgrade 27c6a30d7c24 -> 1975ea83b712, add shopping cart table
$ alembic current
1975ea83b712 (branchpoint)
$ alembic downgrade -1
INFO [alembic.migration] Running downgrade 1975ea83b712 -> , create account table
$ alembic current
Referring to a Specific Version¶
We can pass a specific version number to upgrade
. Alembic will ensure that
all revisions upon which this version depends are invoked, and nothing more.
So if we upgrade
either to 27c6a30d7c24
or ae1027a6acf
specifically,
it guarantees that 1975ea83b712
will have been applied, but not that
any “sibling” versions are applied:
$ alembic upgrade 27c6a
INFO [alembic.migration] Running upgrade -> 1975ea83b712, create account table
INFO [alembic.migration] Running upgrade 1975ea83b712 -> 27c6a30d7c24, add shopping cart table
With 1975ea83b712
and 27c6a30d7c24
applied, ae1027a6acf
is just
a single additional step:
$ alembic upgrade ae102
INFO [alembic.migration] Running upgrade 1975ea83b712 -> ae1027a6acf, add a column
Working with Branch Labels¶
To satisfy the use case where an environment has long-lived branches, especially
independent branches as will be discussed in the next section, Alembic supports
the concept of branch labels. These are string values that are present
within the migration file, using the new identifier branch_labels
.
For example, if we want to refer to the “shopping cart” branch using the name
“shoppingcart”, we can add that name to our file
27c6a30d7c24_add_shopping_cart_table.py
:
"""add shopping cart table
"""
# revision identifiers, used by Alembic.
revision = '27c6a30d7c24'
down_revision = '1975ea83b712'
branch_labels = ('shoppingcart',)
# ...
The branch_labels
attribute refers to a string name, or a tuple
of names, which will now apply to this revision, all descendants of this
revision, as well as all ancestors of this revision up until the preceding
branch point, in this case 1975ea83b712
. We can see the shoppingcart
label applied to this revision:
$ alembic history
1975ea83b712 -> 27c6a30d7c24 (shoppingcart) (head), add shopping cart table
1975ea83b712 -> ae1027a6acf (head), add a column
<base> -> 1975ea83b712 (branchpoint), create account table
With the label applied, the name shoppingcart
now serves as an alias
for the 27c6a30d7c24
revision specifically. We can illustrate this
by showing it with alembic show
:
$ alembic show shoppingcart
Rev: 27c6a30d7c24 (head)
Parent: 1975ea83b712
Branch names: shoppingcart
Path: foo/versions/27c6a30d7c24_add_shopping_cart_table.py
add shopping cart table
Revision ID: 27c6a30d7c24
Revises: 1975ea83b712
Create Date: 2014-11-20 13:03:11.436407
However, when using branch labels, we usually want to use them with a syntax known as “branch at” syntax; this syntax allows us to state that we want to use a specific revision, let’s say a “head” revision, in terms of a specific branch. While normally we can’t refer to alembic upgrade head when there are multiple heads, we can refer to this head specifically using the shoppingcart@head syntax:
$ alembic upgrade shoppingcart@head
INFO [alembic.migration] Running upgrade 1975ea83b712 -> 27c6a30d7c24, add shopping cart table
The shoppingcart@head
syntax becomes important to us if we wish to
add new migration files to our versions directory while maintaining multiple
branches. Just like the upgrade
command, if we attempted to add a new
revision file to our multiple-heads layout without a specific parent revision,
we’d get a familiar error:
$ alembic revision -m "add a shopping cart column"
FAILED: Multiple heads are present; please specify the head revision on
which the new revision should be based, or perform a merge.
The alembic revision
command is pretty clear in what we need to do;
to add our new revision specifically to the shoppingcart
branch,
we use the --head
argument, either with the specific revision identifier
27c6a30d7c24
, or more generically using our branchname shoppingcart@head
:
$ alembic revision -m "add a shopping cart column" --head shoppingcart@head
Generating /path/to/foo/versions/d747a8a8879_add_a_shopping_cart_column.py ... done
alembic history
shows both files now part of the shoppingcart
branch:
$ alembic history
1975ea83b712 -> ae1027a6acf (head), add a column
27c6a30d7c24 -> d747a8a8879 (shoppingcart) (head), add a shopping cart column
1975ea83b712 -> 27c6a30d7c24 (shoppingcart), add shopping cart table
<base> -> 1975ea83b712 (branchpoint), create account table
We can limit our history operation just to this branch as well:
$ alembic history -r shoppingcart:
27c6a30d7c24 -> d747a8a8879 (shoppingcart) (head), add a shopping cart column
1975ea83b712 -> 27c6a30d7c24 (shoppingcart), add shopping cart table
If we want to illustrate the path of shoppingcart
all the way from the
base, we can do that as follows:
$ alembic history -r :shoppingcart@head
27c6a30d7c24 -> d747a8a8879 (shoppingcart) (head), add a shopping cart column
1975ea83b712 -> 27c6a30d7c24 (shoppingcart), add shopping cart table
<base> -> 1975ea83b712 (branchpoint), create account table
We can run this operation from the “base” side as well, but we get a different result:
$ alembic history -r shoppingcart@base:
1975ea83b712 -> ae1027a6acf (head), add a column
27c6a30d7c24 -> d747a8a8879 (shoppingcart) (head), add a shopping cart column
1975ea83b712 -> 27c6a30d7c24 (shoppingcart), add shopping cart table
<base> -> 1975ea83b712 (branchpoint), create account table
When we list from shoppingcart@base
without an endpoint, it’s really shorthand
for -r shoppingcart@base:heads
, e.g. all heads, and since shoppingcart@base
is the same “base” shared by the ae1027a6acf
revision, we get that
revision in our listing as well. The <branchname>@base
syntax can be
useful when we are dealing with individual bases, as we’ll see in the next
section.
The <branchname>@head
format can also be used with revision numbers
instead of branch names, though this is less convenient. If we wanted to add a new revision to our branch that includes the un-labeled ae1027a6acf, and this weren’t a head already, we could ask for the “head of the branch that includes ae1027a6acf” as follows:
$ alembic revision -m "add another account column" --head ae10@head
Generating /path/to/foo/versions/55af2cb1c267_add_another_account_column.py ... done
More Label Syntaxes¶
The heads
symbol can be combined with a branch label, in the case that
your labeled branch itself breaks off into multiple branches:
$ alembic upgrade shoppingcart@heads
Relative identifiers, as introduced in Relative Migration Identifiers,
work with labels too. For example, upgrading to shoppingcart@+2
means to upgrade from current heads on “shoppingcart” upwards two revisions:
$ alembic upgrade shoppingcart@+2
This kind of thing works from history as well:
$ alembic history -r current:shoppingcart@+2
The newer relnum+delta
format can be combined as well, for example
if we wanted to list along shoppingcart
up until two revisions
before the head:
$ alembic history -r :shoppingcart@head-2
Working with Multiple Bases¶
We’ve seen in the previous section that alembic upgrade
is fine
if we have multiple heads, alembic revision
allows us to tell it which
“head” we’d like to associate our new revision file with, and branch labels
allow us to assign names to branches that we can use in subsequent commands.
Let’s put all these together and refer to a new “base”, that is, a whole
new tree of revision files that will be semi-independent of the account/shopping
cart revisions we’ve been working with. This new tree will deal with
database tables involving “networking”.
Setting up Multiple Version Directories¶
While optional, it is often the case that when working with multiple bases,
we’d like different sets of version files to exist within their own directories;
typically, if an application is organized into several sub-modules, each
one would have a version directory containing migrations pertinent to
that module. So to start out, we can edit alembic.ini
to refer
to multiple directories; we’ll also state the current versions
directory as one of them:
# version location specification; this defaults
# to foo/versions. When using multiple version
# directories, initial revisions must be specified with --version-path
version_locations = %(here)s/model/networking %(here)s/alembic/versions
The new directory %(here)s/model/networking
is in terms of where
the alembic.ini
file is, as we are using the symbol %(here)s
which
resolves to this location. When we create our first new revision
targeted at this directory,
model/networking
will be created automatically if it does not
exist yet. Once we’ve created a revision here, the path is used automatically
when generating subsequent revision files that refer to this revision tree.
Creating a Labeled Base Revision¶
We also want our new branch to have its own name, and for that we want to
apply a branch label to the base. In order to achieve this using the
alembic revision
command without editing, we need to ensure our
script.py.mako
file, used
for generating new revision files, has the appropriate substitutions present.
If Alembic version 0.7.0 or greater was used to generate the original
migration environment, this is already done. However when working with an older
environment, script.py.mako
needs to have this directive added, typically
underneath the down_revision
directive:
# revision identifiers, used by Alembic.
revision = ${repr(up_revision)}
down_revision = ${repr(down_revision)}
# add this here in order to use revision with branch_label
branch_labels = ${repr(branch_labels)}
With this in place, we can create a new revision file, starting up a branch
that will deal with database tables involving networking; we specify the
--head
version of base
, a --branch-label
of networking
,
and the directory we want this first revision file to be
placed in with --version-path
:
$ alembic revision -m "create networking branch" --head=base --branch-label=networking --version-path=model/networking
Creating directory /path/to/foo/model/networking ... done
Generating /path/to/foo/model/networking/3cac04ae8714_create_networking_branch.py ... done
If we ran the above command and we didn’t have the newer script.py.mako
directive, we’d get this error:
FAILED: Version 3cac04ae8714 specified branch_labels networking, however
the migration file foo/model/networking/3cac04ae8714_create_networking_branch.py
does not have them; have you upgraded your script.py.mako to include the 'branch_labels'
section?
When we receive the above error and would like to try again, we need to either delete the incorrectly generated file in order to run revision again, or we can edit 3cac04ae8714_create_networking_branch.py directly to add the branch_labels of our choosing.
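That is, editing the generated file so that the identifier section reads along these lines (a sketch; down_revision is None since this is a base revision):
# revision identifiers, used by Alembic.
revision = '3cac04ae8714'
down_revision = None
branch_labels = ('networking',)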
Running with Multiple Bases¶
Once we have a new, permanent (for as long as we desire it to be) base in our system, we’ll always have multiple heads present:
$ alembic heads
3cac04ae8714 (networking) (head)
27c6a30d7c24 (shoppingcart) (head)
ae1027a6acf (head)
When we want to add a new revision file to networking
, we specify
networking@head
as the --head
. The appropriate version directory
is now selected automatically based on the head we choose:
$ alembic revision -m "add ip number table" --head=networking@head
Generating /path/to/foo/model/networking/109ec7d132bf_add_ip_number_table.py ... done
It’s important that we refer to the head using networking@head
; if we
only refer to networking
, that refers to only 3cac04ae8714
specifically;
if we specify this and it’s not a head, alembic revision
will make sure
we didn’t mean to specify the head:
$ alembic revision -m "add DNS table" --head=networking
FAILED: Revision 3cac04ae8714 is not a head revision; please
specify --splice to create a new branch from this revision
As mentioned earlier, as this base is independent, we can view its history
from the base using history -r networking@base:
:
$ alembic history -r networking@base:
109ec7d132bf -> 29f859a13ea (networking) (head), add DNS table
3cac04ae8714 -> 109ec7d132bf (networking), add ip number table
<base> -> 3cac04ae8714 (networking), create networking branch
At the moment, this is the same output we’d get at this point if we used
-r :networking@head
. However, that will change later on as we use
additional directives.
We may now run upgrades or downgrades freely, among individual branches (let’s assume a clean database again):
$ alembic upgrade networking@head
INFO [alembic.migration] Running upgrade -> 3cac04ae8714, create networking branch
INFO [alembic.migration] Running upgrade 3cac04ae8714 -> 109ec7d132bf, add ip number table
INFO [alembic.migration] Running upgrade 109ec7d132bf -> 29f859a13ea, add DNS table
or against the whole thing using heads
:
$ alembic upgrade heads
INFO [alembic.migration] Running upgrade -> 1975ea83b712, create account table
INFO [alembic.migration] Running upgrade 1975ea83b712 -> 27c6a30d7c24, add shopping cart table
INFO [alembic.migration] Running upgrade 27c6a30d7c24 -> d747a8a8879, add a shopping cart column
INFO [alembic.migration] Running upgrade 1975ea83b712 -> ae1027a6acf, add a column
INFO [alembic.migration] Running upgrade ae1027a6acf -> 55af2cb1c267, add another account column
Branch Dependencies¶
When working with multiple roots, it is expected that these different
revision streams will need to refer to one another. For example, a new
revision in networking
which needs to refer to the account
table will want to establish 55af2cb1c267, add another account column
,
the last revision that
works with the account table, as a dependency. From a graph perspective,
this means nothing more than that the new file will feature both
55af2cb1c267, add another account column
and 29f859a13ea, add DNS table
as “down” revisions,
and looks just as though we had merged these two branches together. However,
we don’t want to consider these as “merged”; we want the two revision
streams to remain independent, even though a version in networking
is going to reach over into the other stream. To support this use case,
Alembic provides a directive known as depends_on
, which allows
a revision file to refer to another as a “dependency”, very similar to
an entry in down_revision
from a graph perspective, but different
from a semantic perspective.
To use depends_on
, we can specify it as part of our alembic revision
command:
$ alembic revision -m "add ip account table" --head=networking@head --depends-on=55af2cb1c267
Generating /path/to/foo/model/networking/2a95102259be_add_ip_account_table.py ... done
Within our migration file, we’ll see this new directive present:
# revision identifiers, used by Alembic.
revision = '2a95102259be'
down_revision = '29f859a13ea'
branch_labels = None
depends_on = '55af2cb1c267'
depends_on
may be either a real revision number or a branch
name. When specified at the command line, a resolution from a
partial revision number will work as well. It can refer
to any number of dependent revisions as well; for example, if we were
to run the command:
$ alembic revision -m "add ip account table" \\
--head=networking@head \\
--depends-on=55af2cb1c267 --depends-on=d747a --depends-on=fa445
Generating /path/to/foo/model/networking/2a95102259be_add_ip_account_table.py ... done
We’d see inside the file:
# revision identifiers, used by Alembic.
revision = '2a95102259be'
down_revision = '29f859a13ea'
branch_labels = None
depends_on = ('55af2cb1c267', 'd747a8a8879', 'fa4456a9201')
We also can of course add or alter this value within the file manually after
it is generated, rather than using the --depends-on
argument.
New in version 0.8: The depends_on
attribute may be set directly
from the alembic revision
command, rather than editing the file
directly. depends_on
identifiers may also be specified as
branch names at the command line or directly within the migration file.
The values may be specified as partial revision numbers from the command
line which will be resolved to full revision numbers in the output file.
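For instance, stating the dependency as a branch label rather than a revision number might look like the following sketch, here depending on whatever the current head of “shoppingcart” happens to be:
# revision identifiers, used by Alembic.
revision = '2a95102259be'
down_revision = '29f859a13ea'
branch_labels = None
depends_on = 'shoppingcart'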
We can see the effect this directive has when we view the history of the networking branch in terms of its head, e.g., all the revisions that must be applied before networking@head can be reached:
$ alembic history -r :networking@head
29f859a13ea (55af2cb1c267) -> 2a95102259be (networking) (head), add ip account table
109ec7d132bf -> 29f859a13ea (networking), add DNS table
3cac04ae8714 -> 109ec7d132bf (networking), add ip number table
<base> -> 3cac04ae8714 (networking), create networking branch
ae1027a6acf -> 55af2cb1c267 (effective head), add another account column
1975ea83b712 -> ae1027a6acf, add a column
<base> -> 1975ea83b712 (branchpoint), create account table
What we see is that the full history of the networking branch, in terms of an “upgrade” to the “head”, will include that the tree building up 55af2cb1c267, add another account column will be pulled in first. Interestingly, we don’t see this displayed when we display history in the other direction, e.g. from networking@base:
$ alembic history -r networking@base:
29f859a13ea (55af2cb1c267) -> 2a95102259be (networking) (head), add ip account table
109ec7d132bf -> 29f859a13ea (networking), add DNS table
3cac04ae8714 -> 109ec7d132bf (networking), add ip number table
<base> -> 3cac04ae8714 (networking), create networking branch
The reason for the discrepancy is that displaying history from the base shows us what would occur if we ran a downgrade operation, rather than an upgrade. If we downgraded all the files in networking using networking@base, the dependencies aren’t affected; they’re left in place.
We also see something odd if we view heads
at the moment:
$ alembic heads
2a95102259be (networking) (head)
27c6a30d7c24 (shoppingcart) (head)
55af2cb1c267 (effective head)
The head file that we used as a “dependency”, 55af2cb1c267
, is displayed
as an “effective” head, which we can see also in the history display earlier.
What this means is that at the moment, if we were to upgrade all versions
to the top, the 55af2cb1c267
revision number would not actually be
present in the alembic_version
table; this is because it does not have
a branch of its own subsequent to the 2a95102259be
revision which depends
on it:
$ alembic upgrade heads
INFO [alembic.migration] Running upgrade 29f859a13ea, 55af2cb1c267 -> 2a95102259be, add ip account table
$ alembic current
2a95102259be (head)
27c6a30d7c24 (head)
The entry is still displayed in alembic heads
because Alembic knows that
even though this revision isn’t a “real” head, it’s still something that
we developers consider semantically to be a head, so it’s displayed, noting
its special status so that we don’t get quite as confused when we don’t
see it within alembic current
.
If we add a new revision onto 55af2cb1c267
, the branch again becomes
a “real” branch which can have its own entry in the database:
$ alembic revision -m "more account changes" --head=55af2cb@head
Generating /path/to/foo/versions/34e094ad6ef1_more_account_changes.py ... done
$ alembic upgrade heads
INFO [alembic.migration] Running upgrade 55af2cb1c267 -> 34e094ad6ef1, more account changes
$ alembic current
2a95102259be (head)
27c6a30d7c24 (head)
34e094ad6ef1 (head)
For posterity, the revision tree now looks like:
$ alembic history
29f859a13ea (55af2cb1c267) -> 2a95102259be (networking) (head), add ip account table
109ec7d132bf -> 29f859a13ea (networking), add DNS table
3cac04ae8714 -> 109ec7d132bf (networking), add ip number table
<base> -> 3cac04ae8714 (networking), create networking branch
1975ea83b712 -> 27c6a30d7c24 (shoppingcart) (head), add shopping cart table
55af2cb1c267 -> 34e094ad6ef1 (head), more account changes
ae1027a6acf -> 55af2cb1c267, add another account column
1975ea83b712 -> ae1027a6acf, add a column
<base> -> 1975ea83b712 (branchpoint), create account table
             --- 27c6 --> d747 --> <head>
            /    (shoppingcart)
<base> --> 1975 -->
            \
             --- ae10 --> 55af --> 34e0 --> <head>
                            ^
                            +-----------------+ (dependency)
                                              |
                                              |
<base> --> 3cac ------> 109e ----> 29f8 ----> 2a95 --> <head>
           (networking)
If there’s any point to be made here, it’s that if you branch, merge, and label too freely, things can get pretty crazy! Hence the branching system should be used carefully and thoughtfully for best results.
Operation Reference¶
This file provides documentation on Alembic migration directives.
The directives here are used within user-defined migration files,
within the upgrade()
and downgrade()
functions, as well as
any functions further invoked by those.
All directives exist as methods on a class called Operations
.
When migration scripts are run, this object is made available
to the script via the alembic.op
datamember, which is
a proxy to an actual instance of Operations
.
Currently, alembic.op
is a real Python module, populated
with individual proxies for each method on Operations
,
so symbols can be imported safely from the alembic.op
namespace.
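For example, both of these import styles work; the second relies on the proxies just described, though importing the op module itself is the usual convention:
from alembic import op
from alembic.op import add_column, drop_column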
The Operations
system is also fully extensible. See
Operation Plugins for details on this.
A key design philosophy of the Operation Directives methods is that, to the greatest degree possible, they internally generate the appropriate SQLAlchemy metadata, typically involving Table and Constraint objects. This is so that migration instructions can be given in terms of just the string names and/or flags involved.
The exceptions to this
rule include the add_column()
and create_table()
directives, which require full Column
objects, though the table metadata is still generated here.
The functions here all require that a MigrationContext
has been
configured within the env.py
script first, which is typically
via EnvironmentContext.configure()
. Under normal
circumstances they are called from an actual migration script, which
itself would be invoked by the EnvironmentContext.run_migrations()
method.
-
class
alembic.operations.
Operations
(migration_context, impl=None)¶ Define high level migration operations.
Each operation corresponds to some schema migration operation, executed against a particular
MigrationContext
which in turn represents connectivity to a database, or a file output stream. While Operations is normally configured as part of the EnvironmentContext.run_migrations() method called from an env.py script, a standalone Operations instance can be made for use cases external to regular Alembic migrations by passing in a MigrationContext:

from alembic.migration import MigrationContext
from alembic.operations import Operations

conn = myengine.connect()
ctx = MigrationContext.configure(conn)
op = Operations(ctx)

op.alter_column("t", "c", nullable=True)
Note that as of 0.8, most of the methods on this class are produced dynamically using the
Operations.register_operation()
method.Construct a new
Operations
Parameters: migration_context – a MigrationContext
instance.-
add_column
(table_name, column, schema=None)¶ Issue an “add column” instruction using the current migration context.
e.g.:
from alembic import op
from sqlalchemy import Column, String

op.add_column('organization',
    Column('name', String())
)
The provided Column object can also specify a ForeignKey, referencing a remote table name. Alembic will automatically generate a stub “referenced” table and emit a second ALTER statement in order to add the constraint separately:

from alembic import op
from sqlalchemy import Column, INTEGER, ForeignKey

op.add_column('organization',
    Column('account_id', INTEGER, ForeignKey('accounts.id'))
)
Note that this statement uses the Column construct as is from the SQLAlchemy library. In particular, default values to be created on the database side are specified using the server_default parameter, and not default, which only specifies Python-side defaults:

from alembic import op
from sqlalchemy import Column, TIMESTAMP, func

# specify "DEFAULT NOW" along with the column add
op.add_column('account',
    Column('timestamp', TIMESTAMP, server_default=func.now())
)
Parameters: - table_name – String name of the parent table.
- column – a
sqlalchemy.schema.Column
object representing the new column. - schema –
Optional schema name to operate within. To control quoting of the schema outside of the default behavior, use the SQLAlchemy construct
quoted_name
.New in version 0.7.0: ‘schema’ can now accept a
quoted_name
construct.
-
alter_column
(table_name, column_name, nullable=None, server_default=False, new_column_name=None, type_=None, existing_type=None, existing_server_default=False, existing_nullable=None, schema=None, **kw)¶ Issue an “alter column” instruction using the current migration context.
Generally, only that aspect of the column which is being changed, i.e. name, type, nullability, default, needs to be specified. Multiple changes can also be specified at once and the backend should “do the right thing”, emitting each change either separately or together as the backend allows.
MySQL has special requirements here, since MySQL cannot ALTER a column without a full specification. When producing MySQL-compatible migration files, it is recommended that the
existing_type
,existing_server_default
, andexisting_nullable
parameters be present, if not being altered.Type changes which are against the SQLAlchemy “schema” types
Boolean
andEnum
may also add or drop constraints which accompany those types on backends that don’t support them natively. Theexisting_server_default
argument is used in this case as well to remove a previous constraint.Parameters: - table_name – string name of the target table.
- column_name – string name of the target column, as it exists before the operation begins.
- nullable – Optional; specify
True
orFalse
to alter the column’s nullability. - server_default – Optional; specify a string
SQL expression,
text()
, orDefaultClause
to indicate an alteration to the column’s default value. Set toNone
to have the default removed. - new_column_name – Optional; specify a string name here to indicate the new name within a column rename operation.
- type_ – Optional; a
TypeEngine
type object to specify a change to the column’s type. For SQLAlchemy types that also indicate a constraint (i.e.Boolean
,Enum
), the constraint is also generated. - autoincrement – set the
AUTO_INCREMENT
flag of the column; currently understood by the MySQL dialect. - existing_type – Optional; a
TypeEngine
type object to specify the previous type. This is required for all MySQL column alter operations that don’t otherwise specify a new type, as well as for when nullability is being changed on a SQL Server column. It is also used if the type is a so-called SQLAlchemy “schema” type which may define a constraint (i.e.Boolean
,Enum
), so that the constraint can be dropped. - existing_server_default – Optional; The existing default value of the column. Required on MySQL if an existing default is not being changed; else MySQL removes the default.
- existing_nullable – Optional; the existing nullability of the column. Required on MySQL if the existing nullability is not being changed; else MySQL sets this to NULL.
- existing_autoincrement – Optional; the existing autoincrement
of the column. Used for MySQL’s system of altering a column
that specifies
AUTO_INCREMENT
. - schema –
Optional schema name to operate within. To control quoting of the schema outside of the default behavior, use the SQLAlchemy construct
quoted_name
.New in version 0.7.0: ‘schema’ can now accept a
quoted_name
construct.
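As a brief example of the above in use (a hedged sketch; the account table and name column are hypothetical), a MySQL-compatible alteration restates the existing type while changing nullability:
from alembic import op
from sqlalchemy import String

# change nullability only; existing_type is restated so that
# MySQL receives the full column specification it requires
op.alter_column(
    'account', 'name',
    existing_type=String(50),
    nullable=False
)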
-
batch_alter_table
(*args, **kwds)¶ Invoke a series of per-table migrations in batch.
Batch mode allows a series of operations specific to a table to be syntactically grouped together, and allows for alternate modes of table migration, in particular the “recreate” style of migration required by SQLite.
“recreate” style is as follows:
- A new table is created with the new specification, based on the migration directives within the batch, using a temporary name.
- the data is copied from the existing table to the new table.
- the existing table is dropped.
- the new table is renamed to the existing table name.
The directive by default will only use “recreate” style on the SQLite backend, and only if directives are present which require this form, e.g. anything other than
add_column()
. The batch operation on other backends will proceed using standard ALTER TABLE operations.The method is used as a context manager, which returns an instance of
BatchOperations
; this object is the same asOperations
except that table names and schema names are omitted. E.g.:with op.batch_alter_table("some_table") as batch_op: batch_op.add_column(Column('foo', Integer)) batch_op.drop_column('bar')
The operations within the context manager are invoked at once when the context is ended. When run against SQLite, if the migrations include operations not supported by SQLite’s ALTER TABLE, the entire table will be copied to a new one with the new specification, moving all data across as well.
The copy operation by default uses reflection to retrieve the current structure of the table, and therefore
batch_alter_table()
in this mode requires that the migration is run in “online” mode. Thecopy_from
parameter may be passed which refers to an existingTable
object, which will bypass this reflection step.Note
The table copy operation will currently not copy CHECK constraints, and may not copy UNIQUE constraints that are unnamed, as is possible on SQLite. See the section Dealing with Constraints for workarounds.
Parameters: - table_name – name of table
- schema – optional schema name.
- recreate – under what circumstances the table should be
recreated. At its default of
"auto"
, the SQLite dialect will recreate the table if any operations other thanadd_column()
,create_index()
, ordrop_index()
are present. Other options include"always"
and"never"
. - copy_from –
optional
Table
object that will act as the structure of the table being copied. If omitted, table reflection is used to retrieve the structure of the table.New in version 0.7.6: Fully implemented the
copy_from
parameter. - reflect_args –
a sequence of additional positional arguments that will be applied to the table structure being reflected / copied; this may be used to pass column and constraint overrides to the table that will be reflected, in lieu of passing the whole
Table
usingcopy_from
.New in version 0.7.1.
- reflect_kwargs –
a dictionary of additional keyword arguments that will be applied to the table structure being copied; this may be used to pass additional table and reflection options to the table that will be reflected, in lieu of passing the whole
Table
usingcopy_from
.New in version 0.7.1.
- table_args – a sequence of additional positional arguments that
will be applied to the new
Table
when created, in addition to those copied from the source table. This may be used to provide additional constraints such as CHECK constraints that may not be reflected. - table_kwargs – a dictionary of additional keyword arguments
that will be applied to the new
Table
when created, in addition to those copied from the source table. This may be used to provide for additional table options that may not be reflected.
New in version 0.7.0.
Parameters: naming_convention – a naming convention dictionary of the form described at Integration of Naming Conventions into Operations, Autogenerate which will be applied to the
MetaData
during the reflection process. This is typically required if one wants to drop SQLite constraints, as these constraints will not have names when reflected on this backend. Requires SQLAlchemy 0.9.4 or greater.New in version 0.7.1.
Note
batch mode requires SQLAlchemy 0.8 or above.
-
bulk_insert
(table, rows, multiinsert=True)¶ Issue a “bulk insert” operation using the current migration context.
This provides a means of representing an INSERT of multiple rows which works equally well in the context of executing on a live connection as well as that of generating a SQL script. In the case of a SQL script, the values are rendered inline into the statement.
e.g.:
from alembic import op
from datetime import date
from sqlalchemy.sql import table, column
from sqlalchemy import String, Integer, Date

# Create an ad-hoc table to use for the insert statement.
accounts_table = table('account',
    column('id', Integer),
    column('name', String),
    column('create_date', Date)
)

op.bulk_insert(accounts_table,
    [
        {'id': 1, 'name': 'John Smith', 'create_date': date(2010, 10, 5)},
        {'id': 2, 'name': 'Ed Williams', 'create_date': date(2007, 5, 27)},
        {'id': 3, 'name': 'Wendy Jones', 'create_date': date(2008, 8, 15)},
    ]
)
When using --sql mode, some datatypes may not render inline automatically, such as dates and other special types. When this issue is present, Operations.inline_literal() may be used:

op.bulk_insert(accounts_table,
    [
        {'id': 1, 'name': 'John Smith',
         'create_date': op.inline_literal("2010-10-05")},
        {'id': 2, 'name': 'Ed Williams',
         'create_date': op.inline_literal("2007-05-27")},
        {'id': 3, 'name': 'Wendy Jones',
         'create_date': op.inline_literal("2008-08-15")},
    ],
    multiinsert=False
)
When using Operations.inline_literal() in conjunction with Operations.bulk_insert(), in order for the statement to work in “online” (e.g. non --sql) mode, the multiinsert flag should be set to False, which will have the effect of individual INSERT statements being emitted to the database, each with a distinct VALUES clause, so that the “inline” values can still be rendered, rather than attempting to pass the values as bound parameters.
Operations.inline_literal()
can now be used withOperations.bulk_insert()
, and themultiinsert
flag has been added to assist in this usage when running in “online” mode.Parameters: - table – a table object which represents the target of the INSERT.
- rows – a list of dictionaries indicating rows.
- multiinsert –
when at its default of True and --sql mode is not enabled, the INSERT statement will be executed using “executemany()” style, where all elements in the list of dictionaries are passed as bound parameters in a single list. Setting this to False results in individual INSERT statements being emitted per parameter set, and is needed in those cases where non-literal values are present in the parameter sets.
New in version 0.6.4.
-
create_check_constraint
(constraint_name, table_name, condition, schema=None, **kw)¶ Issue a “create check constraint” instruction using the current migration context.
e.g.:
from alembic import op
from sqlalchemy.sql import column, func

op.create_check_constraint(
    "ck_user_name_len",
    "user",
    func.len(column('name')) > 5
)
CHECK constraints are usually against a SQL expression, so ad-hoc table metadata is usually needed. The function will convert the given arguments into a
sqlalchemy.schema.CheckConstraint
bound to an anonymous table in order to emit the CREATE statement.Parameters: - name – Name of the check constraint. The name is necessary
so that an ALTER statement can be emitted. For setups that
use an automated naming scheme such as that described at
Configuring Constraint Naming Conventions,
name
here can beNone
, as the event listener will apply the name to the constraint object when it is associated with the table. - table_name – String name of the source table.
- condition – SQL expression that’s the condition of the constraint. Can be a string or SQLAlchemy expression language structure.
- deferrable – optional bool. If set, emit DEFERRABLE or NOT DEFERRABLE when issuing DDL for this constraint.
- initially – optional string. If set, emit INITIALLY <value> when issuing DDL for this constraint.
- schema –
Optional schema name to operate within. To control quoting of the schema outside of the default behavior, use the SQLAlchemy construct
quoted_name
.New in version 0.7.0: ‘schema’ can now accept a
quoted_name
construct.
Changed in version 0.8.0: The following positional argument names have been changed:
- name -> constraint_name
- source -> table_name
-
create_foreign_key
(constraint_name, source_table, referent_table, local_cols, remote_cols, onupdate=None, ondelete=None, deferrable=None, initially=None, match=None, source_schema=None, referent_schema=None, **dialect_kw)¶ Issue a “create foreign key” instruction using the current migration context.
e.g.:
from alembic import op

op.create_foreign_key(
    "fk_user_address", "address",
    "user", ["user_id"], ["id"]
)
This internally generates a
Table
object containing the necessary columns, then generates a newForeignKeyConstraint
object which it then associates with theTable
. Any event listeners associated with this action will be fired off normally. TheAddConstraint
construct is ultimately used to generate the ALTER statement.Parameters: - name – Name of the foreign key constraint. The name is necessary
so that an ALTER statement can be emitted. For setups that
use an automated naming scheme such as that described at
Configuring Constraint Naming Conventions,
name
here can beNone
, as the event listener will apply the name to the constraint object when it is associated with the table. - source_table – String name of the source table.
- referent_table – String name of the destination table.
- local_cols – a list of string column names in the source table.
- remote_cols – a list of string column names in the remote table.
- onupdate – Optional string. If set, emit ON UPDATE <value> when issuing DDL for this constraint. Typical values include CASCADE, DELETE and RESTRICT.
- ondelete – Optional string. If set, emit ON DELETE <value> when issuing DDL for this constraint. Typical values include CASCADE, DELETE and RESTRICT.
- deferrable – optional bool. If set, emit DEFERRABLE or NOT DEFERRABLE when issuing DDL for this constraint.
- source_schema – Optional schema name of the source table.
- referent_schema – Optional schema name of the destination table.
Changed in version 0.8.0: The following positional argument names have been changed:
- name -> constraint_name
- source -> source_table
- referent -> referent_table
-
create_index
(index_name, table_name, columns, schema=None, unique=False, **kw)¶ Issue a “create index” instruction using the current migration context.
e.g.:
from alembic import op
op.create_index('ik_test', 't1', ['foo', 'bar'])
Functional indexes can be produced by using the sqlalchemy.sql.expression.text() construct:

from alembic import op
from sqlalchemy import text
op.create_index('ik_test', 't1', [text('lower(foo)')])
New in version 0.6.7: support for making use of the
text()
construct in conjunction withOperations.create_index()
in order to produce functional expressions within CREATE INDEX.Parameters: - index_name – name of the index.
- table_name – name of the owning table.
- columns – a list consisting of string column names and/or
text()
constructs. - schema –
Optional schema name to operate within. To control quoting of the schema outside of the default behavior, use the SQLAlchemy construct
quoted_name
.New in version 0.7.0: ‘schema’ can now accept a
quoted_name
construct. - unique – If True, create a unique index.
- quote – Force quoting of this column’s name on or off, corresponding
to
True
orFalse
. When left at its default ofNone
, the column identifier will be quoted according to whether the name is case sensitive (identifiers with at least one upper case character are treated as case sensitive), or if it’s a reserved word. This flag is only needed to force quoting of a reserved word which is not known by the SQLAlchemy dialect. - **kw – Additional keyword arguments not mentioned above are
dialect specific, and passed in the form
<dialectname>_<argname>
. See the documentation regarding an individual dialect at Dialects for detail on documented arguments.
Changed in version 0.8.0: The following positional argument names have been changed:
- name -> index_name
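For instance, dialect-specific keyword arguments can select an index method on backends that support one. A minimal sketch, assuming a PostgreSQL target and a hypothetical table (postgresql_using is the SQLAlchemy dialect keyword that renders CREATE INDEX ... USING):
from alembic import op

# hypothetical: a GIN index on a PostgreSQL JSONB/array column
op.create_index(
    'ix_account_data', 'account', ['data'],
    postgresql_using='gin')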
-
create_primary_key
(constraint_name, table_name, columns, schema=None)¶ Issue a “create primary key” instruction using the current migration context.
e.g.:
from alembic import op

op.create_primary_key(
    "pk_my_table", "my_table",
    ["id", "version"])
This internally generates a
Table
object containing the necessary columns, then generates a newPrimaryKeyConstraint
object which it then associates with theTable
. Any event listeners associated with this action will be fired off normally. TheAddConstraint
construct is ultimately used to generate the ALTER statement.Parameters: - constraint_name – Name of the primary key constraint. The name is necessary
so that an ALTER statement can be emitted. For setups that
use an automated naming scheme such as that described at
Configuring Constraint Naming Conventions
name
here can beNone
, as the event listener will apply the name to the constraint object when it is associated with the table. - table_name – String name of the target table.
- columns – a list of string column names to be applied to the primary key constraint.
- schema –
Optional schema name to operate within. To control quoting of the schema outside of the default behavior, use the SQLAlchemy construct
quoted_name
.New in version 0.7.0: ‘schema’ can now accept a
quoted_name
construct.
Changed in version 0.8.0: The following positional argument names have been changed:
- name -> constraint_name
- cols -> columns
-
create_table
(table_name, *columns, **kw)¶ Issue a “create table” instruction using the current migration context.
This directive receives an argument list similar to that of the traditional
sqlalchemy.schema.Table
construct, but without the metadata:
from sqlalchemy import (
    INTEGER, VARCHAR, NVARCHAR, TIMESTAMP, Column, func)
from alembic import op

op.create_table(
    'account',
    Column('id', INTEGER, primary_key=True),
    Column('name', VARCHAR(50), nullable=False),
    Column('description', NVARCHAR(200)),
    Column('timestamp', TIMESTAMP, server_default=func.now())
)
Note that
create_table()
acceptsColumn
constructs directly from the SQLAlchemy library. In particular, default values to be created on the database side are specified using theserver_default
parameter, and notdefault
which only specifies Python-side defaults:
from alembic import op
from sqlalchemy import Column, INTEGER, TIMESTAMP, func

# specify "DEFAULT NOW" along with the "timestamp" column
op.create_table('account',
    Column('id', INTEGER, primary_key=True),
    Column('timestamp', TIMESTAMP, server_default=func.now())
)
The function also returns a newly created
Table
object, corresponding to the table specification given, which is suitable for immediate SQL operations, in particularOperations.bulk_insert()
:
from sqlalchemy import (
    INTEGER, VARCHAR, NVARCHAR, TIMESTAMP, Column, func)
from alembic import op

account_table = op.create_table(
    'account',
    Column('id', INTEGER, primary_key=True),
    Column('name', VARCHAR(50), nullable=False),
    Column('description', NVARCHAR(200)),
    Column('timestamp', TIMESTAMP, server_default=func.now())
)

op.bulk_insert(
    account_table,
    [
        {"name": "A1", "description": "account 1"},
        {"name": "A2", "description": "account 2"},
    ]
)
New in version 0.7.0.
Parameters: - table_name – Name of the table
- *columns – collection of
Column
objects within the table, as well as optionalConstraint
objects andIndex
objects. - schema –
Optional schema name to operate within. To control quoting of the schema outside of the default behavior, use the SQLAlchemy construct
quoted_name
.New in version 0.7.0: ‘schema’ can now accept a
quoted_name
construct. - **kw – Other keyword arguments are passed to the underlying
sqlalchemy.schema.Table
object created for the command.
Returns: the
Table
object corresponding to the parameters given.New in version 0.7.0: - the
Table
object is returned.Changed in version 0.8.0: The following positional argument names have been changed:
- name -> table_name
-
create_unique_constraint
(constraint_name, table_name, columns, schema=None, **kw)¶ Issue a “create unique constraint” instruction using the current migration context.
e.g.:
from alembic import op

op.create_unique_constraint("uq_user_name", "user", ["name"])
This internally generates a
Table
object containing the necessary columns, then generates a newUniqueConstraint
object which it then associates with theTable
. Any event listeners associated with this action will be fired off normally. TheAddConstraint
construct is ultimately used to generate the ALTER statement.Parameters: - constraint_name – Name of the unique constraint. The name is necessary
so that an ALTER statement can be emitted. For setups that
use an automated naming scheme such as that described at
Configuring Constraint Naming Conventions,
name
here can beNone
, as the event listener will apply the name to the constraint object when it is associated with the table. - table_name – String name of the source table.
- columns – a list of string column names in the source table.
- deferrable – optional bool. If set, emit DEFERRABLE or NOT DEFERRABLE when issuing DDL for this constraint.
- initially – optional string. If set, emit INITIALLY <value> when issuing DDL for this constraint.
- schema –
Optional schema name to operate within. To control quoting of the schema outside of the default behavior, use the SQLAlchemy construct
quoted_name
.New in version 0.7.0: ‘schema’ can now accept a
quoted_name
construct.
Changed in version 0.8.0: The following positional argument names have been changed:
- name -> constraint_name
- source -> table_name
- local_cols -> columns
-
drop_column
(table_name, column_name, schema=None, **kw)¶ Issue a “drop column” instruction using the current migration context.
e.g.:
op.drop_column('organization', 'account_id')
Parameters: - table_name – name of table
- column_name – name of column
- schema –
Optional schema name to operate within. To control quoting of the schema outside of the default behavior, use the SQLAlchemy construct
quoted_name
.New in version 0.7.0: ‘schema’ can now accept a
quoted_name
construct. - mssql_drop_check – Optional boolean. When
True
, on Microsoft SQL Server only, first drop the CHECK constraint on the column using a SQL-script-compatible block that selects into a @variable from sys.check_constraints, then exec’s a separate DROP CONSTRAINT for that constraint. - mssql_drop_default – Optional boolean. When
True
, on Microsoft SQL Server only, first drop the DEFAULT constraint on the column using a SQL-script-compatible block that selects into a @variable from sys.default_constraints, then exec’s a separate DROP CONSTRAINT for that default. - mssql_drop_foreign_key –
Optional boolean. When
True
, on Microsoft SQL Server only, first drop a single FOREIGN KEY constraint on the column using a SQL-script-compatible block that selects into a @variable from sys.foreign_keys/sys.foreign_key_columns, then exec’s a separate DROP CONSTRAINT for that constraint. Only works if the column has exactly one FK constraint which refers to it, at the moment.New in version 0.6.2.
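For example, on SQL Server a column carrying a DEFAULT constraint can't be dropped until that constraint is removed first. A minimal sketch with hypothetical names, using the option described above:
from alembic import op

# drop the DEFAULT constraint on SQL Server, then the column itself
op.drop_column('account', 'created_at', mssql_drop_default=True)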
-
drop_constraint
(constraint_name, table_name, type_=None, schema=None)¶ Drop a constraint of the given name, typically via DROP CONSTRAINT.
Parameters: - constraint_name – name of the constraint.
- table_name – table name.
- type_ – optional, but required on MySQL; can be 'foreignkey', 'primary', 'unique', or 'check'.
- schema –
Optional schema name to operate within. To control quoting of the schema outside of the default behavior, use the SQLAlchemy construct
quoted_name
.New in version 0.7.0: ‘schema’ can now accept a
quoted_name
construct.
Changed in version 0.8.0: The following positional argument names have been changed:
- name -> constraint_name
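As MySQL cannot determine the constraint type from its name alone, the type_ argument is passed there explicitly. A minimal sketch with hypothetical names:
from alembic import op

# on MySQL, type_ selects the proper DROP syntax for the constraint
op.drop_constraint('fk_user_address', 'address', type_='foreignkey')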
-
drop_index
(index_name, table_name=None, schema=None)¶ Issue a “drop index” instruction using the current migration context.
e.g.:
op.drop_index("accounts")
Parameters: - index_name – name of the index.
- table_name – name of the owning table. Some backends such as Microsoft SQL Server require this.
- schema –
Optional schema name to operate within. To control quoting of the schema outside of the default behavior, use the SQLAlchemy construct
quoted_name
.New in version 0.7.0: ‘schema’ can now accept a
quoted_name
construct.
Changed in version 0.8.0: The following positional argument names have been changed:
- name -> index_name
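On a backend such as SQL Server that requires the owning table, the call would look like the following minimal sketch, with hypothetical names:
from alembic import op

op.drop_index('ik_test', table_name='t1')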
-
drop_table
(table_name, schema=None, **kw)¶ Issue a “drop table” instruction using the current migration context.
e.g.:
op.drop_table("accounts")
Parameters: - table_name – Name of the table
- schema –
Optional schema name to operate within. To control quoting of the schema outside of the default behavior, use the SQLAlchemy construct
quoted_name
.New in version 0.7.0: ‘schema’ can now accept a
quoted_name
construct. - **kw – Other keyword arguments are passed to the underlying
sqlalchemy.schema.Table
object created for the command.
Changed in version 0.8.0: The following positional argument names have been changed:
- name -> table_name
-
execute
(sqltext, execution_options=None)¶ Execute the given SQL using the current migration context.
In a SQL script context, the statement is emitted directly to the output stream. There is no return result, however, as this function is oriented towards generating a change script that can run in “offline” mode. For full interaction with a connected database, use the “bind” available from the context:
from alembic import op

connection = op.get_bind()
Also note that any parameterized statement here will not work in offline mode - INSERT, UPDATE and DELETE statements which refer to literal values would need to render inline expressions. For simple use cases, the
inline_literal()
function can be used for rudimentary quoting of string values. For “bulk” inserts, consider usingbulk_insert()
.For example, to emit an UPDATE statement which is equally compatible with both online and offline mode:
from sqlalchemy.sql import table, column
from sqlalchemy import String
from alembic import op

account = table('account',
    column('name', String)
)
op.execute(
    account.update().\
        where(account.c.name == op.inline_literal('account 1')).\
        values({'name': op.inline_literal('account 2')})
)
Note above we also used the SQLAlchemy
sqlalchemy.sql.expression.table()
andsqlalchemy.sql.expression.column()
constructs to make a brief, ad-hoc table construct just for our UPDATE statement. A fullTable
construct of course works perfectly fine as well, though note it's a recommended practice to at least ensure the definition of a table is self-contained within the migration script, rather than imported from a module that may break compatibility with older migrations.Parameters: sqltext – Any legal SQLAlchemy expression, including: - a string
- a
sqlalchemy.sql.expression.text()
construct. - a
sqlalchemy.sql.expression.insert()
construct. - a
sqlalchemy.sql.expression.update()
or
sqlalchemy.sql.expression.delete()
construct. - Pretty much anything that's "executable" as described in SQL Expression Language Tutorial.
Parameters: execution_options – Optional dictionary of execution options, will be passed to sqlalchemy.engine.Connection.execution_options()
.
-
f
(name)¶ Indicate a string name that has already had a naming convention applied to it.
This feature combines with the SQLAlchemy
naming_convention
feature to disambiguate constraint names that have already had naming conventions applied to them, versus those that have not. This is necessary in the case that the"%(constraint_name)s"
token is used within a naming convention, so that it can be identified that this particular name should remain fixed.If the
Operations.f()
is used on a constraint, the naming convention will not take effect:
op.add_column('t', Column('x', Boolean(name=op.f('ck_bool_t_x'))))
Above, the CHECK constraint generated will have the name
ck_bool_t_x
regardless of whether or not a naming convention is in use.Alternatively, if a naming convention is in use, and ‘f’ is not used, names will be converted along conventions. If the
target_metadata
contains the naming convention{"ck": "ck_bool_%(table_name)s_%(constraint_name)s"}
, then the output of the following:
op.add_column('t', Column('x', Boolean(name='x')))
will be:
CONSTRAINT ck_bool_t_x CHECK (x in (1, 0))
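The naming convention itself lives on the MetaData in use as target_metadata. A minimal sketch of such a setup, assuming SQLAlchemy 0.9.2 or greater per the note below:
from sqlalchemy import MetaData

# naming convention matching the example above
target_metadata = MetaData(naming_convention={
    "ck": "ck_bool_%(table_name)s_%(constraint_name)s"
})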
The function is rendered in the output of autogenerate when a particular constraint name is already converted, for SQLAlchemy version 0.9.4 and greater only. Even though
naming_convention
was introduced in 0.9.2, the string disambiguation service is new as of 0.9.4.New in version 0.6.4.
-
get_bind
()¶ Return the current ‘bind’.
Under normal circumstances, this is the
Connection
currently being used to emit SQL to the database. In an "offline" SQL script context, this value is None, as no live database connection is in use.
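The bind is useful for data migrations that need to read from the database. A minimal sketch, assuming a hypothetical account table:
from alembic import op
import sqlalchemy as sa

connection = op.get_bind()
# iterate over existing rows as part of a data migration
for row in connection.execute(sa.text("select id, name from account")):
    print(row)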
-
get_context
()¶ Return the
MigrationContext
object that’s currently in use.
-
classmethod
implementation_for
(op_cls)¶ Register an implementation for a given
MigrateOperation
.This is part of the operation extensibility API.
See also
Operation Plugins - example of use
-
inline_literal
(value, type_=None)¶ Produce an ‘inline literal’ expression, suitable for using in an INSERT, UPDATE, or DELETE statement.
When using Alembic in “offline” mode, CRUD operations aren’t compatible with SQLAlchemy’s default behavior surrounding literal values, which is that they are converted into bound values and passed separately into the
execute()
method of the DBAPI cursor. An offline SQL script needs to have these rendered inline. While it should always be noted that inline literal values are an enormous security hole in an application that handles untrusted input, a schema migration is not run in this context, so literals are safe to render inline, with the caveat that advanced types like dates may not be supported directly by SQLAlchemy.See
execute()
for an example usage ofinline_literal()
.The environment can also be configured to attempt to render “literal” values inline automatically, for those simple types that are supported by the dialect; see
EnvironmentContext.configure.literal_binds
for this more recently added feature.Parameters: - value – The value to render. Strings, integers, and simple numerics should be supported. Other types like boolean, dates, etc. may or may not be supported yet by various backends.
- type_ – optional - a
sqlalchemy.types.TypeEngine
subclass stating the type of this value. In SQLAlchemy expressions, this is usually derived automatically from the Python type of the value itself, as well as based on the context in which the value is used.
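A minimal sketch of an offline-compatible DELETE using inline_literal(), with a hypothetical ad-hoc table construct as in the execute() example above:
from sqlalchemy.sql import table, column
from sqlalchemy import String
from alembic import op

account = table('account', column('name', String))
op.execute(
    account.delete().where(
        account.c.name == op.inline_literal('account 1'))
)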
-
invoke
(operation)¶ Given a
MigrateOperation
, invoke it in terms of thisOperations
instance.New in version 0.8.0.
-
classmethod
register_operation
(name, sourcename=None)¶ Register a new operation for this class.
This method is normally used to add new operations to the
Operations
class, and possibly theBatchOperations
class as well. All Alembic migration operations are implemented via this system, however the system is also available as a public API to facilitate adding custom operations.New in version 0.8.0.
See also
Operation Plugins - example of use
-
rename_table
(old_table_name, new_table_name, schema=None)¶ Emit an ALTER TABLE to rename a table.
Parameters: - old_table_name – old name.
- new_table_name – new name.
- schema –
Optional schema name to operate within. To control quoting of the schema outside of the default behavior, use the SQLAlchemy construct
quoted_name
.New in version 0.7.0: ‘schema’ can now accept a
quoted_name
construct.
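A minimal sketch with hypothetical table names:
from alembic import op

op.rename_table('account', 'customer_account')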
-
-
class
alembic.operations.
BatchOperations
(migration_context, impl=None)¶ Modifies the interface
Operations
for batch mode.This basically omits the
table_name
andschema
parameters from associated methods, as these are a given when running under batch mode.See also
Note that as of 0.8, most of the methods on this class are produced dynamically using the
Operations.register_operation()
method.Construct a new
Operations
Parameters: migration_context – a MigrationContext
instance.-
add_column
(column)¶ Issue an “add column” instruction using the current batch migration context.
See also
Operations.add_column()
-
alter_column
(column_name, nullable=None, server_default=False, new_column_name=None, type_=None, existing_type=None, existing_server_default=False, existing_nullable=None, **kw)¶ Issue an “alter column” instruction using the current batch migration context.
See also
Operations.alter_column()
-
create_check_constraint
(constraint_name, condition, **kw)¶ Issue a “create check constraint” instruction using the current batch migration context.
The batch form of this call omits the
source
andschema
arguments from the call.See also
Changed in version 0.8.0: The following positional argument names have been changed:
- name -> constraint_name
-
create_foreign_key
(constraint_name, referent_table, local_cols, remote_cols, referent_schema=None, onupdate=None, ondelete=None, deferrable=None, initially=None, match=None, **dialect_kw)¶ Issue a “create foreign key” instruction using the current batch migration context.
The batch form of this call omits the
source
andsource_schema
arguments from the call.e.g.:
with op.batch_alter_table("address") as batch_op:
    batch_op.create_foreign_key(
        "fk_user_address", "user",
        ["user_id"], ["id"])
See also
Operations.create_foreign_key()
Changed in version 0.8.0: The following positional argument names have been changed:
- name -> constraint_name
- referent -> referent_table
-
create_index
(index_name, columns, **kw)¶ Issue a “create index” instruction using the current batch migration context.
See also
Operations.create_index()
-
create_primary_key
(constraint_name, columns)¶ Issue a “create primary key” instruction using the current batch migration context.
The batch form of this call omits the
table_name
andschema
arguments from the call.See also
-
create_unique_constraint
(constraint_name, columns, **kw)¶ Issue a “create unique constraint” instruction using the current batch migration context.
The batch form of this call omits the
source
andschema
arguments from the call.Changed in version 0.8.0: The following positional argument names have been changed:
- name -> constraint_name
-
drop_column
(column_name)¶ Issue a “drop column” instruction using the current batch migration context.
See also
Operations.drop_column()
-
drop_constraint
(constraint_name, type_=None)¶ Issue a “drop constraint” instruction using the current batch migration context.
The batch form of this call omits the
table_name
andschema
arguments from the call.See also
Changed in version 0.8.0: The following positional argument names have been changed:
- name -> constraint_name
-
drop_index
(index_name, **kw)¶ Issue a “drop index” instruction using the current batch migration context.
See also
Operations.drop_index()
Changed in version 0.8.0: The following positional argument names have been changed:
- name -> index_name
-
-
class
alembic.operations.
MigrateOperation
¶ Base class for migration command and organization objects.
This system is part of the operation extensibility API.
New in version 0.8.0.
-
info
¶ A dictionary that may be used to store arbitrary information along with this
MigrateOperation
object.
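A minimal sketch of stashing custom state on an operation object, using a hypothetical MigrateOperation subclass:
from alembic.operations import MigrateOperation

class MyCustomOp(MigrateOperation):
    pass

op_obj = MyCustomOp()
# arbitrary application state travels with the operation
op_obj.info['note'] = 'set by a custom env.py hook'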
-
Cookbook¶
A collection of “How-Tos”, highlighting various ways to extend Alembic.
Note
This is a new section where we hope to start cataloguing various “how-tos” we come up with based on user requests. It is often the case that users will request a feature only to learn that simple customization can provide the same thing. Several recipes are collected below, and we hope to add more over time.
Building an Up to Date Database from Scratch¶
There’s a theory of database migrations that says that the revisions in existence for a database should be
able to go from an entirely blank schema to the finished product, and back again. Alembic can work
this way, though we think it's overkill, considering that SQLAlchemy itself can emit
the full CREATE statements for any given model using create_all()
. If you check out
a copy of an application, running this will give you the entire database in one shot, without the need
to run through all those migration files, which are instead tailored towards applying incremental
changes to an existing database.
Alembic can integrate with a create_all()
script quite easily. After running the
create operation, tell Alembic to create a new version table, and to stamp it with the most recent
revision (i.e. head
):
# inside of a "create the database" script, first create
# tables:
my_metadata.create_all(engine)
# then, load the Alembic configuration and generate the
# version table, "stamping" it with the most recent rev:
from alembic.config import Config
from alembic import command
alembic_cfg = Config("/path/to/yourapp/alembic.ini")
command.stamp(alembic_cfg, "head")
When this approach is used, the application can generate the database using normal SQLAlchemy techniques instead of iterating through hundreds of migration scripts. Now, the purpose of the migration scripts is relegated just to movement between versions on out-of-date databases, not new databases. You can now remove old migration files that are no longer represented on any existing environments.
To prune old migration files, simply delete the files. Then, in the earliest, still-remaining
migration file, set down_revision
to None
:
# replace this:
#down_revision = '290696571ad2'
# with this:
down_revision = None
That file now becomes the “base” of the migration series.
Conditional Migration Elements¶
This example features the basic idea of a common need, that of affecting how a migration runs based on command line switches.
The technique to use here is simple; within a migration script, inspect
the EnvironmentContext.get_x_argument()
collection for any additional,
user-defined parameters. Then take action based on the presence of those
arguments.
To make it such that the logic to inspect these flags is easy to use and
modify, we modify our script.py.mako
template to make this feature
available in all new revision files:
"""${message}
Revision ID: ${up_revision}
Revises: ${down_revision}
Create Date: ${create_date}
"""
# revision identifiers, used by Alembic.
revision = ${repr(up_revision)}
down_revision = ${repr(down_revision)}
from alembic import op
import sqlalchemy as sa
${imports if imports else ""}
from alembic import context
def upgrade():
schema_upgrades()
if context.get_x_argument(as_dictionary=True).get('data', None):
data_upgrades()
def downgrade():
if context.get_x_argument(as_dictionary=True).get('data', None):
data_downgrades()
schema_downgrades()
def schema_upgrades():
"""schema upgrade migrations go here."""
${upgrades if upgrades else "pass"}
def schema_downgrades():
"""schema downgrade migrations go here."""
${downgrades if downgrades else "pass"}
def data_upgrades():
"""Add any optional data upgrade migrations here!"""
pass
def data_downgrades():
"""Add any optional data downgrade migrations here!"""
pass
Now, when we create a new migration file, the data_upgrades()
and data_downgrades()
placeholders will be available, where we can add optional data migrations:
"""rev one
Revision ID: 3ba2b522d10d
Revises: None
Create Date: 2014-03-04 18:05:36.992867
"""
# revision identifiers, used by Alembic.
revision = '3ba2b522d10d'
down_revision = None
from alembic import op
import sqlalchemy as sa
from sqlalchemy import String, Column
from sqlalchemy.sql import table, column
from alembic import context
def upgrade():
schema_upgrades()
if context.get_x_argument(as_dictionary=True).get('data', None):
data_upgrades()
def downgrade():
if context.get_x_argument(as_dictionary=True).get('data', None):
data_downgrades()
schema_downgrades()
def schema_upgrades():
"""schema upgrade migrations go here."""
op.create_table("my_table", Column('data', String))
def schema_downgrades():
"""schema downgrade migrations go here."""
op.drop_table("my_table")
def data_upgrades():
"""Add any optional data upgrade migrations here!"""
my_table = table('my_table',
column('data', String),
)
op.bulk_insert(my_table,
[
{'data': 'data 1'},
{'data': 'data 2'},
{'data': 'data 3'},
]
)
def data_downgrades():
"""Add any optional data downgrade migrations here!"""
op.execute("delete from my_table")
To invoke our migrations with data included, we use the -x
flag:
alembic -x data=true upgrade head
The EnvironmentContext.get_x_argument()
is an easy way to support
new commandline options within environment and migration scripts.
Sharing a Connection with a Series of Migration Commands and Environments¶
It is often the case that an application will need to call upon a series
of commands within Commands, where it would be advantageous
for all operations to proceed along a single transaction. The connectivity
for a migration is typically solely determined within the env.py
script
of a migration environment, which is called within the scope of a command.
The steps to take here are:
- Produce the
Connection
object to use. - Place it somewhere that
env.py
will be able to access it. This can be either a. a module-level global somewhere, or b. an attribute which we place into theConfig.attributes
dictionary (if we are on an older Alembic version, we may also attach an attribute directly to theConfig
object). - The
env.py
script is modified such that it looks for thisConnection
and makes use of it, in lieu of building up its ownEngine
instance.
We illustrate using Config.attributes
:
from alembic import command, config
cfg = config.Config("/path/to/yourapp/alembic.ini")
with engine.begin() as connection:
cfg.attributes['connection'] = connection
command.upgrade(cfg, "head")
Then in env.py
:
def run_migrations_online():
connectable = config.attributes.get('connection', None)
if connectable is None:
# only create Engine if we don't have a Connection
# from the outside
connectable = engine_from_config(
config.get_section(config.config_ini_section),
prefix='sqlalchemy.',
poolclass=pool.NullPool)
# when connectable is already a Connection object, calling
# connect() gives us a *branched connection*.
with connectable.connect() as connection:
context.configure(
connection=connection,
target_metadata=target_metadata
)
with context.begin_transaction():
context.run_migrations()
Branched Connections
Note that we are calling the connect()
method, even if we are
using a Connection
object to start with.
The effect this has when calling connect()
is that SQLAlchemy passes us a branch of the original connection; it
is in every way the same as the Connection
we started with, except it provides nested scope; the with: block we have here, as well as the close() method of this branched connection, doesn't actually close the outer connection, which stays
active for continued use.
New in version 0.7.5: Added Config.attributes
.
Replaceable Objects¶
This recipe proposes a hypothetical way of dealing with what we might call a replaceable schema object. A replaceable object is a schema object that needs to be created and dropped all at once. Examples of such objects include views, stored procedures, and triggers.
Replaceable objects present a problem in that in order to make incremental changes to them, we have to refer to the whole definition at once. If we need to add a new column to a view, for example, we have to drop it entirely and recreate it fresh with the extra column added, referring to the whole structure; but to make it even tougher, if we wish to support downgrade operations in our migration scripts, we need to refer to the previous version of that construct fully, and we’d much rather not have to type out the whole definition in multiple places.
This recipe proposes that we may refer to the older version of a replaceable construct by directly naming the migration version in which it was created, and having a migration refer to that previous file as migrations run. We will also demonstrate how to integrate this logic within the Operation Plugins feature introduced in Alembic 0.8. It may be very helpful to review this section first to get an overview of this API.
The Replaceable Object Structure¶
We first need to devise a simple format that represents the “CREATE XYZ” /
“DROP XYZ” aspect of what it is we’re building. We will work with an object
that represents a textual definition; while a SQL view is an object that we can define
using a table-metadata-like system,
this is not so much the case for things like stored procedures, where
we pretty much need to have a full string definition written down somewhere.
We’ll use a simple value object called ReplaceableObject
that can
represent any named set of SQL text to send to a “CREATE” statement of
some kind:
class ReplaceableObject(object):
def __init__(self, name, sqltext):
self.name = name
self.sqltext = sqltext
Using this object in a migration script, assuming a Postgresql-style syntax, looks like:
customer_view = ReplaceableObject(
"customer_view",
"SELECT name, order_count FROM customer WHERE order_count > 0"
)
add_customer_sp = ReplaceableObject(
"add_customer_sp(name varchar, order_count integer)",
"""
RETURNS integer AS $$
BEGIN
insert into customer (name, order_count)
VALUES (in_name, in_order_count);
END;
$$ LANGUAGE plpgsql;
"""
)
The ReplaceableObject
class is only one very simplistic way to do this.
The structure of how we represent our schema objects
is not too important for the purposes of this example; we can just
as well put strings inside of tuples or dictionaries, or
define any series of fields and class structures we want.
The only important part is that below we will illustrate how to organize the
code that consumes the structure we create here.
Create Operations for the Target Objects¶
We’ll use the Operations
extension API to make new operations
for create, drop, and replace of views and stored procedures. Using this
API is also optional; we can just as well make any kind of Python
function that we would invoke from our migration scripts.
However, using this API gives us operations
built directly into the Alembic op.*
namespace very nicely.
The most intricate class is below. This is the base of our “replaceable”
operation, which includes not just a base operation for emitting
CREATE and DROP instructions on a ReplaceableObject
, it also assumes
a certain model of “reversibility” which makes use of references to
other migration files in order to refer to the “previous” version
of an object:
from alembic.operations import Operations, MigrateOperation
class ReversibleOp(MigrateOperation):
def __init__(self, target):
self.target = target
@classmethod
def invoke_for_target(cls, operations, target):
op = cls(target)
return operations.invoke(op)
def reverse(self):
raise NotImplementedError()
@classmethod
def _get_object_from_version(cls, operations, ident):
version, objname = ident.split(".")
module = operations.get_context().script.get_revision(version).module
obj = getattr(module, objname)
return obj
@classmethod
def replace(cls, operations, target, replaces=None, replace_with=None):
if replaces:
old_obj = cls._get_object_from_version(operations, replaces)
drop_old = cls(old_obj).reverse()
create_new = cls(target)
elif replace_with:
old_obj = cls._get_object_from_version(operations, replace_with)
drop_old = cls(target).reverse()
create_new = cls(old_obj)
else:
raise TypeError("replaces or replace_with is required")
operations.invoke(drop_old)
operations.invoke(create_new)
The workings of this class should become clear as we walk through the
example. To create usable operations from this base, we will build
a series of stub classes and use Operations.register_operation()
to make them part of the op.*
namespace:
@Operations.register_operation("create_view", "invoke_for_target")
@Operations.register_operation("replace_view", "replace")
class CreateViewOp(ReversibleOp):
def reverse(self):
return DropViewOp(self.target)
@Operations.register_operation("drop_view", "invoke_for_target")
class DropViewOp(ReversibleOp):
def reverse(self):
return CreateViewOp(self.target)
@Operations.register_operation("create_sp", "invoke_for_target")
@Operations.register_operation("replace_sp", "replace")
class CreateSPOp(ReversibleOp):
def reverse(self):
return DropSPOp(self.target)
@Operations.register_operation("drop_sp", "invoke_for_target")
class DropSPOp(ReversibleOp):
def reverse(self):
return CreateSPOp(self.target)
To actually run the SQL like “CREATE VIEW” and “DROP VIEW”, we’ll provide
implementations using Operations.implementation_for()
that run straight into Operations.execute()
:
@Operations.implementation_for(CreateViewOp)
def create_view(operations, operation):
operations.execute("CREATE VIEW %s AS %s" % (
operation.target.name,
operation.target.sqltext
))
@Operations.implementation_for(DropViewOp)
def drop_view(operations, operation):
operations.execute("DROP VIEW %s" % operation.target.name)
@Operations.implementation_for(CreateSPOp)
def create_sp(operations, operation):
operations.execute(
"CREATE FUNCTION %s %s" % (
operation.target.name, operation.target.sqltext
)
)
@Operations.implementation_for(DropSPOp)
def drop_sp(operations, operation):
operations.execute("DROP FUNCTION %s" % operation.target.name)
All of the above code can be present anywhere within an application’s
source tree; the only requirement is that when the env.py
script is
invoked, it includes imports that ultimately call upon these classes
as well as the Operations.register_operation()
and
Operations.implementation_for()
sequences.
Create Initial Migrations¶
We can now illustrate how these objects look during use. For the first step, we’ll create a new migration to create a “customer” table:
$ alembic revision -m "create table"
We build the first revision as follows:
"""create table
Revision ID: 3ab8b2dfb055
Revises:
Create Date: 2015-07-27 16:22:44.918507
"""
# revision identifiers, used by Alembic.
revision = '3ab8b2dfb055'
down_revision = None
branch_labels = None
depends_on = None
from alembic import op
import sqlalchemy as sa
def upgrade():
op.create_table(
"customer",
sa.Column('id', sa.Integer, primary_key=True),
sa.Column('name', sa.String),
sa.Column('order_count', sa.Integer),
)
def downgrade():
op.drop_table('customer')
For the second migration, we will create a view and a stored procedure which act upon this table:
$ alembic revision -m "create views/sp"
This migration will use the new directives:
"""create views/sp
Revision ID: 28af9800143f
Revises: 3ab8b2dfb055
Create Date: 2015-07-27 16:24:03.589867
"""
# revision identifiers, used by Alembic.
revision = '28af9800143f'
down_revision = '3ab8b2dfb055'
branch_labels = None
depends_on = None
from alembic import op
import sqlalchemy as sa
from foo import ReplaceableObject
customer_view = ReplaceableObject(
"customer_view",
"SELECT name, order_count FROM customer WHERE order_count > 0"
)
add_customer_sp = ReplaceableObject(
"add_customer_sp(name varchar, order_count integer)",
"""
RETURNS integer AS $$
BEGIN
insert into customer (name, order_count)
VALUES (in_name, in_order_count);
END;
$$ LANGUAGE plpgsql;
"""
)
def upgrade():
op.create_view(customer_view)
op.create_sp(add_customer_sp)
def downgrade():
op.drop_view(customer_view)
op.drop_sp(add_customer_sp)
We see the use of our new create_view()
, create_sp()
,
drop_view()
, and drop_sp()
directives. Running these to “head”
we get the following (this includes an edited view of SQL emitted):
$ alembic upgrade 28af9800143
INFO [alembic.runtime.migration] Context impl PostgresqlImpl.
INFO [alembic.runtime.migration] Will assume transactional DDL.
INFO [sqlalchemy.engine.base.Engine] BEGIN (implicit)
INFO [sqlalchemy.engine.base.Engine] select relname from pg_class c join pg_namespace n on n.oid=c.relnamespace where pg_catalog.pg_table_is_visible(c.oid) and relname=%(name)s
INFO [sqlalchemy.engine.base.Engine] {'name': u'alembic_version'}
INFO [sqlalchemy.engine.base.Engine] SELECT alembic_version.version_num
FROM alembic_version
INFO [sqlalchemy.engine.base.Engine] {}
INFO [sqlalchemy.engine.base.Engine] select relname from pg_class c join pg_namespace n on n.oid=c.relnamespace where pg_catalog.pg_table_is_visible(c.oid) and relname=%(name)s
INFO [sqlalchemy.engine.base.Engine] {'name': u'alembic_version'}
INFO [alembic.runtime.migration] Running upgrade -> 3ab8b2dfb055, create table
INFO [sqlalchemy.engine.base.Engine]
CREATE TABLE customer (
id SERIAL NOT NULL,
name VARCHAR,
order_count INTEGER,
PRIMARY KEY (id)
)
INFO [sqlalchemy.engine.base.Engine] {}
INFO [sqlalchemy.engine.base.Engine] INSERT INTO alembic_version (version_num) VALUES ('3ab8b2dfb055')
INFO [sqlalchemy.engine.base.Engine] {}
INFO [alembic.runtime.migration] Running upgrade 3ab8b2dfb055 -> 28af9800143f, create views/sp
INFO [sqlalchemy.engine.base.Engine] CREATE VIEW customer_view AS SELECT name, order_count FROM customer WHERE order_count > 0
INFO [sqlalchemy.engine.base.Engine] {}
INFO [sqlalchemy.engine.base.Engine] CREATE FUNCTION add_customer_sp(in_name varchar, in_order_count integer)
RETURNS integer AS $$
BEGIN
insert into customer (name, order_count)
VALUES (in_name, in_order_count);
END;
$$ LANGUAGE plpgsql;
INFO [sqlalchemy.engine.base.Engine] {}
INFO [sqlalchemy.engine.base.Engine] UPDATE alembic_version SET version_num='28af9800143f' WHERE alembic_version.version_num = '3ab8b2dfb055'
INFO [sqlalchemy.engine.base.Engine] {}
INFO [sqlalchemy.engine.base.Engine] COMMIT
We see that our CREATE TABLE proceeded as well as the CREATE VIEW and CREATE FUNCTION operations produced by our new directives.
Create Revision Migrations¶
Finally, we can illustrate how we would “revise” these objects.
Let’s consider we added a new column email
to our customer
table:
$ alembic revision -m "add email col"
The migration is:
"""add email col
Revision ID: 191a2d20b025
Revises: 28af9800143f
Create Date: 2015-07-27 16:25:59.277326
"""
# revision identifiers, used by Alembic.
revision = '191a2d20b025'
down_revision = '28af9800143f'
branch_labels = None
depends_on = None
from alembic import op
import sqlalchemy as sa
def upgrade():
op.add_column("customer", sa.Column("email", sa.String()))
def downgrade():
op.drop_column("customer", "email")
We now need to recreate the customer_view
view and the
add_customer_sp
function. To include downgrade capability, we will
need to refer to the previous version of the construct; the
replace_view()
and replace_sp()
operations we’ve created make
this possible, by allowing us to refer to a specific, previous revision.
The replaces
and replace_with
arguments accept a dot-separated
string, which refers to a revision number and an object name, such
as "28af9800143f.customer_view"
. The ReversibleOp
class makes use
of the Operations.get_context()
method to locate the version file
we refer to:
$ alembic revision -m "update views/sp"
The migration:
"""update views/sp
Revision ID: 199028bf9856
Revises: 191a2d20b025
Create Date: 2015-07-27 16:26:31.344504
"""
# revision identifiers, used by Alembic.
revision = '199028bf9856'
down_revision = '191a2d20b025'
branch_labels = None
depends_on = None
from alembic import op
import sqlalchemy as sa
from foo import ReplaceableObject
customer_view = ReplaceableObject(
"customer_view",
"SELECT name, order_count, email "
"FROM customer WHERE order_count > 0"
)
add_customer_sp = ReplaceableObject(
"add_customer_sp(name varchar, order_count integer, email varchar)",
"""
RETURNS integer AS $$
BEGIN
insert into customer (name, order_count, email)
VALUES (in_name, in_order_count, in_email);
END;
$$ LANGUAGE plpgsql;
"""
)
def upgrade():
op.replace_view(customer_view, replaces="28af9800143f.customer_view")
op.replace_sp(add_customer_sp, replaces="28af9800143f.add_customer_sp")
def downgrade():
op.replace_view(customer_view, replace_with="28af9800143f.customer_view")
op.replace_sp(add_customer_sp, replace_with="28af9800143f.add_customer_sp")
Above, instead of using create_view()
, create_sp()
,
drop_view()
, and drop_sp()
methods, we now use replace_view()
and
replace_sp()
. The replace operation we’ve built always runs a DROP and
a CREATE. Running an upgrade to head we see:
$ alembic upgrade head
INFO [alembic.runtime.migration] Context impl PostgresqlImpl.
INFO [alembic.runtime.migration] Will assume transactional DDL.
INFO [sqlalchemy.engine.base.Engine] BEGIN (implicit)
INFO [sqlalchemy.engine.base.Engine] select relname from pg_class c join pg_namespace n on n.oid=c.relnamespace where pg_catalog.pg_table_is_visible(c.oid) and relname=%(name)s
INFO [sqlalchemy.engine.base.Engine] {'name': u'alembic_version'}
INFO [sqlalchemy.engine.base.Engine] SELECT alembic_version.version_num
FROM alembic_version
INFO [sqlalchemy.engine.base.Engine] {}
INFO [alembic.runtime.migration] Running upgrade 28af9800143f -> 191a2d20b025, add email col
INFO [sqlalchemy.engine.base.Engine] ALTER TABLE customer ADD COLUMN email VARCHAR
INFO [sqlalchemy.engine.base.Engine] {}
INFO [sqlalchemy.engine.base.Engine] UPDATE alembic_version SET version_num='191a2d20b025' WHERE alembic_version.version_num = '28af9800143f'
INFO [sqlalchemy.engine.base.Engine] {}
INFO [alembic.runtime.migration] Running upgrade 191a2d20b025 -> 199028bf9856, update views/sp
INFO [sqlalchemy.engine.base.Engine] DROP VIEW customer_view
INFO [sqlalchemy.engine.base.Engine] {}
INFO [sqlalchemy.engine.base.Engine] CREATE VIEW customer_view AS SELECT name, order_count, email FROM customer WHERE order_count > 0
INFO [sqlalchemy.engine.base.Engine] {}
INFO [sqlalchemy.engine.base.Engine] DROP FUNCTION add_customer_sp(in_name varchar, in_order_count integer)
INFO [sqlalchemy.engine.base.Engine] {}
INFO [sqlalchemy.engine.base.Engine] CREATE FUNCTION add_customer_sp(in_name varchar, in_order_count integer, in_email varchar)
RETURNS integer AS $$
BEGIN
insert into customer (name, order_count, email)
VALUES (in_name, in_order_count, in_email);
END;
$$ LANGUAGE plpgsql;
INFO [sqlalchemy.engine.base.Engine] {}
INFO [sqlalchemy.engine.base.Engine] UPDATE alembic_version SET version_num='199028bf9856' WHERE alembic_version.version_num = '191a2d20b025'
INFO [sqlalchemy.engine.base.Engine] {}
INFO [sqlalchemy.engine.base.Engine] COMMIT
After adding our new email
column, we see that both customer_view
and add_customer_sp()
are dropped before the new version is created.
If we downgrade back to the old version, we see the old version of these
recreated again within the downgrade for this migration:
$ alembic downgrade 28af9800143
INFO [alembic.runtime.migration] Context impl PostgresqlImpl.
INFO [alembic.runtime.migration] Will assume transactional DDL.
INFO [sqlalchemy.engine.base.Engine] BEGIN (implicit)
INFO [sqlalchemy.engine.base.Engine] select relname from pg_class c join pg_namespace n on n.oid=c.relnamespace where pg_catalog.pg_table_is_visible(c.oid) and relname=%(name)s
INFO [sqlalchemy.engine.base.Engine] {'name': u'alembic_version'}
INFO [sqlalchemy.engine.base.Engine] SELECT alembic_version.version_num
FROM alembic_version
INFO [sqlalchemy.engine.base.Engine] {}
INFO [alembic.runtime.migration] Running downgrade 199028bf9856 -> 191a2d20b025, update views/sp
INFO [sqlalchemy.engine.base.Engine] DROP VIEW customer_view
INFO [sqlalchemy.engine.base.Engine] {}
INFO [sqlalchemy.engine.base.Engine] CREATE VIEW customer_view AS SELECT name, order_count FROM customer WHERE order_count > 0
INFO [sqlalchemy.engine.base.Engine] {}
INFO [sqlalchemy.engine.base.Engine] DROP FUNCTION add_customer_sp(in_name varchar, in_order_count integer, in_email varchar)
INFO [sqlalchemy.engine.base.Engine] {}
INFO [sqlalchemy.engine.base.Engine] CREATE FUNCTION add_customer_sp(in_name varchar, in_order_count integer)
RETURNS integer AS $$
BEGIN
insert into customer (name, order_count)
VALUES (in_name, in_order_count);
END;
$$ LANGUAGE plpgsql;
INFO [sqlalchemy.engine.base.Engine] {}
INFO [sqlalchemy.engine.base.Engine] UPDATE alembic_version SET version_num='191a2d20b025' WHERE alembic_version.version_num = '199028bf9856'
INFO [sqlalchemy.engine.base.Engine] {}
INFO [alembic.runtime.migration] Running downgrade 191a2d20b025 -> 28af9800143f, add email col
INFO [sqlalchemy.engine.base.Engine] ALTER TABLE customer DROP COLUMN email
INFO [sqlalchemy.engine.base.Engine] {}
INFO [sqlalchemy.engine.base.Engine] UPDATE alembic_version SET version_num='28af9800143f' WHERE alembic_version.version_num = '191a2d20b025'
INFO [sqlalchemy.engine.base.Engine] {}
INFO [sqlalchemy.engine.base.Engine] COMMIT
Don’t Generate Empty Migrations with Autogenerate¶
A common request is to have the alembic revision --autogenerate
command not
actually generate a revision file if no changes to the schema are detected. Using
the EnvironmentContext.configure.process_revision_directives
hook, this is straightforward; place a process_revision_directives
hook in EnvironmentContext.configure()
which removes the
single MigrationScript
directive if it is empty of
any operations:
def run_migrations_online():
# ...
def process_revision_directives(context, revision, directives):
if config.cmd_opts.autogenerate:
script = directives[0]
if script.upgrade_ops.is_empty():
directives[:] = []
# connectable = ...
with connectable.connect() as connection:
context.configure(
connection=connection,
target_metadata=target_metadata,
process_revision_directives=process_revision_directives
)
with context.begin_transaction():
context.run_migrations()
API Details¶
Alembic’s internal API has many public integration points that can be used to extend Alembic’s functionality as well as to re-use its functionality in new ways. As the project has grown, more APIs are created and exposed for this purpose.
Direct use of the vast majority of API details discussed here is not needed
for rudimentary use of Alembic; the only API that is used normally by end users is
the methods provided by the Operations
class, which is discussed
outside of this subsection, and the parameters that can be passed to
the EnvironmentContext.configure()
method, used when configuring
one’s env.py
environment. However, real-world applications will
usually end up using more of the internal API, in particular being able
to run commands programmatically, as discussed in the section Commands.
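For instance, running a command programmatically follows the same pattern shown in the Cookbook above; a minimal sketch:
from alembic.config import Config
from alembic import command

alembic_cfg = Config("/path/to/yourapp/alembic.ini")
command.upgrade(alembic_cfg, "head")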
Overview¶
Note
this section is a technical overview of the internal API of Alembic. This section is only useful for developers who wish to extend the capabilities of Alembic; for regular users, reading this section is not necessary.
A visualization of the primary features of Alembic’s internals is presented in the following figure. The module and class boxes do not list out all the operations provided by each unit; only a small set of representative elements intended to convey the primary purpose of each system.

The script runner for Alembic is present in the Configuration module.
This module produces a Config
object and passes it to the
appropriate function in Commands. Functions within
Commands will typically instantiate an
ScriptDirectory
instance, which represents the collection of
version files, and an EnvironmentContext
, which is a configurational
facade passed to the environment’s env.py
script.
The EnvironmentContext
object is the primary object used within
the env.py
script, whose main purpose is that of a facade for creating and using
a MigrationContext
object, which is the actual migration engine
that refers to a database implementation. The primary method called
on this object within an env.py
script is the
EnvironmentContext.configure()
method, which sets up the
MigrationContext
with database connectivity and behavioral
configuration. It also supplies methods for transaction demarcation and
migration running, but these methods ultimately call upon the
MigrationContext
that’s been configured.
MigrationContext
is the gateway to the database
for other parts of the application, and produces a DefaultImpl
object which does the actual database communication, and knows how to
create the specific SQL text of the various DDL directives such as
ALTER TABLE; DefaultImpl
has subclasses that are per-database-backend.
In “offline” mode (e.g. --sql
), the MigrationContext
will
produce SQL to a file output stream instead of a database.
During an upgrade or downgrade operation, a specific series of migration
scripts are invoked starting with the MigrationContext
in conjunction
with the ScriptDirectory
; the actual scripts themselves make use
of the Operations
object, which provide the end-user interface to
specific database operations. The Operations
object is generated
based on a series of “operation directive” objects that are user-extensible,
and start out in the alembic.operations.ops module.
Another prominent feature of Alembic is the “autogenerate” feature, which
produces new migration scripts that contain Python code. The autogenerate
feature starts in Autogeneration, and is used exclusively
by the alembic.command.revision()
command when the --autogenerate
flag is passed. Autogenerate refers to the MigrationContext
and DefaultImpl
in order to access database connectivity and
access per-backend rules for autogenerate comparisons. It also makes use
of alembic.operations.ops in order to represent the operations that
it will render into scripts.
Runtime Objects¶
The “runtime” of Alembic involves the EnvironmentContext
and MigrationContext
objects. These are the objects that are
in play once the env.py
script is loaded up by a command and
a migration operation proceeds.
The Environment Context¶
The EnvironmentContext
class provides most of the
API used within an env.py
script. Within env.py
,
the instantiated EnvironmentContext
is made available
via a special proxy module called alembic.context
. That is,
you can import alembic.context
like a regular Python module,
and each name you call upon it is ultimately routed towards the
current EnvironmentContext
in use.
In particular, the key method used within env.py
is EnvironmentContext.configure()
,
which establishes all the details about how the database will be accessed.
-
class
alembic.runtime.environment.
EnvironmentContext
(config, script, **kw)¶ A configurational facade made available in an
env.py
script.The
EnvironmentContext
acts as a facade to the more nuts-and-bolts objects ofMigrationContext
as well as certain aspects ofConfig
, within the context of theenv.py
script that is invoked by most Alembic commands.EnvironmentContext
is normally instantiated when a command inalembic.command
is run. It then makes itself available in thealembic.context
module for the scope of the command. From within anenv.py
script, the currentEnvironmentContext
is available by importing this module.EnvironmentContext
also supports programmatic usage. At this level, it acts as a Python context manager; that is, it is intended to be used via the with:
statement. A typical use ofEnvironmentContext
:
from alembic.config import Config
from alembic.script import ScriptDirectory

config = Config()
config.set_main_option("script_location", "myapp:migrations")
script = ScriptDirectory.from_config(config)

def my_function(rev, context):
    '''do something with revision "rev", which
    will be the current database revision,
    and "context", which is the MigrationContext
    that the env.py will create'''

with EnvironmentContext(
    config,
    script,
    fn=my_function,
    as_sql=False,
    starting_rev='base',
    destination_rev='head',
    tag="sometag"
):
    script.run_env()
The above script will invoke the
env.py
script within the migration environment. If and whenenv.py
callsMigrationContext.run_migrations()
, themy_function()
function above will be called by theMigrationContext
, given the context itself as well as the current revision in the database.Note
For most API usages other than full blown invocation of migration scripts, the
MigrationContext
andScriptDirectory
objects can be created and used directly. TheEnvironmentContext
object is only needed when you need to actually invoke theenv.py
module present in the migration environment.Construct a new
EnvironmentContext
.Parameters: - config – a
Config
instance. - script – a
ScriptDirectory
instance. - **kw – keyword options that will be ultimately
passed along to the
MigrationContext
whenEnvironmentContext.configure()
is called.
-
begin_transaction
()¶ Return a context manager that will enclose an operation within a “transaction”, as defined by the environment’s offline and transactional DDL settings.
e.g.:
with context.begin_transaction():
    context.run_migrations()
begin_transaction()
is intended to “do the right thing” regardless of calling context:- If
is_transactional_ddl()
isFalse
, returns a “do nothing” context manager which otherwise produces no transactional state or directives. - If
is_offline_mode()
isTrue
, returns a context manager that will invoke theDefaultImpl.emit_begin()
andDefaultImpl.emit_commit()
methods, which will produce the string directivesBEGIN
andCOMMIT
on the output stream, as rendered by the target backend (e.g. SQL Server would emitBEGIN TRANSACTION
). - Otherwise, calls
sqlalchemy.engine.Connection.begin()
on the current online connection, which returns asqlalchemy.engine.Transaction
object. This object demarcates a real transaction and is itself a context manager, which will roll back if an exception is raised.
Note that a custom
env.py
script which has more specific transactional needs can of course manipulate theConnection
directly to produce transactional state in “online” mode.- If
-
config
= None¶ An instance of
Config
representing the configuration file contents as well as other variables set programmatically within it.
-
configure
(connection=None, url=None, dialect_name=None, transactional_ddl=None, transaction_per_migration=False, output_buffer=None, starting_rev=None, tag=None, template_args=None, render_as_batch=False, target_metadata=None, include_symbol=None, include_object=None, include_schemas=False, process_revision_directives=None, compare_type=False, compare_server_default=False, render_item=None, literal_binds=False, upgrade_token='upgrades', downgrade_token='downgrades', alembic_module_prefix='op.', sqlalchemy_module_prefix='sa.', user_module_prefix=None, **kw)¶ Configure a
MigrationContext
within thisEnvironmentContext
which will provide database connectivity and other configuration to a series of migration scripts.Many methods on
EnvironmentContext
require that this method has been called in order to function, as they ultimately need to have database access or at least access to the dialect in use. Those which do are documented as such.The important thing needed by
configure()
is a means to determine what kind of database dialect is in use. An actual connection to that database is needed only if theMigrationContext
is to be used in “online” mode.If the
is_offline_mode()
function returnsTrue
, then no connection is needed here. Otherwise, theconnection
parameter should be present as an instance ofsqlalchemy.engine.Connection
.This function is typically called from the
env.py
script within a migration environment. It can be called multiple times for an invocation. The most recentConnection
for which it was called is the one that will be operated upon by the next call torun_migrations()
.General parameters:
Parameters: - connection – a
Connection
to use for SQL execution in “online” mode. When present, is also used to determine the type of dialect in use. - url – a string database url, or a
sqlalchemy.engine.url.URL
object. The type of dialect to be used will be derived from this ifconnection
is not passed. - dialect_name – string name of a dialect, such as
“postgresql”, “mssql”, etc.
The type of dialect to be used will be derived from this if
connection
andurl
are not passed. - transactional_ddl – Force the usage of “transactional” DDL on or off; this otherwise defaults to whether or not the dialect in use supports it.
- transaction_per_migration –
if True, nest each migration script in a transaction rather than the full series of migrations to run.
New in version 0.6.5.
- output_buffer – a file-like object that will be used for textual output when the --sql option is used to generate SQL scripts. Defaults to sys.stdout if not passed here and also not present on the Config object. The value here overrides that of the Config object.
- output_encoding – when using --sql to generate SQL scripts, apply this encoding to the string output.
- literal_binds – when using --sql to generate SQL scripts, pass through the literal_binds flag to the compiler so that any literal values that would ordinarily be bound parameters are converted to plain strings.
Warning
Dialects can typically only handle simple datatypes like strings and numbers for auto-literal generation. Datatypes like dates, intervals, and others may still require manual formatting, typically using Operations.inline_literal().
Note
the literal_binds flag is ignored on SQLAlchemy versions prior to 0.8, where this feature is not supported.
New in version 0.7.6.
- starting_rev – Override the “starting revision” argument when using --sql mode.
- tag – a string tag for usage by custom env.py scripts. Set via the --tag option; can be overridden here.
- template_args – dictionary of template arguments which will be added to the template argument environment when running the “revision” command. Note that the script environment is only run within the “revision” command if the --autogenerate option is used, or if the option “revision_environment=true” is present in the alembic.ini file.
- version_table – The name of the Alembic version table. The default is 'alembic_version'.
- version_table_schema – Optional schema to place version table within.
Parameters specific to the autogenerate feature, when alembic revision is run with the --autogenerate option:
Parameters:
- target_metadata – a sqlalchemy.schema.MetaData object that will be consulted during autogeneration. The tables present will be compared against what is locally available on the target Connection to produce candidate upgrade/downgrade operations.
- compare_type – Indicates type comparison behavior during an autogenerate operation. Defaults to False, which disables type comparison. Set to True to turn on default type comparison, which has varied accuracy depending on backend. See Comparing Types for an example as well as information on other type comparison options.
- compare_server_default – Indicates server default comparison behavior during an autogenerate operation. Defaults to False, which disables server default comparison. Set to True to turn on server default comparison, which has varied accuracy depending on backend. To customize server default comparison behavior, a callable may be specified which can filter server default comparisons during an autogenerate operation. The format of this callable is:
def my_compare_server_default(context, inspected_column,
            metadata_column, inspected_default, metadata_default,
            rendered_metadata_default):
    # return True if the defaults are different,
    # False if not, or None to allow the default implementation
    # to compare these defaults
    return None

context.configure(
    # ...
    compare_server_default = my_compare_server_default
)
inspected_column is a dictionary structure as returned by sqlalchemy.engine.reflection.Inspector.get_columns(), whereas metadata_column is a sqlalchemy.schema.Column from the local model environment.
A return value of None indicates to allow default server default comparison to proceed. Note that some backends such as Postgresql actually execute the two defaults on the database side to compare for equivalence.
- include_object –
A callable function which is given the chance to return True or False for any object, indicating if the given object should be considered in the autogenerate sweep.
The function accepts the following positional arguments:
- object: a SchemaItem object such as a Table, Column, Index, UniqueConstraint, or ForeignKeyConstraint object
- name: the name of the object. This is typically available via object.name.
- type: a string describing the type of object; currently "table", "column", "index", "unique_constraint", or "foreign_key_constraint"
  New in version 0.7.0: Support for indexes and unique constraints within the include_object hook.
  New in version 0.7.1: Support for foreign keys within the include_object hook.
- reflected: True if the given object was produced based on table reflection, False if it’s from a local MetaData object.
- compare_to: the object being compared against, if available, else None.
E.g.:
def include_object(object, name, type_, reflected, compare_to):
    if (type_ == "column" and
            not reflected and
            object.info.get("skip_autogenerate", False)):
        return False
    else:
        return True

context.configure(
    # ...
    include_object = include_object
)
EnvironmentContext.configure.include_object can also be used to filter on specific schemas to include or omit, when the EnvironmentContext.configure.include_schemas flag is set to True. The Table.schema attribute on each Table object reflected will indicate the name of the schema from which the Table originates.
New in version 0.6.0.
- include_symbol – A callable function which, given a table name and schema name (may be None), returns True or False, indicating if the given table should be considered in the autogenerate sweep.
Deprecated since version 0.6.0: EnvironmentContext.configure.include_symbol is superseded by the more generic EnvironmentContext.configure.include_object parameter.
E.g.:
def include_symbol(tablename, schema):
    return tablename not in ("skip_table_one", "skip_table_two")

context.configure(
    # ...
    include_symbol = include_symbol
)
- render_as_batch – if True, commands which alter elements within a table will be placed under a with batch_alter_table(): directive, so that batch migrations will take place.
New in version 0.7.0.
- include_schemas – If True, autogenerate will scan across all schemas located by the SQLAlchemy get_schema_names() method, and include all differences in tables found across all those schemas. When using this option, you may want to also use the EnvironmentContext.configure.include_object option to specify a callable which can filter the tables/schemas that get included.
- render_item – Callable that can be used to override how any schema item, i.e. column, constraint, type, etc., is rendered for autogenerate. The callable receives a string describing the type of object, the object, and the autogen context. If it returns False, the default rendering method will be used. If it returns None, the item will not be rendered in the context of a Table construct; that is, it can be used to skip columns or constraints within op.create_table():
def my_render_column(type_, col, autogen_context):
    if type_ == "column" and isinstance(col, MySpecialCol):
        return repr(col)
    else:
        return False

context.configure(
    # ...
    render_item = my_render_column
)
Available values for the type string include: "column", "primary_key", "foreign_key", "unique", "check", "type", "server_default".
- upgrade_token – When autogenerate completes, the text of the candidate upgrade operations will be present in this template variable when script.py.mako is rendered. Defaults to upgrades.
- downgrade_token – When autogenerate completes, the text of the candidate downgrade operations will be present in this template variable when script.py.mako is rendered. Defaults to downgrades.
- alembic_module_prefix – When autogenerate refers to Alembic alembic.operations constructs, this prefix will be used (i.e. op.create_table). Defaults to “op.”. Can be None to indicate no prefix.
- sqlalchemy_module_prefix – When autogenerate refers to SQLAlchemy Column or type classes, this prefix will be used (i.e. sa.Column("somename", sa.Integer)). Defaults to “sa.”. Can be None to indicate no prefix. Note that when dialect-specific types are rendered, autogenerate will render them using the dialect module name, i.e. mssql.BIT(), postgresql.UUID().
- user_module_prefix – When autogenerate refers to a SQLAlchemy type (e.g. TypeEngine) where the module name is not under the sqlalchemy namespace, this prefix will be used within autogenerate. If left at its default of None, the __module__ attribute of the type is used to render the import module. It’s a good practice to set this and to have all custom types be available from a fixed module space, in order to future-proof migration files against reorganizations in modules.
  Changed in version 0.7.0: EnvironmentContext.configure.user_module_prefix no longer defaults to the value of EnvironmentContext.configure.sqlalchemy_module_prefix when left at None; the __module__ attribute is now used.
  New in version 0.6.3: added EnvironmentContext.configure.user_module_prefix
- process_revision_directives – a callable function that will be passed a structure representing the end result of an autogenerate or plain “revision” operation, which can be manipulated to affect how the alembic revision command ultimately outputs new revision scripts. The structure of the callable is:

def process_revision_directives(context, revision, directives):
    pass

The directives parameter is a Python list containing a single MigrationScript directive, which represents the revision file to be generated. This list as well as its contents may be freely modified to produce any set of commands. The section Customizing Revision Generation shows an example of doing this. The context parameter is the MigrationContext in use, and revision is a tuple of revision identifiers representing the current revision of the database.
The callable is invoked at all times when the --autogenerate option is passed to alembic revision. If --autogenerate is not passed, the callable is invoked only if the revision_environment variable is set to True in the Alembic configuration, in which case the given directives collection will contain empty UpgradeOps and DowngradeOps collections for .upgrade_ops and .downgrade_ops. The --autogenerate option itself can be inferred by inspecting context.config.cmd_opts.autogenerate.
The callable function may optionally be an instance of a Rewriter object. This is a helper object that assists in the production of autogenerate-stream rewriter functions.
New in version 0.8.0.
Changed in version 0.8.1: The EnvironmentContext.configure.process_revision_directives hook can append op directives into UpgradeOps and DowngradeOps, which will be rendered in Python regardless of whether the --autogenerate option is in use or not; the revision_environment configuration variable should be set to “true” in the config to enable this.
Parameters specific to individual backends:
Parameters:
- mssql_batch_separator – The “batch separator” which will be placed between each statement when generating offline SQL Server migrations. Defaults to GO. Note this is in addition to the customary semicolon ; at the end of each statement; SQL Server considers the “batch separator” to denote the end of an individual statement execution, and cannot group certain dependent operations in one step.
- oracle_batch_separator – The “batch separator” which will be placed between each statement when generating offline Oracle migrations. Defaults to /. Oracle doesn’t add a semicolon between statements like most other backends.
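To tie these parameters together, here is a minimal sketch of a typical “online” configure() call within an env.py script; the engine and target_metadata names are assumptions standing in for objects defined elsewhere in that script:

# hypothetical env.py fragment; "engine" and "target_metadata" are
# assumed to be constructed earlier in the script
with engine.connect() as connection:
    context.configure(
        connection=connection,
        target_metadata=target_metadata,
        compare_type=True,               # enable type comparison
        transaction_per_migration=True   # one transaction per script
    )
    with context.begin_transaction():
        context.run_migrations()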
-
execute
(sql, execution_options=None)¶ Execute the given SQL using the current change context.
The behavior of execute() is the same as that of Operations.execute(). Please see that function’s documentation for full detail including caveats and limitations.
This function requires that a MigrationContext has first been made available via configure().
-
get_bind
()¶ Return the current ‘bind’.
In “online” mode, this is the sqlalchemy.engine.Connection currently being used to emit SQL to the database.
This function requires that a MigrationContext has first been made available via configure().
-
get_context
()¶ Return the current
MigrationContext object.
If EnvironmentContext.configure() has not been called yet, raises an exception.
-
get_head_revision
()¶ Return the hex identifier of the ‘head’ script revision.
If the script directory has multiple heads, this method raises a CommandError; EnvironmentContext.get_head_revisions() should be preferred.
This function does not require that the MigrationContext has been configured.
-
get_head_revisions
()¶ Return the hex identifier of the ‘heads’ script revision(s).
This returns a tuple containing the version number of all heads in the script directory.
This function does not require that the MigrationContext has been configured.
New in version 0.7.0.
-
get_revision_argument
()¶ Get the ‘destination’ revision argument.
This is typically the argument passed to the upgrade or downgrade command.
If it was specified as head, the actual version number is returned; if specified as base, None is returned.
This function does not require that the MigrationContext has been configured.
-
get_starting_revision_argument
()¶ Return the ‘starting revision’ argument, if the revision was passed using
start:end.
This is only meaningful in “offline” mode. Returns None if no value is available or was configured.
This function does not require that the MigrationContext has been configured.
-
get_tag_argument
()¶ Return the value passed for the
--tag argument, if any.
The --tag argument is not used directly by Alembic, but is available for custom env.py configurations that wish to use it; particularly for offline generation scripts that wish to generate tagged filenames.
This function does not require that the MigrationContext has been configured.
See also
EnvironmentContext.get_x_argument() - a newer and more open-ended system of extending env.py scripts via the command line.
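For illustration, a sketch of an offline-oriented env.py consuming the tag to build an output filename; the naming scheme here is purely hypothetical:

# hypothetical env.py fragment using --tag for offline output naming
tag = context.get_tag_argument()
output_file = "migration_%s.sql" % tag if tag else "migration.sql"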
-
get_x_argument
(as_dictionary=False)¶ Return the value(s) passed for the
-x argument, if any.
The -x argument is an open-ended flag that allows any user-defined value or values to be passed on the command line, then available here for consumption by a custom env.py script.
The return value is a list, returned directly from the argparse structure. If as_dictionary=True is passed, the x arguments are parsed using key=value format into a dictionary that is then returned.
For example, to support passing a database URL on the command line, the standard env.py script can be modified like this:

cmd_line_url = context.get_x_argument(
    as_dictionary=True).get('dbname')
if cmd_line_url:
    engine = create_engine(cmd_line_url)
else:
    engine = engine_from_config(
        config.get_section(config.config_ini_section),
        prefix='sqlalchemy.',
        poolclass=pool.NullPool)
This then takes effect by running the alembic script as:

alembic -x dbname=postgresql://user:pass@host/dbname upgrade head

This function does not require that the MigrationContext has been configured.
New in version 0.6.0.
-
is_offline_mode
()¶ Return True if the current migrations environment is running in “offline mode”.
This is True or False depending on the --sql flag passed.
This function does not require that the MigrationContext has been configured.
-
is_transactional_ddl
()¶ Return True if the context is configured to expect a transactional DDL capable backend.
This defaults to the type of database in use, and can be overridden by the transactional_ddl argument to configure().
This function requires that a MigrationContext has first been made available via configure().
-
run_migrations
(**kw)¶ Run migrations as determined by the current command line configuration as well as versioning information present (or not) in the current database connection (if one is present).
The function accepts optional **kw arguments. If these are passed, they are sent directly to the upgrade() and downgrade() functions within each target revision file. By modifying the script.py.mako file so that the upgrade() and downgrade() functions accept arguments, parameters can be passed here so that contextual information, usually information to identify a particular database in use, can be passed from a custom env.py script to the migration functions.
This function requires that a MigrationContext has first been made available via configure().
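As a sketch, assuming script.py.mako has been edited so that the migration functions are declared as def upgrade(engine_name): and def downgrade(engine_name):, env.py could pass the value through like this:

# hypothetical env.py fragment; "engine1" is an illustrative name
context.run_migrations(engine_name="engine1")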
-
script
= None¶ An instance of
ScriptDirectory which provides programmatic access to version files within the versions/ directory.
-
static_output
(text)¶ Emit text directly to the “offline” SQL stream.
Typically this is for emitting comments that start with --. The statement is not treated as a SQL execution; no ; or batch separator is added, etc.
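A one-line sketch emitting such a comment into the --sql output stream:

# emits a plain comment line into the offline SQL script
context.static_output("-- generated for the staging environment")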
The Migration Context¶
The MigrationContext
handles the actual work to be performed
against a database backend as migration operations proceed. It is generally
not exposed to the end-user.
-
class
alembic.runtime.migration.
MigrationContext
(dialect, connection, opts, environment_context=None)¶ Represent the database state made available to a migration script.
MigrationContext is the front end to an actual database connection, or alternatively a string output stream given a particular database dialect, from an Alembic perspective.
When inside the env.py script, the MigrationContext is available via the EnvironmentContext.get_context() method, which is available at alembic.context:

# from within env.py script
from alembic import context
migration_context = context.get_context()

For usage outside of an env.py script, such as for utility routines that want to check the current version in the database, the MigrationContext.configure() method is used to create new MigrationContext objects. For example, to get at the current revision in the database using MigrationContext.get_current_revision():

# in any application, outside of an env.py script
from alembic.migration import MigrationContext
from sqlalchemy import create_engine

engine = create_engine("postgresql://mydatabase")
conn = engine.connect()
context = MigrationContext.configure(conn)
current_rev = context.get_current_revision()

The above context can also be used to produce Alembic migration operations with an Operations instance:

# in any application, outside of the normal Alembic environment
from alembic.operations import Operations
op = Operations(context)
op.alter_column("mytable", "somecolumn", nullable=True)
-
bind
¶ Return the current “bind”.
In online mode, this is an instance of sqlalchemy.engine.Connection, and is suitable for ad-hoc execution of any kind of usage described in the SQL Expression Language Tutorial, as well as for usage with the sqlalchemy.schema.Table.create() and sqlalchemy.schema.MetaData.create_all() methods of Table and MetaData.
Note that when “standard output” mode is enabled, this bind will be a “mock” connection handler that cannot return results and is only appropriate for a very limited subset of commands.
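A brief sketch of such ad-hoc usage, where context is a MigrationContext configured as shown above and my_table is a hypothetical sqlalchemy.schema.Table:

# ad-hoc DDL against the migration bind; "my_table" is illustrative
bind = context.bind
my_table.create(bind, checkfirst=True)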
-
classmethod
configure
(connection=None, url=None, dialect_name=None, dialect=None, environment_context=None, opts=None)¶ Create a new
MigrationContext.
This is a factory method usually called by EnvironmentContext.configure().
Parameters:
- connection – a Connection to use for SQL execution in “online” mode. When present, is also used to determine the type of dialect in use.
- url – a string database url, or a sqlalchemy.engine.url.URL object. The type of dialect to be used will be derived from this if connection is not passed.
- dialect_name – string name of a dialect, such as “postgresql”, “mssql”, etc. The type of dialect to be used will be derived from this if connection and url are not passed.
- opts – dictionary of options. Most other options accepted by EnvironmentContext.configure() are passed via this dictionary.
-
execute
(sql, execution_options=None)¶ Execute a SQL construct or string statement.
The underlying execution mechanics are used; that is, if this is “offline mode” the SQL is written to the output buffer, otherwise the SQL is emitted on the current SQLAlchemy connection.
-
get_current_heads
()¶ Return a tuple of the current ‘head versions’ that are represented in the target database.
For a migration stream without branches, this will be a single value, synonymous with that of MigrationContext.get_current_revision(). However, when multiple unmerged branches exist within the target database, the returned tuple will contain a value for each head.
If this MigrationContext was configured in “offline” mode, that is with as_sql=True, the starting_rev parameter is returned in a one-length tuple.
If no version table is present, or if there are no revisions present, an empty tuple is returned.
New in version 0.7.0.
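For instance, reusing the conn/context pattern from the examples earlier in this section, a sketch of checking whether a database has been stamped at all:

# "context" is a MigrationContext configured as shown above
heads = context.get_current_heads()
if not heads:
    print("database is not stamped with any revision")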
-
get_current_revision
()¶ Return the current revision, usually that which is present in the
alembic_version table in the database.
This method is intended to be used only for a migration stream that does not contain unmerged branches in the target database; if there are multiple branches present, an exception is raised. The MigrationContext.get_current_heads() method should be preferred over this method going forward, in order to be compatible with branch migration support.
If this MigrationContext was configured in “offline” mode, that is with as_sql=True, the starting_rev parameter is returned instead, if any.
-
run_migrations
(**kw)¶ Run the migration scripts established for this
MigrationContext, if any.
The commands in alembic.command will set up a function that is ultimately passed to the MigrationContext as the fn argument. This function represents the “work” that will be done when MigrationContext.run_migrations() is called, typically from within the env.py script of the migration environment. The “work function” then provides an iterable of version callables and other version information which, in the case of the upgrade or downgrade commands, are the list of version scripts to invoke. Other commands yield nothing, in the case that a command wants to run some other operation against the database such as the current or stamp commands.
Parameters: **kw – keyword arguments here will be passed to each migration callable, that is the upgrade() or downgrade() method within revision scripts.
-
stamp
(script_directory, revision)¶ Stamp the version table with a specific revision.
This method calculates those branches to which the given revision can apply, and updates those branches as though they were migrated towards that revision (either up or down). If no current branches include the revision, it is added as a new branch head.
New in version 0.7.0.
-
Configuration¶
Note
this section discusses the internal API of Alembic as regards internal configuration constructs. This section is only useful for developers who wish to extend the capabilities of Alembic. For documentation on configuration of an Alembic environment, please see Tutorial.
The Config
object represents the configuration
passed to the Alembic environment. From an API usage perspective,
it is needed for the following use cases:
- to create a ScriptDirectory, which allows you to work with the actual script files in a migration environment
- to create an EnvironmentContext, which allows you to actually run the env.py module within the migration environment
- to programmatically run any of the commands in the Commands module.
The Config is not needed for these cases:
- to instantiate a MigrationContext directly - this object only needs a SQLAlchemy connection or dialect name.
- to instantiate an Operations object - this object only needs a MigrationContext.
-
class
alembic.config.
Config
(file_=None, ini_section='alembic', output_buffer=None, stdout=<open file '<stdout>', mode 'w'>, cmd_opts=None, config_args=immutabledict({}), attributes=None)¶ Represent an Alembic configuration.
Within an env.py script, this is available via the EnvironmentContext.config attribute, which in turn is available at alembic.context:

from alembic import context
some_param = context.config.get_main_option("my option")

When invoking Alembic programmatically, a new Config can be created by passing the name of an .ini file to the constructor:

from alembic.config import Config
alembic_cfg = Config("/path/to/yourapp/alembic.ini")

With a Config object, you can then run Alembic commands programmatically using the directives in alembic.command.
The Config object can also be constructed without a filename. Values can be set programmatically, and new sections will be created as needed:

from alembic.config import Config
alembic_cfg = Config()
alembic_cfg.set_main_option("script_location", "myapp:migrations")
alembic_cfg.set_main_option("url", "postgresql://foo/bar")
alembic_cfg.set_section_option("mysection", "foo", "bar")

For passing non-string values to environments, such as connections and engines, use the Config.attributes dictionary:

with engine.begin() as connection:
    alembic_cfg.attributes['connection'] = connection
    command.upgrade(alembic_cfg, "head")
Parameters: - file_ – name of the .ini file to open.
- ini_section – name of the main Alembic section within the .ini file
- output_buffer – optional file-like input buffer which will be passed to the MigrationContext - used to redirect the output of “offline generation” when using Alembic programmatically.
- stdout – buffer where the “print” output of commands will be sent. Defaults to sys.stdout.
  New in version 0.4.
- config_args – A dictionary of keys and values that will be used for substitution in the alembic config file. The dictionary as given is copied to a new one, stored locally as the attribute .config_args. When the Config.file_config attribute is first invoked, the replacement variable here will be added to this dictionary before the dictionary is passed to SafeConfigParser() to parse the .ini file.
  New in version 0.7.0.
- attributes – optional dictionary of arbitrary Python keys/values, which will be populated into the Config.attributes dictionary.
  New in version 0.7.5.
Construct a new
Config.
-
attributes
¶ A Python dictionary for storage of additional state.
This is a utility dictionary which can include not just strings but engines, connections, schema objects, or anything else. Use this to pass objects into an env.py script, such as passing a sqlalchemy.engine.base.Connection when calling commands from alembic.command programmatically.
New in version 0.7.5.
-
cmd_opts
= None¶ The command-line options passed to the
alembic script.
Within an env.py script this can be accessed via the EnvironmentContext.config attribute.
New in version 0.6.0.
-
config_file_name
= None¶ Filesystem path to the .ini file in use.
-
config_ini_section
= None¶ Name of the config file section to read basic configuration from. Defaults to
alembic, that is the [alembic] section of the .ini file. This value is modified using the -n/--name option to the Alembic runner.
-
file_config
Return the underlying ConfigParser object.
Direct access to the .ini file is available here, though the Config.get_section() and Config.get_main_option() methods provide a possibly simpler interface.
-
get_main_option
(name, default=None)¶ Return an option from the ‘main’ section of the .ini file.
This defaults to being a key from the [alembic] section, unless the -n/--name flag was used to indicate a different section.
-
get_section
(name)¶ Return all the configuration options from a given .ini file section as a dictionary.
-
get_section_option
(section, name, default=None)¶ Return an option from the given section of the .ini file.
-
get_template_directory
()¶ Return the directory where Alembic setup templates are found.
This method is used by the alembic init and list_templates commands.
-
print_stdout
(text, *arg)¶ Render a message to standard out.
-
set_main_option
(name, value)¶ Set an option programmatically within the ‘main’ section.
This overrides whatever was in the .ini file.
-
set_section_option
(section, name, value)¶ Set an option programmatically within the given section.
The section is created if it doesn’t exist already. The value here will override whatever was in the .ini file.
-
alembic.config.
main
(argv=None, prog=None, **kwargs)¶ The console runner function for Alembic.
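As an illustration, a sketch of invoking the console runner programmatically; the argument list mirrors what would follow alembic on the command line, and an alembic.ini is assumed to be present in the current directory:

from alembic.config import main

# equivalent to running "alembic upgrade head" from the shell
main(argv=["upgrade", "head"], prog="alembic")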
Commands¶
Note
this section discusses the internal API of Alembic as regards its command invocation system. This section is only useful for developers who wish to extend the capabilities of Alembic. For documentation on using Alembic commands, please see Tutorial.
Alembic commands are all represented by functions in the Commands
package. They all accept the same style of usage, being sent
the Config
object as the first argument.
Commands can be run programmatically, by first constructing a Config
object, as in:
from alembic.config import Config
from alembic import command
alembic_cfg = Config("/path/to/yourapp/alembic.ini")
command.upgrade(alembic_cfg, "head")
In many cases, and perhaps more often than not, an application will wish
to call upon a series of Alembic commands and/or other features. It is
usually a good idea to link multiple commands along a single connection
and transaction, if feasible. This can be achieved using the
Config.attributes
dictionary in order to share a connection:
with engine.begin() as connection:
alembic_cfg.attributes['connection'] = connection
command.upgrade(alembic_cfg, "head")
This recipe requires that env.py
consumes this connection argument;
see the example in Sharing a Connection with a Series of Migration Commands and Environments for details.
To write small API functions that make direct use of database and script directory
information, rather than just running one of the built-in commands,
use the ScriptDirectory
and MigrationContext
classes directly.
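For example, a brief sketch of reading the script directory’s head and the database’s stamped revision side by side; the .ini path and database URL are placeholders:

from alembic.config import Config
from alembic.migration import MigrationContext
from alembic.script import ScriptDirectory
from sqlalchemy import create_engine

cfg = Config("/path/to/yourapp/alembic.ini")
script = ScriptDirectory.from_config(cfg)
print(script.get_current_head())   # head revision in the script directory

engine = create_engine("postgresql://scott:tiger@localhost/test")
with engine.connect() as conn:
    context = MigrationContext.configure(conn)
    print(context.get_current_revision())   # revision stamped in the database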
-
alembic.command.
branches
(config, verbose=False)¶ Show current branch points
-
alembic.command.
current
(config, verbose=False, head_only=False)¶ Display the current revision for a database.
-
alembic.command.
downgrade
(config, revision, sql=False, tag=None)¶ Revert to a previous version.
-
alembic.command.
edit
(config, rev)¶ Edit revision script(s) using $EDITOR
-
alembic.command.
heads
(config, verbose=False, resolve_dependencies=False)¶ Show current available heads in the script directory
-
alembic.command.
history
(config, rev_range=None, verbose=False)¶ List changeset scripts in chronological order.
-
alembic.command.
init
(config, directory, template='generic')¶ Initialize a new scripts directory.
-
alembic.command.
list_templates
(config)¶ List available templates
-
alembic.command.
merge
(config, revisions, message=None, branch_label=None, rev_id=None)¶ Merge two revisions together. Creates a new migration file.
New in version 0.7.0.
See also
branches
-
alembic.command.
revision
(config, message=None, autogenerate=False, sql=False, head='head', splice=False, branch_label=None, version_path=None, rev_id=None, depends_on=None)¶ Create a new revision file.
-
alembic.command.
show
(config, rev)¶ Show the revision(s) denoted by the given symbol.
-
alembic.command.
stamp
(config, revision, sql=False, tag=None)¶ ‘stamp’ the revision table with the given revision; don’t run any migrations.
-
alembic.command.
upgrade
(config, revision, sql=False, tag=None)¶ Upgrade to a later version.
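As a further sketch, combining a few of these commands programmatically, assuming the alembic_cfg object from the example above:

from alembic import command

# generate a new autogenerated revision file
command.revision(alembic_cfg, message="add account table",
                 autogenerate=True)

# display the current database revision, verbosely
command.current(alembic_cfg, verbose=True)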
Operation Directives¶
Note
this section discusses the internal API of Alembic as regards the internal system of defining migration operation directives. This section is only useful for developers who wish to extend the capabilities of Alembic. For end-user guidance on Alembic migration operations, please see Operation Reference.
Within migration scripts, actual database migration operations are handled
via an instance of Operations
. The Operations
class
lists out available migration operations that are linked to a
MigrationContext
, which communicates instructions originated
by the Operations
object into SQL that is sent to a database or SQL
output stream.
Most methods on the Operations
class are generated dynamically
using a “plugin” system, described in the next section
Operation Plugins. Additionally, when Alembic migration scripts
actually run, the methods on the current Operations
object are
proxied out to the alembic.op
module, so that they are available
using module-style access.
For an overview of how to use an Operations
object directly
in programs, as well as for reference to the standard operation methods
as well as “batch” methods, see Operation Reference.
Operation Plugins¶
The Operations object is extensible using a plugin system. This system
allows one to add new op.<some_operation>
methods at runtime. The
steps to use this system are to first create a subclass of
MigrateOperation
, register it using the Operations.register_operation()
class decorator, then build a default “implementation” function which is
established using the Operations.implementation_for()
decorator.
New in version 0.8.0: the Operations class is now an open namespace that is extensible via the creation of new MigrateOperation subclasses.
Below we illustrate a very simple operation CreateSequenceOp
which
will implement a new method op.create_sequence()
for use in
migration scripts:
from alembic.operations import Operations, MigrateOperation
@Operations.register_operation("create_sequence")
class CreateSequenceOp(MigrateOperation):
"""Create a SEQUENCE."""
def __init__(self, sequence_name, schema=None):
self.sequence_name = sequence_name
self.schema = schema
@classmethod
def create_sequence(cls, operations, sequence_name, **kw):
"""Issue a "CREATE SEQUENCE" instruction."""
op = CreateSequenceOp(sequence_name, **kw)
return operations.invoke(op)
def reverse(self):
# only needed to support autogenerate
return DropSequenceOp(self.sequence_name, schema=self.schema)
@Operations.register_operation("drop_sequence")
class DropSequenceOp(MigrateOperation):
"""Drop a SEQUENCE."""
def __init__(self, sequence_name, schema=None):
self.sequence_name = sequence_name
self.schema = schema
@classmethod
def drop_sequence(cls, operations, sequence_name, **kw):
"""Issue a "DROP SEQUENCE" instruction."""
op = DropSequenceOp(sequence_name, **kw)
return operations.invoke(op)
def reverse(self):
# only needed to support autogenerate
return CreateSequenceOp(self.sequence_name, schema=self.schema)
Above, the CreateSequenceOp
and DropSequenceOp
classes represent
new operations that will
be available as op.create_sequence()
and op.drop_sequence()
.
The reason the operations
are represented as stateful classes is so that an operation and a specific
set of arguments can be represented generically; the state can then correspond
to different kinds of operations, such as invoking the instruction against
a database, or autogenerating Python code for the operation into a
script.
In order to establish the migrate-script behavior of the new operations,
we use the Operations.implementation_for()
decorator:
@Operations.implementation_for(CreateSequenceOp)
def create_sequence(operations, operation):
if operation.schema is not None:
name = "%s.%s" % (operation.schema, operation.sequence_name)
else:
name = operation.sequence_name
operations.execute("CREATE SEQUENCE %s" % name)
@Operations.implementation_for(DropSequenceOp)
def drop_sequence(operations, operation):
if operation.schema is not None:
name = "%s.%s" % (operation.schema, operation.sequence_name)
else:
name = operation.sequence_name
operations.execute("DROP SEQUENCE %s" % name)
Above, we use the simplest possible technique of invoking our DDL, which
is just to call Operations.execute()
with literal SQL. If this is
all a custom operation needs, then this is fine. However, options for
more comprehensive support include building out a custom SQL construct,
as documented at Custom SQL Constructs and Compilation Extension.
With the above two steps, a migration script can now use new methods
op.create_sequence()
and op.drop_sequence()
that will proxy to
our object as a classmethod:
def upgrade():
op.create_sequence("my_sequence")
def downgrade():
op.drop_sequence("my_sequence")
The registration of new operations only needs to occur in time for the
env.py
script to invoke MigrationContext.run_migrations()
;
within the module level of the env.py
script is sufficient.
See also
Autogenerating Custom Operation Directives - how to add autogenerate support to custom operations.
New in version 0.8: the migration operations available via the Operations class, as well as the alembic.op namespace, are now extensible using a plugin system.
Built-in Operation Objects¶
The migration operations present on Operations
are themselves
delivered via operation objects that represent an operation and its
arguments. All operations descend from the MigrateOperation
class, and are registered with the Operations
class using
the Operations.register_operation()
class decorator. The
MigrateOperation
objects also serve as the basis for how the
autogenerate system renders new migration scripts.
The built-in operation objects are listed below.
-
class
alembic.operations.ops.
AddColumnOp
(table_name, column, schema=None)¶ Represent an add column operation.
-
classmethod
add_column
(operations, table_name, column, schema=None)¶ This method is proxied on the
Operations
class, via the Operations.add_column() method.
-
classmethod
batch_add_column
(operations, column)¶ This method is proxied on the
BatchOperations
class, via the BatchOperations.add_column() method.
-
classmethod
-
class
alembic.operations.ops.
AddConstraintOp
¶ Represent an add constraint operation.
-
class
alembic.operations.ops.
AlterColumnOp
(table_name, column_name, schema=None, existing_type=None, existing_server_default=False, existing_nullable=None, modify_nullable=None, modify_server_default=False, modify_name=None, modify_type=None, **kw)¶ Represent an alter column operation.
-
classmethod
alter_column
(operations, table_name, column_name, nullable=None, server_default=False, new_column_name=None, type_=None, existing_type=None, existing_server_default=False, existing_nullable=None, schema=None, **kw)¶ This method is proxied on the
Operations
class, via the Operations.alter_column() method.
-
classmethod
batch_alter_column
(operations, column_name, nullable=None, server_default=False, new_column_name=None, type_=None, existing_type=None, existing_server_default=False, existing_nullable=None, **kw)¶ This method is proxied on the
BatchOperations
class, via the BatchOperations.alter_column() method.
-
classmethod
-
class
alembic.operations.ops.
AlterTableOp
(table_name, schema=None)¶ Represent an alter table operation.
-
class
alembic.operations.ops.
BulkInsertOp
(table, rows, multiinsert=True)¶ Represent a bulk insert operation.
-
classmethod
bulk_insert
(operations, table, rows, multiinsert=True)¶ This method is proxied on the
Operations
class, via the Operations.bulk_insert() method.
-
classmethod
-
class
alembic.operations.ops.
CreateCheckConstraintOp
(constraint_name, table_name, condition, schema=None, _orig_constraint=None, **kw)¶ Represent a create check constraint operation.
-
classmethod
batch_create_check_constraint
(operations, constraint_name, condition, **kw)¶ This method is proxied on the
BatchOperations
class, via the BatchOperations.create_check_constraint() method.
-
classmethod
create_check_constraint
(operations, constraint_name, table_name, condition, schema=None, **kw)¶ This method is proxied on the
Operations
class, via the Operations.create_check_constraint() method.
-
classmethod
-
class
alembic.operations.ops.
CreateForeignKeyOp
(constraint_name, source_table, referent_table, local_cols, remote_cols, _orig_constraint=None, **kw)¶ Represent a create foreign key constraint operation.
-
classmethod
batch_create_foreign_key
(operations, constraint_name, referent_table, local_cols, remote_cols, referent_schema=None, onupdate=None, ondelete=None, deferrable=None, initially=None, match=None, **dialect_kw)¶ This method is proxied on the
BatchOperations
class, via the BatchOperations.create_foreign_key() method.
-
classmethod
create_foreign_key
(operations, constraint_name, source_table, referent_table, local_cols, remote_cols, onupdate=None, ondelete=None, deferrable=None, initially=None, match=None, source_schema=None, referent_schema=None, **dialect_kw)¶ This method is proxied on the
Operations
class, via the Operations.create_foreign_key() method.
-
classmethod
-
class
alembic.operations.ops.
CreateIndexOp
(index_name, table_name, columns, schema=None, unique=False, _orig_index=None, **kw)¶ Represent a create index operation.
-
classmethod
batch_create_index
(operations, index_name, columns, **kw)¶ This method is proxied on the
BatchOperations
class, via the BatchOperations.create_index() method.
-
classmethod
create_index
(operations, index_name, table_name, columns, schema=None, unique=False, **kw)¶ This method is proxied on the
Operations
class, via the Operations.create_index() method.
-
classmethod
-
class
alembic.operations.ops.
CreatePrimaryKeyOp
(constraint_name, table_name, columns, schema=None, _orig_constraint=None, **kw)¶ Represent a create primary key operation.
-
classmethod
batch_create_primary_key
(operations, constraint_name, columns)¶ This method is proxied on the
BatchOperations
class, via the BatchOperations.create_primary_key() method.
-
classmethod
create_primary_key
(operations, constraint_name, table_name, columns, schema=None)¶ This method is proxied on the
Operations
class, via the Operations.create_primary_key() method.
-
classmethod
-
class
alembic.operations.ops.
CreateTableOp
(table_name, columns, schema=None, _orig_table=None, **kw)¶ Represent a create table operation.
-
classmethod
create_table
(operations, table_name, *columns, **kw)¶ This method is proxied on the
Operations
class, via the Operations.create_table() method.
-
classmethod
-
class
alembic.operations.ops.
CreateUniqueConstraintOp
(constraint_name, table_name, columns, schema=None, _orig_constraint=None, **kw)¶ Represent a create unique constraint operation.
-
classmethod
batch_create_unique_constraint
(operations, constraint_name, columns, **kw)¶ This method is proxied on the
BatchOperations
class, via the BatchOperations.create_unique_constraint() method.
-
classmethod
create_unique_constraint
(operations, constraint_name, table_name, columns, schema=None, **kw)¶ This method is proxied on the
Operations
class, via the Operations.create_unique_constraint() method.
-
classmethod
-
class
alembic.operations.ops.
DowngradeOps
(ops=(), downgrade_token='downgrades')¶ contains a sequence of operations that would apply to the ‘downgrade’ stream of a script.
-
class
alembic.operations.ops.
DropColumnOp
(table_name, column_name, schema=None, _orig_column=None, **kw)¶ Represent a drop column operation.
-
classmethod
batch_drop_column
(operations, column_name)¶ This method is proxied on the
BatchOperations
class, via the BatchOperations.drop_column() method.
-
classmethod
drop_column
(operations, table_name, column_name, schema=None, **kw)¶ This method is proxied on the
Operations
class, via the Operations.drop_column() method.
-
classmethod
-
class
alembic.operations.ops.
DropConstraintOp
(constraint_name, table_name, type_=None, schema=None, _orig_constraint=None)¶ Represent a drop constraint operation.
-
classmethod
batch_drop_constraint
(operations, constraint_name, type_=None)¶ This method is proxied on the
BatchOperations
class, via the BatchOperations.drop_constraint() method.
-
classmethod
drop_constraint
(operations, constraint_name, table_name, type_=None, schema=None)¶ This method is proxied on the
Operations
class, via the Operations.drop_constraint() method.
-
classmethod
-
class
alembic.operations.ops.
DropIndexOp
(index_name, table_name=None, schema=None, _orig_index=None)¶ Represent a drop index operation.
-
classmethod
batch_drop_index
(operations, index_name, **kw)¶ This method is proxied on the
BatchOperations
class, via the BatchOperations.drop_index() method.
-
classmethod
drop_index
(operations, index_name, table_name=None, schema=None)¶ This method is proxied on the
Operations
class, via the Operations.drop_index() method.
-
classmethod
-
class
alembic.operations.ops.
DropTableOp
(table_name, schema=None, table_kw=None, _orig_table=None)¶ Represent a drop table operation.
-
classmethod
drop_table
(operations, table_name, schema=None, **kw)¶ This method is proxied on the
Operations
class, via the Operations.drop_table() method.
-
classmethod
-
class
alembic.operations.ops.
ExecuteSQLOp
(sqltext, execution_options=None)¶ Represent an execute SQL operation.
-
classmethod
execute
(operations, sqltext, execution_options=None)¶ This method is proxied on the
Operations
class, via the Operations.execute() method.
-
classmethod
-
class
alembic.operations.ops.
MigrateOperation
¶ base class for migration command and organization objects.
This system is part of the operation extensibility API.
New in version 0.8.0.
-
info
¶ A dictionary that may be used to store arbitrary information along with this
MigrateOperation
object.
-
-
class
alembic.operations.ops.
MigrationScript
(rev_id, upgrade_ops, downgrade_ops, message=None, imports=set([]), head=None, splice=None, branch_label=None, version_path=None, depends_on=None)¶ represents a migration script.
E.g. when autogenerate encounters this object, this corresponds to the production of an actual script file.
A normal
MigrationScript object would contain a single UpgradeOps and a single DowngradeOps directive. These are accessible via the .upgrade_ops and .downgrade_ops attributes.
In the case of an autogenerate operation that runs multiple times, such as the multiple database example in the “multidb” template, the .upgrade_ops and .downgrade_ops attributes are disabled, and instead these objects should be accessed via the .upgrade_ops_list and .downgrade_ops_list list-based attributes. These latter attributes are always available at the very least as single-element lists.
Changed in version 0.8.1: the .upgrade_ops and .downgrade_ops attributes should be accessed via the .upgrade_ops_list and .downgrade_ops_list attributes if multiple autogenerate passes proceed on the same MigrationScript object.
-
downgrade_ops
¶ An instance of
DowngradeOps.
-
downgrade_ops_list
¶ A list of
DowngradeOps instances.
This is used in place of the MigrationScript.downgrade_ops attribute when dealing with a revision operation that does multiple autogenerate passes.
New in version 0.8.1.
-
upgrade_ops
¶ An instance of
UpgradeOps.
-
upgrade_ops_list
¶ A list of
UpgradeOps instances.
This is used in place of the MigrationScript.upgrade_ops attribute when dealing with a revision operation that does multiple autogenerate passes.
New in version 0.8.1.
-
-
class
alembic.operations.ops.
ModifyTableOps
(table_name, ops, schema=None)¶ Contains a sequence of operations that all apply to a single Table.
-
class
alembic.operations.ops.
OpContainer
(ops=())¶ Represent a sequence of operations.
-
class
alembic.operations.ops.
RenameTableOp
(old_table_name, new_table_name, schema=None)¶ Represent a rename table operation.
-
classmethod
rename_table
(operations, old_table_name, new_table_name, schema=None)¶ This method is proxied on the
Operations
class, via the Operations.rename_table() method.
-
classmethod
-
class
alembic.operations.ops.
UpgradeOps
(ops=(), upgrade_token='upgrades')¶ contains a sequence of operations that would apply to the ‘upgrade’ stream of a script.
Autogeneration¶
Note
this section discusses the internal API of Alembic
as regards the autogeneration feature of the alembic revision
command.
This section is only useful for developers who wish to extend the
capabilities of Alembic. For general documentation on the autogenerate
feature, please see Auto Generating Migrations.
The autogeneration system has a wide degree of public API, including the following areas:
- The ability to do a “diff” of a MetaData object against a database, and receive a data structure back. This structure is available either as a rudimentary list of changes, or as a MigrateOperation structure.
- The ability to alter how the alembic revision command generates revision scripts, including support for multiple revision scripts generated in one pass.
- The ability to add new operation directives to autogeneration, including custom schema/model comparison functions and revision script rendering.
Getting Diffs¶
The simplest API autogenerate provides is the “schema comparison” API;
these are simple functions that will run all registered “comparison” functions
between a MetaData
object and a database
backend to produce a structure showing how they differ. The two
functions provided are compare_metadata()
, which is more of the
“legacy” function that produces diff tuples, and produce_migrations()
,
which produces a structure consisting of operation directives detailed in
Operation Directives.
-
alembic.autogenerate.
compare_metadata
(context, metadata)¶ Compare a database schema to that given in a
MetaData instance.
The database connection is presented in the context of a MigrationContext object, which provides database connectivity as well as optional comparison functions to use for datatypes and server defaults - see the “autogenerate” arguments at EnvironmentContext.configure() for details on these.
The return format is a list of “diff” directives, each representing individual differences:
from alembic.migration import MigrationContext
from alembic.autogenerate import compare_metadata
from sqlalchemy.schema import SchemaItem
from sqlalchemy.types import TypeEngine
from sqlalchemy import (create_engine, MetaData, Column,
                        Integer, String, Table)
import pprint

engine = create_engine("sqlite://")

engine.execute('''
    create table foo (
        id integer not null primary key,
        old_data varchar,
        x integer
    )''')

engine.execute('''
    create table bar (
        data varchar
    )''')

metadata = MetaData()
Table('foo', metadata,
    Column('id', Integer, primary_key=True),
    Column('data', Integer),
    Column('x', Integer, nullable=False)
)
Table('bat', metadata,
    Column('info', String)
)

mc = MigrationContext.configure(engine.connect())

diff = compare_metadata(mc, metadata)
pprint.pprint(diff, indent=2, width=20)
Output:
[ ( 'add_table',
    Table('bat', MetaData(bind=None),
        Column('info', String(), table=<bat>), schema=None)),
  ( 'remove_table',
    Table(u'bar', MetaData(bind=None),
        Column(u'data', VARCHAR(), table=<bar>), schema=None)),
  ( 'add_column',
    None,
    'foo',
    Column('data', Integer(), table=<foo>)),
  ( 'remove_column',
    None,
    'foo',
    Column(u'old_data', VARCHAR(), table=None)),
  [ ( 'modify_nullable',
      None,
      'foo',
      u'x',
      { 'existing_server_default': None,
        'existing_type': INTEGER()},
      True,
      False)]]
Parameters:
- context – a MigrationContext instance.
- metadata – a MetaData instance.
See also
produce_migrations() - produces a MigrationScript structure based on metadata comparison.
-
alembic.autogenerate.
produce_migrations
(context, metadata)¶ Produce a
MigrationScript structure based on schema comparison.
This function does essentially what compare_metadata() does, but then runs the resulting list of diffs to produce the full MigrationScript object. For an example of what this looks like, see the example in Customizing Revision Generation.
New in version 0.8.0.
See also
compare_metadata()
- returns more fundamental “diff” data from comparing a schema.
Customizing Revision Generation¶
New in version 0.8.0: the alembic revision system is now customizable.
The alembic revision
command, also available programmatically
via command.revision()
, essentially produces a single migration
script after being run. Whether or not the --autogenerate
option
was specified basically determines if this script is a blank revision
script with empty upgrade()
and downgrade()
functions, or was
produced with alembic operation directives as the result of autogenerate.
In either case, the system creates a full plan of what is to be done
in the form of a MigrateOperation
structure, which is then
used to produce the script.
For example, suppose we ran alembic revision --autogenerate
, and the
end result was that it produced a new revision 'eced083f5df'
with the following contents:
"""create the organization table."""
# revision identifiers, used by Alembic.
revision = 'eced083f5df'
down_revision = 'beafc7d709f'
from alembic import op
import sqlalchemy as sa
def upgrade():
op.create_table(
'organization',
sa.Column('id', sa.Integer(), primary_key=True),
sa.Column('name', sa.String(50), nullable=False)
)
op.add_column(
'user',
sa.Column('organization_id', sa.Integer())
)
op.create_foreign_key(
'org_fk', 'user', 'organization', ['organization_id'], ['id']
)
def downgrade():
op.drop_constraint('org_fk', 'user')
op.drop_column('user', 'organization_id')
op.drop_table('organization')
The above script is generated by a MigrateOperation
structure
that looks like this:
from alembic.operations import ops
import sqlalchemy as sa
migration_script = ops.MigrationScript(
'eced083f5df',
ops.UpgradeOps(
ops=[
ops.CreateTableOp(
'organization',
[
sa.Column('id', sa.Integer(), primary_key=True),
sa.Column('name', sa.String(50), nullable=False)
]
),
ops.ModifyTableOps(
'user',
ops=[
ops.AddColumnOp(
'user',
sa.Column('organization_id', sa.Integer())
),
ops.CreateForeignKeyOp(
'org_fk', 'user', 'organization',
['organization_id'], ['id']
)
]
)
]
),
ops.DowngradeOps(
ops=[
ops.ModifyTableOps(
'user',
ops=[
ops.DropConstraintOp('org_fk', 'user'),
ops.DropColumnOp('user', 'organization_id')
]
),
ops.DropTableOp('organization')
]
),
message='create the organization table.'
)
When we deal with a MigrationScript
structure, we can render
the upgrade/downgrade sections into strings for debugging purposes
using the render_python_code()
helper function:
from alembic.autogenerate import render_python_code
print(render_python_code(migration_script.upgrade_ops))
Renders:
### commands auto generated by Alembic - please adjust! ###
op.create_table('organization',
sa.Column('id', sa.Integer(), nullable=False),
sa.Column('name', sa.String(length=50), nullable=False),
sa.PrimaryKeyConstraint('id')
)
op.add_column('user', sa.Column('organization_id', sa.Integer(), nullable=True))
op.create_foreign_key('org_fk', 'user', 'organization', ['organization_id'], ['id'])
### end Alembic commands ###
Given that structures like the above are used to generate new revision
files, and that we’d like to be able to alter these as they are created,
we then need a system to access this structure when the
command.revision()
command is used. The
EnvironmentContext.configure.process_revision_directives
parameter gives us a way to alter this. This is a function that
is passed the above structure as generated by Alembic, giving us a chance
to alter it.
For example, if we wanted to put all the “upgrade” operations into
a certain branch, and we wanted our script to not have any “downgrade”
operations at all, we could build an extension as follows, illustrated
within an env.py
script:
def process_revision_directives(context, revision, directives):
script = directives[0]
# set specific branch
script.head = "mybranch@head"
# erase downgrade operations
script.downgrade_ops.ops[:] = []
# ...
def run_migrations_online():
# ...
with engine.connect() as connection:
context.configure(
connection=connection,
target_metadata=target_metadata,
process_revision_directives=process_revision_directives)
with context.begin_transaction():
context.run_migrations()
Above, the directives
argument is a Python list. We may alter the
given structure within this list in-place, or replace it with a new
structure consisting of zero or more MigrationScript
directives.
The command.revision()
command will then produce scripts corresponding
to whatever is in this list.
-
alembic.autogenerate.
render_python_code
(up_or_down_op, sqlalchemy_module_prefix='sa.', alembic_module_prefix='op.', render_as_batch=False, imports=(), render_item=None)¶ Render Python code given an
UpgradeOps or DowngradeOps object.
This is a convenience function that can be used to test the autogenerate output of a user-defined MigrationScript structure.
Fine-Grained Autogenerate Generation with Rewriters¶
The preceding example illustrated how we can make a simple change to the
structure of the operation directives to produce new autogenerate output.
For the case where we want to affect very specific parts of the autogenerate
stream, we can make a function for
EnvironmentContext.configure.process_revision_directives
which traverses through the whole MigrationScript
structure, locates
the elements we care about and modifies them in-place as needed. However,
to reduce the boilerplate associated with this task, we can use the
Rewriter
object to make this easier. Rewriter
gives
us an object that we can pass directly to
EnvironmentContext.configure.process_revision_directives
which
we can also attach handler functions onto, keyed to specific types of
constructs.
Below is an example where we rewrite ops.AddColumnOp
directives;
based on whether or not the new column is “nullable”, we either return
the existing directive, or we return the existing directive with
the nullable flag changed, inside of a list with a second directive
to alter the nullable flag in a second step:
# ... fragmented env.py script ....
from alembic.autogenerate import rewriter
from alembic.operations import ops
writer = rewriter.Rewriter()
@writer.rewrites(ops.AddColumnOp)
def add_column(context, revision, op):
if op.column.nullable:
return op
else:
op.column.nullable = True
return [
op,
ops.AlterColumnOp(
op.table_name,
op.column.name,
modify_nullable=False,
existing_type=op.column.type,
)
]
# ... later ...
def run_migrations_online():
# ...
with connectable.connect() as connection:
context.configure(
connection=connection,
target_metadata=target_metadata,
process_revision_directives=writer
)
with context.begin_transaction():
context.run_migrations()
Above, in a full ops.MigrationScript
structure, the
AddColumn
directives would be present within
the paths MigrationScript->UpgradeOps->ModifyTableOps
and MigrationScript->DowngradeOps->ModifyTableOps
. The
Rewriter
handles traversing into these structures as well
as rewriting them as needed so that we only need to code for the specific
object we care about.
-
class
alembic.autogenerate.rewriter.
Rewriter
¶ A helper object that allows easy ‘rewriting’ of ops streams.
The Rewriter object is intended to be passed along to the EnvironmentContext.configure.process_revision_directives parameter in an env.py script. Once constructed, any number of “rewrites” functions can be associated with it, which will be given the opportunity to modify the structure without having to have explicit knowledge of the overall structure.

The function is passed the MigrationContext object and revision tuple that are passed to the EnvironmentContext.configure.process_revision_directives function normally, and the third argument is an individual directive of the type noted in the decorator. The function has the choice of returning a single op directive, which normally can be the directive that was actually passed, or a new directive to replace it, or a list of zero or more directives to replace it.

See also

Fine-Grained Autogenerate Generation with Rewriters - usage example

New in version 0.8.
-
chain
(other)¶

Produce a “chain” of this Rewriter to another.

This allows two rewriters to operate serially on a stream, e.g.:

writer1 = autogenerate.Rewriter()
writer2 = autogenerate.Rewriter()

@writer1.rewrites(ops.AddColumnOp)
def add_column_nullable(context, revision, op):
    op.column.nullable = True
    return op

@writer2.rewrites(ops.AddColumnOp)
def add_column_idx(context, revision, op):
    idx_op = ops.CreateIndexOp(
        'ixc', op.table_name, [op.column.name])
    return [
        op, idx_op
    ]

writer = writer1.chain(writer2)

Parameters: other – a Rewriter instance

Returns: a new Rewriter that will run the operations of this writer, then the “other” writer, in succession.
-
rewrites
(operator)¶

Register a function as a rewriter for a given type.

The function should receive three arguments, which are the MigrationContext, a revision tuple, and an op directive of the type indicated. E.g.:

@writer1.rewrites(ops.AddColumnOp)
def add_column_nullable(context, revision, op):
    op.column.nullable = True
    return op
-
Revision Generation with Multiple Engines / run_migrations()
calls¶
A lesser-used technique which allows autogenerated migrations to run against multiple database backends at once, generating changes into a single migration script, is illustrated in the provided multidb template. This template features a special env.py which iterates through multiple Engine instances and calls upon MigrationContext.run_migrations() for each:
for name, rec in engines.items():
    logger.info("Migrating database %s" % name)
    context.configure(
        connection=rec['connection'],
        upgrade_token="%s_upgrades" % name,
        downgrade_token="%s_downgrades" % name,
        target_metadata=target_metadata.get(name)
    )
    context.run_migrations(engine_name=name)
Above, MigrationContext.run_migrations()
is run multiple times,
once for each engine. Within the context of autogeneration, each time
the method is called the upgrade_token
and downgrade_token
parameters
are changed, so that the collection of template variables gains distinct
entries for each engine, which are then referred to explicitly
within script.py.mako
.
In terms of the
EnvironmentContext.configure.process_revision_directives
hook,
the behavior here is that the process_revision_directives
hook
is invoked multiple times, once for each call to
context.run_migrations(). This means that if
a multi-run_migrations()
approach is to be combined with the
process_revision_directives
hook, care must be taken to use the
hook appropriately.
The first point to note is that when a second call to
run_migrations()
occurs, the .upgrade_ops
and .downgrade_ops
attributes are converted into Python lists, and new
UpgradeOps
and DowngradeOps
objects are appended
to these lists. Each UpgradeOps
and DowngradeOps
object maintains an .upgrade_token
and a .downgrade_token
attribute
respectively, which serves to render their contents into the appropriate
template token.
For example, a multi-engine run that has the engine names engine1
and engine2
will generate tokens of engine1_upgrades
,
engine1_downgrades
, engine2_upgrades
and engine2_downgrades
as
it runs. The resulting migration structure would look like this:
from alembic.operations import ops
import sqlalchemy as sa

migration_script = ops.MigrationScript(
    'eced083f5df',
    [
        ops.UpgradeOps(
            ops=[
                # upgrade operations for "engine1"
            ],
            upgrade_token="engine1_upgrades"
        ),
        ops.UpgradeOps(
            ops=[
                # upgrade operations for "engine2"
            ],
            upgrade_token="engine2_upgrades"
        ),
    ],
    [
        ops.DowngradeOps(
            ops=[
                # downgrade operations for "engine1"
            ],
            downgrade_token="engine1_downgrades"
        ),
        ops.DowngradeOps(
            ops=[
                # downgrade operations for "engine2"
            ],
            downgrade_token="engine2_downgrades"
        )
    ],
    message='migration message'
)
Given the above, the following guidelines should be considered when the env.py script calls upon MigrationContext.run_migrations() multiple times when running autogenerate:

- If the process_revision_directives hook aims to add elements based on inspection of the current database / connection, it should do its operation on each iteration. This is so that each time the hook runs, the database is available.
- Alternatively, if the process_revision_directives hook aims to modify the list of migration directives in place, this should be called only on the last iteration. This is so that the hook isn’t being given an ever-growing structure each time which it has already modified previously.
- The Rewriter object, if used, should be called only on the last iteration, because it will always deliver all directives every time; again, to avoid double/triple/etc. processing of directives, it should be called only when the structure is complete (see the sketch after this list).
- The MigrationScript.upgrade_ops_list and MigrationScript.downgrade_ops_list attributes should be consulted when referring to the collection of UpgradeOps and DowngradeOps objects.
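As a minimal sketch of the “last iteration” approach, assuming the engines dictionary and the Rewriter named writer from the earlier examples, the env.py loop can wire up the hook only on the final run_migrations() call:

# pass the rewriter only on the final iteration, once the
# MigrationScript structure is otherwise complete
names = list(engines)

for name in names:
    rec = engines[name]
    context.configure(
        connection=rec['connection'],
        upgrade_token="%s_upgrades" % name,
        downgrade_token="%s_downgrades" % name,
        target_metadata=target_metadata.get(name),
        process_revision_directives=(
            writer if name == names[-1] else None
        )
    )
    context.run_migrations(engine_name=name)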
Changed in version 0.8.1: multiple calls to MigrationContext.run_migrations() within an autogenerate operation, such as that proposed within the multidb script template, are now accommodated by the new extensible migration system introduced in 0.8.0.
Autogenerating Custom Operation Directives¶
In the section Operation Plugins, we talked about adding new
subclasses of MigrateOperation
in order to add new op.
directives. In the preceding section Customizing Revision Generation, we
also learned that these same MigrateOperation
structures are at
the base of how the autogenerate system knows what Python code to render.
Using this knowledge, we can create additional functions that plug into
the autogenerate system so that our new operations can be generated
into migration scripts when alembic revision --autogenerate
is run.
The following sections will detail an example of this using the CreateSequenceOp and DropSequenceOp directives we created in Operation Plugins, which correspond to the SQLAlchemy Sequence construct.

New in version 0.8.0: custom operations can be added to the autogenerate system to support new kinds of database objects.
Tracking our Object with the Model¶
The basic job of an autogenerate comparison function is to inspect
a series of objects in the database and compare them against a series
of objects defined in our model. By “in our model”, we mean anything
defined in Python code that we want to track, however most commonly
we’re talking about a series of Table
objects present in a MetaData
collection.
Let’s propose a simple way of seeing what Sequence
objects we want to ensure exist in the database when autogenerate
runs. While these objects do have some integrations with
Table
and MetaData
already, let’s assume they don’t, as the example here intends to illustrate
how we would do this for most any kind of custom construct. We associate the object with the info collection of MetaData, which is a dictionary we can use for anything and which we also know will be passed to the autogenerate process:
from sqlalchemy.schema import Sequence

def add_sequence_to_model(sequence, metadata):
    metadata.info.setdefault("sequences", set()).add(
        (sequence.schema, sequence.name)
    )

my_seq = Sequence("my_sequence")
add_sequence_to_model(my_seq, model_metadata)
The info
dictionary is a good place to put things that we want our autogeneration
routines to be able to locate, which can include any object such as
custom DDL objects representing views, triggers, special constraints,
or anything else we want to support.
Registering a Comparison Function¶
We now need to register a comparison hook, which will be used
to compare the database to our model and produce CreateSequenceOp
and DropSequenceOp
directives to be included in our migration
script. Note that we are assuming a
Postgresql backend:
from alembic.autogenerate import comparators

@comparators.dispatch_for("schema")
def compare_sequences(autogen_context, upgrade_ops, schemas):
    all_conn_sequences = set()

    for sch in schemas:
        all_conn_sequences.update([
            (sch, row[0]) for row in
            autogen_context.connection.execute(
                "SELECT relname FROM pg_class c join "
                "pg_namespace n on n.oid=c.relnamespace where "
                "relkind='S' and n.nspname=%(nspname)s",

                # note that we consider a schema of 'None' in our
                # model to be the "default" name in the PG database;
                # this usually is the name 'public'
                nspname=autogen_context.dialect.default_schema_name
                if sch is None else sch
            )
        ])

    # get the collection of Sequence objects we're storing with
    # our MetaData
    metadata_sequences = autogen_context.metadata.info.setdefault(
        "sequences", set())

    # for new names, produce CreateSequenceOp directives
    for sch, name in metadata_sequences.difference(all_conn_sequences):
        upgrade_ops.ops.append(
            CreateSequenceOp(name, schema=sch)
        )

    # for names that are going away, produce DropSequenceOp
    # directives
    for sch, name in all_conn_sequences.difference(metadata_sequences):
        upgrade_ops.ops.append(
            DropSequenceOp(name, schema=sch)
        )
Above, we’ve built a new function compare_sequences() and registered it as a “schema” level comparison function with autogenerate. Its job is to compare the list of sequence names present in each database schema with the list of sequence names that we are maintaining in our MetaData object.
When autogenerate completes, it will have a series of
CreateSequenceOp
and DropSequenceOp
directives in the list of
“upgrade” operations; the list of “downgrade” operations is generated
directly from these using the
CreateSequenceOp.reverse()
and DropSequenceOp.reverse()
methods
that we’ve implemented on these objects.
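As a minimal sketch of what those reverse() methods might look like, assuming the CreateSequenceOp and DropSequenceOp classes from Operation Plugins (the constructor signatures here are illustrative):

from alembic.operations.ops import MigrateOperation

class CreateSequenceOp(MigrateOperation):
    def __init__(self, sequence_name, schema=None):
        self.sequence_name = sequence_name
        self.schema = schema

    def reverse(self):
        # a CREATE SEQUENCE is undone by a DROP SEQUENCE
        return DropSequenceOp(self.sequence_name, schema=self.schema)

class DropSequenceOp(MigrateOperation):
    def __init__(self, sequence_name, schema=None):
        self.sequence_name = sequence_name
        self.schema = schema

    def reverse(self):
        # a DROP SEQUENCE is undone by a CREATE SEQUENCE
        return CreateSequenceOp(self.sequence_name, schema=self.schema)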
The registration of our function at the scope of “schema” means our autogenerate comparison function is called outside of the context of any specific table or column. The three available scopes are “schema”, “table”, and “column”, summarized as follows:
Schema level - these hooks are passed an AutogenContext, an UpgradeOps collection, and a collection of string schema names to be operated upon. If the UpgradeOps collection contains changes after all hooks are run, it is included in the migration script:

@comparators.dispatch_for("schema")
def compare_schema_level(autogen_context, upgrade_ops, schemas):
    pass

Table level - these hooks are passed an AutogenContext, a ModifyTableOps collection, a schema name, table name, a Table reflected from the database if any or None, and a Table present in the local MetaData. If the ModifyTableOps collection contains changes after all hooks are run, it is included in the migration script:

@comparators.dispatch_for("table")
def compare_table_level(autogen_context, modify_ops,
        schemaname, tablename, conn_table, metadata_table):
    pass

Column level - these hooks are passed an AutogenContext, an AlterColumnOp object, a schema name, table name, column name, a Column reflected from the database and a Column present in the local table. If the AlterColumnOp contains changes after all hooks are run, it is included in the migration script; a “change” is considered to be present if any of the modify_ attributes are set to a non-default value, or there are any keys in the .kw collection with the prefix "modify_":

@comparators.dispatch_for("column")
def compare_column_level(autogen_context, alter_column_op,
        schemaname, tname, cname, conn_col, metadata_col):
    pass
The AutogenContext
passed to these hooks is documented below.
-
class
alembic.autogenerate.api.
AutogenContext
(migration_context, metadata=None, opts=None, autogenerate=True)¶ Maintains configuration and state that’s specific to an autogenerate operation.
-
connection
= None¶

The Connection object currently connected to the database backend being compared.

This is obtained from the MigrationContext.bind and is ultimately set up in the env.py script.
-
dialect
= None¶

The Dialect object currently in use.

This is normally obtained from the dialect attribute.
-
imports
= None¶

A set() which contains string Python import directives.

The directives are to be rendered into the ${imports} section of a script template. The set is normally empty and can be modified within hooks such as the EnvironmentContext.configure.render_item hook.

New in version 0.8.3.
-
metadata
= None¶

The MetaData object representing the destination.

This object is the one that is passed within env.py to the EnvironmentContext.configure.target_metadata parameter. It represents the structure of Table and other objects as stated in the current database model, and represents the destination structure for the database being examined.

While the MetaData object is primarily known as a collection of Table objects, it also has an info dictionary that may be used by end-user schemes to store additional schema-level objects that are to be compared in custom autogeneration schemes.
-
migration_context
= None¶

The MigrationContext established by the env.py script.
-
run_filters
(object_, name, type_, reflected, compare_to)¶ Run the context’s object filters and return True if the targets should be part of the autogenerate operation.
This method should be run for every kind of object encountered within an autogenerate operation, giving the environment the chance to filter what objects should be included in the comparison. The filters here are produced directly via the EnvironmentContext.configure.include_object and EnvironmentContext.configure.include_symbol functions, if present.
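A hedged sketch of how a custom comparator such as compare_sequences() above might consult these filters before emitting directives; the "sequence" type string is an assumption here, since the filters are keyed on whatever type name the caller supplies:

def sequence_is_included(autogen_context, sequence, name):
    # returns True if the env.py include_object / include_symbol
    # filters accept this (hypothetical) sequence object; there is
    # no model counterpart here, so compare_to is passed as None
    return autogen_context.run_filters(
        sequence, name, "sequence", True, None)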
-
Creating a Render Function¶
The second autogenerate integration hook is to provide a “render” function; since the autogenerate system renders Python code, we need to build a function that renders the correct “op” instructions for our directive:
from alembic.autogenerate import renderers

@renderers.dispatch_for(CreateSequenceOp)
def render_create_sequence(autogen_context, op):
    return "op.create_sequence(%r, **%r)" % (
        op.sequence_name,
        {"schema": op.schema}
    )

@renderers.dispatch_for(DropSequenceOp)
def render_drop_sequence(autogen_context, op):
    return "op.drop_sequence(%r, **%r)" % (
        op.sequence_name,
        {"schema": op.schema}
    )
The above functions will render Python code corresponding to the
presence of CreateSequenceOp
and DropSequenceOp
instructions
in the list that our comparison function generates.
Running It¶
All the above code can be organized however the developer sees fit; the only requirement to make it work is that when the Alembic environment env.py is invoked, it either imports modules which contain all the above routines, or they are locally present, or some combination thereof.
If we then have code in our model (which of course also needs to be invoked
when env.py
runs!) like this:
from sqlalchemy.schema import Sequence
my_seq_1 = Sequence("my_sequence_1")
add_sequence_to_model(my_seq_1, target_metadata)
When we first run alembic revision --autogenerate
, we’ll see this
in our migration file:
def upgrade():
    ### commands auto generated by Alembic - please adjust! ###
    op.create_sequence('my_sequence_1', **{'schema': None})
    ### end Alembic commands ###

def downgrade():
    ### commands auto generated by Alembic - please adjust! ###
    op.drop_sequence('my_sequence_1', **{'schema': None})
    ### end Alembic commands ###
These are our custom directives that will be invoked when alembic upgrade or alembic downgrade is run.
Script Directory¶
The ScriptDirectory
object provides programmatic access
to the Alembic version files present in the filesystem.
-
class
alembic.script.
ScriptDirectory
(dir, file_template='%(rev)s_%(slug)s', truncate_slug_length=40, version_locations=None, sourceless=False, output_encoding='utf-8')¶ Provides operations upon an Alembic script directory.
This object is useful to get information as to current revisions, most notably being able to get at the “head” revision, for schemes that want to test if the current revision in the database is the most recent:
from alembic.script import ScriptDirectory
from alembic.config import Config

config = Config()
config.set_main_option("script_location", "myapp:migrations")
script = ScriptDirectory.from_config(config)

head_revision = script.get_current_head()
-
as_revision_number
(id_)¶ Convert a symbolic revision, i.e. ‘head’ or ‘base’, into an actual revision number.
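A hedged usage sketch, continuing from the script object constructed in the example above:

# resolves the symbolic name to a concrete revision string,
# e.g. '3adcc9a56557'
rev = script.as_revision_number("head")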
-
classmethod
from_config
(config)¶

Produce a new ScriptDirectory given a Config instance.

The Config need only have the script_location key present.
-
generate_revision
(revid, message, head=None, refresh=False, splice=False, branch_labels=None, version_path=None, depends_on=None, **kw)¶ Generate a new revision file.
This runs the
script.py.mako
template, given template arguments, and creates a new file.

Parameters:
- revid – String revision id. Typically this comes from alembic.util.rev_id().
- message – the revision message, the one passed by the -m argument to the revision command.
- head – the head revision to generate against. Defaults to the current “head” if no branches are present, else raises an exception. New in version 0.7.0.
- splice – if True, allow the “head” version to not be an actual head; otherwise, the selected head must be a head (e.g. endpoint) revision.
- refresh – deprecated.
-
get_base
()¶

Return the “base” revision as a string.

This is the revision number of the script that has a down_revision of None.

If the script directory has multiple bases, an error is raised; ScriptDirectory.get_bases() should be preferred.
-
get_bases
()¶

Return all “base” revisions as strings.

This is the revision number of all scripts that have a down_revision of None.

New in version 0.7.0.
-
get_current_head
()¶

Return the current head revision.

If the script directory has multiple heads due to branching, an error is raised; ScriptDirectory.get_heads() should be preferred.

Returns: a string revision number.
-
get_heads
()¶

Return all “versioned head” revisions as strings.

This is normally a list of length one, unless branches are present. The ScriptDirectory.get_current_head() method can be used normally when a script directory has only one head.

Returns: a tuple of string revision numbers.
-
get_revisions
(id_)¶

Return the Script instance with the given rev identifier, symbolic name, or sequence of identifiers.

New in version 0.7.0.
-
iterate_revisions
(upper, lower)¶ Iterate through script revisions, starting at the given upper revision identifier and ending at the lower.
The traversal uses strictly the down_revision marker inside each migration script, so it is a requirement that upper >= lower, else you’ll get nothing back.
The iterator yields Script objects.
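A hedged usage sketch, reusing the script object from the ScriptDirectory example above:

# iterate from the current head down to the base, newest first
for sc in script.iterate_revisions(script.get_current_head(), "base"):
    print(sc.revision, sc.doc)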
-
run_env
()¶ Run the script environment.
This basically runs the
env.py
script present in the migration environment. It is called exclusively by the command functions inalembic.command
.
-
walk_revisions
(base='base', head='heads')¶ Iterate through all revisions.
Parameters: - base – the base revision, or “base” to start from the empty revision.
- head –
the head revision; defaults to “heads” to indicate all head revisions. May also be “head” to indicate a single head revision.
Changed in version 0.7.0: the “head” identifier now refers to the head of a non-branched repository only; use “heads” to refer to the set of all head branches simultaneously.
-
-
class
alembic.script.
Script
(module, rev_id, path)¶

Represent a single revision file in a versions/ directory.

The Script instance is returned by methods such as ScriptDirectory.iterate_revisions().

-
doc
¶ Return the docstring given in the script.
-
longdoc
¶ Return the docstring given in the script.
-
Revision¶
The RevisionMap
object serves as the basis for revision
management, used exclusively by ScriptDirectory
.
-
class
alembic.script.revision.
Revision
(revision, down_revision, dependencies=None, branch_labels=None)¶ Base class for revisioned objects.
The Revision class is the base of the more public-facing Script object, which represents a migration script. The mechanics of revision management and traversal are encapsulated within Revision, while Script applies this logic to Python files in a version directory.

-
branch_labels
= None¶ Optional string/tuple of symbolic names to apply to this revision’s branch
-
dependencies
= None¶ Additional revisions which this revision is dependent on.
From a migration standpoint, these dependencies are added to the down_revision to form the full iteration. However, the separation of down_revision from “dependencies” is to assist in navigating a history that contains many branches, typically a multi-root scenario.
-
down_revision
= None¶

The down_revision identifier(s) within the migration script.

Note that the total set of “down” revisions is down_revision + dependencies.
-
is_branch_point
¶

Return True if this Script is a branch point.

A branchpoint is defined as a Script which is referred to by more than one succeeding Script, that is, more than one Script has a down_revision identifier pointing here.
-
is_head
¶

Return True if this Revision is a ‘head’ revision.

This is determined based on whether any other Script within the ScriptDirectory refers to this Script. Multiple heads can be present.
-
nextrev
= frozenset([])¶ Following revisions, based on down_revision only.
-
revision
= None¶ The string revision number.
-
-
class
alembic.script.revision.
RevisionMap
(generator)¶

Maintains a map of Revision objects.

RevisionMap is used by ScriptDirectory to maintain and traverse the collection of Script objects, which are themselves instances of Revision.

Construct a new RevisionMap.

Parameters: generator – a zero-arg callable that will generate an iterable of Revision instances to be used. These are typically Script subclasses within regular Alembic use.

-
add_revision
(revision, _replace=False)¶

Add a single revision to an existing map.

This method is for single-revision use cases; it’s not appropriate for fully populating an entire revision map.
-
bases
¶

All “base” revisions as strings.

These are revisions that have a down_revision of None, or an empty tuple.

Returns: a tuple of string revision numbers.
-
get_current_head
(branch_label=None)¶

Return the current head revision.

If the script directory has multiple heads due to branching, an error is raised; ScriptDirectory.get_heads() should be preferred.

Parameters: branch_label – optional branch name which will limit the heads considered to those which include that branch_label.

Returns: a string revision number.
-
get_revision
(id_)¶

Return the Revision instance with the given rev id.

If a symbolic name such as “head” or “base” is given, resolves the identifier into the current head or base revision. If the symbolic name refers to multiples, MultipleHeads is raised.

Supports partial identifiers, where the given identifier is matched against all identifiers that start with the given characters; if there is exactly one match, that determines the full revision.
-
get_revisions
(id_)¶

Return the Revision instances with the given rev id or identifiers.

May be given a single identifier, a sequence of identifiers, or the special symbols “head” or “base”. The result is a tuple of one or more identifiers, or an empty tuple in the case of “base”.

In the cases where “head” or “heads” is requested and the revision map is empty, returns an empty tuple.

Supports partial identifiers, where the given identifier is matched against all identifiers that start with the given characters; if there is exactly one match, that determines the full revision.
-
heads
¶ All “head” revisions as strings.
This is normally a tuple of length one, unless unmerged branches are present.
Returns: a tuple of string revision numbers.
-
iterate_revisions
(upper, lower, implicit_base=False, inclusive=False, assert_relative_length=True)¶ Iterate through script revisions, starting at the given upper revision identifier and ending at the lower.
The traversal uses strictly the down_revision marker inside each migration script, so it is a requirement that upper >= lower, else you’ll get nothing back.
The iterator yields
Revision
objects.
-
DDL Internals¶
These are some of the constructs used to generate migration
instructions. The APIs here build off of the sqlalchemy.schema.DDLElement
and Custom SQL Constructs and Compilation Extension systems.
For programmatic usage of Alembic’s migration directives, the easiest route is to use the higher level functions given by Operation Directives.
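As a brief, hedged sketch of how these constructs behave, a DDLElement subclass such as RenameTable can be executed directly on a SQLAlchemy connection, which compiles it through the dialect-appropriate visit function (the connection variable here is assumed to be an existing SQLAlchemy Connection):

from alembic.ddl.base import RenameTable

# compiles to e.g. "ALTER TABLE old_accounts RENAME TO accounts"
# via the visit_rename_table() compiler for the dialect in use
connection.execute(RenameTable("old_accounts", "accounts", schema=None))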
-
class
alembic.ddl.base.
AddColumn
(name, column, schema=None)¶
-
class
alembic.ddl.base.
AlterColumn
(name, column_name, schema=None, existing_type=None, existing_nullable=None, existing_server_default=None)¶
-
class
alembic.ddl.base.
AlterTable
(table_name, schema=None)¶ Represent an ALTER TABLE statement.
Only the string name and optional schema name of the table is required, not a full Table object.
-
class
alembic.ddl.base.
ColumnDefault
(name, column_name, default, **kw)¶
-
class
alembic.ddl.base.
ColumnName
(name, column_name, newname, **kw)¶
-
class
alembic.ddl.base.
ColumnNullable
(name, column_name, nullable, **kw)¶
-
class
alembic.ddl.base.
ColumnType
(name, column_name, type_, **kw)¶
-
class
alembic.ddl.base.
DropColumn
(name, column, schema=None)¶
-
class
alembic.ddl.base.
RenameTable
(old_table_name, new_table_name, schema=None)¶
-
alembic.ddl.base.
add_column
(compiler, column, **kw)¶
-
alembic.ddl.base.
alter_column
(compiler, name)¶
-
alembic.ddl.base.
alter_table
(compiler, name, schema)¶
-
alembic.ddl.base.
drop_column
(compiler, name)¶
-
alembic.ddl.base.
format_column_name
(compiler, name)¶
-
alembic.ddl.base.
format_server_default
(compiler, default)¶
-
alembic.ddl.base.
format_table_name
(compiler, name, schema)¶
-
alembic.ddl.base.
format_type
(compiler, type_)¶
-
alembic.ddl.base.
quote_dotted
(name, quote)¶ quote the elements of a dotted name
-
alembic.ddl.base.
visit_add_column
(element, compiler, **kw)¶
-
alembic.ddl.base.
visit_column_default
(element, compiler, **kw)¶
-
alembic.ddl.base.
visit_column_name
(element, compiler, **kw)¶
-
alembic.ddl.base.
visit_column_nullable
(element, compiler, **kw)¶
-
alembic.ddl.base.
visit_column_type
(element, compiler, **kw)¶
-
alembic.ddl.base.
visit_drop_column
(element, compiler, **kw)¶
-
alembic.ddl.base.
visit_rename_table
(element, compiler, **kw)¶
-
class
alembic.ddl.impl.
DefaultImpl
(dialect, connection, as_sql, transactional_ddl, output_buffer, context_opts)¶ Provide the entrypoint for major migration operations, including database-specific behavioral variances.
While individual SQL/DDL constructs already provide for database-specific implementations, variances here allow for entirely different sequences of operations to take place for a particular migration, such as SQL Server’s special ‘IDENTITY INSERT’ step for bulk inserts.
-
add_column
(table_name, column, schema=None)¶
-
add_constraint
(const)¶
-
alter_column
(table_name, column_name, nullable=None, server_default=False, name=None, type_=None, schema=None, autoincrement=None, existing_type=None, existing_server_default=None, existing_nullable=None, existing_autoincrement=None)¶
-
autogen_column_reflect
(inspector, table, column_info)¶ A hook that is attached to the ‘column_reflect’ event for when a Table is reflected from the database during the autogenerate process.
Dialects can elect to modify the information gathered here.
-
bind
¶
-
bulk_insert
(table, rows, multiinsert=True)¶
-
command_terminator
= ';'¶
-
compare_server_default
(inspector_column, metadata_column, rendered_metadata_default, rendered_inspector_default)¶
-
compare_type
(inspector_column, metadata_column)¶
-
correct_for_autogen_constraints
(conn_uniques, conn_indexes, metadata_unique_constraints, metadata_indexes)¶
-
correct_for_autogen_foreignkeys
(conn_fks, metadata_fks)¶
-
create_index
(index)¶
-
create_table
(table)¶
-
drop_column
(table_name, column, schema=None, **kw)¶
-
drop_constraint
(const)¶
-
drop_index
(index)¶
-
drop_table
(table)¶
-
emit_begin
()¶

Emit the string BEGIN, or the backend-specific equivalent, on the current connection context.

This is used in offline mode and typically via EnvironmentContext.begin_transaction().
-
emit_commit
()¶

Emit the string COMMIT, or the backend-specific equivalent, on the current connection context.

This is used in offline mode and typically via EnvironmentContext.begin_transaction().
-
execute
(sql, execution_options=None)¶
-
classmethod
get_by_dialect
(dialect)¶
-
prep_table_for_batch
(table)¶

Perform any operations needed on a table before a new one is created to replace it in batch mode.

The PG dialect uses this to drop constraints on the table before the new one uses those same names.
-
rename_table
(old_table_name, new_table_name, schema=None)¶
-
requires_recreate_in_batch
(batch_op)¶

Return True if the given BatchOperationsImpl would need the table to be recreated and copied in order to proceed.

Normally, only returns True on SQLite when operations other than add_column are present.
-
start_migrations
()¶

A hook called when EnvironmentContext.run_migrations() is called.

Implementations can set up per-migration-run state here.
-
static_output
(text)¶
-
transactional_ddl
= False¶
-
-
class
alembic.ddl.impl.
ImplMeta
(classname, bases, dict_)¶
MySQL¶
-
class
alembic.ddl.mysql.
MySQLAlterDefault
(name, column_name, default, schema=None)¶ Bases:
alembic.ddl.base.AlterColumn
-
class
alembic.ddl.mysql.
MySQLChangeColumn
(name, column_name, schema=None, newname=None, type_=None, nullable=None, default=False, autoincrement=None)¶ Bases:
alembic.ddl.base.AlterColumn
-
class
alembic.ddl.mysql.
MySQLImpl
(dialect, connection, as_sql, transactional_ddl, output_buffer, context_opts)¶ Bases:
alembic.ddl.impl.DefaultImpl
-
alter_column
(table_name, column_name, nullable=None, server_default=False, name=None, type_=None, schema=None, existing_type=None, existing_server_default=None, existing_nullable=None, autoincrement=None, existing_autoincrement=None, **kw)¶
-
compare_server_default
(inspector_column, metadata_column, rendered_metadata_default, rendered_inspector_default)¶
-
correct_for_autogen_constraints
(conn_unique_constraints, conn_indexes, metadata_unique_constraints, metadata_indexes)¶
-
correct_for_autogen_foreignkeys
(conn_fks, metadata_fks)¶
-
transactional_ddl
= False¶
-
-
class
alembic.ddl.mysql.
MySQLModifyColumn
(name, column_name, schema=None, newname=None, type_=None, nullable=None, default=False, autoincrement=None)¶
MS-SQL¶
-
class
alembic.ddl.mssql.
MSSQLImpl
(*arg, **kw)¶ Bases:
alembic.ddl.impl.DefaultImpl
-
alter_column
(table_name, column_name, nullable=None, server_default=False, name=None, type_=None, schema=None, existing_type=None, existing_server_default=None, existing_nullable=None, **kw)¶
-
batch_separator
= 'GO'¶
-
bulk_insert
(table, rows, **kw)¶
-
drop_column
(table_name, column, **kw)¶
-
emit_begin
()¶
-
emit_commit
()¶
-
transactional_ddl
= True¶
-
-
alembic.ddl.mssql.
mssql_add_column
(compiler, column, **kw)¶
-
alembic.ddl.mssql.
visit_add_column
(element, compiler, **kw)¶
-
alembic.ddl.mssql.
visit_column_default
(element, compiler, **kw)¶
-
alembic.ddl.mssql.
visit_column_nullable
(element, compiler, **kw)¶
-
alembic.ddl.mssql.
visit_column_type
(element, compiler, **kw)¶
-
alembic.ddl.mssql.
visit_rename_column
(element, compiler, **kw)¶
-
alembic.ddl.mssql.
visit_rename_table
(element, compiler, **kw)¶
Postgresql¶
-
class
alembic.ddl.postgresql.
PostgresqlImpl
(dialect, connection, as_sql, transactional_ddl, output_buffer, context_opts)¶ Bases:
alembic.ddl.impl.DefaultImpl
-
autogen_column_reflect
(inspector, table, column_info)¶
-
compare_server_default
(inspector_column, metadata_column, rendered_metadata_default, rendered_inspector_default)¶
-
correct_for_autogen_constraints
(conn_unique_constraints, conn_indexes, metadata_unique_constraints, metadata_indexes)¶
-
prep_table_for_batch
(table)¶
-
transactional_ddl
= True¶
-
-
alembic.ddl.postgresql.
visit_rename_table
(element, compiler, **kw)¶
SQLite¶
-
class
alembic.ddl.sqlite.
SQLiteImpl
(dialect, connection, as_sql, transactional_ddl, output_buffer, context_opts)¶ Bases:
alembic.ddl.impl.DefaultImpl
-
add_constraint
(const)¶
-
compare_server_default
(inspector_column, metadata_column, rendered_metadata_default, rendered_inspector_default)¶
-
correct_for_autogen_constraints
(conn_unique_constraints, conn_indexes, metadata_unique_constraints, metadata_indexes)¶
-
drop_constraint
(const)¶
-
requires_recreate_in_batch
(batch_op)¶

Return True if the given BatchOperationsImpl would need the table to be recreated and copied in order to proceed.

Normally, only returns True on SQLite when operations other than add_column are present.
-
transactional_ddl
= False¶ SQLite supports transactional DDL, but pysqlite does not: see: http://bugs.python.org/issue10740
-
Changelog¶
0.8.4¶
no release date

bug¶
[bug] [batch] Fixed bug where the
server_default
parameter ofalter_column()
would not function correctly in batch mode.¶References: #338
[bug] [autogenerate] Adjusted the rendering for index expressions such that a
Column
object present in the sourceIndex
will not be rendered as table-qualified; e.g. the column name will be rendered alone. Table-qualified names here were failing on systems such as Postgresql.¶References: #337
0.8.3¶
Released: October 16, 2015

bug¶
[bug] [autogenerate] Fixed an 0.8 regression whereby the “imports” dictionary member of the autogen context was removed; this collection is documented in the “render custom type” documentation as a place to add new imports. The member is now known as
AutogenContext.imports
and the documentation is repaired.¶References: #332
[bug] [batch] Fixed bug in batch mode where a table that had pre-existing indexes would create the same index on the new table with the same name, which on SQLite produces a naming conflict as index names are in a global namespace on that backend. Batch mode now defers the production of both existing and new indexes until after the entire table transfer operation is complete, which also means those indexes no longer take effect during the INSERT from SELECT section as well; the indexes are applied in a single step afterwards.¶
References: #333
[bug] [tests] Added “pytest-xdist” as a tox dependency, so that the -n flag in the test command works if this is not already installed. Pull request courtesy Julien Danjou.¶
References: pull request bitbucket:47
[bug] [postgresql] [autogenerate] Fixed issue in PG server default comparison where model-side defaults configured with Python unicode literals would leak the “u” character from a
repr()
into the SQL used for comparison, creating an invalid SQL expression, as the server-side comparison feature in PG currently repurposes the autogenerate Python rendering feature to get a quoted version of a plain string default.¶References: #324
0.8.2¶
Released: August 25, 2015

bug¶
[bug] [autogenerate] Added workaround in new foreign key option detection feature for MySQL’s consideration of the “RESTRICT” option being the default, for which no value is reported from the database; the MySQL impl now corrects for when the model reports RESTRICT but the database reports nothing. A similar rule is in the default FK comparison to accommodate for the default “NO ACTION” setting being present in the model but not necessarily reported by the database, or vice versa.¶
References: #321
0.8.1¶
Released: August 22, 2015

feature¶
[feature] [autogenerate] A custom
EnvironmentContext.configure.process_revision_directives
hook can now generate op directives within theUpgradeOps
andDowngradeOps
containers that will be generated as Python code even when the--autogenerate
flag is False; provided thatrevision_environment=True
, the full render operation will be run even in “offline” mode.¶[feature] [autogenerate] Implemented support for autogenerate detection of changes in the
ondelete
,onupdate
,initially
anddeferrable
attributes ofForeignKeyConstraint
objects on SQLAlchemy backends that support these on reflection (as of SQLAlchemy 1.0.8 currently Postgresql for all four, MySQL forondelete
andonupdate
only). A constraint object that modifies these values will be reported as a “diff” and come out as a drop/create of the constraint with the modified values. The fields are ignored for backends which don’t reflect these attributes (as of SQLA 1.0.8 this includes SQLite, Oracle, SQL Server, others).¶References: #317
bug¶
[bug] [autogenerate] Repaired the render operation for the
ops.AlterColumnOp
object to succeed when the “existing_type” field was not present.¶

[bug] [autogenerate] Fixed a regression in 0.8 whereby the “multidb” environment template failed to produce independent migration script segments for the output template. This was due to the reorganization of the script rendering system for 0.8. To accommodate this change, the
MigrationScript
structure will in the case of multiple calls toMigrationContext.run_migrations()
produce lists for theMigrationScript.upgrade_ops
andMigrationScript.downgrade_ops
attributes; eachUpgradeOps
andDowngradeOps
instance keeps track of its ownupgrade_token
anddowngrade_token
, and each are rendered individually.See also
Revision Generation with Multiple Engines / run_migrations() calls - additional detail on the workings of the
EnvironmentContext.configure.process_revision_directives
parameter when multiple calls toMigrationContext.run_migrations()
are made.References: #318
0.8.0¶
Released: August 12, 2015

feature¶
[feature] [commands] Added new command
alembic edit
. This command takes the same arguments asalembic show
, however runs the target script file within $EDITOR. Makes use of thepython-editor
library in order to facilitate the handling of $EDITOR with reasonable default behaviors across platforms. Pull request courtesy Michel Albert.¶References: pull request bitbucket:46
[feature] [commands] Added new multiple-capable argument
--depends-on
to thealembic revision
command, allowingdepends_on
to be established at the command line level rather than having to edit the file after the fact.depends_on
identifiers may also be specified as branch names at the command line or directly within the migration file. The values may be specified as partial revision numbers from the command line which will be resolved to full revision numbers in the output file.¶References: #311
[feature] [tests] The default test runner via “python setup.py test” is now py.test. nose still works via run_tests.py.¶
[feature] [operations] The internal system for Alembic operations has been reworked to now build upon an extensible system of operation objects. New operations can be added to the
op.
namespace, including that they are available in custom autogenerate schemes.See also
References: #302
[feature] [autogenerate] The internal system for autogenerate has been reworked to build upon the extensible system of operation objects present in #302. As part of this change, autogenerate now produces a full object graph representing a list of migration scripts to be written as well as operation objects that will render all the Python code within them; a new hook
EnvironmentContext.configure.process_revision_directives
allows end-user code to fully customize what autogenerate will do, including not just full manipulation of the Python steps to take but also what file or files will be written and where. Additionally, autogenerate is now extensible as far as database objects compared and rendered into scripts; any new operation directive can also be registered into a series of hooks that allow custom database/model comparison functions to run as well as to render new operation directives into autogenerate scripts.See also
bug¶
[bug] [batch] Fixed bug in batch mode where the
batch_op.create_foreign_key()
directive would be incorrectly rendered with the source table and schema names in the argument list.¶References: #315
[bug] [versioning] Fixed bug where in the erroneous case that alembic_version contains duplicate revisions, some commands would fail to process the version history correctly and end up with a KeyError. The fix allows the versioning logic to proceed, however a clear error is emitted later when attempting to update the alembic_version table.¶
References: #314
misc¶
[operations] [change] A range of positional argument names have been changed to be clearer and more consistent across methods within the
Operations
namespace. The most prevalent form of name change is that the descriptive namesconstraint_name
andtable_name
are now used where previously the namename
would be used. This is in support of the newly modularized and extensible system of operation objects inalembic.operations.ops
. An argument translation layer is in place across thealembic.op
namespace that will ensure that named argument calling styles that use the old names will continue to function by transparently translating to the new names, also emitting a warning. This, along with the fact that these arguments are positional in any case and aren’t normally passed with an explicit name, should ensure that the overwhelming majority of applications should be unaffected by this change. The only applications that are impacted are those that:- use the
Operations
object directly in some way, rather than calling upon thealembic.op
namespace, and - invoke the methods on
Operations
using named keyword arguments for positional arguments liketable_name
,constraint_name
, etc., which commonly were namedname
as of 0.7.6. - any application that is using named keyword arguments in place
of positional argument for the recently added
BatchOperations
object may also be affected.
The naming changes are documented as “versionchanged” for 0.8.0:
BatchOperations.create_check_constraint()
BatchOperations.create_foreign_key()
BatchOperations.create_index()
BatchOperations.create_unique_constraint()
BatchOperations.drop_constraint()
BatchOperations.drop_index()
Operations.create_check_constraint()
Operations.create_foreign_key()
Operations.create_primary_key()
Operations.create_index()
Operations.create_table()
Operations.create_unique_constraint()
Operations.drop_constraint()
Operations.drop_index()
Operations.drop_table()
- use the
0.7.7¶
Released: July 22, 2015

feature¶
[feature] [batch] Implemented support for
BatchOperations.create_primary_key()
andBatchOperations.create_check_constraint()
. Additionally, table keyword arguments are copied from the original reflected table, such as the “mysql_engine” keyword argument.¶References: #305
bug¶
[bug] [versioning] Fixed critical issue where a complex series of branches/merges would bog down the iteration algorithm working over redundant nodes for millions of cycles. An internal adjustment has been made so that duplicate nodes are skipped within this iteration.¶
References: #310
[bug] [environment] The
MigrationContext.stamp()
method, added as part of the versioning refactor in 0.7 as a more granular version ofcommand.stamp()
, now includes the “create the alembic_version table if not present” step in the same way as the command version, which was previously omitted.¶References: #300
[bug] [autogenerate] Fixed bug where foreign key options including “onupdate”, “ondelete” would not render within the
op.create_foreign_key()
directive, even though they render within a fullForeignKeyConstraint
directive.¶References: #298
[bug] [tests] Repaired warnings that occur when running unit tests against SQLAlchemy 1.0.5 or greater involving the “legacy_schema_aliasing” flag.¶
0.7.6¶
Released: May 5, 2015

feature¶
[feature] [versioning] Fixed bug where the case of multiple mergepoints that all have the identical set of ancestor revisions would fail to be upgradable, producing an assertion failure. Merge points were previously assumed to always require at least an UPDATE in alembic_revision from one of the previous revs to the new one, however in this case, if one of the mergepoints has already been reached, the remaining mergepoints have no row to UPDATE therefore they must do an INSERT of their target version.¶
References: #297
[feature] [autogenerate] Added support for type comparison functions to be not just per environment, but also present on the custom types themselves, by supplying a method
compare_against_backend
. Added a new documentation section Comparing Types describing type comparison fully.¶References: #296
[feature] [operations] Added a new option
EnvironmentContext.configure.literal_binds
, which will pass theliteral_binds
flag into the compilation of SQL constructs when using “offline” mode. This has the effect that SQL objects like inserts, updates, deletes as well as textual statements sent usingtext()
will be compiled such that the dialect will attempt to render literal values “inline” automatically. Only a subset of types is typically supported; theOperations.inline_literal()
construct remains as the construct used to force a specific literal representation of a value. TheEnvironmentContext.configure.literal_binds
flag is added to the “offline” section of theenv.py
files generated in new environments.¶References: #255
bug¶
[bug] [batch] Fully implemented the
copy_from
parameter for batch mode, which previously was not functioning. This allows “batch mode” to be usable in conjunction with--sql
.¶References: #289
[bug] [batch] Repaired support for the
BatchOperations.create_index()
directive, which was mis-named internally such that the operation within a batch context could not proceed. The create index operation will proceed as part of a larger “batch table recreate” operation only ifrecreate
is set to “always”, or if the batch operation includes other instructions that require a table recreate.¶References: #287
0.7.5¶
Released: March 19, 2015

feature¶
[feature] [commands] Added a new feature
¶Config.attributes
, to help with the use case of sharing state such as engines and connections on the outside with a series of Alembic API calls; also added a new cookbook section to describe this simple but pretty important use case.[feature] [environment] The format of the default
env.py
script has been refined a bit; it now uses context managers not only for the scope of the transaction, but also for connectivity from the starting engine. The engine is also now called a “connectable” in support of the use case of an external connection being passed in.¶[feature] [versioning] Added support for “alembic stamp” to work when given “heads” as an argument, when multiple heads are present.¶
References: #267
bug¶
[bug] [autogenerate] The
--autogenerate
option is not valid when used in conjunction with “offline” mode, e.g.--sql
. This now raises aCommandError
, rather than failing more deeply later on. Pull request courtesy Johannes Erdfelt.¶References: #266, pull request bitbucket:39
[bug] [operations] [mssql] Fixed bug where the mssql DROP COLUMN directive failed to include modifiers such as “schema” when emitting the DDL.¶
References: #284
[bug] [postgresql] [autogenerate] Postgresql “functional” indexes are necessarily skipped from the autogenerate process, as the SQLAlchemy backend currently does not support reflection of these structures. A warning is emitted both from the SQLAlchemy backend as well as from the Alembic backend for Postgresql when such an index is detected.¶
References: #282
[bug] [autogenerate] [mysql] Fixed bug where MySQL backend would report dropped unique indexes and/or constraints as both at the same time. This is because MySQL doesn’t actually have a “unique constraint” construct that reports differently than a “unique index”, so it is present in both lists. The net effect though is that the MySQL backend will report a dropped unique index/constraint as an index in cases where the object was first created as a unique constraint, if no other information is available to make the decision. This differs from other backends like Postgresql which can report on unique constraints and unique indexes separately.¶
References: #276
[bug] [commands] Fixed bug where using a partial revision identifier as the “starting revision” in
--sql
mode in a downgrade operation would fail to resolve properly.As a side effect of this change, the
¶EnvironmentContext.get_starting_revision_argument()
method will return the “starting” revision in its originally- given “partial” form in all cases, whereas previously when running within thecommand.stamp()
command, it would have been resolved to a full number before passing it to theEnvironmentContext
. The resolution of this value to a real revision number has basically been moved to a more fundamental level within the offline migration process.References: #269
0.7.4¶
Released: January 12, 2015

bug¶
[bug] [postgresql] [autogenerate] Repaired issue where a server default specified without
text()
that represented a numeric or floating point (e.g. with decimal places) value would fail in the Postgresql-specific check for “compare server default”; as PG accepts the value with quotes in the table specification, it’s still valid. Pull request courtesy Dimitris Theodorou.¶References: #241, pull request bitbucket:37
[bug] [autogenerate] The rendering of a
ForeignKeyConstraint
will now ensure that the names of the source and target columns are the database-side name of each column, and not the value of the.key
attribute as may be set only on the Python side. This is because Alembic generates the DDL for constraints as standalone objects without the need to actually refer to an in-PythonTable
object, so there’s no step that would resolve these Python-only key names to database column names.¶References: #259
[bug] [autogenerate] Fixed bug in foreign key autogenerate where if the in-Python table used custom column keys (e.g. using the
key='foo'
kwarg toColumn
), the comparison of existing foreign keys to those specified in the metadata would fail, as the reflected table would not have these keys available which to match up. Foreign key comparison for autogenerate now ensures it’s looking at the database-side names of the columns in all cases; this matches the same functionality within unique constraints and indexes.¶References: #260
[bug] [autogenerate] Fixed issue in autogenerate type rendering where types that belong to modules that have the name “sqlalchemy” in them would be mistaken as being part of the
sqlalchemy.
namespace. Pull req courtesy Bartosz Burclaf.¶References: #261, pull request github:17
0.7.3¶
Released: December 30, 2014

0.7.2¶
Released: December 18, 2014

bug¶
[bug] [sqlite] [autogenerate] Adjusted the SQLite backend regarding autogen of unique constraints to work fully with the current SQLAlchemy 1.0, which now will report on UNIQUE constraints that have no name.¶
[bug] [batch] Fixed bug in batch where if the target table contained multiple foreign keys to the same target table, the batch mechanics would fail with a “table already exists” error. Thanks for the help on this from Lucas Kahlert.¶
References: #254
[bug] [mysql] Fixed an issue where the MySQL routine to skip foreign-key-implicit indexes would also catch unnamed unique indexes, as they would be named after the column and look like the FK indexes. Pull request courtesy Johannes Erdfelt.¶
References: #251, pull request bitbucket:35
[bug] [oracle] [mssql] Repaired a regression in both the MSSQL and Oracle dialects whereby the overridden
_exec()
method failed to return a value, as is needed now in the 0.7 series.¶References: #253
0.7.1¶
Released: December 3, 2014

feature¶
[feature] [autogenerate] Support for autogenerate of FOREIGN KEY constraints has been added. These are delivered within the autogenerate process in the same manner as UNIQUE constraints, including
include_object
support. Big thanks to Ann Kamyshnikova for doing the heavy lifting here.¶References: #178, pull request bitbucket:32
[feature] [batch] Added
¶naming_convention
argument toOperations.batch_alter_table()
, as this is necessary in order to drop foreign key constraints; these are often unnamed on the target database, and in the case that they are named, SQLAlchemy is as of the 0.9 series not including these names yet.[feature] [batch] Added two new arguments
Operations.batch_alter_table.reflect_args
andOperations.batch_alter_table.reflect_kwargs
, so that arguments may be passed directly to suit theTable
object that will be reflected.See also
bug¶
[bug] [batch] The
render_as_batch
flag was inadvertently hardcoded toTrue
, so all autogenerates were spitting out batch mode... this has been fixed so that batch mode again takes effect only when selected in env.py.¶[bug] [batch] Fixed bug where the “source_schema” argument was not correctly passed when calling
BatchOperations.create_foreign_key()
. Pull request courtesy Malte Marquarding.¶References: pull request bitbucket:34
[bug] [batch] Repaired the inspection, copying and rendering of CHECK constraints and so-called “schema” types such as Boolean, Enum within the batch copy system; the CHECK constraint will not be “doubled” when the table is copied, and additionally the inspection of the CHECK constraint for its member columns will no longer fail with an attribute error.¶
References: #249
0.7.0¶
Released: November 24, 2014

changed¶
[changed] [commands] The
--head_only
option to thealembic current
command is deprecated; thecurrent
command now lists just the version numbers alone by default; use--verbose
to get at additional output.¶[changed] [autogenerate] The default value of the
EnvironmentContext.configure.user_module_prefix
parameter is no longer the same as the SQLAlchemy prefix. When omitted, user-defined types will now use the__module__
attribute of the type class itself when rendering in an autogenerated module.¶References: #229
[changed] [compatibility] Minimum SQLAlchemy version is now 0.7.6, however at least 0.8.4 is strongly recommended. The overhaul of the test suite allows for fully passing tests on all SQLAlchemy versions from 0.7.6 on forward.¶
feature¶
[feature] [versioning] The “multiple heads / branches” feature has now landed. This is by far the most significant change Alembic has seen since its inception; while the workflow of most commands hasn’t changed, and the format of version files and the
alembic_version
table are unchanged as well, a new suite of features opens up in the case where multiple version files refer to the same parent, or to the “base”. Merging of branches, operating across distinct named heads, and multiple independent bases are now all supported. The feature incurs radical changes to the internals of versioning and traversal, and should be treated as “beta mode” for the next several subsequent releases within 0.7.See also
branches
References: #167
[feature] [versioning] In conjunction with support for multiple independent bases, the specific version directories are now also configurable to include multiple, user-defined directories. When multiple directories exist, the creation of a revision file with no down revision requires that the starting directory is indicated; the creation of subsequent revisions along that lineage will then automatically use that directory for new files.
¶References: #124
[feature] [operations] [sqlite] Added “move and copy” workflow, where a table to be altered is copied to a new one with the new structure and the old one dropped, is now implemented for SQLite as well as all database backends in general using the new
¶Operations.batch_alter_table()
system. This directive provides a table-specific operations context which gathers column- and constraint-level mutations specific to that table, and at the end of the context creates a new table combining the structure of the old one with the given changes, copies data from old table to new, and finally drops the old table, renaming the new one to the existing name. This is required for fully featured SQLite migrations, as SQLite has very little support for the traditional ALTER directive. The batch directive is intended to produce code that is still compatible with other databases, in that the “move and copy” process only occurs for SQLite by default, while still providing some level of sanity to SQLite’s requirement by allowing multiple table mutation operations to proceed within one “move and copy” as well as providing explicit control over when this operation actually occurs. The “move and copy” feature may be optionally applied to other backends as well, however dealing with referential integrity constraints from other tables must still be handled explicitly.References: #21
[feature] [commands] Relative revision identifiers as used with alembic upgrade, alembic downgrade and alembic history can be combined with specific revisions as well, e.g. alembic upgrade ae10+3, to produce a migration target relative to the given exact version.¶
[feature] [commands] New commands added: alembic show, alembic heads and alembic merge. Also, a new option --verbose has been added to several informational commands, such as alembic history, alembic current, alembic branches, and alembic heads. alembic revision also contains several new options used within the new branch management system. The output of commands has been altered in many cases to support new fields and attributes; the history command in particular now returns its “verbose” output only if --verbose is sent; without this flag it reverts to its older behavior of short line items (which was never changed in the docs).¶
[feature] [config] Added new argument Config.config_args, which allows a dictionary of replacement variables to be passed; these will serve as substitution values when an API-produced Config consumes the .ini file. Pull request courtesy Noufal Ibrahim.¶
References: pull request bitbucket:33
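A minimal sketch of how an API-produced Config might use this argument; the %(app_home)s token and the path shown are hypothetical:

from alembic.config import Config

# alembic.ini might contain, e.g.:
#   script_location = %(app_home)s/alembic
# config_args supplies the substitution value when the file is parsed
cfg = Config("alembic.ini", config_args={"app_home": "/opt/myapp"})
print(cfg.get_main_option("script_location"))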
[feature] [operations] The Table object is now returned when the Operations.create_table() method is used. This Table is suitable for use in subsequent SQL operations, in particular the Operations.bulk_insert() operation.¶
References: #205
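For instance, a migration might capture the returned Table and feed it straight to Operations.bulk_insert(); the table and data below are illustrative only:

import sqlalchemy as sa
from alembic import op

def upgrade():
    # create_table() now returns the Table it created
    accounts = op.create_table(
        "accounts",
        sa.Column("id", sa.Integer, primary_key=True),
        sa.Column("name", sa.String(50)),
    )
    # the returned Table can be passed directly to bulk_insert()
    op.bulk_insert(accounts, [{"id": 1, "name": "default"}])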
[feature] [autogenerate] Indexes and unique constraints are now included in the EnvironmentContext.configure.include_object hook. Indexes are sent with type "index" and unique constraints with type "unique_constraint".¶
References: #203
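A minimal sketch of a hook making use of the new type strings; the index name filtered here is hypothetical:

def include_object(object, name, type_, reflected, compare_to):
    # skip a hypothetical legacy index; everything else is included
    if type_ == "index" and name == "ix_legacy":
        return False
    return True

# passed to context.configure(include_object=include_object) in env.py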
[feature] SQLAlchemy’s testing infrastructure is now used to run tests. This system supports both nose and pytest and opens the way for Alembic testing to support any number of backends, parallel testing, and 3rd party dialect testing.¶
bug¶
[bug] [commands] The alembic revision command accepts the --sql option to suit some very obscure use case where the revision_environment flag is set up, so that env.py is run when alembic revision is run even though autogenerate isn’t specified. As this flag is otherwise confusing, error messages are now raised if alembic revision is invoked with both --sql and --autogenerate or with --sql without revision_environment being set.¶
References: #248
[bug] [postgresql] [autogenerate] Added a rule for Postgresql to not render a “drop unique” and “drop index” given the same name; for now it is assumed that the “index” is the implicit one Postgresql generates. Future integration with new SQLAlchemy 1.0 features will improve this to be more resilient.¶
References: #247
[bug] [autogenerate] Changed the ordering in which columns and constraints are dropped; autogenerate will now place the “drop constraint” calls before the “drop column” calls, so that columns involved in those constraints still exist when the constraint is dropped.¶
References: #247
[bug] [oracle] The Oracle dialect sets “transactional DDL” to False by default, as Oracle does not support transactional DDL.¶
References: #245
[bug] [autogenerate] Fixed a variety of issues surrounding rendering of Python code that contains unicode literals. The first is that the “quoted_name” construct that SQLAlchemy uses to represent table and column names as well as schema names does not repr() correctly on Py2K when the value contains unicode characters; therefore an explicit stringification is added to these. Additionally, SQL expressions such as server defaults were not being generated in a unicode-safe fashion leading to decode errors if server defaults contained non-ascii characters.¶
References: #243
[bug] [operations] The Operations.add_column() directive will now additionally emit the appropriate CREATE INDEX statement if the Column object specifies index=True. Pull request courtesy David Szotten.¶
References: #174, pull request bitbucket:29
[bug] [autogenerate] Bound parameters are now resolved as “literal” values within the SQL expression inside of a CheckConstraint(), when rendering the SQL as a text string; supported for SQLAlchemy 0.8.0 and forward.¶
References: #219
[bug] [autogenerate] Added a workaround for SQLAlchemy issue #3023 (fixed in 0.9.5) where a column that’s part of an explicit PrimaryKeyConstraint would not have its “nullable” flag set to False, thus producing a false autogenerate. Also added a related correction to MySQL which will correct for MySQL’s implicit server default of ‘0’ when a NULL integer column is turned into a primary key column.¶
References: #199
[bug] [autogenerate] [mysql] Repaired issue related to the fix for #208 and others; a composite foreign key reported by MySQL would cause a KeyError as Alembic attempted to remove MySQL’s implicitly generated indexes from the autogenerate list.¶
References: #240
[bug] [autogenerate] If the “alembic_version” table is present in the target metadata, autogenerate will skip this also. Pull request courtesy Dj Gilcrease.¶
References: #28
[bug] [autogenerate] The EnvironmentContext.configure.version_table and EnvironmentContext.configure.version_table_schema arguments are now honored during the autogenerate process, such that these names will be used as the “skip” names on both the database reflection and target metadata sides.¶
References: #77
[bug] [templates] Revision files are now written out using the 'wb' modifier to open(), since Mako reads the templates with 'rb', thus preventing CRs from being doubled up as has been observed on Windows. The encoding of the output now defaults to ‘utf-8’, which can be configured using a newly added config file parameter output_encoding.¶
References: #234
[bug] [operations] Added support for use of the quoted_name construct when using the schema argument within operations. This allows a name containing a dot to be fully quoted, as well as to provide configurable quoting on a per-name basis.¶
References: #230
[bug] [postgresql] [autogenerate] Added a routine by which the Postgresql Alembic dialect inspects the server default of INTEGER/BIGINT columns as they are reflected during autogenerate for the pattern nextval(<name>...) containing a potential sequence name, then queries pg_catalog to see if this sequence is “owned” by the column being reflected; if so, it assumes this is a SERIAL or BIGSERIAL column and the server default is omitted from the column reflection as well as any kind of server_default comparison or rendering, along with an INFO message in the logs indicating this has taken place. This allows SERIAL/BIGSERIAL columns to keep the SEQUENCE from being unnecessarily present within the autogenerate operation.¶
References: #73
[bug] [autogenerate] The system by which autogenerate renders expressions within an Index, the server_default of Column, and the existing_server_default of Operations.alter_column() has been overhauled to anticipate arbitrary SQLAlchemy SQL constructs, such as func.somefunction(), cast(), desc(), and others. The system does not, as might be preferred, render the full-blown Python expression as originally created within the application’s source code, as this would be exceedingly complex and difficult. Instead, it renders the SQL expression against the target backend that’s subject to the autogenerate, and then renders that SQL inside of a text() construct as a literal SQL string. This approach still has the downside that the rendered SQL construct may not be backend-agnostic in all cases, so there is still a need for manual intervention in that small number of cases, but overall the majority of cases should work correctly now. Big thanks to Carlos Rivera for pull requests and support on this.¶
[bug] [operations] The “match” keyword is not sent to ForeignKeyConstraint by Operations.create_foreign_key() when SQLAlchemy 0.7 is in use; this keyword was added to SQLAlchemy as of 0.8.0.¶
0.6.7¶
Released: September 9, 2014
feature¶
[feature] Added support for functional indexes when using the Operations.create_index() directive. Within the list of columns, the SQLAlchemy text() construct can be sent, embedding a literal SQL expression; Operations.create_index() will perform some hackery behind the scenes to get the Index construct to cooperate. This works around some current limitations in Index which should be resolved on the SQLAlchemy side at some point.¶
References: #222
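A short sketch of the usage described above; the index, table and column names are hypothetical:

import sqlalchemy as sa
from alembic import op

def upgrade():
    # a text() element in the column list indexes a SQL expression
    op.create_index("ix_user_lower_email", "user", [sa.text("lower(email)")])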
bug¶
[bug] [mssql] Fixed bug in MSSQL dialect where “rename table” wasn’t using sp_rename() as is required on SQL Server. Pull request courtesy Łukasz Bołdys.¶
References: pull request bitbucket:26
0.6.6¶
Released: August 7, 2014
feature¶
[feature] Added a new accessor MigrationContext.config; when used in conjunction with an EnvironmentContext and Config, this config will be returned. Patch courtesy Marc Abramowitz.¶
References: pull request github:10
bug¶
[bug] A file named __init__.py in the versions/ directory is now ignored by Alembic when the collection of version files is retrieved. Pull request courtesy Michael Floering.¶
References: #95, pull request bitbucket:24
[bug] Fixed Py3K bug where an attempt would be made to sort None against string values when autogenerate would detect tables across multiple schemas, including the default schema. Pull request courtesy paradoxxxzero.¶
References: pull request bitbucket:23
[bug] Autogenerate render will render the arguments within a Table construct using *[...] when the number of columns/elements is greater than 255. Pull request courtesy Ryan P. Kelly.¶
References: pull request github:15
[bug] Fixed bug where foreign key constraints would fail to render in autogenerate when a schema name was present. Pull request courtesy Andreas Zeidler.¶
References: pull request github:14
[bug] Some deep-in-the-weeds fixes to try to get “server default” comparison working better across platforms and expressions, in particular on the Postgresql backend, mostly dealing with quoting/not quoting of various expressions at the appropriate time and on a per-backend basis. Repaired and tested support for such defaults as Postgresql interval and array defaults.¶
References: #212
[bug] Liberalized even more the check for MySQL indexes that shouldn’t be counted in autogenerate as “drops”; this time it’s been reported that an implicitly created index might be named the same as a composite foreign key constraint, and not the actual columns, so we now skip those when detected as well.¶
References: #208
0.6.5¶
Released: May 3, 2014
feature¶
[feature] [environment] Added new feature EnvironmentContext.configure.transaction_per_migration, which when True causes the BEGIN/COMMIT pair to incur for each migration individually, rather than for the whole series of migrations. This is to assist with some database directives that need to be within individual transactions, without the need to disable transactional DDL entirely.¶
References: #201
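A minimal env.py sketch, assuming the usual connectable and target_metadata variables from the standard template:

from alembic import context

# inside run_migrations_online() in env.py
with connectable.connect() as connection:
    context.configure(
        connection=connection,
        target_metadata=target_metadata,
        # each migration script runs within its own BEGIN/COMMIT
        transaction_per_migration=True,
    )
    with context.begin_transaction():
        context.run_migrations()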
bug¶
[bug] [autogenerate] [mysql] Fixed this release’s “autogenerate index detection” bug: when a MySQL table includes an Index with the same name as a column, autogenerate reported it as an “add” even though it’s not; this is because we ignore reflected indexes of this nature due to MySQL creating them implicitly. Indexes that are named the same as a column are now ignored on MySQL if we see that the backend is reporting that it already exists; this indicates that we can still detect additions of these indexes but not drops, as we cannot distinguish a backend index same-named as the column as one that is user-generated or MySQL-generated.¶
References: #202
[bug] [autogenerate] Fixed bug where the include_object() filter would not receive the original Column object when evaluating a database-only column to be dropped; the object would not include the parent Table nor other aspects of the column that are important for generating the “downgrade” case where the column is recreated.¶
References: #200
[bug] [environment] Fixed bug where EnvironmentContext.get_x_argument() would fail if the Config in use didn’t actually originate from a command line call.¶
References: #195
[bug] [autogenerate] Fixed another bug regarding naming conventions, continuing from #183, where add_index() / drop_index() directives would not correctly render the f() construct when the index contained a convention-driven name.¶
References: #194
0.6.4¶
Released: March 28, 2014
feature¶
[feature] The command.revision() command now returns the Script object corresponding to the newly generated revision. From this structure, one can get the revision id, the module documentation, and everything else, for use in scripts that call upon this command. Pull request courtesy Robbie Coomber.¶
References: pull request bitbucket:20
bug¶
[bug] [mssql] Added quoting to the table name when the special EXEC is run to drop any existing server defaults or constraints when the drop_column.mssql_drop_check or drop_column.mssql_drop_default arguments are used.¶
References: #186
[bug] [mysql] Added/fixed support for MySQL “SET DEFAULT” / “DROP DEFAULT” phrases, which will now be rendered if only the server default is changing or being dropped (e.g. specify None to alter_column() to indicate “DROP DEFAULT”). Also added support for rendering MODIFY rather than CHANGE when the column name isn’t changing.¶
References: #103
[bug] Added support for the initially and match keyword arguments, as well as dialect-specific keyword arguments, to Operations.create_foreign_key().¶
References: #190
[feature] Altered the support for “sourceless” migration files (e.g. only .pyc or .pyo present) so that the flag “sourceless=true” needs to be in alembic.ini for this behavior to take effect.¶
References: #163
[bug] [mssql] The feature that keeps on giving, index/unique constraint autogenerate detection, has even more fixes, this time to accommodate database dialects that both don’t yet report on unique constraints, but the backend does report unique constraints as indexes. The logic Alembic uses to distinguish between “this is an index!” vs. “this is a unique constraint that is also reported as an index!” has now been further enhanced to not produce unwanted migrations when the dialect is observed to not yet implement get_unique_constraints() (e.g. mssql). Note that such a backend will no longer report index drops for unique indexes, as these cannot be distinguished from an unreported unique index.¶
References: #185
[bug] Extensive changes have been made to more fully support SQLAlchemy’s new naming conventions feature. Note that while SQLAlchemy has added this feature as of 0.9.2, some additional fixes in 0.9.4 are needed to resolve some of the issues:
- The Operations object now takes into account the naming conventions that are present on the MetaData object that’s associated using target_metadata. When Operations renders a constraint directive like ADD CONSTRAINT, it now will make use of this naming convention when it produces its own temporary MetaData object.
- Note however that the autogenerate feature in most cases generates constraints like foreign keys and unique constraints with the final names intact; the only exceptions are the constraints implicit with a schema-type like Boolean or Enum. In most of these cases, the naming convention feature will not take effect for these constraints and will instead use the given name as is, with one exception....
- Naming conventions which use the "%(constraint_name)s" token, that is, produce a new name that uses the original name as a component, will still be pulled into the naming convention converter and be converted. The problem arises when autogenerate renders a constraint with its already-generated name present in the migration file’s source code; the name would be doubled up at render time due to the combination of #1 and #2. So to work around this, autogenerate now renders these already-tokenized names using the new Operations.f() component, as in the sketch following this entry. This component is only generated if SQLAlchemy 0.9.4 or greater is in use.
Therefore it is highly recommended that an upgrade to Alembic 0.6.4 be accompanied by an upgrade of SQLAlchemy 0.9.4, if the new naming conventions feature is used.¶
References: #183
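A brief sketch of the Operations.f() usage described above; the constraint and table names are illustrative:

from alembic import op

def upgrade():
    # op.f() marks the name as already final, so the naming convention
    # will not be applied to it a second time when DDL is rendered
    op.create_unique_constraint(op.f("uq_user_name"), "user", ["name"])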
[bug] Suppressed IOErrors which can raise when program output pipe is closed under a program like head; however this only works on Python 2. On Python 3, there is not yet a known way to suppress the BrokenPipeError warnings without prematurely terminating the program via signals.¶
References: #160
[bug] Fixed bug where Operations.bulk_insert() would not function properly when Operations.inline_literal() values were used, either in --sql or non-sql mode. The values will now render directly in --sql mode. For compatibility with “online” mode, a new flag multiinsert can be set to False which will cause each parameter set to be compiled and executed with individual INSERT statements.¶
References: #179
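A minimal sketch of the new flag in use, with an illustrative table definition and an inline_literal() value:

import sqlalchemy as sa
from sqlalchemy.sql import column, table
from alembic import op

# an illustrative lightweight table construct
accounts = table(
    "accounts",
    column("id", sa.Integer),
    column("created", sa.String),
)

def upgrade():
    # multiinsert=False compiles one INSERT per row, as needed when
    # rows contain inline_literal() values in "online" mode
    op.bulk_insert(
        accounts,
        [{"id": 1, "created": op.inline_literal("now()")}],
        multiinsert=False,
    )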
[bug] [py3k] Fixed a failure of the system that allows “legacy keyword arguments” to be understood, which arose as of a change in Python 3.4 regarding decorators. A workaround is applied that allows the code to work across Python 3 versions.¶
References: #175
0.6.3¶
Released: February 2, 2014
feature¶
[feature] Added new argument EnvironmentContext.configure.user_module_prefix. This prefix is applied when autogenerate renders a user-defined type, which here is defined as any type that is from a module outside of the sqlalchemy. hierarchy. This prefix defaults to None, in which case the EnvironmentContext.configure.sqlalchemy_module_prefix is used, thus preserving the current behavior.¶
References: #171
[feature] The ScriptDirectory system that loads migration files from a versions/ directory now supports so-called “sourceless” operation, where the .py files are not present and instead .pyc or .pyo files are directly present where the .py files should be. Note that while Python 3.3 has a new system of locating .pyc/.pyo files within a directory called __pycache__ (e.g. PEP-3147), PEP-3147 maintains support for the “source-less imports” use case, where the .pyc/.pyo files are present in the “old” location, e.g. next to the .py file; this is the usage that’s supported even when running Python 3.3.¶
References: #163
bug¶
[bug] Added a workaround for when we call fcntl.ioctl() to get at TERMWIDTH; if the function returns zero, as is reported to occur in some pseudo-ttys, the message wrapping system is disabled in the same way as if ioctl() failed.¶
References: #172
[bug] Added support for autogenerate covering the use case where Table objects specified in the metadata have an explicit schema attribute whose name matches that of the connection’s default schema (e.g. “public” for Postgresql). Previously, it was assumed that “schema” was None when it matched the “default” schema; now the comparison adjusts for this.¶
References: #170
[bug] The compare_metadata() public API function now takes into account the settings for EnvironmentContext.configure.include_object, EnvironmentContext.configure.include_symbol, and EnvironmentContext.configure.include_schemas, in the same way that the --autogenerate command does. Pull request courtesy Roman Podoliaka.¶
References: pull request github:9
[bug] Calling bulk_insert() with an empty list will not emit any commands on the current connection. This was already the case with --sql mode, so is now the case with “online” mode.¶
References: #168
[bug] Enabled schema support for index and unique constraint autodetection; previously these were non-functional and could in some cases lead to attribute errors. Pull request courtesy Dimitris Theodorou.¶
References: pull request bitbucket:17
[bug] More fixes to index autodetection; indexes created with expressions like DESC or functional indexes will no longer cause AttributeError exceptions when attempting to compare the columns.¶
References: #164
0.6.2¶
Released: Fri Dec 27 2013
feature¶
[feature] [mssql] Added new argument mssql_drop_foreign_key to Operations.drop_column(). Like mssql_drop_default and mssql_drop_check, it will do an inline lookup for a single foreign key which applies to this column, and drop it. For a column with more than one FK, you’d still need to explicitly use Operations.drop_constraint() given the name, even though only MSSQL has this limitation in the first place.¶
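A short sketch of the new argument; the table and column names are hypothetical:

from alembic import op

def upgrade():
    # looks up and drops the single foreign key applying to this column
    # before the column itself is dropped (MSSQL only)
    op.drop_column("account", "owner_id", mssql_drop_foreign_key=True)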
bug¶
[bug] Autogenerate for op.create_table() will not include a PrimaryKeyConstraint() that has no columns.¶
[bug] Fixed bug in the not-internally-used ScriptDirectory.get_base() method which would fail if called on an empty versions directory.¶
[bug] An almost-rewrite of the new unique constraint/index autogenerate detection, to accommodate a variety of issues. The emphasis is on not generating false positives for those cases where no net change is present, as these errors are the ones that impact all autogenerate runs:
- Fixed an issue with unique constraint autogenerate detection where a named UniqueConstraint on both sides with column changes would render with the “add” operation before the “drop”, requiring the user to reverse the order manually.
- Corrected for MySQL’s apparent addition of an implicit index for a foreign key column, so that it doesn’t show up as “removed”. This required that the index/constraint autogen system query the dialect-specific implementation for special exceptions.
- Reworked the “dedupe” logic to accommodate MySQL’s bi-directional duplication of unique indexes as unique constraints, and unique constraints as unique indexes. Postgresql’s slightly different logic of duplicating unique constraints into unique indexes continues to be accommodated as well. Note that a unique index or unique constraint removal on a backend that duplicates these may show up as a distinct “remove_constraint()” / “remove_index()” pair, which may need to be corrected in the post-autogenerate if multiple backends are being supported.
- Added another dialect-specific exception to the SQLite backend when dealing with unnamed unique constraints, as the backend can’t currently report on constraints that were made with this technique, hence they’d come out as “added” on every run.
- The op.create_table() directive will be auto-generated with the UniqueConstraint objects inline, but will not double them up with a separate create_unique_constraint() call, which may have been occurring. Indexes still get rendered as distinct op.create_index() calls even when the corresponding table was created in the same script.
- The inline UniqueConstraint within op.create_table() includes all the options like deferrable, initially, etc. Previously these weren’t rendering.
References: #157
[bug] [mssql] The MSSQL backend will add the batch separator (e.g. "GO") in --sql mode after the final COMMIT statement, to ensure that statement is also processed in batch mode. Courtesy Derek Harland.¶
References: pull request bitbucket:13
0.6.1¶
Released: Wed Nov 27 2013
feature¶
[feature] Expanded the size of the “slug” generated by “revision” to 40 characters, which is also configurable by new field truncate_slug_length; and also split on the word rather than the character; courtesy Frozenball.¶
References: pull request bitbucket:12
[feature] Support for autogeneration detection and rendering of indexes and unique constraints has been added. The logic goes through some effort in order to differentiate between true unique constraints and unique indexes, where there are some quirks on backends like Postgresql. The effort here in producing the feature and tests is courtesy of IJL.¶
References: #107
bug¶
[bug] [mysql] Fixed bug where op.alter_column() in the MySQL dialect would fail to apply quotes to column names that had mixed casing or spaces.¶
References: #152
[bug] Fixed the output wrapping for Alembic message output, so that we either get the terminal width for “pretty printing” with indentation, or if not we just output the text as is; in any case the text won’t be wrapped too short.¶
References: #135
[bug] Fixes to Py3k in-place compatibility regarding output encoding and related; the use of the new io.* package introduced some incompatibilities on Py2k. These should be resolved, due to the introduction of new adapter types for translating from io.* to Py2k file types, StringIO types. Thanks to Javier Santacruz for help with this.¶
References: pull request bitbucket:9
[bug] Fixed py3k bug where the wrong form of next() was being called when using the list_templates command. Courtesy Chris Wilkes.¶
References: #145
[bug] Fixed bug introduced by new include_object argument where the inspected column would be misinterpreted when using a user-defined type comparison function, causing a KeyError or similar expression-related error. Fix courtesy Maarten van Schaik.¶
[bug] Added the “deferrable” keyword argument to op.create_foreign_key() so that DEFERRABLE constraint generation is supported; courtesy Pedro Romano.¶
[bug] Ensured that strings going to stdout go through an encode/decode phase, so that any non-ASCII characters get to the output stream correctly in both Py2k and Py3k. Also added source encoding detection using Mako’s parse_encoding() routine in Py2k so that the __doc__ of a non-ascii revision file can be treated as unicode in Py2k.¶
References: #137
0.6.0¶
Released: Fri July 19 2013
feature¶
[feature] Added new kw argument include_object to EnvironmentContext.configure(). This is a more flexible version of the include_symbol argument which allows filtering of columns as well as tables from the autogenerate process, and in the future will also work for types, constraints and other constructs. The fully constructed schema object is passed, including its name and type as well as a flag indicating if the object is from the local application metadata or is reflected.¶
References: #101
[feature] The output of the alembic history command is now expanded to show information about each change on multiple lines, including the full top message, resembling the formatting of git log.¶
[feature] Added alembic.config.Config.cmd_opts attribute, allows access to the argparse options passed to the alembic runner.¶
[feature] Added new command line argument -x, allows extra arguments to be appended to the command line which can be consumed within an env.py script by looking at context.config.cmd_opts.x, or more simply a new method EnvironmentContext.get_x_argument().¶
References: #120
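For example, a value passed as -x tenant=acme on the command line (the key name is hypothetical) might be consumed within env.py roughly as follows:

# invoked e.g. as:  alembic -x tenant=acme upgrade head
from alembic import context

# as_dictionary=True parses each "key=value" pair into a dict
x_args = context.get_x_argument(as_dictionary=True)
tenant = x_args.get("tenant", "default")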
[feature] Added -r argument to alembic history command, allows specification of [start]:[end] to view a slice of history. Accepts revision numbers, symbols “base”, “head”, a new symbol “current” representing the current migration, as well as relative ranges for one side at a time (i.e. -r-5:head, -rcurrent:+3). Courtesy Atsushi Odagiri for this feature.¶
References: #55
bug¶
0.5.0¶
Released: Thu Apr 4 2013
Note
Alembic 0.5.0 now requires at least version 0.7.3 of SQLAlchemy to run properly. Support for 0.6 has been dropped.
feature¶
[feature] Added version_table_schema argument to EnvironmentContext.configure(), complements the version_table argument to set an optional remote schema for the version table. Courtesy Christian Blume.¶
References: #76
[feature] Added output_encoding option to EnvironmentContext.configure(), used with --sql mode to apply an encoding to the output stream.¶
References: #90
[feature] Added Operations.create_primary_key() operation, will generate an ADD CONSTRAINT for a primary key.¶
References: #93
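A brief sketch of the new operation; the constraint, table and column names are illustrative:

from alembic import op

def upgrade():
    # emits ALTER TABLE account ADD CONSTRAINT pk_account PRIMARY KEY (id)
    op.create_primary_key("pk_account", "account", ["id"])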
[feature] upgrade and downgrade commands will list the first line of the docstring out next to the version number. Courtesy Hong Minhee.¶
References: #115
[feature] Added --head-only option to “alembic current”, will print the current version plus the symbol “(head)” if this version is the head. Courtesy Charles-Axel Dein.¶
[feature] The rendering of any construct during autogenerate can be customized, in particular to allow special rendering for user-defined column and constraint subclasses, using the new render_item argument to EnvironmentContext.configure().¶
References: #108
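A minimal sketch of a render_item callable, using a hypothetical user-defined type:

import sqlalchemy as sa

class MyCustomType(sa.types.UserDefinedType):
    # stands in for a real application-defined type
    def get_col_spec(self):
        return "MYTYPE"

def render_item(type_, obj, autogen_context):
    # return a string to substitute for the default rendering,
    # or False to fall back to the default
    if type_ == "type" and isinstance(obj, MyCustomType):
        return "mypackage.MyCustomType()"
    return False

# passed to context.configure(render_item=render_item) in env.py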
bug¶
[bug] [postgresql] Fixed format of RENAME for table that includes schema with Postgresql; the schema name shouldn’t be in the “TO” field.¶
References: #32
[bug] [mssql] Fixed bug whereby double quoting would be applied to target column name during an sp_rename operation.¶
References: #109
[bug] [sqlite] [mysql] The transactional_ddl flag for the SQLite and MySQL dialects is set to False: MySQL doesn’t support it, and while SQLite does, the current pysqlite driver does not.¶
References: #112
[bug] Autogenerate will render additional table keyword arguments like “mysql_engine” and others within op.create_table().¶
References: #110
[bug] Fixed bug whereby create_index() would include in the constraint columns that are added to all Table objects using events, externally to the generation of the constraint. This is the same issue that was fixed for unique constraints in version 0.3.2.¶
[bug] Worked around a backwards-incompatible regression in Python3.3 regarding argparse; running “alembic” with no arguments now yields an informative error in py3.3 as with all previous versions. Courtesy Andrey Antukh.¶
[bug] A host of argument name changes within migration operations for consistency. Keyword arguments will continue to work on the old name for backwards compatibility, however required positional arguments will not (a brief sketch of the new spellings follows this list):
- Operations.alter_column() - name -> new_column_name - old name will work for backwards compatibility.
- Operations.create_index() - tablename -> table_name - argument is positional.
- Operations.drop_index() - tablename -> table_name - old name will work for backwards compatibility.
- Operations.drop_constraint() - tablename -> table_name - argument is positional.
- Operations.drop_constraint() - type -> type_ - old name will work for backwards compatibility.
References: #104
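A brief sketch of the new argument spellings listed above; the table, column and constraint names are hypothetical:

from alembic import op

def upgrade():
    op.alter_column("account", "name", new_column_name="account_name")
    op.drop_index("ix_account_name", table_name="account")
    op.drop_constraint("fk_account_owner", "account", type_="foreignkey")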
0.4.2¶
Released: Fri Jan 11 2013
feature¶
bug¶
[bug] [autogenerate] Fixed bug where autogenerate would fail if a Column to be added to a table made use of the “.key” parameter.¶
References: #99
[bug] [sqlite] The “implicit” constraint generated by a type such as Boolean or Enum will not generate an ALTER statement when run on SQLite, which does not support ALTER for the purpose of adding/removing constraints separate from the column def itself. While SQLite supports adding a CHECK constraint at the column level, SQLAlchemy would need modification to support this. A warning is emitted indicating this constraint cannot be added in this scenario.¶
References: #98
[bug] Added a workaround to setup.py to prevent “NoneType” error from occurring when “setup.py test” is run.¶
References: #96
[bug] Added an append_constraint() step to each condition within test_autogenerate:AutogenRenderTest.test_render_fk_constraint_kwarg if the SQLAlchemy version is less than 0.8, as ForeignKeyConstraint does not auto-append prior to 0.8.¶
References: #96
0.4.1¶
Released: Sun Dec 9 2012
feature¶
bug¶
[bug] Added support for autogenerate render of ForeignKeyConstraint options onupdate, ondelete, initially, and deferred.¶
References: #92
[bug] Autogenerate will include “autoincrement=False” in the rendered table metadata if this flag was set to false on the source Column object.¶
References: #94
[bug] Removed erroneous “emit_events” attribute from operations.create_table() documentation.¶
References: #81
[bug] Fixed the minute component in file_template which returned the month part of the create date.¶
0.4.0¶
Released: Mon Oct 01 2012
feature¶
[feature] Support for tables in alternate schemas has been added fully to all operations, as well as to the autogenerate feature. When using autogenerate, specifying the flag include_schemas=True to Environment.configure() will also cause autogenerate to scan all schemas located by Inspector.get_schema_names(), which is supported by some (but not all) SQLAlchemy dialects, including Postgresql. Enormous thanks to Bruno Binet for a huge effort in implementing as well as writing tests.¶
References: #33
[feature] The command line runner has been organized into a reusable CommandLine object, so that other front-ends can re-use the argument parsing built in.¶
References: #70
[feature] Added “stdout” option to Config, provides control over where the “print” output of commands like “history”, “init”, “current” etc. are sent.¶
References: #43
[feature] Added support for alteration of MySQL columns that have AUTO_INCREMENT, as well as enabling this flag. Courtesy Moriyoshi Koizumi.¶
bug¶
[bug] Fixed the “multidb” template which was badly out of date. It now generates revision files using the configuration to determine the different upgrade_<xyz>() methods needed as well, instead of needing to hardcode these. Huge thanks to BryceLohr for doing the heavy lifting here.¶
References: #71
[bug] Fixed the regexp that was checking for .py files in the version directory to allow any .py file through. Previously it was doing some kind of defensive checking, probably from some early notions of how this directory works, that was prohibiting various filename patterns such as those which begin with numbers.¶
References: #72
[bug] Fixed MySQL rendering for server_default which didn’t work if the server_default was a generated SQL expression. Courtesy Moriyoshi Koizumi.¶
0.3.6¶
Released: Wed Aug 15 2012
feature¶
[feature] Added include_symbol option to EnvironmentContext.configure(), specifies a callable which will include/exclude tables in their entirety from the autogeneration process based on name.¶
References: #27
[feature] Added year, month, day, hour, minute, second variables to file_template.¶
References: #59
[feature] Added ‘primary’ to the list of constraint types recognized for MySQL drop_constraint().¶
[feature] Added --sql argument to the “revision” command, for the use case where the “revision_environment” config option is being used but SQL access isn’t desired.¶
bug¶
[bug] Repaired create_foreign_key() for self-referential foreign keys, which weren’t working at all.¶
[bug] ‘alembic’ command reports an informative error message when the configuration is missing the ‘script_directory’ key.¶
References: #63
[bug] Fixes made to the constraints created/dropped alongside so-called “schema” types such as Boolean and Enum. The create/drop constraint logic does not kick in when using a dialect that doesn’t use constraints for these types, such as postgresql, even when existing_type is specified to alter_column(). Additionally, the constraints are not affected if existing_type is passed but type_ is not, i.e. there’s no net change in type.¶
References: #62
[bug] Improved error message when specifying non-ordered revision identifiers, to cover the case when the “higher” rev is None; improved message overall.¶
References: #66
0.3.5¶
Released: Sun Jul 08 2012
feature¶
[feature] Implemented SQL rendering for CheckConstraint() within autogenerate upgrade, including for literal SQL as well as SQL Expression Language expressions.¶
bug¶
[bug] Fixed issue whereby reflected server defaults wouldn’t be quoted correctly; uses repr() now.¶
References: #31
[bug] Fixed issue whereby when autogenerate would render create_table() on the upgrade side for a table that has a Boolean type, an unnecessary CheckConstraint() would be generated.¶
References: #58
0.3.4¶
Released: Sat Jun 02 2012
0.3.3¶
Released: Sat Jun 02 2012
feature¶
[feature] New config argument “revision_environment=true”, causes env.py to be run unconditionally when the “revision” command is run, to support script.py.mako templates with dependencies on custom “template_args”.¶
[feature] Added “template_args” option to configure() so that an env.py can add additional arguments to the template context when running the “revision” command. This requires either --autogenerate or the configuration directive “revision_environment=true”.¶
[feature] Added version_table argument to EnvironmentContext.configure(), allowing for the configuration of the version table name.¶
References: #34
[feature] Added support for “relative” migration identifiers, i.e. “alembic upgrade +2”, “alembic downgrade -1”. Courtesy Atsushi Odagiri for this feature.¶
bug¶
[bug] Added “type” argument to op.drop_constraint(), and implemented full constraint drop support for MySQL. CHECK and undefined raise an error. MySQL needs the constraint type in order to emit a DROP CONSTRAINT.¶
References: #44
[bug] Fixed bug whereby directories inside of the template directories, such as __pycache__ on Pypy, would mistakenly be interpreted as files which are part of the template.¶
References: #49
0.3.2¶
Released: Mon Apr 30 2012
feature¶
bug¶
[bug] Fixed support of schema-qualified ForeignKey target in column alter operations, courtesy Alexander Kolov.¶
[bug] Fixed bug whereby create_unique_constraint() would include in the constraint columns that are added to all Table objects using events, externally to the generation of the constraint.¶
0.3.1¶
Released: Sat Apr 07 2012
bug¶
[bug] bulk_insert() fixes:
- bulk_insert() operation was not working most likely since the 0.2 series when used with an engine.
- Repaired bulk_insert() to complete when used against a lower-case-t table and executing with only one set of parameters, working around SQLAlchemy bug #2461 in this regard.
- bulk_insert() uses “inline=True” so that phrases like RETURNING and such don’t get invoked for single-row bulk inserts.
- bulk_insert() will check that you’re passing a list of dictionaries in, raises TypeError if not detected.
References: #41
0.3.0¶
Released: Thu Apr 05 2012
feature¶
[feature] Added a bit of autogenerate to the public API in the form of the function alembic.autogenerate.compare_metadata.¶
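A minimal, self-contained sketch of the new public function; the engine URL and table are illustrative:

import sqlalchemy as sa
from alembic.autogenerate import compare_metadata
from alembic.migration import MigrationContext

engine = sa.create_engine("sqlite://")
metadata = sa.MetaData()
sa.Table("account", metadata, sa.Column("id", sa.Integer, primary_key=True))

with engine.connect() as conn:
    migration_ctx = MigrationContext.configure(conn)
    # returns a list of tuples describing candidate migration operations
    print(compare_metadata(migration_ctx, metadata))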
misc¶
[general] The focus of 0.3 is to clean up and more fully document the public API of Alembic, including better accessors on the MigrationContext and ScriptDirectory objects. Methods that are not considered to be public on these objects have been underscored, and methods which should be public have been cleaned up and documented, including:
- MigrationContext.get_current_revision()
- ScriptDirectory.iterate_revisions()
- ScriptDirectory.get_current_head()
- ScriptDirectory.get_heads()
- ScriptDirectory.get_base()
- ScriptDirectory.generate_revision()
0.2.2¶
Released: Mon Mar 12 2012
feature¶
[feature] Informative error message when op.XYZ directives are invoked at module import time.¶
[feature] Added execution_options parameter to op.execute(), will call execution_options() on the Connection before executing.
The immediate use case here is to allow access to the new no_parameters option in SQLAlchemy 0.7.6, which allows some DBAPIs (psycopg2, MySQLdb) to allow percent signs straight through without escaping, thus providing cross-compatible operation with DBAPI execution and static script generation.¶
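A short sketch of the parameter as used inside a migration script; the statement itself is illustrative:

from alembic import op

def upgrade():
    # execution_options() is applied to the Connection before executing;
    # no_parameters lets percent signs pass through unescaped
    op.execute(
        "UPDATE account SET name = 'x%y'",
        execution_options={"no_parameters": True},
    )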
[feature] script_location can be interpreted by pkg_resources.resource_filename(), if it is a non-absolute URI that contains colons. This scheme is the same one used by Pyramid.¶
References: #29
[feature] added missing support for onupdate/ondelete flags for ForeignKeyConstraint, courtesy Giacomo Bagnoli¶
bug¶
[bug] Fixed inappropriate direct call to util.err() and therefore sys.exit() when Config failed to locate the config file within library usage.¶
References: #35
[bug] Autogenerate will emit CREATE TABLE and DROP TABLE directives according to foreign key dependency order.¶
[bug] implement ‘tablename’ parameter on drop_index() as this is needed by some backends.¶
[bug] setup.py won’t install argparse if on Python 2.7/3.2¶
[bug] fixed a regression regarding an autogenerate error message, as well as various glitches in the Pylons sample template. The Pylons sample template now requires that you tell it where to get the Engine from. Courtesy Marcin Kuzminski.¶
References: #30
[bug] drop_index() ensures a dummy column is added when it calls “Index”, as SQLAlchemy 0.7.6 will warn on index with no column names.¶
0.2.1¶
Released: Tue Jan 31 2012
0.2.0¶
Released: Mon Jan 30 2012
feature¶
[feature] API rearrangement allows everything Alembic does to be represented by contextual objects, including EnvironmentContext, MigrationContext, and Operations. Other libraries and applications can now use things like “alembic.op” without relying upon global configuration variables. The rearrangement was done such that existing migrations should be OK, as long as they use the pattern of “from alembic import context” and “from alembic import op”, as these are now contextual objects, not modules.¶
References: #19
[feature] The naming of revision files can now be customized to be some combination of “rev id” and “slug”, the latter of which is based on the revision message. By default, the pattern “<rev>_<slug>” is used for new files. New script files should include the “revision” variable for this to work, which is part of the newer script.py.mako scripts.¶
References: #24
[feature] Can create alembic.config.Config with no filename, use set_main_option() to add values. Also added set_section_option() which will add sections.¶
References: #23
bug¶
[bug] env.py templates call connection.close() to better support programmatic usage of commands; use NullPool in conjunction with create_engine() as well so that no connection resources remain afterwards.¶
References: #25
[bug] fix the config.main() function to honor the arguments passed, remove no longer used “scripts/alembic” as setuptools creates this for us.¶
References: #22
[bug] Fixed alteration of column type on MSSQL to not include the keyword “TYPE”.¶
0.1.1¶
Released: Wed Jan 04 2012
feature¶
bug¶
[bug] Clean up file write operations so that file handles are closed.¶
[bug] Fix autogenerate so that “pass” is generated between the two comments if no net migrations were present.¶
[bug] Fix autogenerate bug that prevented correct reflection of a foreign-key referenced table in the list of “to remove”.¶
References: #16
[bug] Fix bug where create_table() didn’t handle self-referential foreign key correctly¶
References: #17
[bug] Default prefix for autogenerate directives is “op.”, matching the mako templates.¶
References: #18
[bug] fix quotes not being rendered in ForeignKeyConstraint during autogenerate¶
References: #14
0.1.0¶
Released: Wed Nov 30 2011
Initial release. Status of features:¶
Alembic is used in at least one production environment, but should still be considered ALPHA LEVEL SOFTWARE as of this release, particularly in that many features are expected to be missing / unimplemented. Major API changes are not anticipated but for the moment nothing should be assumed.
The author asks that you please report all issues, missing features, workarounds etc. to the bugtracker, at https://bitbucket.org/zzzeek/alembic/issues/new.¶
Python 3 is supported and has been tested.¶
The “Pylons” and “MultiDB” environment templates have not been directly tested - these should be considered to be samples to be modified as needed. Multiple database support itself is well tested, however.¶
Postgresql and MS SQL Server environments have been tested for several weeks in a production environment. In particular, some involved workarounds were implemented to allow fully-automated dropping of default- or constraint-holding columns with SQL Server.¶
MySQL support has also been implemented to a basic degree, including MySQL’s awkward style of modifying columns being accommodated.¶
Other database environments not included among those three have not been tested, at all. This includes Firebird, Oracle, Sybase. Adding support for these backends should be straightforward. Please report all missing/ incorrect behaviors to the bugtracker! Patches are welcome here but are optional - please just indicate the exact format expected by the target database.¶
SQLite, as a backend, has almost no support for schema alterations to existing databases. The author would strongly recommend that SQLite not be used in a migration context - just dump your SQLite database into an intermediary format, then dump it back into a new schema. For dev environments, the dev installer should be building the whole DB from scratch. Or just use Postgresql, which is a much better database for non-trivial schemas. Requests for full ALTER support on SQLite should be reported to SQLite’s bug tracker at http://www.sqlite.org/src/wiki?name=Bug+Reports, as Alembic will not be implementing the “rename the table to a temptable then copy the data into a new table” workaround. Note that Alembic will at some point offer an extensible API so that you can implement commands like this yourself.¶
Well-tested directives include add/drop table, add/drop column, including support for SQLAlchemy “schema” types which generate additional CHECK constraints, i.e. Boolean, Enum. Other directives not included here have not been strongly tested in production, i.e. rename table, etc.¶
Both “online” and “offline” migrations, the latter being generated SQL scripts to hand off to a DBA, have been strongly production tested against Postgresql and SQL Server.¶
Modify column type, default status, nullable, is functional and tested across PG, MSSQL, MySQL, but not yet widely tested in production usage.¶
Many migrations are still outright missing, i.e. create/add sequences, etc. As a workaround, execute() can be used for those which are missing, though posting of tickets for new features/missing behaviors is strongly encouraged.¶
The autogenerate feature is implemented and has been tested, though only a little bit in a production setting. In particular, detection of type and server default changes is optional and off by default; it can also be customized by a callable. Both features work but can have surprises, particularly the disparity between BIT/TINYINT and boolean, which hasn’t yet been worked around, as well as format changes performed by the database on defaults when it reports back. When enabled, the PG dialect will execute the two defaults to be compared to see if they are equivalent. Other backends may need to do the same thing.
The autogenerate feature only generates “candidate” commands which must be hand-tailored in any case, so is still a useful feature and is safe to use. Please report missing/broken features of autogenerate! This will be a great feature and will also improve SQLAlchemy’s reflection services.¶
Support for non-ASCII table, column and constraint names is mostly nonexistent. This is also a straightforward feature add as SQLAlchemy itself supports unicode identifiers; Alembic itself will likely need fixes to logging, column identification by key, etc. for full support here.¶