Connection configuration
Warehouse connections are configured through the ~/.whale/config/connections.yaml file. The accepted key/value pairs are warehouse-specific and, as such, are most easily added through the wh init workflow. If you need to edit this file manually, refer to the warehouse-specific documentation below.
Universal connection parameters
name
Unique warehouse name. This will be used to name the subdirectory within ~/.whale/metadata that stores metadata and UGC for each table.

metadata_source
The type of connection that this yaml section describes. These values are case-sensitive and can be one of the following:
- Bigquery
- Neo4j
- Presto
- Snowflake

database
Specify a string here to restrict scraping to a particular database under your connection. Specifying this modifies the SQLAlchemy connection string used for the connection, using this string as the "database" field (in ANSI SQL, this is known as the "catalog"). See the SQLAlchemy docs for more details.
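As an illustration, a minimal connections.yaml entry using only these universal parameters might look like the following (the name and database values are placeholders):

```yaml
# ~/.whale/config/connections.yaml
- name: my-warehouse          # Names the subdirectory under ~/.whale/metadata
  metadata_source: Snowflake  # Case-sensitive; one of the values listed above
  database: analytics         # Optional; restricts scraping to this database
```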
Bigquery
Only one of key_path and project_credentials is required.
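For instance, a hypothetical Bigquery entry might look like the sketch below. Only key_path and project_credentials come from this page; the project_id field and all values are illustrative assumptions:

```yaml
- name: my-bigquery-warehouse
  metadata_source: Bigquery
  key_path: /path/to/service-account-key.json  # Either this...
  # project_credentials: ...                   # ...or this, but not both
  project_id: my-gcp-project                   # Hypothetical field, for illustration
```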
Cloud spanner
To do: Unlike Bigquery, we currently don't allow you to specify key_path or project_credentials explicitly.
Glue
A name parameter will place all of your Glue documentation within a separate folder, as is done with the other extractors. Because Glue is already a metadata aggregator, however, this may not be optimal, particularly if you connect to other warehouses with whale directly. In this case, the name parameter can be omitted, and the table stubs will reside within subdirectories named after the underlying warehouse/instance.
For example, with name, your files will be organized like this:
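(An illustrative layout; the connection, instance, database, and table names are placeholders.)

```
~/.whale/metadata
└── my-glue-name
    ├── instance1.database1.table1.md
    └── instance2.database2.table2.md
```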
Without name, your files will be stored like this:
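(Again illustrative; the subdirectories take the names of the underlying warehouses/instances.)

```
~/.whale/metadata
├── instance1
│   └── database1.table1.md
└── instance2
    └── database2.table2.md
```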
Hive metastore
For more information on the dialect field, see the SQLAlchemy documentation.
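A hypothetical Hive metastore entry; only the dialect field is documented on this page, so the remaining keys and all values are assumptions:

```yaml
- name: my-hive-metastore
  metadata_source: HiveMetastore  # Assumed value; not in the list above
  uri: metastore.mycompany.com    # Hypothetical field
  port: 5432                      # Hypothetical field
  dialect: postgres               # SQLAlchemy dialect of the metastore's backing database
```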
Neo4j
We provide support for scraping metadata from Amundsen's neo4j backend. However, by default we do not install the neo4j drivers within our installation virtual environment. To use this metadata source, install whale using make && make install, then run pip install neo4j-driver within the virtual environment located at ~/.whale/libexec/env.
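Once the driver is installed, a hypothetical Neo4j entry might look like this (the metadata_source value appears in the list above; the host and credential fields are assumptions):

```yaml
- name: my-amundsen-neo4j
  metadata_source: Neo4j
  uri: localhost      # Hypothetical field
  port: 7687          # Hypothetical field; the usual neo4j bolt port
  username: neo4j     # Hypothetical field
  password: password  # Hypothetical field
```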
Postgres
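This page lists no Postgres-specific keys, so the sketch below assumes standard host and credential fields; every name and value here is illustrative:

```yaml
- name: my-postgres-warehouse
  metadata_source: Postgres  # Assumed value; not in the list above
  uri: localhost             # Hypothetical field
  port: 5432                 # Hypothetical field; the usual Postgres port
  username: postgres         # Hypothetical field
  password: password         # Hypothetical field
```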
Presto
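A sketch of a Presto entry; Presto does appear in the metadata_source list above, but the remaining fields are assumptions:

```yaml
- name: my-presto-warehouse
  metadata_source: Presto
  uri: presto.mycompany.com  # Hypothetical field
  port: 8080                 # Hypothetical field; a common Presto port
  username: whale            # Hypothetical field
  password: ~                # Hypothetical field
```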
Redshift
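A hypothetical Redshift entry, mirroring the Postgres sketch above; all field names and values are assumptions:

```yaml
- name: my-redshift-warehouse
  metadata_source: Redshift  # Assumed value; not in the list above
  uri: my-cluster.abc123.us-east-1.redshift.amazonaws.com  # Hypothetical field
  port: 5439                 # Hypothetical field; the usual Redshift port
  username: awsuser          # Hypothetical field
  password: password         # Hypothetical field
```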
Snowflake
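A sketch of a Snowflake entry; Snowflake appears in the metadata_source list above, while the remaining fields are assumptions:

```yaml
- name: my-snowflake-warehouse
  metadata_source: Snowflake
  uri: myaccount.snowflakecomputing.com  # Hypothetical field
  username: whale                        # Hypothetical field
  password: password                     # Hypothetical field
  database: analytics                    # Optional; see the universal parameters above
```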
Splice Machine
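A hypothetical Splice Machine entry; all field names and values here are illustrative assumptions:

```yaml
- name: my-splice-machine
  metadata_source: SpliceMachine  # Assumed value; not in the list above
  uri: localhost                  # Hypothetical field
  port: 1527                      # Hypothetical field; the usual Splice Machine port
  username: splice                # Hypothetical field
  password: admin                 # Hypothetical field
```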
Build script
We also support the use of custom scripts that handle metadata scraping and dump this data into local files (in the metadata subdirectory) and manifests (in the manifests subdirectory). For more information, see Custom extraction.
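If such a script is registered through connections.yaml, a hypothetical entry might look like the sketch below; the metadata_source value and the script-path key are assumptions, so defer to Custom extraction for the actual keys:

```yaml
- name: my-custom-source
  metadata_source: BuildScript            # Assumed value
  build_script_path: /path/to/extract.sh  # Hypothetical field
```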