Welcome to the Couchbase Shell cbsh documentation! If you are new, you will want to start with the quickstart section and then proceed with the introduction. If you are already familiar with the shell, feel free to jump right into the command reference.
Note that while the project is maintained by Couchbase, it is not covered under the EE support contract. We are providing community support through this bug tracker.
1. Quickstart
1.1. Installation
The current latest version is 0.75. The version of cbsh is kept in line with the version of the underlying nushell that it supports.
There are a couple of ways you can get access to cbsh; the easiest one is to download our pre-built binaries for your platform of choice:
- macOS: cbsh-x86_64-apple-darwin.zip
- Linux aarch64: cbsh-aarch64-unknown-linux-gnu.tar.gz
- macOS aarch64: cbsh-aarch64-apple-darwin.zip
- Windows: cbsh-x86_64-pc-windows-msvc.zip
Once you’ve downloaded the zip file, extract it and switch into the newly created directory. The following example shows it for macOS, but it works very similarly on Linux (just align the commands with the file you downloaded):
$ unzip cbsh-x86_64-apple-darwin.zip
$ ls
cbsh LICENSE LICENSE_AGREEMENT README.md
You can now run the cbsh binary:
❯ ./cbsh --version
The Couchbase Shell 0.75.0
Tip: If you are running a recent macOS release (i.e. 10.15.x), you’ll likely see an error similar to "cbsh" was blocked from use because it is not from an identified developer. This is because our binaries are not yet signed. To run it nonetheless, navigate to System Preferences → Security & Privacy and click Allow Anyway. The next time you run the binary you’ll get another prompt, but after that it should run fine.
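Alternatively, assuming you trust the downloaded binary, you can clear the quarantine attribute from the terminal with the standard macOS xattr tool (a sketch):
$ xattr -d com.apple.quarantine ./cbsh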
1.2. Connecting to a Cluster
If all you do is launch the shell through ./cbsh, it will try to connect to localhost using the username Administrator and the password password. You can modify this through CLI arguments (see ./cbsh -h for more information). If -p is passed in, you will be prompted to enter a password on startup:
Note: Unless you specify TLS settings, PLAIN authentication is used and your credentials are sent in plaintext.
❯ ./cbsh --username Administrator --hostnames 127.0.0.1 -p
Password:
👤 Administrator 🏠 my-local-cb-node in 🗄 travel-sample
>
While passing in command line arguments is great for exploring, you likely want to look at the dotfile configuration support further down in this documentation, which allows you to configure multiple clusters, TLS settings, and more.
Once in the shell, you can start to execute commands (see the introduction section for more information). As a quick sanity check, list the nodes in the cluster:
> nodes
───┬─────────┬────────────────┬─────────┬─────────────────────────────────────┬───────────────────────┬───────────────────────────┬──────────────┬─────────────┬───────
# │ cluster │ hostname │ status │ services │ version │ os │ memory_total │ memory_free │ cloud
───┼─────────┼────────────────┼─────────┼─────────────────────────────────────┼───────────────────────┼───────────────────────────┼──────────────┼─────────────┼───────
0 │ default │ 127.0.0.1:8091 │ healthy │ backup,analytics,eventing,search,in │ 7.0.1-6102-enterprise │ x86_64-apple-darwin18.7.0 │ 34.4 GB │ 17.6 GB │ false
│ │ │ │ dexing,kv,query │ │ │ │ │
───┴─────────┴────────────────┴─────────┴─────────────────────────────────────┴───────────────────────┴───────────────────────────┴──────────────┴─────────────┴───────
Or, if you have the travel-sample bucket installed, you can fetch a document from it:
> doc get airline_10 --bucket travel-sample | flatten
───┬────────────┬─────────────────────┬────────────┬─────────┬─────────────┬──────┬──────┬──────────┬───────────────┬───────┬─────────
# │ id │ cas │ content_id │ type │ name │ iata │ icao │ callsign │ country │ error │ cluster
───┼────────────┼─────────────────────┼────────────┼─────────┼─────────────┼──────┼──────┼──────────┼───────────────┼───────┼─────────
0 │ airline_10 │ 1629809626107281408 │ 10 │ airline │ 40-Mile Air │ Q5 │ MLA │ MILE-AIR │ United States │ │ default
───┴────────────┴─────────────────────┴────────────┴─────────┴─────────────┴──────┴──────┴──────────┴───────────────┴───────┴─────────
1.3. The config dotfiles
Connecting to a single cluster through the command line is nice when you are starting out, but later on you will likely either connect to the same cluster all the time or even to a multitude of them. To help with this, you can create a .cbsh dot folder in your home directory and place a config file in it that the shell will read on startup.
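To create the folder and file from a regular shell, something like the following works (a sketch using standard POSIX tools; use whatever editor you like):
$ mkdir -p ~/.cbsh
$ $EDITOR ~/.cbsh/config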
The downloaded zip contains an example already, but here is a small sample config to help you get started as well:
version = 1
[[cluster]]
identifier = "local"
hostnames = ["127.0.0.1"]
default-bucket = "travel-sample"
username = "Administrator"
password = "password"
# [[cluster]]
# identifier = "remote"
# hostnames = ["10.143.200.101"]
# default-bucket = "myapp"
# username = "user"
# password = "pass"
This will register two clusters, one called local and one called remote (commented out). The file format is toml, in case you were wondering. Now when you start the shell, it will connect to local automatically and you are all set.
Please check out the reference section for additional parameters you can set, as well as how to move the credentials to a separate credentials file in case you want to share your config with other people who do not use the same credentials.
2. Introduction
Couchbase Shell is fully featured: it does not only contain commands related to Couchbase, but is built on top of a general-purpose shell called nushell. This allows you to interact with the file system or any other command available on your machine, making it a great tool for both operational and development tasks on top of Couchbase.
The following introduction only touches on the basic concepts to make you productive quickly. We recommend also checking out the great nushell documentation so you can get the most out of it.
2.1. Navigating the Shell
Commands take inputs and produce output in a structured manner, most often represented as tables. Note how both the generic ls command and the Couchbase-specific buckets command produce a table as their output:
> ls
────┬──────────────┬──────┬──────────┬────────────────
# │ name │ type │ size │ modified
────┼──────────────┼──────┼──────────┼────────────────
0 │ CHANGELOG.md │ File │ 4.8 KB │ 2 hours ago
1 │ Cargo.lock │ File │ 170.2 KB │ 16 minutes ago
2 │ Cargo.toml │ File │ 1.8 KB │ 16 minutes ago
3 │ LICENSE │ File │ 11.4 KB │ 2 days ago
4 │ README.md │ File │ 8.6 KB │ 9 minutes ago
5 │ docs │ Dir │ 544 B │ 2 days ago
6 │ examples │ Dir │ 192 B │ 2 days ago
7 │ jupyter │ Dir │ 128 B │ 2 days ago
8 │ src │ Dir │ 256 B │ 2 days ago
9 │ target │ Dir │ 224 B │ 32 minutes ago
10 │ tests │ Dir │ 224 B │ 2 days ago
────┴──────────────┴──────┴──────────┴────────────────
> buckets
───┬─────────┬───────────────┬───────────┬──────────┬──────────────────────┬───────────┬───────────────┬────────┬───────
# │ cluster │ name │ type │ replicas │ min_durability_level │ ram_quota │ flush_enabled │ status │ cloud
───┼─────────┼───────────────┼───────────┼──────────┼──────────────────────┼───────────┼───────────────┼────────┼───────
0 │ default │ beer-sample │ couchbase │ 1 │ none │ 209.7 MB │ false │ │ false
1 │ default │ default │ couchbase │ 1 │ none │ 104.9 MB │ true │ │ false
2 │ default │ targetBucket │ couchbase │ 0 │ none │ 104.9 MB │ true │ │ false
3 │ default │ travel-sample │ couchbase │ 1 │ none │ 209.7 MB │ false │ │ false
───┴─────────┴───────────────┴───────────┴──────────┴──────────────────────┴───────────┴───────────────┴────────┴───────
You can pipe the output into other commands. For example, if you only want to see buckets that have sample in their name, you can utilize the where command:
> buckets | where name =~ "sample"
───┬─────────┬───────────────┬───────────┬──────────┬──────────────────────┬───────────┬───────────────┬────────┬───────
# │ cluster │ name │ type │ replicas │ min_durability_level │ ram_quota │ flush_enabled │ status │ cloud
───┼─────────┼───────────────┼───────────┼──────────┼──────────────────────┼───────────┼───────────────┼────────┼───────
0 │ default │ beer-sample │ couchbase │ 1 │ none │ 209.7 MB │ false │ │ false
1 │ default │ travel-sample │ couchbase │ 1 │ none │ 209.7 MB │ false │ │ false
───┴─────────┴───────────────┴───────────┴──────────┴──────────────────────┴───────────┴───────────────┴────────┴───────
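Other nushell commands compose the same way. As a quick sketch (assuming the same buckets output as above), you can keep just a couple of columns and sort by quota:
> buckets | select name ram_quota | sort-by ram_quota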
In a similar fashion you can turn this structured table into other output formats, for example JSON:
> buckets | where name =~ "sample" | to json --pretty 2
[
{
"cluster": "default",
"name": "beer-sample",
"type": "couchbase",
"replicas": 1,
"min_durability_level": "none",
"ram_quota": 209715200,
"flush_enabled": false,
"status": "",
"cloud": false
},
{
"cluster": "default",
"name": "travel-sample",
"type": "couchbase",
"replicas": 1,
"min_durability_level": "none",
"ram_quota": 209715200,
"flush_enabled": false,
"status": "",
"cloud": false
}
]
Exactly this type of composition takes the Unix philosophy of "do one thing well" and meshes it with the idea of flexible structured pipelines. This allows you to build powerful compositions that help you in your daily operations with Couchbase, from both a developer and an operations point of view.
2.2. Getting Help
Other than using this documentation for help, each command can be called with -h or --help to get information about potential flags, arguments, and subcommands. Some commands also provide additional examples.
> buckets -h
Perform bucket management operations
Usage:
> buckets <subcommand> {flags}
Subcommands:
buckets config - Shows the bucket config (low level)
buckets create - Creates a bucket
buckets drop - Drops buckets through the HTTP API
buckets flush - Flushes buckets through the HTTP API
buckets get - Fetches buckets through the HTTP API
buckets load-sample - Load a sample bucket
buckets update - Updates a bucket
Flags:
-h, --help: Display this help message
--clusters <string>: the clusters which should be contacted
Some commands (like the one above) only act as groupings for subcommands, such as from, to, or doc. Since they do not serve a purpose on their own, they will render their help output automatically:
> doc
Perform document operations against a bucket or collection
Usage:
> doc <subcommand> {flags}
Subcommands:
doc get - Fetches a document through the data service
doc insert - Insert a document through the data service
doc remove - Removes a document through the data service
doc replace - Replace a document through the data service
doc upsert - Upsert (insert or override) a document through the data service
Flags:
-h, --help: Display this help message
2.3. The Prompt explained
Couchbase Shell uses a custom, two-line prompt to show you exactly which environment you are working in right now. Since you can connect to different clusters, switch buckets, etc., it is important to know what is currently "active". Here is a sample prompt that will greet you when starting the shell:
👤 Administrator at 🏠 local in 🗄 travel-sample
>
It tells you that your user is Administrator, the currently active cluster identifier is local, and the active bucket is travel-sample.
If you have an active scope or collection set then the prompt will also update to reflect that:
👤 Administrator 🏠 dev.local in 🗄 travel-sample.myscope.mycollection
>
Your actual user prompt starts on the second line.
2.4. Pivot mode
Sometimes data is easier to see if the table is pivoted so that the columns become rows and rows become columns.
For example, with the nodes command detailed below, by default the output will look like this:
> nodes
───┬─────────┬─────────┬─────────┬─────────┬─────────┬─────────┬─────────┬─────────
# │ cluster │ hostnam │ status │ service │ version │ os │ memory_ │ memory_
│ │ e │ │ s │ │ │ total │ free
───┼─────────┼─────────┼─────────┼─────────┼─────────┼─────────┼─────────┼─────────
0 │ local │ 127.0.0 │ healthy │ analyti │ 6.5.1-6 │ x86_64- │ 34.4 GB │ 8.4 GB
│ │ .1:8091 │ │ cs,even │ 299-ent │ apple-d │ │
│ │ │ │ ting,se │ erprise │ arwin17 │ │
│ │ │ │ arch,in │ │ .7.0 │ │
│ │ │ │ dexing, │ │ │ │
│ │ │ │ kv,quer │ │ │ │
│ │ │ │ y │ │ │ │
───┴─────────┴─────────┴─────────┴─────────┴─────────┴─────────┴─────────┴─────────
This is easier to see if the table is pivoted to:
> nodes
──────────────┬─────────────────────────────────────────────
cluster │ local
hostname │ 127.0.0.1:8091
status │ healthy
services │ analytics,eventing,search,indexing,kv,query
version │ 6.5.1-6299-enterprise
os │ x86_64-apple-darwin17.7.0
memory_total │ 34.4 GB
memory_free │ 8.4 GB
──────────────┴─────────────────────────────────────────────
Nushell offers a couple of ways to set the pivoting mode:
- config set pivot_mode off: the default setting, where pivoting is turned off.
- config set pivot_mode auto (recommended): allows Nushell to determine when to apply pivoting (typically when there is only one row in the results).
- config set pivot_mode always: causes tables to always be pivoted.
2.5. Loading Data into the Shell
If you want to import data into Couchbase, or just load it into the shell for further processing, there are different commands available to help you. Once the data is loaded into the shell it can be sent to one of the couchbase save commands like doc upsert
. Depending on the structure of the data, you may also need to tweak it a little bit so it can be properly stored.
The open command will look at the file ending and try to decode the contents automatically. Imagine a file named user.json in your current directory with the following content: {"name": "Michael", "age": 32}.
> open user.json
───┬─────────┬─────
# │ name │ age
───┼─────────┼─────
0 │ Michael │ 32
───┴─────────┴─────
As you can see, the open command already decoded the JSON document into the tabular format. If the file were named just user, the import would look like this instead:
> open user
{"name": "Michael", "age": 32}
If you are dealing with data that cannot be decoded automatically, you can use the various from subcommands to help with decoding. In our case we use from json:
> open user | from json
───┬─────────┬─────
# │ name │ age
───┼─────────┼─────
0 │ Michael │ 32
───┴─────────┴─────
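The from subcommands are not limited to files; they also work on raw strings produced inside the shell. A small sketch:
> echo '{"name": "Michael", "age": 32}' | from json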
Tip: Take a look at the many different import formats from supports, including CSV, XML, YAML, and even SQLite. With this simple tool at hand you are able to load many different data formats quickly and import them into Couchbase!
2.6. Exporting Data from the Shell
The export counterparts to open and from are save and to. You can use both commands to take tabular data from the shell and store it in files of the needed target format.
Like open, save will try to discern the format from the file ending. The following example will load a JSON file and save it as CSV:
> cat user.json
{"name":"Michael","age":32}
> open user.json | save user.csv
> cat user.csv
name,age
Michael,32
This example is dealing with only one row for simplicity, but you can save as many rows as you need in one file.
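If you only want the converted output on screen instead of in a file, the to subcommands work the same way. A small sketch using the same document:
> open user.json | to csv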
As a motivating example, the following snippet runs a N1QL query and stores the result as a CSV file:
> query "select airportname,city,country from `travel-sample` where type = 'airport' limit 10" | save output.csv
> cat output.csv
airportname,city,country
Calais Dunkerque,Calais,France
Peronne St Quentin,Peronne,France
Les Loges,Nangis,France
Couterne,Bagnole-de-l'orne,France
Bray,Albert,France
Le Touquet Paris Plage,Le Tourquet,France
Denain,Valenciennes,France
Glisy,Amiens,France
La Garenne,Agen,France
Cazaux,Cazaux,France
3. cb-env and the Environment
Whilst multiple clusters can be registered at the same time, there is only ever one cluster (at most) active. The same is true for buckets, scopes, and collections. When a resource is active, it is used as the default to run commands against (this can be overridden on a per-command basis).
You can run the cb-env command, which will tell you which resources are currently active (you are also able to tell from the prompt):
> cb-env
────────────┬───────────────
username │ Susan
cluster │ dev.local
bucket │ travel-sample
scope │ inventory
collection │ hotel
────────────┴───────────────
If you were to run a command now, it would be run:
- As the user "Susan"
- Against the "dev.local" cluster
- Against the "travel-sample" bucket
- Against the "inventory" scope
- Against the "hotel" collection
You can also change the active resources with the cb-env command. (Note that there are extra resources listed here; see cb-env and the Environment in the Couchbase Capella section for more information on these.)
> cb-env -h
Modify the default execution environment of commands
Usage:
> cb-env <subcommand> {flags}
Subcommands:
cb-env bucket - Sets the active bucket based on its name
cb-env cloud - Sets the active cloud based on its identifier
cb-env capella-organization - Sets the active capella organization based on its identifier
cb-env cluster - Sets the active cluster based on its identifier
cb-env collection - Sets the active collection based on its name
cb-env project - Sets the active project based on its name
cb-env scope - Sets the active scope based on its name
For example, if you change the active bucket:
> cb-env bucket beer-sample
────────┬─────────────
bucket │ beer-sample
────────┴─────────────
> cb-env
────────────┬─────────────
username │ Susan
cluster │ dev.local
bucket │ beer-sample
scope │ inventory
collection │ hotel
────────────┴─────────────
Both the output of cb-env and the prompt will reflect the changes.
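The scope and collection subcommands shown in the help above work the same way; for example (a sketch, output omitted):
> cb-env scope tenant_agent_00
> cb-env collection users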
3.1. Per-command execution environments
On many commands you will notice a set of flags that allow you to override the active execution environment. Different commands support different flags; depending on the command you can expect to see any of:
- --clusters
- --bucket
- --scope
- --collection
3.1.1. The --clusters flag
The argument for this flag is an identifier combined with a regular expression. So imagine you have four clusters set up with the following identifiers:
> cb-env managed | get identifier
───┬────────────────
 0 │ prod-us-west
 1 │ prod-us-east
 2 │ prod-eu-center
 3 │ local-test
───┴────────────────
If you wanted to run a command against all clusters in prod-us, you could use --clusters prod-us.*, e.g.:
> buckets --clusters prod-us.*
───┬──────────────┬───────────────┬───────────┬──────────┬──────────────────────┬───────────┬───────────────┬────────┬───────
# │ cluster │ name │ type │ replicas │ min_durability_level │ ram_quota │ flush_enabled │ status │ capella
───┼──────────────┼───────────────┼───────────┼──────────┼──────────────────────┼───────────┼───────────────┼────────┼───────
0 │ prod-us-east │ default │ couchbase │ 1 │ none │ 268.4 MB │ false │ │ false
1 │ prod-us-west │ default │ couchbase │ 1 │ none │ 268.4 MB │ false │ │ false
2 │ prod-us-west │ travel-sample │ couchbase │ 1 │ none │ 209.7 MB │ false │ │ false
───┴──────────────┴───────────────┴───────────┴──────────┴──────────────────────┴───────────┴───────────────┴────────┴───────
In the background this gets passed to a regex engine, so you can go a little crazy with it if needed.
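For instance, a quick sketch that matches both the US and EU production clusters using alternation (the identifiers are the ones from the listing above):
> buckets --clusters "prod-(us|eu).*"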
3.1.2. The --bucket, --scope, and --collection flags
These flags are a little different from the --clusters flag: they are not regular expressions and can only be used to define a single name each.
Unlike --clusters, the name provided to these flags does not have to be already known to Couchbase Shell; it can refer to any bucket, scope, or collection that exists within your active or defined cluster(s).
For example:
> doc get 1 --bucket travel-sample --scope tenant_agent_00 --collection users
───┬────┬─────────────────────┬──────────────────┬───────┬──────────────
# │ id │ cas │ content │ error │ cluster
───┼────┼─────────────────────┼──────────────────┼───────┼──────────────
0 │ 1 │ 1638870288919035904 │ [row 11 columns] │ │ prod-us-west
───┴────┴─────────────────────┴──────────────────┴───────┴──────────────
4. Couchbase Commands
The following sections discuss the individual Couchbase-specific commands in greater detail. Remember, you can always mix and match them with other built-in shell commands as well as executables from your environment.
4.1. Working with clusters
The cb-env managed command lists all the clusters you have registered with the shell.
> cb-env managed
───┬────────┬─────┬────────────┬───────────────
# │ active │ tls │ identifier │ username
───┼────────┼─────┼────────────┼───────────────
0 │ No │ No │ local │ Administrator
1 │ Yes │ No │ remote │ Administrator
───┴────────┴─────┴────────────┴───────────────
4.2. Working with buckets
The buckets command lists all the buckets from your active cluster:
> buckets
───┬─────────┬───────────────┬───────────┬──────────┬────────────────┬─────────────
# │ cluster │ name │ type │ replicas │ quota_per_node │ quota_total
───┼─────────┼───────────────┼───────────┼──────────┼────────────────┼─────────────
0 │ default │ beer-sample │ couchbase │ 1 │ 104.9 MB │ 104.9 MB
1 │ default │ default │ couchbase │ 1 │ 104.9 MB │ 104.9 MB
2 │ default │ memd │ memcached │ 0 │ 104.9 MB │ 104.9 MB
3 │ default │ travel-sample │ couchbase │ 1 │ 104.9 MB │ 104.9 MB
───┴─────────┴───────────────┴───────────┴──────────┴────────────────┴─────────────
As an advanced command, it is also possible to get the configuration for a bucket:
> buckets config beer-sample
────────────────────────┬──────────────────────────────────────────────────────────────────────────────────────────
name │ beer-sample
uuid │ 0af4496fe0612c76885d5dcd3e010c0d
bucketType │ membase
authType │ sasl
uri │ /pools/default/buckets/beer-sample?bucket_uuid=0af4496fe0612c76885d5dcd3e010c0d
streamingUri │ /pools/default/bucketsStreaming/beer-sample?bucket_uuid=0af4496fe0612c76885d5dcd3e010c0d
localRandomKeyUri │ /pools/default/buckets/beer-sample/localRandomKey
controllers │ [row compactAll compactDB purgeDeletes startRecovery]
nodes │ [table 1 rows]
stats │ [row directoryURI nodeStatsListURI uri]
nodeLocator │ vbucket
saslPassword │ 7fd7338a6b8fb30dbfb80205834db634
ddocs │ [row uri]
replicaIndex │ Yes
autoCompactionSettings │ No
vBucketServerMap │ [row hashAlgorithm numReplicas serverList vBucketMap]
maxTTL │ 0
compressionMode │ passive
replicaNumber │ 1
threadsNumber │ 3
quota │ [row ram rawRAM]
basicStats │ [row 8 columns]
evictionPolicy │ valueOnly
conflictResolutionType │ seqno
bucketCapabilitiesVer │
bucketCapabilities │ [table 9 rows]
────────────────────────┴──────────────────────────────────────────────────────────────────────────────────────────
If you are unsure what you would use this for, you probably don’t need it. If you have pivot mode (detailed above) turned off then this command can produce results that are difficult to read.
4.3. Working with scopes and collections
The scopes and collections commands can be used for managing scopes and collections respectively.
4.3.1. Scopes
> scopes -h
Fetches scopes through the HTTP API
Usage:
> scopes <subcommand> {flags}
Subcommands:
scopes create - Creates scopes through the HTTP API
scopes drop - Deletes scopes through the HTTP API
Flags:
-h, --help: Display this help message
--bucket <string>: the name of the bucket
--clusters <string>: the clusters to query against
To list all scopes in the bucket you would use:
> scopes
───┬─────────────────┬──────────────
# │ scope │ cluster
───┼─────────────────┼──────────────
0 │ inventory │ prod-us-west
1 │ tenant_agent_00 │ prod-us-west
2 │ tenant_agent_01 │ prod-us-west
3 │ tenant_agent_02 │ prod-us-west
4 │ tenant_agent_03 │ prod-us-west
5 │ tenant_agent_04 │ prod-us-west
6 │ _default │ prod-us-west
───┴─────────────────┴──────────────
You can also create and remove scopes:
> scopes create tenant_agent_05
> scopes
───┬─────────────────┬──────────────
# │ scope │ cluster
───┼─────────────────┼──────────────
0 │ tenant_agent_05 │ prod-us-west
1 │ inventory │ prod-us-west
2 │ tenant_agent_00 │ prod-us-west
3 │ tenant_agent_01 │ prod-us-west
4 │ tenant_agent_02 │ prod-us-west
5 │ tenant_agent_03 │ prod-us-west
6 │ tenant_agent_04 │ prod-us-west
7 │ _default │ prod-us-west
───┴─────────────────┴──────────────
> scopes drop tenant_agent_05
> scopes
───┬─────────────────┬──────────────
# │ scope │ cluster
───┼─────────────────┼──────────────
0 │ inventory │ prod-us-west
1 │ tenant_agent_00 │ prod-us-west
2 │ tenant_agent_01 │ prod-us-west
3 │ tenant_agent_02 │ prod-us-west
4 │ tenant_agent_03 │ prod-us-west
5 │ tenant_agent_04 │ prod-us-west
6 │ _default │ prod-us-west
───┴─────────────────┴──────────────
4.3.2. Collections
> collections -h
Fetches collections through the HTTP API
Usage:
> collections <subcommand> {flags}
Subcommands:
collections create - Creates collections through the HTTP API
collections drop - Deletes collections through the HTTP API
Flags:
-h, --help: Display this help message
--bucket <string>: the name of the bucket
--scope <string>: the name of the scope
--clusters <string>: the clusters to query against
To list all collections in the bucket you would use:
> collections
────┬─────────────────┬────────────┬────────────┬──────────────
# │ scope │ collection │ max_expiry │ cluster
────┼─────────────────┼────────────┼────────────┼──────────────
0 │ inventory │ hotel │ 0sec │ prod-us-west
1 │ inventory │ airport │ 0sec │ prod-us-west
2 │ inventory │ airline │ 0sec │ prod-us-west
3 │ inventory │ route │ 0sec │ prod-us-west
4 │ inventory │ landmark │ 0sec │ prod-us-west
5 │ tenant_agent_00 │ users │ 0sec │ prod-us-west
6 │ tenant_agent_00 │ bookings │ 0sec │ prod-us-west
7 │ tenant_agent_01 │ bookings │ 0sec │ prod-us-west
8 │ tenant_agent_01 │ users │ 0sec │ prod-us-west
9 │ tenant_agent_02 │ users │ 0sec │ prod-us-west
10 │ tenant_agent_02 │ bookings │ 0sec │ prod-us-west
11 │ tenant_agent_03 │ users │ 0sec │ prod-us-west
12 │ tenant_agent_03 │ bookings │ 0sec │ prod-us-west
13 │ tenant_agent_04 │ bookings │ 0sec │ prod-us-west
14 │ tenant_agent_04 │ users │ 0sec │ prod-us-west
15 │ _default │ _default │ 0sec │ prod-us-west
────┴─────────────────┴────────────┴────────────┴──────────────
You can also create and remove collections:
> collections create staff --scope tenant_agent_00
> collections --scope tenant_agent_00
───┬─────────────────┬────────────┬────────────┬──────────────
# │ scope │ collection │ max_expiry │ cluster
───┼─────────────────┼────────────┼────────────┼──────────────
0 │ tenant_agent_00 │ staff │ 0sec │ prod-us-west
1 │ tenant_agent_00 │ users │ 0sec │ prod-us-west
2 │ tenant_agent_00 │ bookings │ 0sec │ prod-us-west
───┴─────────────────┴────────────┴────────────┴──────────────
> collections drop staff --scope tenant_agent_00
> collections --scope tenant_agent_00
───┬─────────────────┬────────────┬────────────┬──────────────
# │ scope │ collection │ max_expiry │ cluster
───┼─────────────────┼────────────┼────────────┼──────────────
0 │ tenant_agent_00 │ users │ 0sec │ prod-us-west
1 │ tenant_agent_00 │ bookings │ 0sec │ prod-us-west
───┴─────────────────┴────────────┴────────────┴──────────────
4.4. Listing nodes
The nodes command allows you to list all the nodes of the cluster you are currently connected to.
> nodes
───┬─────────┬─────────────────────┬─────────┬───────────────────┬───────────────────────┬──────────────────────────┬──────────────┬─────────────
# │ cluster │ hostname │ status │ services │ version │ os │ memory_total │ memory_free
───┼─────────┼─────────────────────┼─────────┼───────────────────┼───────────────────────┼──────────────────────────┼──────────────┼─────────────
0 │ remote │ 10.143.200.101:8091 │ healthy │ indexing,kv,query │ 6.5.0-4960-enterprise │ x86_64-unknown-linux-gnu │ 2.1 GB │ 837.7 MB
1 │ remote │ 10.143.200.102:8091 │ healthy │ indexing,kv,query │ 6.5.0-4960-enterprise │ x86_64-unknown-linux-gnu │ 2.1 GB │ 1.0 GB
───┴─────────┴─────────────────────┴─────────┴───────────────────┴───────────────────────┴──────────────────────────┴──────────────┴─────────────
4.5. Reading and Writing `doc`uments
The fastest way to interact with documents is through the key value service (as long as you know the document ID). All of those commands are located as subcommands under the doc namespace.
4.5.1. Reading
You can retrieve a document with doc get:
> doc get airline_10
─────────┬─────────────────────
id │ airline_10
cas │ 1585811206390153216
content │ [row 7 columns]
─────────┴─────────────────────
To distinguish the actual content from the metadata, the content is nested in the content field. If you want to have everything at the top level, you can pipe to the flatten command:
> doc get airline_10 | flatten
────────────┬─────────────────────
id │ airline_10
cas │ 1621356820428095488
content_id │ 10
type │ airline
name │ 40-Mile Air
iata │ Q5
icao │ MLA
callsign │ MILE-AIR
country │ United States
error │
────────────┴─────────────────────
If the document is not found, an empty result is returned.
To perform a bulk get operation, the incoming stream can be utilized.
> echo [airline_10 airline_10748 airline_137] | wrap id | doc get
───┬───────────────┬─────────────────────┬─────────────────┬───────
# │ id │ cas │ content │ error
───┼───────────────┼─────────────────────┼─────────────────┼───────
0 │ airline_10 │ 1621356820428095488 │ [row 7 columns] │
1 │ airline_10748 │ 1621356818190237696 │ [row 7 columns] │
2 │ airline_137 │ 1621356823346675712 │ [row 7 columns] │
───┴───────────────┴─────────────────────┴─────────────────┴───────
If doc get operates on an incoming stream it will extract the document ID from the id column. This behavior can be customized through the --id-column flag.
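As a sketch of the --id-column flag (the airline_id column name here is arbitrary):
> echo [airline_10 airline_137] | wrap airline_id | doc get --id-column airline_id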
4.5.2. Mutating
Documents can be mutated with doc insert, doc upsert, and doc replace.
All three commands take similar arguments. If you only want to mutate a single document, passing in the ID and the content as arguments is the simplest way:
> doc upsert my-doc {"hello": "world"}
───────────┬───
processed │ 1
success │ 1
failed │ 0
───────────┴───
Multiple documents can be mutated through an input stream as well, defaulting to the id and content columns:
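A minimal sketch, building a small table right in the shell and piping it into doc upsert (the document IDs and contents here are made up):
> [[id content]; [test-1 {greeting: hello}] [test-2 {greeting: world}]] | doc upsert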
4.5.3. Removing
Documents can be removed with doc remove.
> doc remove airline_10
───────────┬───
processed │ 1
success │ 1
failed │ 0
───────────┴───
Similar to doc get, if you want to delete more than one document at the same time, provide a stream of IDs with an id column. In the following example one removal fails because airline_10 was already removed above:
> echo [airline_10 airline_10748 airline_137] | wrap id | doc remove
───────────┬───
processed │ 3
success │ 2
failed │ 1
───────────┴───
4.6. whoami?
Sometimes simple commands are helpful when debugging. The whoami command will ask that same question of the active cluster and return various information about the user.
> whoami
─────────┬────────────────
roles │ [table 1 rows]
id │ Administrator
domain │ admin
cluster │ local
─────────┴────────────────
Since a user can have many roles, if you want to look at them they need to be unnested:
> whoami | get roles
──────┬───────
role │ admin
──────┴───────
4.7. version
The version command lists the version of Couchbase Shell.
> version
─────────┬───────────
version │ 1.0.0-dev
─────────┴───────────
5. Couchbase Capella (DBaaS)
Couchbase Capella is a Database as a Service offering from Couchbase which you can interact with from Couchbase Shell.
5.1. cb-env and the Environment
For general usage of the cb-env command see cb-env and the Environment.
When using management commands with Couchbase Capella, Couchbase Shell needs to be able to manage more active resources than the base cb-env command covers.
You can use the cb-env --capella command to see which Capella resources are currently active:
> cb-env --capella
──────────────────────┬──────────────────
capella-organization │ mycapellaorg
project │ mycapellaproject
──────────────────────┴──────────────────
The active resources can be changed with the cb-env command:
> cb-env -h
Modify the default execution environment of commands
Usage:
> cb-env <subcommand> {flags}
Subcommands:
cb-env bucket - Sets the active bucket based on its name
cb-env capella-organization - Sets the active capella organization based on its identifier
cb-env cluster - Sets the active cluster based on its identifier
cb-env collection - Sets the active collection based on its name
cb-env project - Sets the active project based on its name
cb-env scope - Sets the active scope based on its name
5.2. Document Level Commands
Document level commands like those under doc, analytics, query, and search (with a couple of exceptions) do not require any special changes to the config file.
You need to:
- create a cluster in your Couchbase Capella account
- ensure that your IP address is whitelisted
- ensure that you have a database user
- populate the relevant cluster section in the config file with the public address and database user credentials
5.3. Management Commands
Management commands (such as cluster and bucket management) require a specific section to be added to the config file, as well as an extra entry in the relevant cluster section.
Before being able to perform management operations against Couchbase Capella you must generate access and secret keys to enable Couchbase Shell to interact with your Capella organisation.
5.4. Config file format
The capella-organisation section in the config file is used as follows:
# Allows us to evolve in the future without breaking old config files
version = 1
[[cluster]]
identifier = "default"
hostnames = ["127.0.0.1"]
default-bucket = "travel-sample"
# The following can be part of the config or credentials
username = "Administrator"
password = "password"
[[cluster]]
# identifier *must* match the name of the cluster as provided in Couchbase Cloud
identifier = "capellahostedcluster"
hostnames = ["xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx.dp.cloud.couchbase.com"]
default-bucket = "couchbasecloudbucket"
username = "dave"
password = "Pa55word!"
data-timeout = "25s"
query-timeout = "1m 15s"
tls-enabled = true
tls-validate-hostnames = false
tls-accept-all-certs = true
capella-organisation = "myhostedorg"
[[capella-organisation]]
identifier = "myhostedorg"
access-key = "x8rLuZ3YBNEqPfb8whKHxt0v9wxf1pdG"
secret-key = "T26Rh3zRaUYFIzdRQfbdPxSQN7bxJatE2jgg1JDQqZ1yyqwGUZt7nx2E6w1yzosY"
default-project = "mydefaultproject"
Note that the identifier of the cluster must match the corresponding names in Couchbase Capella for management operations to succeed. Also note that some operations are not supported by Couchbase Capella.
6. Reference
6.1. Config File Format
The ~/.cbsh/config file with examples:
# Allows us to evolve in the future without breaking old config files
version = 1
[[cluster]]
identifier = "default"
hostnames = ["127.0.0.1"]
default-bucket = "travel-sample"
default-scope = "my-scope"
default-collection = "my-collection"
# The following can be part of the config or credentials
username = "Administrator"
password = "password"
# TLS defaults to on, accepting all certs
# tls-enabled = true
# tls-cert-path = "/path/to/cert" # either accept all certs or provide a cert path
# tls-accept-all-certs = true
# tls-validate-hostnames = false
# Timeouts broadly apply to the operations that you would expect them to.
# That is:
# * data: commands using the kv service such as `doc`
# * query: `query` commands
# * analytics: `analytics` commands
# * search: `search` commands
# * management: commands that perform management level operations, such as `users`, `bucket`, `health` etc...
data-timeout = "10s"
query-timeout = "75s"
analytics-timeout = "75s"
search-timeout = "1m 15s"
management-timeout = "75s"
# capella-organisation= "org"
# [[capella-organisation]]
# identifier = "org"
# access-key = "x8rLuZ3YBNEqPfb8whKHxt0v9wxf1pdG"
# secret-key = "T26Rh3zRaUYFIzdRQfbdPxSQN7bxJatE2jgg1JDQqZ1yyqwGUZt7nx2E6w1yzosY"
6.2. Credentials File Format
The optional ~/.cbsh/credentials file with examples:
# Allows us to evolve in the future without breaking old config files
version = 1
[[cluster]]
identifier = "default"
username = "Administrator"
password = "password"
# TLS defaults to on, accepting all certs
# tls-enabled = true
# tls-cert-path = "/path/to/cert" # either accept all certs or provide a cert path
# tls-accept-all-certs = true
# tls-validate-hostnames = false
# [[capella-organisation]]
# identifier = "org"
# access-key = "x8rLuZ3YBNEqPfb8whKHxt0v9wxf1pdG"
# secret-key = "T26Rh3zRaUYFIzdRQfbdPxSQN7bxJatE2jgg1JDQqZ1yyqwGUZt7nx2E6w1yzosY"
7. Recipes
7.1. Importing data
Couchbase Shell supports loading data from a variety of formats and sources.
7.1.1. A Note On Data format
The doc upsert command requires there to be only two fields/columns for an upsert. There can be more than two fields/columns in the data, but only two can be used. By default, these two columns are named id and content, but this can be overridden with --id-column and --content-column.
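As a sketch of overriding both column names (key and body are arbitrary names chosen for illustration, following the wrap/insert pattern used below):
> open mydoc.json | wrap body | insert key {echo $it.body.airportname} | doc upsert --id-column key --content-column body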
Given the following document format, we need to perform some data manipulation to get it into a format that works with doc upsert:
> cat mydoc.json
{"id":3719,"cas":1600344369374167040,"type":"airport","airportname":"Columbia Rgnl","city":"Columbia","country":"United States","faa":"COU","icao":"KCOU","tz":"America/Chicago"}
> open mydoc.json | wrap content | insert id {echo $it.content.airportname}
───┬───────────────┬─────────────────
 # │ id            │ content
───┼───────────────┼─────────────────
 0 │ Columbia Rgnl │ [row 9 columns]
───┴───────────────┴─────────────────
7.1.2. From file
From JSON
Single Document
> cat mydoc.json
{"id":3719,"cas":1600344369374167040,"type":"airport","airportname":"Columbia Rgnl","city":"Columbia","country":"United States","faa":"COU","icao":"KCOU","tz":"America/Chicago"}
> open mydoc.json | wrap content | insert id {echo $it.content.airportname} | doc upsert
───┬───────────┬─────────┬────────
# │ processed │ success │ failed
───┼───────────┼─────────┼────────
0 │ 1 │ 1 │ 0
───┴───────────┴─────────┴────────
Multiple Documents
> ls airports
───┬────────────────────────────┬──────┬───────┬────────────
# │ name │ type │ size │ modified
───┼────────────────────────────┼──────┼───────┼────────────
0 │ airports/airport_3719.json │ File │ 151 B │ 2 days ago
1 │ airports/airport_3720.json │ File │ 155 B │ 2 days ago
2 │ airports/airport_3721.json │ File │ 172 B │ 2 days ago
3 │ airports/airport_3722.json │ File │ 161 B │ 2 days ago
4 │ airports/airport_3723.json │ File │ 163 B │ 2 days ago
5 │ airports/airport_3724.json │ File │ 156 B │ 2 days ago
6 │ airports/airport_3725.json │ File │ 148 B │ 2 days ago
7 │ airports/airport_3726.json │ File │ 164 B │ 2 days ago
8 │ airports/airport_3727.json │ File │ 169 B │ 2 days ago
9 │ airports/airport_3728.json │ File │ 152 B │ 2 days ago
───┴────────────────────────────┴──────┴───────┴────────────
> open airports/airport_3719.json
───┬──────┬─────────┬─────────────┬──────────┬─────────┬─────┬──────┬──────────────
# │ id │ type │ airportname │ city │ country │ faa │ icao │ tz
───┼──────┼─────────┼─────────────┼──────────┼─────────┼─────┼──────┼──────────────
0 │ 3719 │ airport │ Columbia │ Columbia │ United │ COU │ KCOU │ America/Chic
│ │ │ Rgnl │ │ States │ │ │ ago
───┴──────┴─────────┴─────────────┴──────────┴─────────┴─────┴──────┴──────────────
> ls airports/ | each { |it| open $it.name | wrap content | insert id {echo $it.content.airportname} } | doc upsert
───┬───────────┬─────────┬────────
# │ processed │ success │ failed
───┼───────────┼─────────┼────────
0 │ 10 │ 10 │ 0
───┴───────────┴─────────┴────────
From CSV
Single Document
> cat mydoc.csv
id,cas,type,airportname,city,country,faa,icao,tz
3719,1600344369374167040,airport,Columbia Rgnl,Columbia,United States,COU,KCOU,America/Chicago
> open mydoc.csv | each { |it| wrap content | insert id {echo $it.content.airportname} } | doc upsert
───┬───────────┬─────────┬────────
# │ processed │ success │ failed
───┼───────────┼─────────┼────────
0 │ 1 │ 1 │ 0
───┴───────────┴─────────┴────────
Multiple Documents
> cat airports.csv
airportname,city,country,faa,icao,id,type,tz
Calais Dunkerque,Calais,France,CQF,LFAC,1254,airport,Europe/Paris
Peronne St Quentin,Peronne,France,,LFAG,1255,airport,Europe/Paris
Les Loges,Nangis,France,,LFAI,1256,airport,Europe/Paris
Couterne,Bagnole-de-l'orne,France,,LFAO,1257,airport,Europe/Paris
Bray,Albert,France,,LFAQ,1258,airport,Europe/Paris
> open airports.csv | each { |it| wrap content | insert id {echo $it.content.airportname} } | doc upsert
───┬───────────┬─────────┬────────
# │ processed │ success │ failed
───┼───────────┼─────────┼────────
0 │ 5 │ 5 │ 0
───┴───────────┴─────────┴────────
7.1.3. Faking data
> cat user.tera
{
"id": "{{ uuid() }}",
"content": {
"name": "{{ name() }}",
"username": "{{ userName() }}",
"email": "{{ safeEmail() }}",
"last_access": {
"from": "{{ ipV4() }}"
}
}
}
> fake --template user.tera --num-rows 5
───┬──────────────────────────────────────┬───────────────────────────────────────
# │ id │ content
───┼──────────────────────────────────────┼───────────────────────────────────────
0 │ 0cabc14a-b9bc-4de9-9caa-6efe23ff350f │ [row email last_access name username]
1 │ 27f44eef-e4f5-4216-b65a-897ef357753d │ [row email last_access name username]
2 │ cc24c8cd-9dc6-4767-a627-e2b55c814c62 │ [row email last_access name username]
3 │ 12ad3953-11cc-43f7-991f-d680d9268357 │ [row email last_access name username]
4 │ 206194fa-7311-4a2f-a5eb-85d182199d8f │ [row email last_access name username]
───┴──────────────────────────────────────┴───────────────────────────────────────
> fake --template user.tera --num-rows 5 | doc upsert
───┬───────────┬─────────┬────────
# │ processed │ success │ failed
───┼───────────┼─────────┼────────
0 │ 5 │ 5 │ 0
───┴───────────┴─────────┴────────
7.1.4. Modifying data
In some circumstances you may want to modify the data before you import it. Let’s take the example of importing from a CSV file, but this time the airports.csv file is missing the type column and we want to add it to our data:
> cat airports.csv
airportname,city,country,faa,icao,id,tz
Calais Dunkerque,Calais,France,CQF,LFAC,1254,Europe/Paris
Peronne St Quentin,Peronne,France,,LFAG,1255,Europe/Paris
Les Loges,Nangis,France,,LFAI,1256,Europe/Paris
Couterne,Bagnole-de-l'orne,France,,LFAO,1257,Europe/Paris
Bray,Albert,France,,LFAQ,1258,Europe/Paris
> open ~/demo/airports.csv | insert type airport
───┬─────────────┬──────────────┬─────────┬─────┬──────┬──────┬──────────────┬─────────
# │ airportname │ city │ country │ faa │ icao │ id │ tz │ type
───┼─────────────┼──────────────┼─────────┼─────┼──────┼──────┼──────────────┼─────────
0 │ Calais │ Calais │ France │ CQF │ LFAC │ 1254 │ Europe/Paris │ airport
│ Dunkerque │ │ │ │ │ │ │
1 │ Peronne St │ Peronne │ France │ │ LFAG │ 1255 │ Europe/Paris │ airport
│ Quentin │ │ │ │ │ │ │
2 │ Les Loges │ Nangis │ France │ │ LFAI │ 1256 │ Europe/Paris │ airport
3 │ Couterne │ Bagnole-de-l │ France │ │ LFAO │ 1257 │ Europe/Paris │ airport
│ │ 'orne │ │ │ │ │ │
4 │ Bray │ Albert │ France │ │ LFAQ │ 1258 │ Europe/Paris │ airport
───┴─────────────┴──────────────┴─────────┴─────┴──────┴──────┴──────────────┴─────────
We can also add a column based on data from other columns, for instance adding a type column which is set to the relevant city:
> open ~/demo/airports.csv | each { |it| insert type $it.city }
───┬────────────┬────────────┬─────────┬─────┬──────┬──────┬────────────┬────────────
# │ airportnam │ city │ country │ faa │ icao │ id │ tz │ type
│ e │ │ │ │ │ │ │
───┼────────────┼────────────┼─────────┼─────┼──────┼──────┼────────────┼────────────
0 │ Calais │ Calais │ France │ CQF │ LFAC │ 1254 │ Europe/Par │ Calais
│ Dunkerque │ │ │ │ │ │ is │
1 │ Peronne St │ Peronne │ France │ │ LFAG │ 1255 │ Europe/Par │ Peronne
│ Quentin │ │ │ │ │ │ is │
2 │ Les Loges │ Nangis │ France │ │ LFAI │ 1256 │ Europe/Par │ Nangis
│ │ │ │ │ │ │ is │
3 │ Couterne │ Bagnole-de │ France │ │ LFAO │ 1257 │ Europe/Par │ Bagnole-de
│ │ -l'orne │ │ │ │ │ is │ -l'orne
4 │ Bray │ Albert │ France │ │ LFAQ │ 1258 │ Europe/Par │ Albert
│ │ │ │ │ │ │ is │
───┴────────────┴────────────┴─────────┴─────┴──────┴──────┴────────────┴────────────
7.2. Exporting data
Couchbase Shell supports exporting data to a variety of formats and sources.
7.2.1. A Note On Data format
The doc get command exposes data as three fields: id, cas, and content. The body of the document is stored within the content column. If you want to store only the document body then you can use doc get <id> | get content.
7.2.2. To file
To JSON
From KeyValue
> doc get airport_3719
───┬──────────────┬─────────────────────┬─────────────────
# │ id │ cas │ content
───┼──────────────┼─────────────────────┼─────────────────
0 │ airport_3719 │ 1600335833271238656 │ [row 9 columns]
───┴──────────────┴─────────────────────┴─────────────────
> doc get airport_3719 | get content | save mydoc.json
> cat mydoc.json
{"airportname":"Columbia Rgnl","city":"Columbia","country":"United States","faa":"COU","geo":{"alt":889.0,"lat":38.818094,"lon":-92.219631},"icao":"KCOU","id":3719,"type":"airport","tz":"America/Chicago"}
From Query/Analytics
To Single Document
> query "SELECT `travel-sample`.* from `travel-sample` WHERE `type`='airport' LIMIT 5"
───┬────────────────────┬───────────────────┬─────────┬─────┬───────────────────┬──────┬──────┬─────────┬──────────────
# │ airportname │ city │ country │ faa │ geo │ icao │ id │ type │ tz
───┼────────────────────┼───────────────────┼─────────┼─────┼───────────────────┼──────┼──────┼─────────┼──────────────
0 │ Calais Dunkerque │ Calais │ France │ CQF │ [row alt lat lon] │ LFAC │ 1254 │ airport │ Europe/Paris
1 │ Peronne St Quentin │ Peronne │ France │ │ [row alt lat lon] │ LFAG │ 1255 │ airport │ Europe/Paris
2 │ Les Loges │ Nangis │ France │ │ [row alt lat lon] │ LFAI │ 1256 │ airport │ Europe/Paris
3 │ Couterne │ Bagnole-de-l'orne │ France │ │ [row alt lat lon] │ LFAO │ 1257 │ airport │ Europe/Paris
4 │ Bray │ Albert │ France │ │ [row alt lat lon] │ LFAQ │ 1258 │ airport │ Europe/Paris
───┴────────────────────┴───────────────────┴─────────┴─────┴───────────────────┴──────┴──────┴─────────┴──────────────
> query "SELECT `travel-sample`.* from `travel-sample` WHERE `type`='airport' LIMIT 5" | save airports.json
> cat airports.json
[{"airportname":"Calais Dunkerque","city":"Calais","country":"France","faa":"CQF","geo":{"alt":12,"lat":50.962097,"lon":1.9547640000000002},"icao":"LFAC","id":1254,"type":"airport","tz":"Europe/Paris"},{"airportname":"Peronne St Quentin","city":"Peronne","country":"France","faa":null,"geo":{"alt":295,"lat":49.868547,"lon":3.0295780000000003},"icao":"LFAG","id":1255,"type":"airport","tz":"Europe/Paris"},{"airportname":"Les Loges","city":"Nangis","country":"France","faa":null,"geo":{"alt":428,"lat":48.596219,"lon":3.0067860000000004},"icao":"LFAI","id":1256,"type":"airport","tz":"Europe/Paris"},{"airportname":"Couterne","city":"Bagnole-de-l'orne","country":"France","faa":null,"geo":{"alt":718,"lat":48.545836,"lon":-0.387444},"icao":"LFAO","id":1257,"type":"airport","tz":"Europe/Paris"},{"airportname":"Bray","city":"Albert","country":"France","faa":null,"geo":{"alt":364,"lat":49.971531,"lon":2.697661},"icao":"LFAQ","id":1258,"type":"airport","tz":"Europe/Paris"}]
To Multiple Documents
> query "SELECT `travel-sample`.* from `travel-sample` WHERE `type`='airport' LIMIT 5"
───┬────────────────────┬───────────────────┬─────────┬─────┬───────────────────┬──────┬──────┬─────────┬──────────────
# │ airportname │ city │ country │ faa │ geo │ icao │ id │ type │ tz
───┼────────────────────┼───────────────────┼─────────┼─────┼───────────────────┼──────┼──────┼─────────┼──────────────
0 │ Calais Dunkerque │ Calais │ France │ CQF │ [row alt lat lon] │ LFAC │ 1254 │ airport │ Europe/Paris
1 │ Peronne St Quentin │ Peronne │ France │ │ [row alt lat lon] │ LFAG │ 1255 │ airport │ Europe/Paris
2 │ Les Loges │ Nangis │ France │ │ [row alt lat lon] │ LFAI │ 1256 │ airport │ Europe/Paris
3 │ Couterne │ Bagnole-de-l'orne │ France │ │ [row alt lat lon] │ LFAO │ 1257 │ airport │ Europe/Paris
4 │ Bray │ Albert │ France │ │ [row alt lat lon] │ LFAQ │ 1258 │ airport │ Europe/Paris
───┴────────────────────┴───────────────────┴─────────┴─────┴───────────────────┴──────┴──────┴─────────┴──────────────
> query "SELECT `travel-sample`.* FROM `travel-sample` WHERE `type`='airport' LIMIT 5" | each { echo $it | save (echo (build-string "airport_" $it.id ".json")) }
> ls airport*.json
───┬───────────────────┬──────┬───────┬─────────────
# │ name │ type │ size │ modified
───┼───────────────────┼──────┼───────┼─────────────
0 │ airport_1254.json │ File │ 200 B │ 56 secs ago
1 │ airport_1255.json │ File │ 203 B │ 56 secs ago
2 │ airport_1256.json │ File │ 193 B │ 56 secs ago
3 │ airport_1257.json │ File │ 194 B │ 56 secs ago
4 │ airport_1258.json │ File │ 178 B │ 56 secs ago
───┴───────────────────┴──────┴───────┴─────────────
To CSV
From KeyValue
> doc get airport_3719
───┬──────────────┬─────────────────────┬─────────────────
# │ id │ cas │ content
───┼──────────────┼─────────────────────┼─────────────────
0 │ airport_3719 │ 1600335833271238656 │ [row 9 columns]
───┴──────────────┴─────────────────────┴─────────────────
> doc get airport_3719 | get content
───┬───────────────┬──────────┬───────────────┬─────┬───────────────────┬──────┬──────┬─────────┬─────────────────
# │ airportname │ city │ country │ faa │ geo │ icao │ id │ type │ tz
───┼───────────────┼──────────┼───────────────┼─────┼───────────────────┼──────┼──────┼─────────┼─────────────────
0 │ Columbia Rgnl │ Columbia │ United States │ COU │ [row alt lat lon] │ KCOU │ 3719 │ airport │ America/Chicago
───┴───────────────┴──────────┴───────────────┴─────┴───────────────────┴──────┴──────┴─────────┴─────────────────
The geo column in the above data contains row data. This means that we have to flatten out the geo column due to limitations of the CSV format.
> doc get airport_3719 | get content | insert alt {get geo.alt} | insert lat {get geo.lat} | insert lon {get geo.lon} | reject geo
───┬───────────────┬──────────┬───────────────┬─────┬──────┬──────┬─────────┬─────────────────┬──────────┬─────────┬──────────
# │ airportname │ city │ country │ faa │ icao │ id │ type │ tz │ alt │ lat │ lon
───┼───────────────┼──────────┼───────────────┼─────┼──────┼──────┼─────────┼─────────────────┼──────────┼─────────┼──────────
0 │ Columbia Rgnl │ Columbia │ United States │ COU │ KCOU │ 3719 │ airport │ America/Chicago │ 889.0000 │ 38.8180 │ -92.2196
───┴───────────────┴──────────┴───────────────┴─────┴──────┴──────┴─────────┴─────────────────┴──────────┴─────────┴──────────
> doc get airport_3719 | get content | insert alt {get geo.alt} | insert lat {get geo.lat} | insert lon {get geo.lon} | reject geo | save mydoc.csv
> cat mydoc.csv
airportname,city,country,faa,icao,id,type,tz,alt,lat,lon
Columbia Rgnl,Columbia,United States,COU,KCOU,3719,airport,America/Chicago,889.0000000000000,38.81809400000000,-92.21963100000001
From Query/Analytics
> query "SELECT `travel-sample`.* from `travel-sample` WHERE `type`='airport' LIMIT 5"
───┬────────────────────┬───────────────────┬─────────┬─────┬───────────────────┬──────┬──────┬─────────┬──────────────
# │ airportname │ city │ country │ faa │ geo │ icao │ id │ type │ tz
───┼────────────────────┼───────────────────┼─────────┼─────┼───────────────────┼──────┼──────┼─────────┼──────────────
0 │ Calais Dunkerque │ Calais │ France │ CQF │ [row alt lat lon] │ LFAC │ 1254 │ airport │ Europe/Paris
1 │ Peronne St Quentin │ Peronne │ France │ │ [row alt lat lon] │ LFAG │ 1255 │ airport │ Europe/Paris
2 │ Les Loges │ Nangis │ France │ │ [row alt lat lon] │ LFAI │ 1256 │ airport │ Europe/Paris
3 │ Couterne │ Bagnole-de-l'orne │ France │ │ [row alt lat lon] │ LFAO │ 1257 │ airport │ Europe/Paris
4 │ Bray │ Albert │ France │ │ [row alt lat lon] │ LFAQ │ 1258 │ airport │ Europe/Paris
───┴────────────────────┴───────────────────┴─────────┴─────┴───────────────────┴──────┴──────┴─────────┴──────────────
The geo column in the above data contains row data. This means that we have to flatten out the geo column due to limitations of the CSV format. There are two ways that we can do this.
We can manipulate the data in the shell after performing the query:
> query "SELECT `travel-sample`.* from `travel-sample` WHERE `type`='airport' LIMIT 5" | insert alt {get geo.alt} | insert lat {get geo.lat} | insert lon {get geo.lon} | reject geo
───┬────────────────────┬───────────────────┬─────────┬─────┬──────┬──────┬─────────┬──────────────┬─────┬─────────┬─────────
# │ airportname │ city │ country │ faa │ icao │ id │ type │ tz │ alt │ lat │ lon
───┼────────────────────┼───────────────────┼─────────┼─────┼──────┼──────┼─────────┼──────────────┼─────┼─────────┼─────────
0 │ Calais Dunkerque │ Calais │ France │ CQF │ LFAC │ 1254 │ airport │ Europe/Paris │ 12 │ 50.9620 │ 1.9547
1 │ Peronne St Quentin │ Peronne │ France │ │ LFAG │ 1255 │ airport │ Europe/Paris │ 295 │ 49.8685 │ 3.0295
2 │ Les Loges │ Nangis │ France │ │ LFAI │ 1256 │ airport │ Europe/Paris │ 428 │ 48.5962 │ 3.0067
3 │ Couterne │ Bagnole-de-l'orne │ France │ │ LFAO │ 1257 │ airport │ Europe/Paris │ 718 │ 48.5458 │ -0.3874
4 │ Bray │ Albert │ France │ │ LFAQ │ 1258 │ airport │ Europe/Paris │ 364 │ 49.9715 │ 2.6976
───┴────────────────────┴───────────────────┴─────────┴─────┴──────┴──────┴─────────┴──────────────┴─────┴─────────┴─────────
> query "SELECT `travel-sample`.* from `travel-sample` WHERE `type`='airport' LIMIT 5" | insert alt {get geo.alt} | insert lat {get geo.lat} | insert lon {get geo.lon} | reject geo | save airports.csv
We can also change our query to get the data into the format that we require:
> query "SELECT airportname, city, country, faa, icao, id, type, tz, geo.alt, geo.lat, geo.lon FROM `travel-sample` WHERE `type` = 'airport' LIMIT 5"
───┬────────────────────┬─────┬───────────────────┬─────────┬─────┬──────┬──────┬─────────┬─────────┬─────────┬──────────────
# │ airportname │ alt │ city │ country │ faa │ icao │ id │ lat │ lon │ type │ tz
───┼────────────────────┼─────┼───────────────────┼─────────┼─────┼──────┼──────┼─────────┼─────────┼─────────┼──────────────
0 │ Calais Dunkerque │ 12 │ Calais │ France │ CQF │ LFAC │ 1254 │ 50.9620 │ 1.9547 │ airport │ Europe/Paris
1 │ Peronne St Quentin │ 295 │ Peronne │ France │ │ LFAG │ 1255 │ 49.8685 │ 3.0295 │ airport │ Europe/Paris
2 │ Les Loges │ 428 │ Nangis │ France │ │ LFAI │ 1256 │ 48.5962 │ 3.0067 │ airport │ Europe/Paris
3 │ Couterne │ 718 │ Bagnole-de-l'orne │ France │ │ LFAO │ 1257 │ 48.5458 │ -0.3874 │ airport │ Europe/Paris
4 │ Bray │ 364 │ Albert │ France │ │ LFAQ │ 1258 │ 49.9715 │ 2.6976 │ airport │ Europe/Paris
───┴────────────────────┴─────┴───────────────────┴─────────┴─────┴──────┴──────┴─────────┴─────────┴─────────┴──────────────
> query "SELECT airportname, city, country, faa, icao, id, type, tz, geo.alt, geo.lat, geo.lon FROM `travel-sample` WHERE `type` = 'airport' LIMIT 5" | save airports.csv
7.3. Useful snippets
This section contains a collection of useful commands and sets of commands which don’t really fit into their own section of recipes.
7.3.1. Migrating scope and collection definitions
When you create a new cluster it can be useful to migrate scope and collection definitions from an old cluster. A good example here is migrating from an on-premise cluster to a Capella cluster.
To migrate scopes, except the _default scope:
scopes --clusters "On-Prem-Cluster" --bucket travel-sample | select scope | where scope != "_default" | each { |it| scopes create $it.scope --clusters "Capella-Cluster" }
To migrate all collections, except the _default collection:
collections --clusters "On-Prem-Cluster" --bucket "travel-sample" | select scope collection | where $it.scope != "_default" | where $it.collection != "_default" | each { |it| collections create $it.collection --clusters "Capella-Cluster" --bucket "travel-sample-import" --scope $it.scope }
These examples can easily be extended to filter out any other scopes and collections you do not want to migrate. For example, to filter more scopes you would just add more where clauses: … | where scope != "_default" | where scope != "inventory" | …
7.3.2. Migrating query index definitions
When you create a new cluster it can be useful to migrate index definitions from an old cluster. A good example here is migrating from an on-premise cluster to a Capella cluster.
To migrate all of your index definitions:
query indexes --definitions --clusters "On-Prem-Cluster" | get definition | each { |it| query $it --clusters "Capella-Cluster" }