Welcome to the Couchbase Shell cbsh documentation! If you are new, you will want to start with the quickstart section and then proceed with the introduction. If you are already familiar with the shell, feel free to jump right into the command reference.
Note that while the project is maintained by Couchbase, it is not covered under the EE support contract. We are providing community support through this bug tracker.
1. Quickstart
1.1. Installation
The latest version is 0.4.0.
There are a couple of ways to get access to cbsh; the easiest one is to download our pre-built binaries for your platform of choice:
- Linux: cbsh-0.4.0-linux.zip
- macOS: cbsh-0.4.0-osx.zip
- Windows: cbsh-0.4.0-windows.zip
Once you’ve downloaded the zip file, extract it and switch into the newly created directory. The following example shows macOS, but it works very similarly on Linux (just adjust the commands for the file you downloaded):
$ unzip cbsh-0.4.0-osx.zip
$ cd cbsh-mac-x86_64
$ ls
cbsh examples LICENSE README.md
You can now run the cbsh binary:
$ ./cbsh
~/cbsh-mac-x86_64
❯ version
─────────┬───────
version │ 0.4.0
─────────┴───────
Tip: If you are running a recent macOS release (i.e. 10.15.x), you’ll likely see an error similar to "cbsh" was blocked from use because it is not from an identified developer. This is because our binaries are not yet signed. To run it nonetheless, navigate to System Preferences → Security & Privacy and click Allow Anyway. The next time you run the binary you’ll get another prompt, but after that it should run fine.

1.2. Connecting to a Cluster
If all you do is launch the shell through ./cbsh, it will try to connect to localhost using the Administrator username and the password password. You can modify this through CLI arguments (see ./cbsh -h for more information). If -p is passed in, you will be prompted to enter a password upon start:
$ ./cbsh --username Administrator --hostnames 127.0.0.1 -p
Password:
Thanks for trying CBSH!
~/cbsh-mac-x86_64
❯
While passing in command line arguments is great for exploring, you will likely want to look at the dotfile configuration support further down in the documentation, which allows you to configure multiple clusters and more.
Once in the shell, you can start to execute commands (see the introduction section for more information). As a quick sanity check, list the nodes in the cluster:
❯ nodes
──────────────┬─────────────────────────────────────────────
cluster │ local
hostname │ 127.0.0.1:8091
status │ healthy
services │ analytics,eventing,search,indexing,kv,query
version │ 6.5.1-6299-enterprise
os │ x86_64-apple-darwin17.7.0
memory_total │ 34.4 GB
memory_free │ 13.7 GB
──────────────┴─────────────────────────────────────────────
Or, if you have the travel-sample bucket installed, you can switch to it and then fetch a document:
❯ doc get airline_10 --bucket travel-sample --flatten
──────────┬─────────────────────
id │ airline_10
cas │ 1585811206390153216
country │ United States
iata │ Q5
callsign │ MILE-AIR
name │ 40-Mile Air
icao │ MLA
type │ airline
──────────┴─────────────────────
1.3. The config dotfiles
Connecting to a single cluster through the command line is nice when you are starting out, but later on you will likely connect to the same cluster all the time, or even to several of them. To help with this, you can create a .cbsh dot folder in your home directory and place a config file in it that the shell will read on startup.
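If you want to set it up by hand, a minimal sketch (assuming a POSIX shell) looks like this:
$ mkdir -p ~/.cbsh
$ touch ~/.cbsh/config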
The downloaded zip contains an example already, but here is a small sample config to help you get started as well:
version = 1
[[clusters]]
identifier = "local"
hostnames = ["127.0.0.1"]
default-bucket = "travel-sample"
username = "Administrator"
password = "password"
# [[clusters]]
# identifier = "remote"
# hostnames = ["10.143.200.101"]
# default-bucket = "myapp"
# username = "user"
# password = "pass"
This config defines two clusters, one called local and one called remote (the latter commented out). The file format is TOML, in case you were wondering. Now when you start the shell, it will connect to local automatically and you are all set.
Please check out the reference section for additional parameters you can set, as well as how to move the credentials into a separate credentials file in case you want to share your config with other people who do not use the same credentials.
2. Introduction
Couchbase Shell is fully featured: it does not only contain commands related to Couchbase, but is built on top of a general-purpose shell called nushell. This allows you to interact with the file system or any other command available on your machine, making it a great tool for both operational and development tasks on top of Couchbase.
The following introduction only touches on the basic concepts to make you productive quickly. We recommend also checking out the great nushell documentation so you can get the most out of it.
2.1. Navigating the Shell
Commands take inputs and produce output in a structured manner, most often represented as tables. Note how the generic ls command and the couchbase-specific buckets command both produce a table as their output:
❯ ls
────┬──────────────┬──────┬──────────┬──────────────
# │ name │ type │ size │ modified
────┼──────────────┼──────┼──────────┼──────────────
0 │ CHANGELOG.md │ File │ 977 B │ 3 hours ago
1 │ Cargo.lock │ File │ 133.8 KB │ 6 hours ago
2 │ Cargo.toml │ File │ 1.6 KB │ 3 hours ago
3 │ LICENSE │ File │ 11.4 KB │ 2 months ago
4 │ README.md │ File │ 6.3 KB │ 3 hours ago
5 │ docs │ Dir │ 288 B │ 27 mins ago
6 │ examples │ Dir │ 96 B │ 2 months ago
7 │ jupyter │ Dir │ 128 B │ 2 weeks ago
8 │ src │ Dir │ 224 B │ 3 hours ago
9 │ target │ Dir │ 192 B │ 2 months ago
10 │ ui-assets │ Dir │ 832 B │ 3 weeks ago
────┴──────────────┴──────┴──────────┴──────────────
❯ buckets
───┬─────────┬───────────────┬───────────┬──────────┬────────────────┬─────────────
# │ cluster │ name │ type │ replicas │ quota_per_node │ quota_total
───┼─────────┼───────────────┼───────────┼──────────┼────────────────┼─────────────
0 │ default │ beer-sample │ couchbase │ 1 │ 104.9 MB │ 104.9 MB
1 │ default │ default │ couchbase │ 1 │ 104.9 MB │ 104.9 MB
2 │ default │ memd │ memcached │ 0 │ 104.9 MB │ 104.9 MB
3 │ default │ travel-sample │ couchbase │ 1 │ 104.9 MB │ 104.9 MB
───┴─────────┴───────────────┴───────────┴──────────┴────────────────┴─────────────
You can pipe the output into other commands. For example, if you only want to see buckets that have sample in their name, you can utilize the where command:
❯ buckets | where name =~ "sample"
───┬─────────┬───────────────┬───────────┬──────────┬────────────────┬─────────────
# │ cluster │ name │ type │ replicas │ quota_per_node │ quota_total
───┼─────────┼───────────────┼───────────┼──────────┼────────────────┼─────────────
0 │ default │ beer-sample │ couchbase │ 1 │ 104.9 MB │ 104.9 MB
1 │ default │ travel-sample │ couchbase │ 1 │ 104.9 MB │ 104.9 MB
───┴─────────┴───────────────┴───────────┴──────────┴────────────────┴─────────────
In a similar fashion you can turn this structured table into other output formats, for example JSON:
❯ buckets | where name =~ "sample" | to json --pretty 2
[
{
"cluster": "default",
"name": "beer-sample",
"type": "couchbase",
"replicas": 1,
"quota_per_node": 104857600,
"quota_total": 104857600
},
{
"cluster": "default",
"name": "travel-sample",
"type": "couchbase",
"replicas": 1,
"quota_per_node": 104857600,
"quota_total": 104857600
}
]
Exactly this type of composition takes the unix philosophy of "do one thing well" and meshes it with the idea of flexible structured pipelines. This allows you to build powerful compositions that help you in your daily operations with Couchbase, from both a developer and an operations point of view.
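As a small illustration of such a composition, the following pipeline filters the bucket list and writes the result straight to a JSON file on disk (the filename sample-buckets.json is just an example):
❯ buckets | where name =~ "sample" | save sample-buckets.json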
2.2. Getting Help
Other than using this documentation for help, each command can be called with -h or --help to get information about potential flags, arguments and subcommands. Also, some commands provide additional examples.
> buckets -h
Lists all buckets of the connected cluster
Usage:
> buckets <subcommand> {flags}
Subcommands:
buckets config - Shows the bucket config (low level)
Flags:
-h, --help: Display this help message
--clusters <string>: the clusters which should be contacted
Some commands only act as groupings for subcommands, like from, to or doc. Since they do not serve a purpose on their own, they will render their help output automatically:
> doc
Perform document operations against a bucket or collection
Usage:
> doc <subcommand> {flags}
Subcommands:
doc get - Fetches a document through the data service
doc upsert - Upsert (insert or override) a document through the data service
doc insert - Insert a document through the data service
doc replace - Replace a document through the data service
doc remove - Removes a document through the data service
Flags:
2.3. The Prompt explained
Couchbase Shell uses a custom, two-line prompt to show you exactly which environment you are working in right now. Since you can connect to different clusters, switch buckets etc., it is important to know what is currently "active". Here is a sample prompt that will greet you when starting the shell:
👤Administrator at 🏠local in 🛫 travel-sample
>
It tells you that your user is Administrator, the current active cluster identifier is local and the active bucket is travel-sample. Note that the emoji for the active bucket changes if you use a regular bucket vs. the sample buckets we ship (if you are curious, try loading the beer-sample as well!).
In the second line, your actual user prompt starts.
2.4. Loading Data into the Shell
If you want to import data into Couchbase, or just load it into the shell for further processing, there are different commands available to help you. Once the data is loaded into the shell it can be sent to one of the couchbase save commands like doc upsert. Depending on the structure of the data, you may also need to tweak it a little bit so it can be properly stored.
The open command will look at file endings and try to decode the contents automatically. Imagine a file named user.json in your current directory with the following content: {"name": "Michael", "age": 32}.
> open user.json
──────┬─────────
name │ Michael
age │ 32
──────┴─────────
As you can see, the open command already decoded the JSON document into the tabular format. If the filename were just user, the import would look like this instead:
> open user
{"name": "Michael", "age": 32}
If you are dealing with data that cannot be decoded automatically, you can use the various from subcommands to help with decoding. In our case we use from json:
> open user | from json
──────┬─────────
name │ Michael
age │ 32
──────┴─────────
Tip: Take a look at the many different import formats from supports, including CSV, XML, YAML and even SQLite. With this simple tool at hand you are able to load many different data formats quickly and import them into Couchbase!
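For example, if a CSV export ended up in a file without a .csv ending, a sketch of decoding it explicitly (the filename users.txt is hypothetical) would be:
> open users.txt | from csv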
2.5. Exporting Data from the Shell
The export counterparts to open and from are save and to. You can use both commands to take tabular data from the shell and store it in files of the needed target format.
Like open, save will try to discern the format from the file ending. The following example loads a JSON file and saves it as CSV:
> cat user.json
{"name":"Michael","age":32}
> open user.json | save user.csv
> cat user.csv
name,age
Michael,32
This example is dealing with only one row for simplicity, but you can save as many rows as you need in one file.
As a motivating example, the following snippet runs a N1QL query and stores the result as a CSV file:
> query "select airportname,city,country from `travel-sample` where type = 'airport' limit 10" | save output.csv
> cat output.csv
airportname,city,country
Calais Dunkerque,Calais,France
Peronne St Quentin,Peronne,France
Les Loges,Nangis,France
Couterne,Bagnole-de-l'orne,France
Bray,Albert,France
Le Touquet Paris Plage,Le Tourquet,France
Denain,Valenciennes,France
Glisy,Amiens,France
La Garenne,Agen,France
Cazaux,Cazaux,France
3. Couchbase Commands
The following sections discuss the individual couchbase-specific commands in greater detail. Remember, you can always mix and match them with other built-in shell commands as well as executables from your environment.
3.1. Preface: the --clusters flag
On many commands, you will notice a flag called --clusters. This flag allows you to run the specific command not just against the active cluster, but rather against a group of clusters that the shell knows about. The argument for this flag is an identifier combined with a regular expression. So imagine you have four clusters set up with the following identifiers:
> clusters | get identifier
───┬────────
0 │ prod-us-west
1 │ prod-us-east
2 │ prod-eu-center
3 │ local-test
───┴────────
If you wanted to run a command against all clusters whose identifiers start with prod-us, you could use --clusters prod-us.*. In the background this gets passed to a regex engine, so you can go a little crazy with it if needed.
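For instance, to list the buckets on all production US clusters at once (building on the identifiers above), something like this should work:
> buckets --clusters prod-us.*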
3.2. Preface: pivot mode
Sometimes data is easier to see if the table is pivoted so that the columns become rows and rows become columns.
For example, take the nodes command detailed below; by default its output will look like this:
> nodes
───┬─────────┬─────────┬─────────┬─────────┬─────────┬─────────┬─────────┬─────────
# │ cluster │ hostnam │ status │ service │ version │ os │ memory_ │ memory_
│ │ e │ │ s │ │ │ total │ free
───┼─────────┼─────────┼─────────┼─────────┼─────────┼─────────┼─────────┼─────────
0 │ local │ 127.0.0 │ healthy │ analyti │ 6.5.1-6 │ x86_64- │ 34.4 GB │ 8.4 GB
│ │ .1:8091 │ │ cs,even │ 299-ent │ apple-d │ │
│ │ │ │ ting,se │ erprise │ arwin17 │ │
│ │ │ │ arch,in │ │ .7.0 │ │
│ │ │ │ dexing, │ │ │ │
│ │ │ │ kv,quer │ │ │ │
│ │ │ │ y │ │ │ │
───┴─────────┴─────────┴─────────┴─────────┴─────────┴─────────┴─────────┴─────────
This is easier to see if the table is pivoted to:
> nodes
──────────────┬─────────────────────────────────────────────
cluster │ local
hostname │ 127.0.0.1:8091
status │ healthy
services │ analytics,eventing,search,indexing,kv,query
version │ 6.5.1-6299-enterprise
os │ x86_64-apple-darwin17.7.0
memory_total │ 34.4 GB
memory_free │ 8.4 GB
──────────────┴─────────────────────────────────────────────
Nushell offers a couple of ways to set pivoting mode:
- config set pivot_mode off: this is the default setting where pivoting is turned off.
- config set pivot_mode auto (recommended): will allow Nushell to determine when to apply pivoting (typically when there is only one row in the results).
- config set pivot_mode always: will cause tables to always be pivoted.
3.3. use and the Environment
While multiple clusters can be connected at the same time, there is only ever one cluster (at most) active. The same is true for a bucket inside that cluster.
You can run the use command, which will tell you which cluster and bucket are currently active (you are also able to tell from the prompt):
> use
─────────┬───────────────
cluster │ local
bucket │ travel-sample
─────────┴───────────────
Changing the active cluster is done via use cluster, and changing the active bucket via use bucket:
> use bucket beer-sample
────────┬─────────────
bucket │ beer-sample
────────┴─────────────
> use
─────────┬─────────────
cluster │ local
bucket │ beer-sample
─────────┴─────────────
Both the output of use and the prompt will reflect the changes.
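Switching the cluster works the same way; assuming a second cluster registered under the identifier remote (as in the sample config above), the call would be:
> use cluster remote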
3.4. Working with clusters
The clusters command lists all the clusters you specified in your configuration and that the shell might be connected to.
> clusters
───┬────────┬─────┬────────────┬───────────────
# │ active │ tls │ identifier │ username
───┼────────┼─────┼────────────┼───────────────
0 │ No │ No │ local │ Administrator
1 │ Yes │ No │ remote │ Administrator
───┴────────┴─────┴────────────┴───────────────
3.5. Working with buckets
The buckets command lists all the buckets from your active cluster:
> buckets
───┬─────────┬───────────────┬───────────┬──────────┬────────────────┬─────────────
# │ cluster │ name │ type │ replicas │ quota_per_node │ quota_total
───┼─────────┼───────────────┼───────────┼──────────┼────────────────┼─────────────
0 │ default │ beer-sample │ couchbase │ 1 │ 104.9 MB │ 104.9 MB
1 │ default │ default │ couchbase │ 1 │ 104.9 MB │ 104.9 MB
2 │ default │ memd │ memcached │ 0 │ 104.9 MB │ 104.9 MB
3 │ default │ travel-sample │ couchbase │ 1 │ 104.9 MB │ 104.9 MB
───┴─────────┴───────────────┴───────────┴──────────┴────────────────┴─────────────
As an advanced command, it is also possible to get the configuration for a bucket:
> buckets config beer-sample
────────────────────────┬──────────────────────────────────────────────────────────────────────────────────────────
name │ beer-sample
uuid │ 0af4496fe0612c76885d5dcd3e010c0d
bucketType │ membase
authType │ sasl
uri │ /pools/default/buckets/beer-sample?bucket_uuid=0af4496fe0612c76885d5dcd3e010c0d
streamingUri │ /pools/default/bucketsStreaming/beer-sample?bucket_uuid=0af4496fe0612c76885d5dcd3e010c0d
localRandomKeyUri │ /pools/default/buckets/beer-sample/localRandomKey
controllers │ [row compactAll compactDB purgeDeletes startRecovery]
nodes │ [table 1 rows]
stats │ [row directoryURI nodeStatsListURI uri]
nodeLocator │ vbucket
saslPassword │ 7fd7338a6b8fb30dbfb80205834db634
ddocs │ [row uri]
replicaIndex │ Yes
autoCompactionSettings │ No
vBucketServerMap │ [row hashAlgorithm numReplicas serverList vBucketMap]
maxTTL │ 0
compressionMode │ passive
replicaNumber │ 1
threadsNumber │ 3
quota │ [row ram rawRAM]
basicStats │ [row 8 columns]
evictionPolicy │ valueOnly
conflictResolutionType │ seqno
bucketCapabilitiesVer │
bucketCapabilities │ [table 9 rows]
────────────────────────┴──────────────────────────────────────────────────────────────────────────────────────────
If you are unsure what you would use this for, you probably don’t need it. If you have pivot mode (detailed above) turned off then this command can produce results that are difficult to read.
3.6. Listing nodes
The nodes command allows you to list all the nodes of the cluster you are currently connected to.
> nodes
───┬─────────┬─────────────────────┬─────────┬───────────────────┬───────────────────────┬──────────────────────────┬──────────────┬─────────────
# │ cluster │ hostname │ status │ services │ version │ os │ memory_total │ memory_free
───┼─────────┼─────────────────────┼─────────┼───────────────────┼───────────────────────┼──────────────────────────┼──────────────┼─────────────
0 │ remote │ 10.143.200.101:8091 │ healthy │ indexing,kv,query │ 6.5.0-4960-enterprise │ x86_64-unknown-linux-gnu │ 2.1 GB │ 837.7 MB
1 │ remote │ 10.143.200.102:8091 │ healthy │ indexing,kv,query │ 6.5.0-4960-enterprise │ x86_64-unknown-linux-gnu │ 2.1 GB │ 1.0 GB
───┴─────────┴─────────────────────┴─────────┴───────────────────┴───────────────────────┴──────────────────────────┴──────────────┴─────────────
3.7. Reading and Writing `doc`uments
The fastest way to interact with documents is through the key value service (as long as you know the document ID). All those commands are located as subcommands under the doc namespace.
3.7.1. Reading
You can retrieve a document with doc get:
> doc get airline_10
─────────┬─────────────────────
id │ airline_10
cas │ 1585811206390153216
content │ [row 7 columns]
─────────┴─────────────────────
To distinguish the actual content from the metadata, the content is nested in the content field. If you want to have everything at the top level, you can use the --flatten flag:
> doc get airline_10 --flatten
──────────┬─────────────────────
id │ 10
cas │ 1585811206390153216
country │ United States
iata │ Q5
callsign │ MILE-AIR
name │ 40-Mile Air
icao │ MLA
type │ airline
──────────┴─────────────────────
If the document is not found, an empty result is returned.
To perform a bulk get operation, the incoming stream can be utilized.
> echo [airline_10 airline_10748 airline_137] | wrap id | doc get
───┬───────────────┬─────────────────────┬─────────────────
# │ id │ cas │ content
───┼───────────────┼─────────────────────┼─────────────────
0 │ airline_10 │ 1594902508775604224 │ [row 7 columns]
1 │ airline_10748 │ 1594902508776521728 │ [row 7 columns]
2 │ airline_137 │ 1594902508777439232 │ [row 7 columns]
───┴───────────────┴─────────────────────┴─────────────────
If doc get operates on an incoming stream, it will extract the document ID from the id column. This behavior can be customized through the --id-column flag.
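As a sketch, assuming the incoming stream names its ID column airline_id instead of id (the column name here is hypothetical), the flag would be used like this:
> echo [airline_10 airline_10748 airline_137] | wrap airline_id | doc get --id-column airline_id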
3.7.2. Mutating
Documents can be mutated with doc insert, doc upsert and doc replace.
All three commands take similar arguments. If you only want to mutate a single document, passing in the ID and the content as arguments is the simplest way:
> doc upsert my-doc {"hello": "world"}
───────────┬───
processed │ 1
success │ 1
failed │ 0
───────────┴───
Multiple documents can be mutated through an input stream as well, defaulting to the id and content columns:
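As a minimal sketch (assuming a users.json file whose documents contain a name field, and reusing the wrap/insert pattern from the import recipes further down):
> open users.json | each { wrap content | insert id {echo $it.content.name} } | doc upsert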
3.7.3. Removing
Documents can be removed with doc remove.
> doc remove airline_10
───────────┬───
processed │ 1
success │ 1
failed │ 0
───────────┴───
Similar to doc get, if you want to delete more than one document at the same time, provide a stream of IDs with an id column:
> echo [airline_10 airline_10748 airline_137] | wrap id | doc remove
───────────┬───
processed │ 3
success │ 2
failed │ 1
───────────┴───
3.8. Interacting with the data service
Note that the data commands are intended to work directly against the data (Key/Value) service. If you are looking to manipulate documents, please use the doc commands instead.
You can use the data stats subcommand to list the KV stats for all the nodes connected to the current cluster. In the following example we filter on a subset of the stats, since the full output can get quite verbose.
> data stats | where key =~ cmd_
────┬─────────┬───────────┬──────────────────────────────┬──────────
# │ cluster │ node │ key │ value
────┼─────────┼───────────┼──────────────────────────────┼──────────
0 │ local │ 127.0.0.1 │ cmd_get │ 2095
1 │ local │ 127.0.0.1 │ cmd_set │ 1
2 │ local │ 127.0.0.1 │ cmd_flush │ 0
3 │ local │ 127.0.0.1 │ cmd_subdoc_lookup │ 0
4 │ local │ 127.0.0.1 │ cmd_subdoc_mutation │ 0
5 │ local │ 127.0.0.1 │ cmd_total_sets │ 41049
6 │ local │ 127.0.0.1 │ cmd_total_gets │ 12595
7 │ local │ 127.0.0.1 │ cmd_total_ops │ 53644
8 │ local │ 127.0.0.1 │ cmd_mutation │ 2
9 │ local │ 127.0.0.1 │ cmd_lookup │ 2095
10 │ local │ 127.0.0.1 │ cmd_lock │ 0
11 │ local │ 127.0.0.1 │ cmd_lookup_10s_count │ 0
12 │ local │ 127.0.0.1 │ cmd_lookup_10s_duration_us │ 0
13 │ local │ 127.0.0.1 │ cmd_mutation_10s_count │ 536440
14 │ local │ 127.0.0.1 │ cmd_mutation_10s_duration_us │ 22059071
────┴─────────┴───────────┴──────────────────────────────┴──────────
3.9. whoami?
Sometimes simple commands are helpful when debugging. The whoami command will ask the active cluster that very question and return various information about the user.
> whoami
─────────┬────────────────
roles │ [table 1 rows]
id │ Administrator
domain │ admin
cluster │ local
─────────┴────────────────
Since a user can have many roles, they need to be unnested if you want to look at them:
> whoami | get roles
──────┬───────
role │ admin
──────┴───────
3.10. version
The version command lists the version of the Couchbase Shell.
> version
─────────┬───────────
version │ 1.0.0-dev
─────────┴───────────
4. Reference
4.1. Config File Format
The ~/.cbsh/config file with examples:
# Allows us to evolve in the future without breaking old config files
version = 1
[[clusters]]
identifier = "default"
hostnames = ["127.0.0.1"]
default-bucket = "travel-sample"
default-scope = "my-scope"
default-collection = "my-collection"
# The following can be part of the config or credentials
username = "Administrator"
password = "password"
cert-path = "/absolute/path/to/server.cert"
data-timeout = "2500ms"
connect-timeout = "7500ms"
query-timeout = "75s"
4.2. Credentials File Format
The optional ~/.cbsh/credentials file with examples:
# Allows us to evolve in the future without breaking old config files
version = 1
[[clusters]]
identifier = "default"
username = "Administrator"
password = "password"
cert-path = "/absolute/path/to/server.cert"
5. Troubleshooting
When using Couchbase Shell, commands will sometimes return errors from the underlying SDK. These often look something like:
> doc get t
─────────┬───────────────────────────────────────────────────
id │ t
cas │
content │
error │ Unknown Error(A generic / unknown error happened:
│ {"msg":"LCB_ERR_NO_MATCHING_SERVER (1010)"})
─────────┴───────────────────────────────────────────────────
It can be difficult to work out why commands fail in ways such as this. Couchbase Shell provides a log file containing output from the SDK logs. This log file lives at <binary_dir>/.cbshlog/sdk.log, where <binary_dir> is the directory where the executable is located.
You can view this log using your favourite tools, such as tail or cat.
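For example, to peek at the most recent entries (assuming you run this from the directory that contains the binary):
$ tail -n 20 .cbshlog/sdk.log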
You can also use the sdklog command; running it will output the last x lines from the log file (by default x is 10):
👤 Administrator at 🏠 default in 🗄 defaul
> sdklog
───┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
# │ logs
───┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
0 │ 2021-02-04T10:21:31.039401+00:00 - HTTP node list finished. Trying to obtain connection from first node in list
1 │ 2021-02-04T10:21:31.039365+00:00 - Ignoring failure. Current=0x7fd1ab413c50 (CCCP)
2 │ 2021-02-04T10:21:31.039325+00:00 - Provider 'HTTP' failed: LCB_ERR_BUCKET_NOT_FOUND (210)
3 │ 2021-02-04T10:21:31.039256+00:00 - <10.112.210.102:8091> (CTX=0x7fd1ab704dd0,bc_http) Destroying context. Pending Writes=0, Entered=true, Socket
│ Refcount=1
4 │ 2021-02-04T10:21:31.039216+00:00 - <10.112.210.102:8091> (CTX=0x7fd1ab704dd0,bc_http) Got non-success HTTP status code 404
5 │ 2021-02-04T10:21:31.039176+00:00 - <10.112.210.102:8091> (CTX=0x7fd1ab704dd0,bc_http) Got 404 on config stream. Assuming bucket does not exist as we've
│ tried both URL types
6 │ 2021-02-04T10:21:31.039075+00:00 - <10.112.210.102:8091> (CTX=0x7fd1ab704dd0,bc_http) Received 411 bytes on HTTP stream
7 │ 2021-02-04T10:21:30.930327+00:00 - <10.112.210.102:8091> (CTX=0x7fd1ab704dd0,bc_http) Got 404 on config stream. Assuming terse URI not supported on
│ cluster
8 │ 2021-02-04T10:21:30.930268+00:00 - <10.112.210.102:8091> (CTX=0x7fd1ab704dd0,bc_http) Received 411 bytes on HTTP stream
9 │ 2021-02-04T10:21:30.930202+00:00 - <10.112.210.102:11210> (CTX=0x7fd1ab705440,sasl,SASLREQ=0x7fd1ab705270) Using SCRAM-SHA512 SASL mechanism
───┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
From looking at this log file we can see the error LCB_ERR_BUCKET_NOT_FOUND, which could suggest that we’ve used an incorrect bucket name. Which, in this case, is correct: you can see on the prompt that we’ve mistyped the name of our "default" bucket as "defaul".
To get more lines than the default 10, we can use the --last x flag:
👤 Administrator at 🏠 default in 🗄 defaul
> sdklog --last 20
────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
# │ logs
────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
0 │ 2021-02-04T10:23:10.070782+00:00 - <10.112.210.102:8091> (CTX=0x7fd1ab606010,unknown) Pairing with SOCK=1c9d04677063bb3e
1 │ 2021-02-04T10:23:10.070732+00:00 - Successfuly connected to REST API 10.112.210.102:8091
2 │ 2021-02-04T10:23:10.070693+00:00 - <10.112.210.102:8091> (SOCK=1c9d04677063bb3e) Successfully set TCP_KEEPALIVE
3 │ 2021-02-04T10:23:10.070642+00:00 - <10.112.210.102:8091> (SOCK=1c9d04677063bb3e) Successfully set TCP_NODELAY
4 │ 2021-02-04T10:23:10.070540+00:00 - <10.112.210.102:8091> (SOCK=1c9d04677063bb3e) Connected established
5 │ 2021-02-04T10:23:09.965411+00:00 - <10.112.210.102:8091> (SOCK=1c9d04677063bb3e) Scheduling I/O watcher for asynchronous connection completion.
6 │ 2021-02-04T10:23:09.965260+00:00 - <10.112.210.102:8091> (SOCK=1c9d04677063bb3e) Created new socket with FD=14
7 │ 2021-02-04T10:23:09.965142+00:00 - <10.112.210.102:8091> (SOCK=1c9d04677063bb3e) Starting. Timeout=7000000us
8 │ 2021-02-04T10:23:09.965059+00:00 - <10.112.210.101:8091> (CTX=0x7fd1ab605e20,bc_http) Destroying context. Pending Writes=0, Entered=true, Socket
│ Refcount=1
9 │ 2021-02-04T10:23:09.965015+00:00 - <10.112.210.101:8091> (CTX=0x7fd1ab605e20,bc_http) Got non-success HTTP status code 404
10 │ 2021-02-04T10:23:09.964964+00:00 - <10.112.210.101:8091> (CTX=0x7fd1ab605e20,bc_http) Got 404 on config stream. Assuming bucket does not exist as we've
│ tried both URL types
11 │ 2021-02-04T10:23:09.964849+00:00 - <10.112.210.101:8091> (CTX=0x7fd1ab605e20,bc_http) Received 411 bytes on HTTP stream
12 │ 2021-02-04T10:23:09.860333+00:00 - <10.112.210.101:8091> (CTX=0x7fd1ab605e20,bc_http) Got 404 on config stream. Assuming terse URI not supported on
│ cluster
13 │ 2021-02-04T10:23:09.860187+00:00 - <10.112.210.101:8091> (CTX=0x7fd1ab605e20,bc_http) Received 411 bytes on HTTP stream
14 │ 2021-02-04T10:23:09.757046+00:00 - <10.112.210.101:8091> (CTX=0x7fd1ab605e20,unknown) Pairing with SOCK=1c650149ecd8076d
15 │ 2021-02-04T10:23:09.756993+00:00 - Successfuly connected to REST API 10.112.210.101:8091
16 │ 2021-02-04T10:23:09.756955+00:00 - <10.112.210.101:8091> (SOCK=1c650149ecd8076d) Successfully set TCP_KEEPALIVE
17 │ 2021-02-04T10:23:09.756911+00:00 - <10.112.210.101:8091> (SOCK=1c650149ecd8076d) Successfully set TCP_NODELAY
18 │ 2021-02-04T10:23:09.756817+00:00 - <10.112.210.101:8091> (SOCK=1c650149ecd8076d) Connected established
19 │ 2021-02-04T10:23:09.653877+00:00 - <10.112.210.101:8091> (SOCK=1c650149ecd8076d) Scheduling I/O watcher for asynchronous connection completion.
────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
6. Recipes
6.1. Importing data
Couchbase Shell supports loading data from a variety of formats and sources.
6.1.1. A Note On Data format
The doc upsert command requires there to be only two fields/columns for an upsert.
There can be more than two fields/columns in the data but only two can be used.
By default, these two columns are named id and content, but these can be overridden with --id-column and --content-column.
Given the following document format we need to perform some data manipulation to get it into a format which works with doc upsert:
> cat mydoc.json
{"id":3719,"cas":1600344369374167040,"type":"airport","airportname":"Columbia Rgnl","city":"Columbia","country":"United States","faa":"COU","icao":"KCOU","tz":"America/Chicago"}
> open mydoc.json | wrap content | insert id {echo $it.content.airportname}
───┬──────────────┬─────────────────
# │ id │ content
───┼──────────────┼─────────────────
0 │ airport_3719 │ [row 9 columns]
───┴──────────────┴─────────────────
6.1.2. From file
From JSON
Single Document
> cat mydoc.json
{"id":3719,"cas":1600344369374167040,"type":"airport","airportname":"Columbia Rgnl","city":"Columbia","country":"United States","faa":"COU","icao":"KCOU","tz":"America/Chicago"}
> open mydoc.json | wrap content | insert id {echo $it.content.airportname} | doc upsert
───┬───────────┬─────────┬────────
# │ processed │ success │ failed
───┼───────────┼─────────┼────────
0 │ 1 │ 1 │ 0
───┴───────────┴─────────┴────────
Multiple Documents
> ls airports
───┬────────────────────────────┬──────┬───────┬────────────
# │ name │ type │ size │ modified
───┼────────────────────────────┼──────┼───────┼────────────
0 │ airports/airport_3719.json │ File │ 151 B │ 2 days ago
1 │ airports/airport_3720.json │ File │ 155 B │ 2 days ago
2 │ airports/airport_3721.json │ File │ 172 B │ 2 days ago
3 │ airports/airport_3722.json │ File │ 161 B │ 2 days ago
4 │ airports/airport_3723.json │ File │ 163 B │ 2 days ago
5 │ airports/airport_3724.json │ File │ 156 B │ 2 days ago
6 │ airports/airport_3725.json │ File │ 148 B │ 2 days ago
7 │ airports/airport_3726.json │ File │ 164 B │ 2 days ago
8 │ airports/airport_3727.json │ File │ 169 B │ 2 days ago
9 │ airports/airport_3728.json │ File │ 152 B │ 2 days ago
───┴────────────────────────────┴──────┴───────┴────────────
> open airports/airport_3719.json
───┬──────┬─────────┬─────────────┬──────────┬─────────┬─────┬──────┬──────────────
# │ id │ type │ airportname │ city │ country │ faa │ icao │ tz
───┼──────┼─────────┼─────────────┼──────────┼─────────┼─────┼──────┼──────────────
0 │ 3719 │ airport │ Columbia │ Columbia │ United │ COU │ KCOU │ America/Chic
│ │ │ Rgnl │ │ States │ │ │ ago
───┴──────┴─────────┴─────────────┴──────────┴─────────┴─────┴──────┴──────────────
> ls airports/ | open $it.name | each { wrap content | insert id {echo $it.content.airportname} } | doc upsert
───┬───────────┬─────────┬────────
# │ processed │ success │ failed
───┼───────────┼─────────┼────────
0 │ 10 │ 10 │ 0
───┴───────────┴─────────┴────────
From CSV
Single Document
> cat mydoc.csv
id,cas,type,airportname,city,country,faa,icao,tz
3719,1600344369374167040,airport,Columbia Rgnl,Columbia,United States,COU,KCOU,America/Chicago
> open mydoc.csv | each { wrap content | insert id {echo $it.content.airportname} } | doc upsert
───┬───────────┬─────────┬────────
# │ processed │ success │ failed
───┼───────────┼─────────┼────────
0 │ 1 │ 1 │ 0
───┴───────────┴─────────┴────────
Multiple Documents
> cat airports.csv
airportname,city,country,faa,icao,id,type,tz
Calais Dunkerque,Calais,France,CQF,LFAC,1254,airport,Europe/Paris
Peronne St Quentin,Peronne,France,,LFAG,1255,airport,Europe/Paris
Les Loges,Nangis,France,,LFAI,1256,airport,Europe/Paris
Couterne,Bagnole-de-l'orne,France,,LFAO,1257,airport,Europe/Paris
Bray,Albert,France,,LFAQ,1258,airport,Europe/Paris
> open airports.csv | each { wrap content | insert id {echo $it.content.airportname} } | doc upsert
───┬───────────┬─────────┬────────
# │ processed │ success │ failed
───┼───────────┼─────────┼────────
0 │ 10 │ 10 │ 0
───┴───────────┴─────────┴────────
6.1.3. Faking data
> cat user.tera
{
"id": "{{ uuid() }}",
"content": {
"name": "{{ name() }}",
"username": "{{ userName() }}",
"email": "{{ safeEmail() }}",
"last_access": {
"from": "{{ ipV4() }}"
}
}
}
> fake --template user.tera --num-rows 5
───┬──────────────────────────────────────┬───────────────────────────────────────
# │ id │ content
───┼──────────────────────────────────────┼───────────────────────────────────────
0 │ 0cabc14a-b9bc-4de9-9caa-6efe23ff350f │ [row email last_access name username]
1 │ 27f44eef-e4f5-4216-b65a-897ef357753d │ [row email last_access name username]
2 │ cc24c8cd-9dc6-4767-a627-e2b55c814c62 │ [row email last_access name username]
3 │ 12ad3953-11cc-43f7-991f-d680d9268357 │ [row email last_access name username]
4 │ 206194fa-7311-4a2f-a5eb-85d182199d8f │ [row email last_access name username]
───┴──────────────────────────────────────┴───────────────────────────────────────
> fake --template user.tera --num-rows 5 | doc upsert
───┬───────────┬─────────┬────────
# │ processed │ success │ failed
───┼───────────┼─────────┼────────
0 │ 5 │ 5 │ 0
───┴───────────┴─────────┴────────
6.1.4. Modifying data
In some circumstances you may want to modify the data before you import it.
Let’s take the example of importing from a CSV file, but this time the airports.csv file is missing the type column, which we want to add to our data:
> cat airports.csv
airportname,city,country,faa,icao,id,tz
Calais Dunkerque,Calais,France,CQF,LFAC,1254,Europe/Paris
Peronne St Quentin,Peronne,France,,LFAG,1255,Europe/Paris
Les Loges,Nangis,France,,LFAI,1256,Europe/Paris
Couterne,Bagnole-de-l'orne,France,,LFAO,1257,Europe/Paris
Bray,Albert,France,,LFAQ,1258,Europe/Paris
> open ~/demo/airports.csv | insert type airport
───┬─────────────┬──────────────┬─────────┬─────┬──────┬──────┬──────────────┬─────────
# │ airportname │ city │ country │ faa │ icao │ id │ tz │ type
───┼─────────────┼──────────────┼─────────┼─────┼──────┼──────┼──────────────┼─────────
0 │ Calais │ Calais │ France │ CQF │ LFAC │ 1254 │ Europe/Paris │ airport
│ Dunkerque │ │ │ │ │ │ │
1 │ Peronne St │ Peronne │ France │ │ LFAG │ 1255 │ Europe/Paris │ airport
│ Quentin │ │ │ │ │ │ │
2 │ Les Loges │ Nangis │ France │ │ LFAI │ 1256 │ Europe/Paris │ airport
3 │ Couterne │ Bagnole-de-l │ France │ │ LFAO │ 1257 │ Europe/Paris │ airport
│ │ 'orne │ │ │ │ │ │
4 │ Bray │ Albert │ France │ │ LFAQ │ 1258 │ Europe/Paris │ airport
───┴─────────────┴──────────────┴─────────┴─────┴──────┴──────┴──────────────┴─────────
We can also add a column based on data from other columns, for instance adding a type column which is set to the relevant city:
> open ~/demo/airports.csv | each { insert type $it.city }
───┬────────────┬────────────┬─────────┬─────┬──────┬──────┬────────────┬────────────
# │ airportnam │ city │ country │ faa │ icao │ id │ tz │ type
│ e │ │ │ │ │ │ │
───┼────────────┼────────────┼─────────┼─────┼──────┼──────┼────────────┼────────────
0 │ Calais │ Calais │ France │ CQF │ LFAC │ 1254 │ Europe/Par │ Calais
│ Dunkerque │ │ │ │ │ │ is │
1 │ Peronne St │ Peronne │ France │ │ LFAG │ 1255 │ Europe/Par │ Peronne
│ Quentin │ │ │ │ │ │ is │
2 │ Les Loges │ Nangis │ France │ │ LFAI │ 1256 │ Europe/Par │ Nangis
│ │ │ │ │ │ │ is │
3 │ Couterne │ Bagnole-de │ France │ │ LFAO │ 1257 │ Europe/Par │ Bagnole-de
│ │ -l'orne │ │ │ │ │ is │ -l'orne
4 │ Bray │ Albert │ France │ │ LFAQ │ 1258 │ Europe/Par │ Albert
│ │ │ │ │ │ │ is │
───┴────────────┴────────────┴─────────┴─────┴──────┴──────┴────────────┴────────────
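To complete the import, the modified rows can be fed straight into the pipeline from the recipes above; a sketch, reusing the airportname-based ID from the earlier examples:
> open ~/demo/airports.csv | insert type airport | each { wrap content | insert id {echo $it.content.airportname} } | doc upsert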
6.2. Exporting data
Couchbase Shell supports exporting data to a variety of formats and sources.
6.2.1. A Note On Data format
The doc get command exposes data as three fields: id, cas and content.
The body of the document is stored within the content column.
If you want to only store the document body then you can use doc get <id> | get content.
6.2.2. To file
To JSON
From KeyValue
> doc get airport_3719
───┬──────────────┬─────────────────────┬─────────────────
# │ id │ cas │ content
───┼──────────────┼─────────────────────┼─────────────────
0 │ airport_3719 │ 1600335833271238656 │ [row 9 columns]
───┴──────────────┴─────────────────────┴─────────────────
> doc get airport_3719 | get content | save mydoc.json
> cat mydoc.json
{"airportname":"Columbia Rgnl","city":"Columbia","country":"United States","faa":"COU","geo":{"alt":889.0,"lat":38.818094,"lon":-92.219631},"icao":"KCOU","id":3719,"type":"airport","tz":"America/Chicago"}
From Query/Analytics
To Single Document
> query "SELECT `travel-sample`.* from `travel-sample` WHERE `type`='airport' LIMIT 5"
───┬────────────────────┬───────────────────┬─────────┬─────┬───────────────────┬──────┬──────┬─────────┬──────────────
# │ airportname │ city │ country │ faa │ geo │ icao │ id │ type │ tz
───┼────────────────────┼───────────────────┼─────────┼─────┼───────────────────┼──────┼──────┼─────────┼──────────────
0 │ Calais Dunkerque │ Calais │ France │ CQF │ [row alt lat lon] │ LFAC │ 1254 │ airport │ Europe/Paris
1 │ Peronne St Quentin │ Peronne │ France │ │ [row alt lat lon] │ LFAG │ 1255 │ airport │ Europe/Paris
2 │ Les Loges │ Nangis │ France │ │ [row alt lat lon] │ LFAI │ 1256 │ airport │ Europe/Paris
3 │ Couterne │ Bagnole-de-l'orne │ France │ │ [row alt lat lon] │ LFAO │ 1257 │ airport │ Europe/Paris
4 │ Bray │ Albert │ France │ │ [row alt lat lon] │ LFAQ │ 1258 │ airport │ Europe/Paris
───┴────────────────────┴───────────────────┴─────────┴─────┴───────────────────┴──────┴──────┴─────────┴──────────────
> query "SELECT `travel-sample`.* from `travel-sample` WHERE `type`='airport' LIMIT 5" | save airports.json
> cat airports.json
[{"airportname":"Calais Dunkerque","city":"Calais","country":"France","faa":"CQF","geo":{"alt":12,"lat":50.962097,"lon":1.9547640000000002},"icao":"LFAC","id":1254,"type":"airport","tz":"Europe/Paris"},{"airportname":"Peronne St Quentin","city":"Peronne","country":"France","faa":null,"geo":{"alt":295,"lat":49.868547,"lon":3.0295780000000003},"icao":"LFAG","id":1255,"type":"airport","tz":"Europe/Paris"},{"airportname":"Les Loges","city":"Nangis","country":"France","faa":null,"geo":{"alt":428,"lat":48.596219,"lon":3.0067860000000004},"icao":"LFAI","id":1256,"type":"airport","tz":"Europe/Paris"},{"airportname":"Couterne","city":"Bagnole-de-l'orne","country":"France","faa":null,"geo":{"alt":718,"lat":48.545836,"lon":-0.387444},"icao":"LFAO","id":1257,"type":"airport","tz":"Europe/Paris"},{"airportname":"Bray","city":"Albert","country":"France","faa":null,"geo":{"alt":364,"lat":49.971531,"lon":2.697661},"icao":"LFAQ","id":1258,"type":"airport","tz":"Europe/Paris"}]
To Multiple Documents
> query "SELECT `travel-sample`.* from `travel-sample` WHERE `type`='airport' LIMIT 5"
───┬────────────────────┬───────────────────┬─────────┬─────┬───────────────────┬──────┬──────┬─────────┬──────────────
# │ airportname │ city │ country │ faa │ geo │ icao │ id │ type │ tz
───┼────────────────────┼───────────────────┼─────────┼─────┼───────────────────┼──────┼──────┼─────────┼──────────────
0 │ Calais Dunkerque │ Calais │ France │ CQF │ [row alt lat lon] │ LFAC │ 1254 │ airport │ Europe/Paris
1 │ Peronne St Quentin │ Peronne │ France │ │ [row alt lat lon] │ LFAG │ 1255 │ airport │ Europe/Paris
2 │ Les Loges │ Nangis │ France │ │ [row alt lat lon] │ LFAI │ 1256 │ airport │ Europe/Paris
3 │ Couterne │ Bagnole-de-l'orne │ France │ │ [row alt lat lon] │ LFAO │ 1257 │ airport │ Europe/Paris
4 │ Bray │ Albert │ France │ │ [row alt lat lon] │ LFAQ │ 1258 │ airport │ Europe/Paris
───┴────────────────────┴───────────────────┴─────────┴─────┴───────────────────┴──────┴──────┴─────────┴──────────────
> query "SELECT `travel-sample`.* FROM `travel-sample` WHERE `type`='airport' LIMIT 5" | each { save $(echo "airport_" $(echo $it.id | str from) ".json" | str collect) }
> ls airport*.json
───┬───────────────────┬──────┬───────┬─────────────
# │ name │ type │ size │ modified
───┼───────────────────┼──────┼───────┼─────────────
0 │ airport_1254.json │ File │ 200 B │ 56 secs ago
1 │ airport_1255.json │ File │ 203 B │ 56 secs ago
2 │ airport_1256.json │ File │ 193 B │ 56 secs ago
3 │ airport_1257.json │ File │ 194 B │ 56 secs ago
4 │ airport_1258.json │ File │ 178 B │ 56 secs ago
───┴───────────────────┴──────┴───────┴─────────────
To CSV
From KeyValue
> doc get airport_3719
───┬──────────────┬─────────────────────┬─────────────────
# │ id │ cas │ content
───┼──────────────┼─────────────────────┼─────────────────
0 │ airport_3719 │ 1600335833271238656 │ [row 9 columns]
───┴──────────────┴─────────────────────┴─────────────────
> doc get airport_3719 | get content
───┬───────────────┬──────────┬───────────────┬─────┬───────────────────┬──────┬──────┬─────────┬─────────────────
# │ airportname │ city │ country │ faa │ geo │ icao │ id │ type │ tz
───┼───────────────┼──────────┼───────────────┼─────┼───────────────────┼──────┼──────┼─────────┼─────────────────
0 │ Columbia Rgnl │ Columbia │ United States │ COU │ [row alt lat lon] │ KCOU │ 3719 │ airport │ America/Chicago
───┴───────────────┴──────────┴───────────────┴─────┴───────────────────┴──────┴──────┴─────────┴─────────────────
The geo column in the above data contains row data.
This means that we have to flatten out the geo column due to limitations of the CSV format.
> doc get airport_3719 | get content | insert alt {get geo.alt} | insert lat {get geo.lat} | insert lon {get geo.lon} | reject geo
───┬───────────────┬──────────┬───────────────┬─────┬──────┬──────┬─────────┬─────────────────┬──────────┬─────────┬──────────
# │ airportname │ city │ country │ faa │ icao │ id │ type │ tz │ alt │ lat │ lon
───┼───────────────┼──────────┼───────────────┼─────┼──────┼──────┼─────────┼─────────────────┼──────────┼─────────┼──────────
0 │ Columbia Rgnl │ Columbia │ United States │ COU │ KCOU │ 3719 │ airport │ America/Chicago │ 889.0000 │ 38.8180 │ -92.2196
───┴───────────────┴──────────┴───────────────┴─────┴──────┴──────┴─────────┴─────────────────┴──────────┴─────────┴──────────
> doc get airport_3719 | get content | insert alt {get geo.alt} | insert lat {get geo.lat} | insert lon {get geo.lon} | reject geo | save mydoc.csv
> cat mydoc.csv
airportname,city,country,faa,icao,id,type,tz,alt,lat,lon
Columbia Rgnl,Columbia,United States,COU,KCOU,3719,airport,America/Chicago,889.0000000000000,38.81809400000000,-92.21963100000001
From Query/Analytics
> query "SELECT `travel-sample`.* from `travel-sample` WHERE `type`='airport' LIMIT 5"
───┬────────────────────┬───────────────────┬─────────┬─────┬───────────────────┬──────┬──────┬─────────┬──────────────
# │ airportname │ city │ country │ faa │ geo │ icao │ id │ type │ tz
───┼────────────────────┼───────────────────┼─────────┼─────┼───────────────────┼──────┼──────┼─────────┼──────────────
0 │ Calais Dunkerque │ Calais │ France │ CQF │ [row alt lat lon] │ LFAC │ 1254 │ airport │ Europe/Paris
1 │ Peronne St Quentin │ Peronne │ France │ │ [row alt lat lon] │ LFAG │ 1255 │ airport │ Europe/Paris
2 │ Les Loges │ Nangis │ France │ │ [row alt lat lon] │ LFAI │ 1256 │ airport │ Europe/Paris
3 │ Couterne │ Bagnole-de-l'orne │ France │ │ [row alt lat lon] │ LFAO │ 1257 │ airport │ Europe/Paris
4 │ Bray │ Albert │ France │ │ [row alt lat lon] │ LFAQ │ 1258 │ airport │ Europe/Paris
───┴────────────────────┴───────────────────┴─────────┴─────┴───────────────────┴──────┴──────┴─────────┴──────────────
The geo column in the above data contains row data.
This means that we have to flatten out the geo column due to limitations of the CSV format.
There are two ways that we can do this.
We can manipulate the data in the shell after performing the query:
> query "SELECT `travel-sample`.* from `travel-sample` WHERE `type`='airport' LIMIT 5" | insert alt {get geo.alt} | insert lat {get geo.lat} | insert lon {get geo.lon} | reject geo
───┬────────────────────┬───────────────────┬─────────┬─────┬──────┬──────┬─────────┬──────────────┬─────┬─────────┬─────────
# │ airportname │ city │ country │ faa │ icao │ id │ type │ tz │ alt │ lat │ lon
───┼────────────────────┼───────────────────┼─────────┼─────┼──────┼──────┼─────────┼──────────────┼─────┼─────────┼─────────
0 │ Calais Dunkerque │ Calais │ France │ CQF │ LFAC │ 1254 │ airport │ Europe/Paris │ 12 │ 50.9620 │ 1.9547
1 │ Peronne St Quentin │ Peronne │ France │ │ LFAG │ 1255 │ airport │ Europe/Paris │ 295 │ 49.8685 │ 3.0295
2 │ Les Loges │ Nangis │ France │ │ LFAI │ 1256 │ airport │ Europe/Paris │ 428 │ 48.5962 │ 3.0067
3 │ Couterne │ Bagnole-de-l'orne │ France │ │ LFAO │ 1257 │ airport │ Europe/Paris │ 718 │ 48.5458 │ -0.3874
4 │ Bray │ Albert │ France │ │ LFAQ │ 1258 │ airport │ Europe/Paris │ 364 │ 49.9715 │ 2.6976
───┴────────────────────┴───────────────────┴─────────┴─────┴──────┴──────┴─────────┴──────────────┴─────┴─────────┴─────────
> query "SELECT `travel-sample`.* from `travel-sample` WHERE `type`='airport' LIMIT 5" | insert alt {get geo.alt} | insert lat {get geo.lat} | insert lon {get geo.lon} | reject geo | save airports.csv
We can also change our query to get the data into the format that we require:
> query "SELECT airportname, city, country, faa, icao, id, type, tz, geo.alt, geo.lat, geo.lon FROM `travel-sample` WHERE `type` = 'airport' LIMIT 5"
───┬────────────────────┬─────┬───────────────────┬─────────┬─────┬──────┬──────┬─────────┬─────────┬─────────┬──────────────
# │ airportname │ alt │ city │ country │ faa │ icao │ id │ lat │ lon │ type │ tz
───┼────────────────────┼─────┼───────────────────┼─────────┼─────┼──────┼──────┼─────────┼─────────┼─────────┼──────────────
0 │ Calais Dunkerque │ 12 │ Calais │ France │ CQF │ LFAC │ 1254 │ 50.9620 │ 1.9547 │ airport │ Europe/Paris
1 │ Peronne St Quentin │ 295 │ Peronne │ France │ │ LFAG │ 1255 │ 49.8685 │ 3.0295 │ airport │ Europe/Paris
2 │ Les Loges │ 428 │ Nangis │ France │ │ LFAI │ 1256 │ 48.5962 │ 3.0067 │ airport │ Europe/Paris
3 │ Couterne │ 718 │ Bagnole-de-l'orne │ France │ │ LFAO │ 1257 │ 48.5458 │ -0.3874 │ airport │ Europe/Paris
4 │ Bray │ 364 │ Albert │ France │ │ LFAQ │ 1258 │ 49.9715 │ 2.6976 │ airport │ Europe/Paris
───┴────────────────────┴─────┴───────────────────┴─────────┴─────┴──────┴──────┴─────────┴─────────┴─────────┴──────────────
> query "SELECT airportname, city, country, faa, icao, id, type, tz, geo.alt, geo.lat, geo.lon FROM `travel-sample` WHERE `type` = 'airport' LIMIT 5" | save airports.csv