Calamari API 1.3 Gap analysis

Summary of API coverage vs CLI feature set

RADOS
ubuntu@vpm061:~$ ceph --help 2>&1 | grep -E "^[a-z]+" | sed '1d;2d' | wc -l

42 of the 147 commands counted above are covered by the API, i.e. roughly 30%; note that the 147 total contains at least 10 duplicate entries.
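
As a rough way to gauge how many of the 147 entries are duplicates, the same help output can be reduced to distinct command prefixes before counting. The pipeline below is only an illustrative sketch (comparing the first two fields is a judgment call), not the method used for the numbers above.

$ # count distinct top-level command prefixes instead of raw lines
$ ceph --help 2>&1 | grep -E "^[a-z]+" | sed '1d;2d' | awk '{print $1, $2}' | sort -u | wc -l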

Most of the OSD commands are implemented, with the exception of creation, removal, tiering, and erasure coding (a sample REST query against the implemented endpoints is sketched after this summary).

Pool commands are well covered; they will need updating once CRUSH gains support for erasure coding and tiering.

Some monitor commands are implemented.

PG: coverage unclear.

Auth: none implemented.

MDS
None implemented

RBD
0 of 30 commands implemented

RGW
0 of 63 commands implemented

CephFS
None implemented
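
To make "implemented" concrete, the covered OSD and pool data is read through Calamari's REST API. The sketch below shows what read-only queries might look like; the host name, the admin credentials, and the exact /api/v2/cluster/<fsid>/... paths are assumptions for illustration and should be checked against the Calamari API documentation rather than taken as confirmed here.

# List known clusters, then OSDs and pools for one cluster (fsid as returned
# by the first call). Endpoint paths and auth scheme are assumed, not verified.
$ curl -s -u admin:admin http://calamari.example.com/api/v2/cluster
$ curl -s -u admin:admin http://calamari.example.com/api/v2/cluster/<fsid>/osd
$ curl -s -u admin:admin http://calamari.example.com/api/v2/cluster/<fsid>/pool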

Auth

auth add <entity> {<caps> [<caps>...]}   add auth info for <entity> from input
                                         file, or random key if no input given,
                                         and/or any caps specified in the
                                         command
auth caps <entity> <caps> [<caps>...]    update caps for <name> from caps
                                         specified in the command
auth del <entity>                        delete all caps for <name>
auth export {<entity>}                   write keyring for requested entity, or
                                         master keyring if none given
auth get <entity>                        write keyring file with requested key
auth get-key <entity>                    display requested key
auth get-or-create <entity> {<caps>      add auth info for <entity> from input
[<caps>...]}                             file, or random key if no input given,
                                         and/or any caps specified in the
                                         command
auth get-or-create-key <entity> {<caps>  get, or add, key for <name> from
[<caps>...]}                             system/caps pairs specified in the
                                         command.  If key already exists, any
                                         given caps must match the existing
                                         caps for that key.
auth import                              auth import: read keyring file from -i
                                         <file>
auth list                                list authentication state
auth print-key <entity>                  display requested key
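
None of these are exposed through the API yet. For reference, a typical CLI invocation looks like the following; the entity name and capability strings are illustrative only.

$ # create (or fetch) a keyring entry for a hypothetical client with read-only caps
$ ceph auth get-or-create client.example mon 'allow r' osd 'allow r'
$ # display the key that was generated
$ ceph auth get-key client.example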

Monitor

compact                                  cause compaction of monitor's leveldb
                                         storage
config-key del <key>                     delete <key>
config-key exists <key>                  check for <key>'s existence
config-key get <key>                     get <key>
config-key list                          list keys
config-key put <key> {<val>}             put <key>, value <val>
df {detail}                              show cluster free space stats
fsid                                     show cluster FSID/UUID
health {detail}                          show cluster health
log <logtext> [<logtext>...]             log supplied text to the monitor log
mon add <name> <IPaddr[:port]>           add new monitor named <name> at <addr>
mon dump {<int[0-]>}                     dump formatted monmap (optionally from
                                         epoch)
mon getmap {<int[0-]>}                   get monmap
mon remove <name>                        remove monitor named <name>
mon stat                                 summarize monitor status
mon_status                               report status of monitors
quorum enter|exit                        enter or exit quorum
quorum_status                            report status of monitor quorum
report {<tags> [<tags>...]}              report full status of cluster,
                                         optional title tag strings
scrub                                    scrub the monitor stores
status                                   show cluster status
sync force {--yes-i-really-mean-it} {--  force sync of and clear monitor store
i-know-what-i-am-doing}                 
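
A couple of representative invocations of the commands above (the key name and value are illustrative only):

$ # store and retrieve an arbitrary value in the monitors' config-key store
$ ceph config-key put example/key somevalue
$ ceph config-key get example/key
$ # cluster-wide health and usage summaries
$ ceph health detail
$ ceph df detail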

MDS

mds add_data_pool <pool>                 add data pool <pool>
mds cluster_down                         take MDS cluster down
mds cluster_up                           bring MDS cluster up
mds compat rm_compat <int[0-]>           remove compatible feature
mds compat rm_incompat <int[0-]>         remove incompatible feature
mds compat show                          show mds compatibility settings
mds deactivate <who>                     stop mds
mds dump {<int[0-]>}                     dump info, optionally from epoch
mds fail <who>                           force mds to status failed
mds getmap {<int[0-]>}                   get MDS map, optionally from epoch
mds newfs <int[0-]> <int[0-]> {--yes-i-  make new filesystem using pools
really-mean-it}                          <metadata> and <data>
mds remove_data_pool <pool>              remove data pool <pool>
mds rm <int[0-]> <name (type.id)>        remove nonactive mds
mds rmfailed <int[0-]>                   remove failed mds
mds set max_mds|max_file_size|allow_new_ set mds parameter <var> to <val>
snaps|inline_data <val> {<confirm>}
mds set max_mds|max_file_size <val>      set mds parameter <var> to <val>
mds set_max_mds <int[0-]>                set max MDS index
mds set_state <int[0-]> <int[0-20]>      set mds state of <gid> to <numeric-
                                         state>
mds setmap <int[0-]>                     set mds map; must supply correct epoch
                                         number
mds stat                                 show MDS status
mds stop <who>                           stop mds
mds tell <who> <args> [<args>...]        send command to particular mds
mds unset allow_new_snaps {<sure>}       unset <key>
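
Typical usage of a few of the MDS commands above (the value is illustrative only):

$ # show MDS map status
$ ceph mds stat
$ # limit the cluster to a single active MDS
$ ceph mds set max_mds 1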

OSDs

osd blacklist add|rm <EntityAddr>        add (optionally until <expire> seconds
{<float[0.0-]>}                          from now) or remove <addr> from
                                         blacklist
osd blacklist ls                         show blacklisted clients
osd create {<uuid>}                      create new osd (with optional UUID)
osd deep-scrub <who>                     initiate deep scrub on osd <who>
osd down <ids> [<ids>...]                set osd(s) <id> [<id>...] down
osd dump {<int[0-]>}                     print summary of OSD map
osd getmap {<int[0-]>}                   get OSD map
osd getmaxosd                            show largest OSD id
osd in <ids> [<ids>...]                  set osd(s) <id> [<id>...] in
osd lost <int[0-]> {--yes-i-really-mean- mark osd as permanently lost. THIS
it}                                      DESTROYS DATA IF NO MORE REPLICAS
                                         EXIST, BE CAREFUL
osd ls {<int[0-]>}                       show all OSD ids
osd metadata <int[0-]>                   fetch metadata for osd <id>
osd out <ids> [<ids>...]                 set osd(s) <id> [<id>...] out
osd pause                                pause osd
osd perf                                 print dump of OSD perf summary stats
osd repair <who>                         initiate repair on osd <who>
osd reweight <int[0-]> <float[0.0-1.0]>  reweight osd to 0.0 < <weight> < 1.0
osd reweight-by-utilization {<int[100-   reweight OSDs by utilization [overload-
]>}                                      percentage-for-consideration, default
                                         120]
osd rm <ids> [<ids>...]                  remove osd(s) <id> [<id>...]
osd scrub <who>                          initiate scrub on osd <who>
osd set pause|noup|nodown|noout|noin|    set <key>
nobackfill|norecover|noscrub|nodeep-
scrub|notieragent
osd setmaxosd <int[0-]>                  set new maximum osd value
osd stat                                 print summary of OSD map
osd thrash <int[0-]>                     thrash OSDs for <num_epochs>
osd unpause                              unpause osd
osd unset pause|noup|nodown|noout|noin|  unset <key>
nobackfill|norecover|noscrub|nodeep-
scrub|notieragent
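
The set/unset cluster flags and per-OSD state changes above map to invocations like the following (the OSD id is illustrative only):

$ # prevent OSDs from being marked out during maintenance, then clear the flag
$ ceph osd set noout
$ ceph osd unset noout
$ # mark a specific OSD down and check the overall OSD map summary
$ ceph osd down 3
$ ceph osd stat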

CRUSH

osd crush add <osdname (id|osd.id)>      add or update crushmap position and
<float[0.0-]> <args> [<args>...]         weight for <name> with <weight> and
                                         location <args>
osd crush add-bucket <name> <type>       add no-parent (probably root) crush
                                         bucket <name> of type <type>
osd crush create-or-move <osdname (id|   create entry or move existing entry
osd.id)> <float[0.0-]> <args> [<args>..  for <name> <weight> at/to location
.]                                       <args>
osd crush dump                           dump crush map
osd crush link <name> <args> [<args>...] link existing entry for <name> under
                                         location <args>
osd crush move <name> <args> [<args>...] move existing entry for <name> to
                                         location <args>
osd crush remove <name> {<ancestor>}     remove <name> from crush map (
                                         everywhere, or just at <ancestor>)
osd crush reweight <name> <float[0.0-]>  change <name>'s weight to <weight> in
                                         crush map
osd crush rm <name> {<ancestor>}         remove <name> from crush map (
                                         everywhere, or just at <ancestor>)
osd crush rule create-erasure <name>     create crush rule <name> for erasure
{<profile>}                              coded pool created with <profile>
                                         (default default)
osd crush rule create-simple <name>      create crush rule <name> to start from
<root> <type> {firstn|indep}             <root>, replicate across buckets of
                                         type <type>, using a choose mode of
                                         <firstn|indep> (default firstn; indep
                                         best for erasure pools)
osd crush rule dump {<name>}             dump crush rule <name> (default all)
osd crush rule list                      list crush rules
osd crush rule ls                        list crush rules
osd crush rule rm <name>                 remove crush rule <name>
osd crush set                            set crush map from input file
osd crush set <osdname (id|osd.id)>      update crushmap position and weight
<float[0.0-]> <args> [<args>...]         for <name> to <weight> with location
                                         <args>
osd crush show-tunables                  show current crush tunables  
osd crush tunables legacy|argonaut|      set crush tunables values to <profile>
bobtail|firefly|optimal|default

osd crush unlink <name> {<ancestor>}     unlink <name> from crush map (
                                         everywhere, or just at <ancestor>)
osd find <int[0-]>                       find osd <id> in the CRUSH map and
                                         show its location
osd getcrushmap {<int[0-]>}              get CRUSH map
osd setcrushmap                          set crush map from input file
osd tree {<int[0-]>}                     print OSD tree
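
Example CRUSH manipulations matching the commands above (the OSD name, weight, and location key/value pairs are illustrative only):

$ # add osd.5 to the map with weight 1.0 under a given host and root
$ ceph osd crush add osd.5 1.0 host=node2 root=default
$ # later adjust its weight, then view the resulting hierarchy
$ ceph osd crush reweight osd.5 0.5
$ ceph osd tree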

Erasure Code

osd erasure-code-profile get <name>      get erasure code profile <name>
osd erasure-code-profile ls              list all erasure code profiles
osd erasure-code-profile rm <name>       remove erasure code profile <name>
osd erasure-code-profile set <name>      create erasure code profile <name>
{<profile> [<profile>...]}               with [<key[=value]> ...] pairs. Add a
                                         --force at the end to override an
                                         existing profile (VERY DANGEROUS)
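
For example, a small profile with two data chunks and one coding chunk could be created and inspected like this (the profile name and the k/m values are illustrative only):

$ ceph osd erasure-code-profile set exampleprofile k=2 m=1
$ ceph osd erasure-code-profile get exampleprofile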

Pool

osd lspools {<int>}                      list pools
osd pool create <poolname> <int[0-]>     create pool
{<int[0-]>} {replicated|erasure}
{<erasure_code_profile>} {<ruleset>}       
osd pool delete <poolname> {<poolname>}  delete pool
{--yes-i-really-really-mean-it}         
osd pool get <poolname> size|min_size|   get pool parameter <var>
crash_replay_interval|pg_num|pgp_num|
crush_ruleset|hit_set_type|hit_set_
period|hit_set_count|hit_set_fpp|auid                         
osd pool mksnap <poolname> <snap>        make snapshot <snap> in <pool>
osd pool rename <poolname> <poolname>    rename <srcpool> to <destpool>
osd pool rmsnap <poolname> <snap>        remove snapshot <snap> from <pool>
osd pool set <poolname> size|min_size|   set pool parameter <var> to <val>
crash_replay_interval|pg_num|pgp_num|
crush_ruleset|hashpspool|hit_set_type|
hit_set_period|hit_set_count|hit_set_
fpp|debug_fake_ec_pool|target_max_
bytes|target_max_objects|cache_target_
dirty_ratio|cache_target_full_ratio|
cache_min_flush_age|cache_min_evict_
age|auid <val> {--yes-i-really-mean-it}                             
osd pool set-quota <poolname> max_       set object or byte limit on pool
objects|max_bytes <val>                 
osd pool stats {<name>}                  obtain stats from all pools, or from
                                         specified pool
osd primary-affinity <osdname (id|osd.   adjust osd primary-affinity from 0.0 <=
id)> <float[0.0-1.0]>                     <weight> <= 1.0
osd primary-temp <pgid> <id>             set primary_temp mapping pgid:<id>|-1 (
                                         developers only)
osd tier add <poolname> <poolname> {--   add the tier <tierpool> (the second
force-nonempty}                          one) to base pool <pool> (the first
                                         one)
osd tier add-cache <poolname>            add a cache <tierpool> (the second one)
<poolname> <int[0-]>                     of size <size> to existing pool
                                         <pool> (the first one)

osd tier cache-mode <poolname> none|     specify the caching mode for cache
writeback|invalidate+forward|readonly    tier <pool>
osd tier remove <poolname> <poolname>    remove the tier <tierpool> from base
                                         pool <pool>
osd tier remove-overlay <poolname>       remove the overlay pool for base pool
                                         <pool>
osd tier set-overlay <poolname>          set the overlay pool for base pool
<poolname>                               <pool> to be <overlaypool>
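
Pool creation and cache tiering as listed above chain together roughly as follows (the pool names and PG counts are illustrative only):

$ # create a replicated base pool and a pool to act as its cache tier
$ ceph osd pool create basepool 128 128 replicated
$ ceph osd pool create cachepool 128 128 replicated
$ # attach the cache tier, put it in writeback mode, and route client I/O through it
$ ceph osd tier add basepool cachepool
$ ceph osd tier cache-mode cachepool writeback
$ ceph osd tier set-overlay basepool cachepool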

PG

osd map <poolname> <objectname>          find pg for <object> in <pool>
osd pg-temp <pgid> {<id> [<id>...]}      set pg_temp mapping pgid:[<id> [<id>...
                                         ]] (developers only)
pg debug unfound_objects_exist|degraded_ show debug info about pgs
pgs_exist                               
pg deep-scrub <pgid>                     start deep-scrub on <pgid>
pg dump {all|summary|sum|delta|pools|    show human-readable versions of pg map
osds|pgs|pgs_brief [all|summary|sum|     (only 'all' valid with plain)
delta|pools|osds|pgs|pgs_brief...]}     
pg dump_json {all|summary|sum|pools|     show human-readable version of pg map
osds|pgs [all|summary|sum|pools|osds|    in json only
pgs...]}                                
pg dump_pools_json                       show pg pools info in json only
pg dump_stuck {inactive|unclean|stale    show information about stuck pgs
[inactive|unclean|stale...]} {<int>}    
pg force_create_pg <pgid>                force creation of pg <pgid>
pg getmap                                get binary pg map to -o/stdout
pg map <pgid>                            show mapping of pg to osds
pg repair <pgid>                         start repair on <pgid>
pg scrub <pgid>                          start scrub on <pgid>
pg send_pg_creates                       trigger pg creates to be issued
pg set_full_ratio <float[0.0-1.0]>       set ratio at which pgs are considered
                                         full
pg set_nearfull_ratio <float[0.0-1.0]>   set ratio at which pgs are considered
                                         nearly full
pg stat                                  show placement group status.
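
Typical PG queries and operations from the list above (the pool, object name, and pgid are illustrative only):

$ # find which PG (and OSDs) an object maps to
$ ceph osd map basepool someobject
$ # show stuck PGs and scrub a specific placement group
$ ceph pg dump_stuck inactive
$ ceph pg scrub 0.1f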

Misc

heap dump|start_profiler|stop_profiler|  show heap usage info (available only
release|stats                            if compiled with tcmalloc)
injectargs <injected_args> [<injected_   inject config arguments into monitor
args>...]                               
tell <name (type.id)> <args> [<args>...] send a command to a specific daemon
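
The tell/injectargs plumbing above is commonly used to tweak debug levels at runtime, for example (the daemon id and option are illustrative only):

$ ceph tell osd.0 injectargs '--debug-osd 20'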

RBD

 (ls | list) [-l | --long ] [pool-name] list rbd images
                                             (-l includes snapshots/clones)
 info <image-name>                           show information about image size,
                                             striping, etc.
 create [--order <bits>] --size <MB> <name>  create an empty image
 clone [--order <bits>] <parentsnap> <clonename>
                                             clone a snapshot into a COW
                                             child image
 children <snap-name>                        display children of snapshot
 flatten <image-name>                        fill clone with parent data
                                             (make it independent)
 resize --size <MB> <image-name>             resize (expand or contract) image
 rm <image-name>                             delete an image
 export <image-name> <path>                  export image to file
                                             "-" for stdout
 import <path> <image-name>                  import image from file
                                             (dest defaults
                                              as the filename part of file)
                                             "-" for stdin
 diff <image-name> [--from-snap <snap-name>] print extents that differ since
                                             a previous snap, or image creation
 export-diff <image-name> [--from-snap <snap-name>] <path>
                                             export an incremental diff to
                                             path, or "-" for stdout
 import-diff <path> <image-name>             import an incremental diff from
                                             path or "-" for stdin
 (cp | copy) <src> <dest>                    copy src image to dest
 (mv | rename) <src> <dest>                  rename src image to dest
 snap ls <image-name>                        dump list of image snapshots
 snap create <snap-name>                     create a snapshot
 snap rollback <snap-name>                   rollback image to snapshot
 snap rm <snap-name>                         deletes a snapshot
 snap purge <image-name>                     deletes all snapshots
 snap protect <snap-name>                    prevent a snapshot from being deleted
 snap unprotect <snap-name>                  allow a snapshot to be deleted
 watch <image-name>                          watch events on image
 map <image-name>                            map image to a block device
                                             using the kernel
 unmap <device>                              unmap a rbd device that was
                                             mapped by the kernel
 showmapped                                  show the rbd images mapped
                                             by the kernel
 lock list <image-name>                      show locks held on an image
 lock add <image-name> <id> [--shared <tag>] take a lock called id on an image
 lock remove <image-name> <id> <locker>      release a lock on an image
 bench-write <image-name>                    simple write benchmark
                --io-size <bytes>              write size
                --io-threads <num>             ios in flight
                --io-total <bytes>             total bytes to write
                --io-pattern <seq|rand>        write pattern
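
A typical create/snapshot/clone workflow with the rbd commands above (the pool, image, and snapshot names are illustrative only):

$ # create a 4 GB image (size is in MB), snapshot it, protect the snapshot, and clone it
$ rbd create --size 4096 rbd/base-image
$ rbd snap create rbd/base-image@snap1
$ rbd snap protect rbd/base-image@snap1
$ rbd clone rbd/base-image@snap1 rbd/child-image
$ rbd ls -l rbd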

RGW

$ ./radosgw-admin --help
usage: radosgw-admin <cmd> [options...]
commands:
 user create                create a new user
 user modify                modify user
 user info                  get user info
 user rm                    remove user
 user suspend               suspend a user
 user enable                re-enable user after suspension
 user check                 check user info
 user stats                 show user stats as accounted by quota subsystem
 caps add                   add user capabilities
 caps rm                    remove user capabilities
 subuser create             create a new subuser
 subuser modify             modify subuser
 subuser rm                 remove subuser
 key create                 create access key
 key rm                     remove access key
 bucket list                list buckets
 bucket link                link bucket to specified user
 bucket unlink              unlink bucket from specified user
 bucket stats               returns bucket statistics
 bucket rm                  remove bucket
 bucket check               check bucket index
 object rm                  remove object
 object unlink              unlink object from bucket index
 quota set                  set quota params
 quota enable               enable quota
 quota disable              disable quota
 region get                 show region info
 regions list               list all regions set on this cluster
 region set                 set region info (requires infile)
 region default             set default region
 region-map get             show region-map
 region-map set             set region-map (requires infile)
 zone get                   show zone cluster params
 zone set                   set zone cluster params (requires infile)
 zone list                  list all zones set on this cluster
 pool add                   add an existing pool for data placement
 pool rm                    remove an existing pool from data placement set
 pools list                 list placement active set
 policy                     read bucket/object policy
 log list                   list log objects
 log show                   dump a log from specific object or (bucket + date
                            + bucket-id)
 log rm                     remove log object
 usage show                 show usage (by user, date range)
 usage trim                 trim usage (by user, date range)
 temp remove                remove temporary objects that were created up to
                            specified date (and optional time)
 gc list                    dump expired garbage collection objects (specify
                            --include-all to list all entries, including unexpired)
 gc process                 manually process garbage
 metadata get               get metadata info
 metadata put               put metadata info
 metadata rm                remove metadata info
 metadata list              list metadata info
 mdlog list                 list metadata log
 mdlog trim                 trim metadata log
 bilog list                 list bucket index log
 bilog trim                 trim bucket index log (use start-marker, end-marker)
 datalog list               list data log
 datalog trim               trim data log
 opstate list               list stateful operations entries (use client_id,
                            op_id, object)
 opstate set                set state on an entry (use client_id, op_id, object, state)
 opstate renew              renew state on an entry (use client_id, op_id, object)
 opstate rm                 remove entry (use client_id, op_id, object)
 replicalog get             get replica metadata log entry
 replicalog delete          delete replica metadata log entry
options:
  --uid=<id>                user id
  --subuser=<name>          subuser name
  --access-key=<key>        S3 access key
  --email=<email>
  --secret=<key>            specify secret key
  --gen-access-key          generate random access key (for S3)
  --gen-secret              generate random secret key
  --key-type=<type>         key type, options are: swift, s3
  --temp-url-key[-2]=<key>  temp url key
  --access=<access>         Set access permissions for sub-user, should be one
                            of read, write, readwrite, full
  --display-name=<name>
  --system                  set the system flag on the user
  --bucket=<bucket>
  --pool=<pool>
  --object=<object>
  --date=<date>
  --start-date=<date>
  --end-date=<date>
  --bucket-id=<bucket-id>
  --shard-id=<shard-id>     optional for mdlog list
                            required for:
                              mdlog trim
                              replica mdlog get/delete
                              replica datalog get/delete
  --metadata-key=<key>      key to retrieve metadata from with metadata get
  --rgw-region=<region>     region in which radosgw is running
  --rgw-zone=<zone>         zone in which radosgw is running
  --fix                     besides checking bucket index, will also fix it
  --check-objects           bucket check: rebuilds bucket index according to
                            actual objects state
  --format=<format>         specify output format for certain operations: xml,
                            json
  --purge-data              when specified, user removal will also purge all the
                            user data
  --purge-keys              when specified, subuser removal will also purge all the
                            subuser keys
  --purge-objects           remove a bucket's objects before deleting it
                            (NOTE: required to delete a non-empty bucket)
  --sync-stats              option to 'user stats', update user stats with current
                            stats reported by user's buckets indexes
  --show-log-entries=<flag> enable/disable dump of log entries on log show
  --show-log-sum=<flag>     enable/disable dump of log summation on log show
  --skip-zero-entries       log show only dumps entries that don't have zero value
                            in one of the numeric field
  --infile                  specify a file to read in when setting data
  --state=<state string>    specify a state for the opstate set command
  --replica-log-type        replica log type (metadata, data, bucket), required for
                            replica log operations
  --categories=<list>       comma separated list of categories, used in usage show
  --caps=<caps>             list of caps (e.g., "usage=read, write; user=read")
  --yes-i-really-mean-it    required for certain operations

Quota options:
  --bucket                  specified bucket for quota command
  --max-objects             specify max objects (negative value to disable)
  --max-size                specify max size (in bytes, negative value to disable)
  --quota-scope             scope of quota (bucket, user)

 --conf/-c FILE    read configuration from the given configuration file
 --id/-i ID        set ID portion of my name
 --name/-n TYPE.ID set name
 --cluster NAME    set cluster name (default: ceph)
 --version         show version and quit
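
Representative radosgw-admin invocations from the command set above (the uid, display name, and bucket name are illustrative only):

$ # create an S3 user and an additional access key for it
$ radosgw-admin user create --uid=exampleuser --display-name="Example User"
$ radosgw-admin key create --uid=exampleuser --key-type=s3 --gen-access-key --gen-secret
$ # inspect a bucket and enforce a per-user quota
$ radosgw-admin bucket stats --bucket=examplebucket
$ radosgw-admin quota set --quota-scope=user --uid=exampleuser --max-objects=10000
$ radosgw-admin quota enable --quota-scope=user --uid=exampleuser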
