Deploy Ceph and start using it:simple librados cli
This part of the tutorial describes how to set up a simple Ceph client using librados (for C++).
The only information the client requires for cephx authentication is:
- Endpoint of the monitor node
- Keyring containing the pre-shared secret (we will use the admin keyring)
Install librados APIs
On Ubuntu, the library is available in the repositories:
$ sudo apt-get install librados-dev
Create a client configuration file
This is the file from which librados will read the client configuration.
The content of the file is structured according to this template:
[global]
mon host = <IP address of one of the monitors>
keyring = <path/to/client.admin.keyring>
for example:
[global]
mon host = 192.168.252.10:6789
keyring = ./ceph.client.admin.keyring
The public endpoint of the monitor node can be retrieved with
$ ceph mon stat
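Its output includes the monitor map epoch and the address of each monitor. For a single-monitor cluster it looks roughly like the following (monitor name and address are placeholders, and the exact format varies across Ceph releases):
e1: 1 mons at {mon0=192.168.252.10:6789/0}, election epoch 2, quorum 0 mon0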
The keyring file can be copied as-is from the admin node; no changes are needed. The same information contained in the file can also be retrieved with the following command, which additionally lists the client capabilities:
$ ceph auth get client.admin
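For reference, an admin keyring usually has a shape like the one below; the key value is a placeholder here, and the caps lines may differ on your cluster:
[client.admin]
	key = <base64-encoded secret>
	caps mds = "allow *"
	caps mon = "allow *"
	caps osd = "allow *"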
Connect to the cluster
The following simple client performs these operations:
- Read the configuration file (ceph.conf) from the local directory
- Get a handle to the cluster and an IO context on the "data" pool (see the note on creating the pool after this list)
- Create a new object
- Set an xattr
- Read the object and xattr back
- Print the list of pools
- Print the list of objects in the "data" pool
- Cleanup
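Note that recent Ceph releases no longer create a "data" pool by default. If it is missing, it can be created beforehand; the placement-group count of 64 below is just an example value, and on Luminous and later you may also need to tag the pool with an application:
$ ceph osd pool create data 64
$ ceph osd pool application enable data rados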
#include <rados/librados.hpp>
#include <iostream>
#include <string>
#include <list>

int main(int argc, const char **argv)
{
    int ret = 0;

    /*
     * Errors are not checked, to keep the example short.
     * After each Ceph operation:
     *   if (ret < 0) -> error condition
     *   else         -> success
     */

    // Get a cluster handle and connect to the cluster
    std::string cluster_name("ceph");
    std::string user_name("client.admin");
    librados::Rados cluster;
    cluster.init2(user_name.c_str(), cluster_name.c_str(), 0);
    cluster.conf_read_file("ceph.conf");
    cluster.connect();

    // IO context
    librados::IoCtx io_ctx;
    std::string pool_name("data");
    cluster.ioctx_create(pool_name.c_str(), io_ctx);

    // Write an object synchronously
    librados::bufferlist bl;
    std::string objectId("hw");
    std::string objectContent("Hello World!");
    bl.append(objectContent);
    io_ctx.write(objectId, bl, objectContent.size(), 0);

    // Add an xattr to the object
    librados::bufferlist lang_bl;
    lang_bl.append("en_US");
    io_ctx.setxattr(objectId, "lang", lang_bl);

    // Read the object back asynchronously
    librados::bufferlist read_buf;
    int read_len = 4194304;
    // Create the I/O completion
    librados::AioCompletion *read_completion =
        librados::Rados::aio_create_completion();
    // Send the read request
    io_ctx.aio_read(objectId, read_completion, &read_buf, read_len, 0);

    // Wait for the request to complete, and print the content
    read_completion->wait_for_complete();
    read_completion->get_return_value();
    std::cout << "Object name: " << objectId << "\n"
              << "Content: " << read_buf.c_str() << std::endl;

    // Read the xattr
    librados::bufferlist lang_res;
    io_ctx.getxattr(objectId, "lang", lang_res);
    std::cout << "Object xattr: " << lang_res.c_str() << std::endl;

    // Print the list of pools
    std::list<std::string> pools;
    cluster.pool_list(pools);
    std::cout << "List of pools from this cluster handle" << std::endl;
    for (auto pool_id : pools) {
        std::cout << "\t" << pool_id << std::endl;
    }

    // Print the list of objects
    // (newer librados versions provide nobjects_begin()/NObjectIterator instead)
    librados::ObjectIterator oit = io_ctx.objects_begin();
    librados::ObjectIterator oet = io_ctx.objects_end();
    std::cout << "List of objects from this pool" << std::endl;
    for (; oit != oet; oit++) {
        std::cout << "\t" << oit->first << std::endl;
    }

    // Remove the xattr
    io_ctx.rmxattr(objectId, "lang");

    // Remove the object
    io_ctx.remove(objectId);

    // Cleanup
    io_ctx.close();
    cluster.shutdown();

    return 0;
}
This example can be compiled and executed with
$ g++ client.cpp -lrados -o cephclient
$ ./cephclient
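Assuming the cluster is reachable and the "data" pool exists, the client should print output along these lines (the pool and object lists obviously depend on your cluster):
Object name: hw
Content: Hello World!
Object xattr: en_US
List of pools from this cluster handle
	data
List of objects from this pool
	hw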
Operate with cluster data from the command line
To quickly verify if an object was written or to remove it, use the following commands (e.g., from the monitor node).
- List objects in pool data
$ rados -p data ls
- Check the location of an object in pool data
$ ceph osd map data <object name>
- Remove object from pool data
$ rados rm <object name> --pool=data