targetcli-2.1.fb47 or newer package
python-rtslib-2.1.fb64 or newer package
tcmu-runner-1.3.0 or newer package
ceph-iscsi-config-2.4 or newer package
ceph-iscsi-cli-2.5 or newer package
[config]
# Name of the Ceph storage cluster. A suitable Ceph configuration file allowing
# access to the Ceph storage cluster from the gateway node is required, if not
# colocated on an OSD node.
cluster_name = ceph
# Place a copy of the ceph cluster's admin keyring in the gateway's /etc/ceph
# directory and reference the filename here
gateway_keyring = ceph.client.admin.keyring
# API settings.
# The API supports a number of options that allow you to tailor it to your
# local environment. If you want to run the API under https, you will need to
# create cert/key files that are compatible for each iSCSI gateway node, that is
# not locked to a specific node. SSL cert and key files *must* be called
# 'iscsi-gateway.crt' and 'iscsi-gateway.key' and placed in the '/etc/ceph/' directory
# on *each* gateway node. With the SSL files in place, you can use 'api_secure = true'
# to switch to https mode.
# To support the API, the bare minimum settings are:
api_secure = false
# Additional API configuration options are as follows, defaults shown.
# api_user = admin
# api_password = admin
# api_port = 5001
# trusted_ip_list = 192.168.0.10,192.168.0.11
Change trusted_ip_list in the last line to the IPs of the hosts acting as gateways; in my environment that is:
trusted_ip_list = 192.168.219.128,192.168.219.129
This configuration file must be identical on all gateway nodes: edit it on one node, then scp it to every other gateway node, as sketched below.
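A minimal sketch of the copy step (lab102 stands in for each additional gateway node):

scp /etc/ceph/iscsi-gateway.cfg lab102:/etc/ceph/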
Start the API service:
[root@lab101 install]# systemctl daemon-reload
[root@lab101 install]# systemctl enable rbd-target-api
[root@lab101 install]# systemctl start rbd-target-api
[root@lab101 install]# systemctl status rbd-target-api
● rbd-target-api.service - Ceph iscsi target configuration API
   Loaded: loaded (/usr/lib/systemd/system/rbd-target-api.service; enabled; vendor preset: disabled)
   Active: active (running) since Thu 2018-03-15 09:44:34 CST; 18min ago
 Main PID: 1493 (rbd-target-api)
   CGroup: /system.slice/rbd-target-api.service
           └─1493 /usr/bin/python /usr/bin/rbd-target-api
Mar 15 09:44:34 lab101 systemd[1]: Started Ceph iscsi target configuration API.
Mar 15 09:44:34 lab101 systemd[1]: Starting Ceph iscsi target configuration API...
Mar 15 09:44:58 lab101 rbd-target-api[1493]: Started the configuration object watcher
Mar 15 09:44:58 lab101 rbd-target-api[1493]: Checking for config object changes every 1s
Mar 15 09:44:58 lab101 rbd-target-api[1493]: * Running on http://0.0.0.0:5000/
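As a quick sanity check, you can also query the API over HTTP. This is a sketch assuming the default admin/admin credentials; the /api/config endpoint path is my assumption, not taken from the post:

curl --user admin:admin -X GET http://192.168.219.128:5000/api/config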
Configure iSCSI: run the gwcli command.
By default, it looks like this:
Enter iscsi-target and create a target:
/> cd iscsi-target
/iscsi-target> create iqn.2003-01.com.redhat.iscsi-gw:iscsi-igw
ok
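Before configuring authentication, the gateways and the client initiator also need to be created. A sketch of those intermediate steps (the hostnames and IPs assume my environment, and the client IQN matches the prompt below):

/> cd /iscsi-target/iqn.2003-01.com.redhat.iscsi-gw:iscsi-igw/gateways
/iscsi-target...-igw/gateways> create lab101 192.168.219.128
/iscsi-target...-igw/gateways> create lab102 192.168.219.129
/> cd /iscsi-target/iqn.2003-01.com.redhat.iscsi-gw:iscsi-igw/hosts
/iscsi-target...-igw/hosts> create iqn.1994-05.com.redhat:75c3d5efde0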
/iscsi-target...t:75c3d5efde0> auth chap=iqn.1994-05.com.redhat:75c3d5efde0/admin@a_12a-bb
ok
The CHAP naming rules can be looked up like this:
/iscsi-target...t:75c3d5efde0> help auth
SYNTAX
======
auth [chap]
DESCRIPTION
===========
Client authentication can be set to use CHAP by supplying a string of the form <username>/<password>
e.g. auth chap=username/password | nochap
username ... the username is an 8-64 character string. Each character may either be alphanumeric or one of the following special characters: .,:,-,@. Consider using the host's 'shortname' or the initiator's IQN value as the username
password ... the password must be between 12-16 chars in length containing alphanumeric characters, plus the following special characters @,_,-
WARNING: Using unsupported special characters may result in truncation, resulting in failed logins.
Specifying 'nochap' will remove chap authentication for the client across all gateways.
Add a disk to the client.
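If the backing RBD image does not exist yet, it can first be created under /disks (a sketch; the 10G size is a placeholder):

/> cd /disks
/disks> create pool=rbd image=disk_1 size=10G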
/iscsi-target...t:75c3d5efde0> disk add rbd.disk_1
ok
At this point the configuration is complete; let's look at what the final result should look like.
Windows client configuration
When I tried this with Windows 10 the client could not connect, possibly because Windows 10's own authentication requirements conflict with the server side, so I used Windows Server 2016 for the connection test instead.
Following our recent initiative on writing more Ceph modules for Ceph Ansible, I’d like to introduce one that I recently wrote: ceph_key.
The module is pretty straightforward to use and will ease your day-two operations for managing CephX keys. It has several capabilities, such as:
create: will create the key on the filesystem with the right permissions (supports mode/owner) and will import it into Ceph (can be enabled/disabled) with the given capabilities
update: will update the capabilities of a particular key
delete: will delete the key from Ceph
info: will get all the information about a particular key
list: will list all the available keys
The module also works on containerized Ceph clusters.
See the following examples:
---
# This playbook is used to manage CephX Keys
# You will find examples below on how the module can be used in daily operations
#
# It currently runs on localhost
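# Below is a hypothetical continuation of the playbook (the original tasks
# are not reproduced here): the key name, capabilities and parameter
# spelling are illustrative assumptions, not the module's authoritative API.
- hosts: localhost
  gather_facts: false
  tasks:
    - name: create cephx key client.leseb and import it into Ceph
      ceph_key:
        name: client.leseb
        state: present
        caps:
          mon: "allow r"
          osd: "allow rwx pool=rbd"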
With its two latest versions (v1.3.0 and v1.4.0), Ceph Nano brought some nifty new functionalities that I’d like to highlight in this article.
Multi cluster support
This feature has been available since v1.3.0.
You can now run more than a single instance of cn; you can run as many as your system allows (CPU- and memory-wise). This is how you run a new cluster:
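A sketch of the start command, assuming the cluster subcommand mirrors the image ls syntax shown later (mycluster is a placeholder name):

$ ./cn cluster start mycluster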
HEALTH_OK is the Ceph status
S3 object server address is: http://10.36.116.231:8001
S3 user is: nano
S3 access key is: JZYOITC0BDLPB0K6E5WX
S3 secret key is: sF0Vu6seb64hhlsmtxKT6BSrs2KY8cAB8la8kni1
Your working directory is: /tmp
And how you can retrieve the list of running clusters:
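Presumably with the matching ls subcommand (a sketch, mirroring the image ls syntax below):

$ ./cn cluster ls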
This feature works well in conjunction with the image support.
You can run any container using any container image available on the Docker Hub. You can even use your own image if you want to test a fix.
You can list the available images like this:
$ ./cn image ls
latest-bis
latest
latest-luminous
latest-kraken
latest-jewel
master-da37788-kraken-centos-7-x86_64
master-da37788-jewel-centos-7-x86_64
master-da37788-kraken-ubuntu-16.04-x86_64
master-da37788-jewel-ubuntu-14.04-x86_64
master-da37788-jewel-ubuntu-16.04-x86_64
Use -a to list all our images.
So using the -i option when starting a cluster will run the image you want.
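For example (a sketch; the image tag and cluster name are placeholders):

$ ./cn cluster start -i latest-luminous mycluster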
Dedicated device or directory support
This feature is available since v1.4.0.
You might want to provide cn with faster, more persistent storage. This is possible by specifying either a dedicated block device (a partition works too) or a directory that you might have configured on a particular device.
You have to run cn with sudo here, since it performs a couple of checks on that device to make sure it's eligible for use; thus higher privileges are required to run cn.
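For example, with a dedicated block device (a sketch; /dev/sdb and the cluster name are placeholders):

$ sudo ./cn cluster start -b /dev/sdb mycluster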
Using a directory is identical; just run with -b /srv/cn/ for instance.
I’m so glad to see how cn has evolved; I’m proud of this little tool that I use on a daily basis for so many things. I hope you are enjoying it as much as I do.
I will be attending the Red Hat summit as I’m co-presenting a lab.
The goal of the lab is to deploy an OpenStack hyperconverged (HCI) environment with Ceph.
Cryptography does not have to be mysterious — as Jean-Philippe Aumasson, the author of Serious Cryptography, points out. It is meant to be fiendishly complex to break, and it remains very challenging to implement (see the jokes about rolling your own crypto found all over the Net), but it is well within the grasp of most programmers to understand.
While many are intimidated by the prospect of digging into what is effectively a branch of number theory, the reality is that cryptography is squarely based in discrete mathematics—and good coders are all, without exception and often unknowingly, natural discrete math jugglers. If you are interested and you aced your data structures course, chances are that crypto will not be an unsurmountable challenge to you. Aumasson certainly seems to think so, and he walks us along his own path to the discovery of the cryptographic realm.
The Contenders
Other books can take you along this journey, all with their distinctive traits. The classic 1994-vintage Applied Cryptography by Bruce Schneier is the unchallenged, most authoritative reference in the field. Squarely focused on crypto algorithms and their weaknesses, it belongs on every security nerd’s shelf, but it may not be an engineer’s first choice when looking at this space: actual production use or even mention of protocols like TLS and SSL are entirely outside of its scope.
Schneier revisited the subject again in 2003 with Niels Ferguson and gave us Practical Cryptography, covering every conceivable engineering aspect of implementing and consuming cryptographic code while having a clue to what is happening inside the system. This is an eminently practical book, and it was re-issued in updated form in 2010 under the new title of Cryptography Engineering under the new co-authorship of Tadayoshi Kohno.
While I had read Schneier’s original tome in installments during my Summer visits at the University of Jyväskylä, my deep dive into the field came through a so-called MIT course bible, Lecture Notes on Cryptography, compiled for 6.87s, a weeklong course on cryptography taught at MIT by future Turing laureate Shafi Goldwasser and Mihir Bellare during the Summers of 1996–2002, which I myself was privileged to attend in 2000. This was one of the most intellectually challenging and mind-stretching weeks of my life, with a new, exciting idea being served every 10 minutes in an amazing tour de force. These notes are absolutely great, and I still go back to them, but I do not know if they would serve you as well without the live instructors guiding you along the first time.
Graduate Course in Applied Cryptography, by Dan Boneh and Victor Shoup of Stanford and NYU respectively, is another similar course bible, and I am mentioning it here because it has been updated more recently than Goldwasser and Bellare’s, which was last refreshed in 2008.
Enter the New Tome
No Starch Press has been lining up an impressive computer security catalog, and it was inevitable they would venture into crypto at one point or another. Aumasson’s entry into the pantheon of the explainers of cryptography is characterized by his focus on teaching us how the algorithms work with the most meager use of mathematical notation. This is, like most of the other books I referenced, a book aiming to increase the understanding of how cryptography works, covering primitives such as block and stream modes, hash functions, and keyed hashing. But what is noteworthy is how this book also straddles the ranges defined earlier, spanning from pre-requisites like the need for good randomness, hard problems, and the definition of cryptographic security on the one end and the operation of the RSA algorithm and the TLS protocol on the other. This is not a book targeted at experts in the field, but it does not trivialize the subject matter either and it is impressive in its breadth: the state of the art in applied cryptography is distilled here in a mere 282 pages.
It is hard to overstate how pleasing the broad reach of this single book is to this reader: despite my keen interest in the field and all my reading, I myself did not hand-roll the RSA algorithm until Professor H.T. Kung made me do so a few years later — in a networking graduate course. Isolation between the study of the algorithms and the study of the protocols implementing them is exceedingly common, and it is delightful to see this book bridge the two.
I was drawn to the book by its concise and yet comprehensive coverage of randomness for a talk I have been developing, and stayed to read the explanation of keyed hashing and message authentication codes (MACs) — a jewel in its own right, as the author co-developed two hash functions now in widespread use. As someone who had to self-start his own coding in both subjects, I wish this book had been available when I was in grad school. My loss is your gain, dear reader: you can catch up to the state of the art much faster than I did a decade ago!
This is still a complex subject, yet Aumasson’s tome should help increase the ranks of those that can confidently contribute when the topic is being discussed. Most programmers need not be cryptanalysts, but many will benefit from a deeper understanding of how security in computer systems is actually achieved.
Los Tres Caballeros —sans sombreros— descended on Vancouver this week to participate in the “Rocky” OpenStack Summit. For the assembled crowd of clouderati, Sébastien Han, Sean Cohen and yours truly had one simple question: what if your datacenter was wiped out in its entirety, but your users hardly even noticed?
We have touched on the disaster recovery theme before, but this time we decided to discuss backup as well as HA, which made for a slightly longer talk than we had planned—we hope you enjoyed our “choose your disaster” tour; we definitely enjoyed leading it.
The recording of our OpenStack Summit session is now live on the OpenStack Foundation’s YouTube channel. It is impressive how quickly the Foundation’s media team releases now:
Our slides are available as a PDF and can be viewed inline below — we are including our backup slides, so you can find out what we could have talked about, had we run over even longer.
Last weekend, the openSUSE Conference 2018 took place in Prague (Czech Republic). Our team was present to talk about Ceph and our involvement in developing the Ceph manager dashboard, which will be available as part of the upcoming Ceph “Mimic” release.
The presentations were given by Laura Paduano and Kai Wagner from our team – thank you for your engagement!
The openSUSE conference team did an excellent job in streaming and recording each session, and the resulting videos can already be viewed on their YouTube channel.
Kyle Bader and I teamed up to deliver a quick (and hopefully painless) review of what types of storage your Big Data strategy needs to succeed alongside the better-understood (and more traditional) existing approaches to structured data.
Data platform engineers need to receive support from both the Compute and the Storage infrastructure teams to deliver. We look at how the public cloud, and Amazon AWS in particular, tackles these challenges, and what the equivalent technology strategies are in OpenStack and Ceph.
Tradeoffs between IO latency, availability of storage space, cost and IO performance lead to storage options fragmenting into three broad solution areas: network-backed persistent block storage, application-focused object storage (also network-based), and directly-attached low-latency NVMe storage for highest-performance scratch and overflow space.
Ideally, the infrastructure designer would choose to adopt similarly-behaving approaches to the public and private cloud environments, which is what makes OpenStack and Ceph a good fit: scale-out, cloud-native technologies naturally have much more in common with public cloud than legacy vendors. Interested? Listen to our quick survey of the field, the OpenStack Foundation kindly published a recording of our session:
[DEFAULT]
## this section is just used as default for all the "s3 *"
## sections, you can place these variables also directly there
## replace with e.g. "localhost" to run against local software
host = 192.168.19.101
## uncomment the port to use something other than 80
port = 7481
## say "no" to disable TLS is_secure = no
[fixtures]
## all the buckets created will start with this prefix;
## {random} will be filled with random characters to pad
## the prefix to 30 characters long, and avoid collisions
bucket prefix = cephtest-{random}-
[s3 main]
## the tests assume two accounts are defined, "main" and "alt".
## user_id is a 64-character hexstring
user_id = test1
## display name typically looks more like a unix login, "jdoe" etc
display_name = test1
## replace these with your access keys
access_key = test1
secret_key = test1
## replace with key id obtained when secret is created, or delete if KMS not tested
#kms_keyid = 01234567-89ab-cdef-0123-456789abcdef
[s3 alt]
## another user account, used for ACL-related tests
user_id = test2
display_name = test2
## the "alt" user needs to have email set, too
email = test2@qq.com
access_key = test2
secret_key = test2
The user accounts above need to be created in advance; this can be done with the radosgw-admin command on a machine inside the cluster.
radosgw-admin user create --uid=test1 --display-name=test1 --access-key=test1 --secret-key=test1 --email=test1@qq.com
radosgw-admin user create --uid=test2 --display-name=test2 --access-key=test2 --secret-key=test2 --email=test2@qq.com
Once the users are created, you can start the tests.
[root@lab101 s3-tests]# S3TEST_CONF=test.conf ./virtualenv/bin/nosetests -a '!fails_on_rgw'
..................................................SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS.....................................................................................................................SSSS.......................................................................................................................................SSSS.......................................................
----------------------------------------------------------------------
Ran 408 tests in 122.087s
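If a test fails, nose's module:function selector lets you re-run it in isolation; a sketch (the test name here is just an illustration):

S3TEST_CONF=test.conf ./virtualenv/bin/nosetests s3tests.functional.test_s3:test_bucket_list_empty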
We’re happy to announce version 3.7.0 of openATTIC!
Version 3.7.0 is the first bugfix release of the 3.7 stable branch, containing fixes for multiple issues that were mainly reported by users.
There was an issue with self-signed certificates in combination with the RGW proxy; certificate verification is now configurable. We also improved the openATTIC user experience and adapted some of our frontend tests in order to make them more stable.
As mentioned in our last blog post, our team has been working on a Spanish translation. We are very proud to have the translation included in this release. Thank you, Gustavo, for your contribution.
Another highlight of the release is the newly added RBD snapshot management. openATTIC is now capable of creating, cloning, rolling back, protecting/unprotecting and deleting RBD snapshots. In addition, it is now also possible to copy RBD images.
Furthermore, the “pool edit” feature received a slight update: we implemented the option to set the “EC overwrite” flag when editing erasure-coded pools.
Improve accuracy of statfs reporting for Ceph filesystems comprising exactly one data pool. In this case, the Ceph monitor can now report the space usage for the single data pool instead of the global data for the entire Ceph cluster. Include support for this message in mon_client and leverage it in ceph/super.
#define ZP_POOL_DEFAULT 0                       /* pool id */
#define CEPH_CAPS_WANTED_DELAY_MAX_DEFAULT 60   /* cap release delay */

struct ceph_mount_options {
	int flags;
	int sb_flags;

	int wsize;   /* max write size */
	int rsize;   /* max read size */
	int zp_pool; /* pool id */
	int rasize;  /* max readahead */
[root@lab103 ceph]# pwd
/home/origin/linux-3.10.0-862.el7/fs/ceph
[root@lab103 ceph]# make CONFIG_CEPH_FS=m -C /lib/modules/3.10.0-862.el7.x86_64/build/ M=`pwd` modules
make: Entering directory `/usr/src/kernels/3.10.0-862.el7.x86_64'
  CC [M]  /home/origin/linux-3.10.0-862.el7/fs/ceph/super.o
  CC [M]  /home/origin/linux-3.10.0-862.el7/fs/ceph/inode.o
  CC [M]  /home/origin/linux-3.10.0-862.el7/fs/ceph/dir.o
  CC [M]  /home/origin/linux-3.10.0-862.el7/fs/ceph/file.o
  CC [M]  /home/origin/linux-3.10.0-862.el7/fs/ceph/locks.o
  CC [M]  /home/origin/linux-3.10.0-862.el7/fs/ceph/addr.o
  CC [M]  /home/origin/linux-3.10.0-862.el7/fs/ceph/ioctl.o
  CC [M]  /home/origin/linux-3.10.0-862.el7/fs/ceph/export.o
  CC [M]  /home/origin/linux-3.10.0-862.el7/fs/ceph/caps.o
  CC [M]  /home/origin/linux-3.10.0-862.el7/fs/ceph/snap.o
  CC [M]  /home/origin/linux-3.10.0-862.el7/fs/ceph/xattr.o
  CC [M]  /home/origin/linux-3.10.0-862.el7/fs/ceph/mds_client.o
  CC [M]  /home/origin/linux-3.10.0-862.el7/fs/ceph/mdsmap.o
  CC [M]  /home/origin/linux-3.10.0-862.el7/fs/ceph/strings.o
  CC [M]  /home/origin/linux-3.10.0-862.el7/fs/ceph/ceph_frag.o
  CC [M]  /home/origin/linux-3.10.0-862.el7/fs/ceph/debugfs.o
  CC [M]  /home/origin/linux-3.10.0-862.el7/fs/ceph/acl.o
  LD [M]  /home/origin/linux-3.10.0-862.el7/fs/ceph/ceph.o
  Building modules, stage 2.
  MODPOST 1 modules
  CC      /home/origin/linux-3.10.0-862.el7/fs/ceph/ceph.mod.o
  LD [M]  /home/origin/linux-3.10.0-862.el7/fs/ceph/ceph.ko
make: Leaving directory `/usr/src/kernels/3.10.0-862.el7.x86_64'
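With ceph.ko built, one way to load the rebuilt module in place of the stock one is the following sketch (it assumes no CephFS mounts are active and that the libceph dependency is already loaded):

[root@lab103 ceph]# rmmod ceph
[root@lab103 ceph]# insmod ./ceph.ko
[root@lab103 ceph]# lsmod | grep ceph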
[root@lab103 ceph]# ceph df
GLOBAL:
    SIZE       AVAIL      RAW USED     %RAW USED
    11650G     11645G     5210M        0.04
POOLS:
    NAME         ID     USED      %USED     MAX AVAIL     OBJECTS
    data         9      0         0         3671G         0
    metadata     10     36391     0         11014G        22
    newdata      11     0         0         5507G         0