Sage Weil [Tue, 31 Jul 2012 01:16:16 +0000 (18:16 -0700)]
libceph: fix handling of immediate socket connect failure
If the connect() call immediately fails such that sock == NULL, we
still need con_close_socket() to reset our socket state to CLOSED.
Signed-off-by: Sage Weil <sage@inktank.com>
Reviewed-by: Alex Elder <elder@inktank.com>
Sage Weil [Mon, 30 Jul 2012 23:27:48 +0000 (16:27 -0700)]
ceph: update MAINTAINERS file
* shiny new inktank.com email addresses
* add include/linux/crush directory (previous oversight)
Signed-off-by: Sage Weil <sage@inktank.com>
Reviewed-by: Alex Elder <elder@inktank.com>
Sage Weil [Mon, 30 Jul 2012 23:26:13 +0000 (16:26 -0700)]
libceph: be less chatty about stray replies
There are many (normal) conditions that can lead to us getting
unexpected replies, including cluster topology changes, osd failures,
and timeouts. There's no need to spam the console about it.
Signed-off-by: Sage Weil <sage@inktank.com>
Reviewed-by: Alex Elder <elder@inktank.com>
Sage Weil [Sat, 21 Jul 2012 00:30:40 +0000 (17:30 -0700)]
libceph: clear all flags on con_close
Signed-off-by: Sage Weil <sage@inktank.com>
Sage Weil [Sat, 21 Jul 2012 00:29:55 +0000 (17:29 -0700)]
libceph: clean up con flags
Rename the flags with a CON_FLAG prefix, move the definitions into the .c file,
and (better) document their meaning.
Signed-off-by: Sage Weil <sage@inktank.com>
Sage Weil [Sat, 21 Jul 2012 00:24:40 +0000 (17:24 -0700)]
libceph: replace connection state bits with states
Use a simple set of 6 enumerated values for the socket states (CON_STATE_*)
and use those instead of the state bits. All of the con->state checks are
now under the protection of the con mutex, so this is safe. It also
simplifies many of the state checks, because we can test for anything other
than the single expected state instead of checking individual bits for every
race we can think of.
This appears to hold up well to stress testing both with and without socket
failure injection on the server side.
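For reference, the six socket states could be enumerated roughly as follows;
this is an illustrative sketch, and the exact names and comments are assumed
rather than quoted from the patch:
    enum {
            CON_STATE_CLOSED = 1,   /* no socket; not trying to connect */
            CON_STATE_PREOPEN,      /* ceph_con_open() called, work queued */
            CON_STATE_CONNECTING,   /* TCP connect() in progress */
            CON_STATE_NEGOTIATING,  /* banner/addresses exchanged */
            CON_STATE_OPEN,         /* fully connected */
            CON_STATE_STANDBY,      /* idle after a fault, awaiting new work */
    };
With a single enumerated value per connection, a check like
(con->state != CON_STATE_OPEN) covers every unexpected condition at once.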
Signed-off-by: Sage Weil <sage@inktank.com>
Sage Weil [Sat, 21 Jul 2012 00:19:43 +0000 (17:19 -0700)]
libceph: drop unnecessary CLOSED check in socket state change callback
If we are CLOSED, the socket is closed and we won't get these.
Signed-off-by: Sage Weil <sage@inktank.com>
Sage Weil [Fri, 20 Jul 2012 23:45:49 +0000 (16:45 -0700)]
libceph: close socket directly from ceph_con_close()
It is simpler to do this immediately, since we already hold the con mutex.
It also avoids the need to deal with a not-quite-CLOSED socket in con_work.
Signed-off-by: Sage Weil <sage@inktank.com>
Sage Weil [Fri, 20 Jul 2012 22:40:04 +0000 (15:40 -0700)]
libceph: drop gratuitous socket close calls in con_work
If the state is CLOSED or OPENING, we shouldn't have a socket.
Signed-off-by: Sage Weil <sage@inktank.com>
Sage Weil [Fri, 20 Jul 2012 22:34:04 +0000 (15:34 -0700)]
libceph: move ceph_con_send() closed check under the con mutex
Take the con mutex before checking whether the connection is closed to
avoid racing with someone else closing it.
Signed-off-by: Sage Weil <sage@inktank.com>
Sage Weil [Fri, 20 Jul 2012 22:33:04 +0000 (15:33 -0700)]
libceph: move msgr clear_standby under con mutex protection
Avoid dropping and retaking con->mutex in the ceph_con_send() case by
leaving locking up to the caller.
Signed-off-by: Sage Weil <sage@inktank.com>
Sage Weil [Fri, 20 Jul 2012 22:22:53 +0000 (15:22 -0700)]
libceph: fix fault locking; close socket on lossy fault
If we fault on a lossy connection, we should still close the socket
immediately, and do so under the con mutex.
We should also take the con mutex before printing out the state bits in
the debug output.
Signed-off-by: Sage Weil <sage@inktank.com>
Alex Elder [Wed, 25 Jul 2012 14:32:41 +0000 (09:32 -0500)]
rbd: drop "object_name" from rbd_req_sync_unwatch()
rbd_req_sync_unwatch() only ever uses rbd_dev->header_name as the
value of its "object_name" parameter, and that value is available
within the function already. So get rid of the parameter.
Signed-off-by: Alex Elder <elder@inktank.com>
Reviewed-by: Josh Durgin <josh.durgin@inktank.com>
Alex Elder [Wed, 25 Jul 2012 14:32:40 +0000 (09:32 -0500)]
rbd: drop "object_name" from rbd_req_sync_notify_ack()
rbd_req_sync_notify_ack() only ever uses rbd_dev->header_name as the
value of its "object_name" parameter, and that value is available
within the function already. So get rid of the parameter.
Signed-off-by: Alex Elder <elder@inktank.com>
Reviewed-by: Josh Durgin <josh.durgin@inktank.com>
Alex Elder [Wed, 25 Jul 2012 14:32:40 +0000 (09:32 -0500)]
rbd: drop "object_name" from rbd_req_sync_notify()
rbd_req_sync_notify() only ever uses rbd_dev->header_name as the
value of its "object_name" parameter, and that value is available
within the function already. So get rid of the parameter.
Signed-off-by: Alex Elder <elder@inktank.com>
Reviewed-by: Josh Durgin <josh.durgin@inktank.com>
Alex Elder [Wed, 25 Jul 2012 14:32:40 +0000 (09:32 -0500)]
rbd: drop "object_name" from rbd_req_sync_watch()
rbd_req_sync_watch() is only called in one place, and in that place
it passes rbd_dev->header_name as the value of the "object_name"
parameter. This value is available within the function already.
Having the extra parameter leaves the impression the object name
could take on different values, but it does not.
So get rid of the parameter. We can always add it back again if
we find we want to watch some other object in the future.
Signed-off-by: Alex Elder <elder@inktank.com>
Reviewed-by: Josh Durgin <josh.durgin@inktank.com>
Alex Elder [Thu, 19 Jul 2012 14:09:27 +0000 (09:09 -0500)]
rbd: drop rbd_dev parameter in snap functions
Both rbd_register_snap_dev() and __rbd_remove_snap_dev() have
rbd_dev parameters that are unused. Remove them.
Signed-off-by: Alex Elder <elder@inktank.com>
Reviewed-by: Josh Durgin <josh.durgin@inktank.com>
Alex Elder [Thu, 19 Jul 2012 14:09:27 +0000 (09:09 -0500)]
rbd: drop rbd_header_from_disk() gfp_flags parameter
The function rbd_header_from_disk() is only called in one spot, and
it passes GFP_KERNEL as its value for the gfp_flags parameter.
Just drop that parameter and substitute GFP_KERNEL everywhere it had
been used within that function. (If we find we need the parameter
again in the future it's easy enough to add back again.)
Signed-off-by: Alex Elder <elder@inktank.com>
Reviewed-by: Josh Durgin <josh.durgin@inktank.com>
Alex Elder [Thu, 19 Jul 2012 14:09:27 +0000 (09:09 -0500)]
rbd: snapc is unused in rbd_req_sync_read()
The "snapc" parameter to in rbd_req_sync_read() is not used, so
get rid of it.
Reported-by: Josh Durgin <josh.durgin@inktank.com>
Signed-off-by: Alex Elder <elder@inktank.com>
Reviewed-by: Josh Durgin <josh.durgin@inktank.com>
Alex Elder [Tue, 3 Jul 2012 21:01:19 +0000 (16:01 -0500)]
rbd: rename rbd_device->id
The "id" field of an rbd device structure represents the unique
client-local device id mapped to the underlying rbd image. Each rbd
image will have another id--the image id--and each snapshot has its
own id as well. The simple name "id" no longer conveys the
information one might like to have.
Rename the device "id" field in struct rbd_dev to be "dev_id" to
make it a little more obvious what we're dealing with without having
to think more about context.
Signed-off-by: Alex Elder <elder@inktank.com>
Reviewed-by: Josh Durgin <josh.durgin@inktank.com>
Alex Elder [Wed, 25 Jul 2012 14:32:40 +0000 (09:32 -0500)]
rbd: encapsulate header validity test
If an rbd image header is read and it doesn't begin with the
expected magic information, a warning is displayed. This is
a fairly simple test, but it could be extended at some point.
Fix the comparison so it actually looks at the "text" field
rather than the front of the structure.
In any case, encapsulate the validity test in its own function.
Signed-off-by: Alex Elder <elder@inktank.com>
Reviewed-by: Josh Durgin <josh.durgin@inktank.com>
Alex Elder [Sat, 14 Jul 2012 01:35:11 +0000 (20:35 -0500)]
ceph: define snap counts as u32 everywhere
There are two structures in which a count of snapshots are
maintained:
    struct ceph_snap_context {
            ...
            u32 num_snaps;
            ...
    }
and
    struct ceph_snap_realm {
            ...
            u32 num_prior_parent_snaps;     /* had prior to parent_since */
            ...
            u32 num_snaps;
            ...
    }
These fields never take on negative values (e.g., to hold special
meaning), and so are really inherently unsigned. Furthermore they
take their value from over-the-wire or on-disk formatted 32-bit
values.
So change their definition to have type u32, and change some spots
elsewhere in the code to account for this change.
Signed-off-by: Alex Elder <elder@inktank.com>
Reviewed-by: Josh Durgin <josh.durgin@inktank.com>
Alex Elder [Sat, 14 Jul 2012 01:35:11 +0000 (20:35 -0500)]
rbd: clean up a few dout() calls
There was a dout() call in rbd_do_request() that was reporting
the offset as the length and vice versa. While
fixing that I did a quick scan of other dout() calls and fixed
a couple of other minor things.
Signed-off-by: Alex Elder <elder@inktank.com>
Reviewed-by: Josh Durgin <josh.durgin@inktank.com>
Alex Elder [Thu, 19 Jul 2012 14:09:27 +0000 (09:09 -0500)]
rbd: simplify __rbd_remove_all_snaps()
This just replaces a while loop with list_for_each_entry_safe()
in __rbd_remove_all_snaps().
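The safe variant keeps a pointer to the next entry so the current one can be
unlinked and freed during the walk; schematically (field and helper names here
follow the surrounding patches but are shown only as a sketch):
    struct rbd_snap *snap, *next;

    list_for_each_entry_safe(snap, next, &rbd_dev->snaps, node)
            __rbd_remove_snap_dev(snap);    /* unlinks and frees snap */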
Signed-off-by: Alex Elder <elder@inktank.com>
Reviewed-by: Josh Durgin <josh.durgin@inktank.com>
Alex Elder [Thu, 19 Jul 2012 14:09:27 +0000 (09:09 -0500)]
rbd: drop extra header_rwsem init
Commit c666601a inadvertently added an extra initialization of
rbd_dev->header_rwsem. This gets rid of the duplicate.
Reported-by: Guangliang Zhao <gzhao@suse.com>
Signed-off-by: Alex Elder <elder@inktank.com>
Reviewed-by: Josh Durgin <josh.durgin@inktank.com>
Alex Elder [Thu, 19 Jul 2012 13:49:18 +0000 (08:49 -0500)]
rbd: kill rbd_image_header->snap_seq
The snap_seq field in an rbd_image_header structure held the value
from the rbd image header when it was last refreshed. We now
maintain this value in the snapc->seq field. So get rid of the
other one.
Signed-off-by: Alex Elder <elder@inktank.com>
Reviewed-by: Josh Durgin <josh.durgin@inktank.com>
Alex Elder [Thu, 19 Jul 2012 13:49:18 +0000 (08:49 -0500)]
rbd: set snapc->seq only when refreshing header
In rbd_header_add_snap() there is code to set snapc->seq to the
just-added snapshot id. This is the only remnant left of the
use of that field for recording which snapshot an rbd_dev was
associated with. That functionality is no longer supported,
so get rid of that final bit of code.
Doing so means we never actually set snapc->seq any more. On the
server, the snapshot context's sequence value represents the highest
snapshot id ever issued for a particular rbd image. So we'll make
it have that meaning here as well. To do so, set this value
whenever the rbd header is (re-)read. That way it will always be
consistent with the rest of the snapshot context we maintain.
Signed-off-by: Alex Elder <elder@inktank.com>
Reviewed-by: Josh Durgin <josh.durgin@inktank.com>
Alex Elder [Thu, 19 Jul 2012 13:49:18 +0000 (08:49 -0500)]
rbd: preserve snapc->seq in rbd_header_set_snap()
In rbd_header_set_snap(), there is logic to make the snap context's
seq field get set to a particular snapshot id, or 0 if there is no
snapshot for the rbd image.
This seems to be an artifact of how the current snapshot id for an
rbd_dev was recorded before the rbd_dev->snap_id field began to be
used for that purpose.
There's no need to update the value of snapc->seq here any more, so
stop doing it. Tidy up a few local variables in that function
while we're at it.
Signed-off-by: Alex Elder <elder@inktank.com>
Reviewed-by: Josh Durgin <josh.durgin@inktank.com>
Alex Elder [Thu, 19 Jul 2012 13:49:18 +0000 (08:49 -0500)]
rbd: don't use snapc->seq that way
In what appears to be an artifact of a different way of encoding
whether an rbd image maps a snapshot, __rbd_refresh_header() has
code that arranges to update the seq value in an rbd image's
snapshot context to point to the first entry in its snapshot
array if that's where it was pointing initially.
We now use rbd_dev->snap_id to record the snapshot id--using the
special value CEPH_NOSNAP to indicate the rbd_dev is not mapping a
snapshot at all.
There is therefore no need to check for this case, nor to update the
seq value, in __rbd_refresh_header(). Just preserve the seq value
that rbd_read_header() provides (which, at the moment, is nothing).
Signed-off-by: Alex Elder <elder@inktank.com>
Reviewed-by: Josh Durgin <josh.durgin@inktank.com>
Josh Durgin [Tue, 6 Dec 2011 02:10:44 +0000 (18:10 -0800)]
rbd: send header version when notifying
Previously the original header version was sent. Now, we update it
when the header changes.
Signed-off-by: Josh Durgin <josh.durgin@dreamhost.com>
Reviewed-by: Alex Elder <elder@inktank.com>
Josh Durgin [Mon, 5 Dec 2011 22:03:05 +0000 (14:03 -0800)]
rbd: use reference counting for the snap context
This prevents a race between requests with a given snap context and
header updates that free it. The osd client was already expecting the
snap context to be reference counted, since it get()s it in
ceph_osdc_build_request and put()s it when the request completes.
Also remove the second down_read()/up_read() on header_rwsem in
rbd_do_request, which wasn't actually preventing this race or
protecting any other data.
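The reference counting follows the usual kernel pattern of an embedded count
plus get/put helpers; a minimal sketch, assuming the context embeds a
struct kref named nref (not the exact libceph definitions):
    static void snap_context_release(struct kref *kref)
    {
            kfree(container_of(kref, struct ceph_snap_context, nref));
    }

    static struct ceph_snap_context *get_snap_context(struct ceph_snap_context *sc)
    {
            kref_get(&sc->nref);
            return sc;
    }

    static void put_snap_context(struct ceph_snap_context *sc)
    {
            if (sc)
                    kref_put(&sc->nref, snap_context_release);
    }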
Signed-off-by: Josh Durgin <josh.durgin@dreamhost.com>
Reviewed-by: Alex Elder <elder@inktank.com>
Josh Durgin [Mon, 5 Dec 2011 18:41:28 +0000 (10:41 -0800)]
rbd: set image size when header is updated
The image may have been resized.
Signed-off-by: Josh Durgin <josh.durgin@dreamhost.com>
Reviewed-by: Alex Elder <elder@inktank.com>
Josh Durgin [Mon, 5 Dec 2011 18:35:04 +0000 (10:35 -0800)]
rbd: expose the correct size of the device in sysfs
If an image was mapped to a snapshot, the size of the head version
would be shown. Protect capacity with header_rwsem, since it may
change.
Signed-off-by: Josh Durgin <josh.durgin@dreamhost.com>
Reviewed-by: Alex Elder <elder@inktank.com>
Josh Durgin [Tue, 22 Nov 2011 01:13:54 +0000 (17:13 -0800)]
rbd: only reset capacity when pointing to head
Snapshots cannot be resized, and the new capacity of head should not
be reflected by the snapshot.
Signed-off-by: Josh Durgin <josh.durgin@inktank.com>
Reviewed-by: Alex Elder <elder@inktank.com>
Josh Durgin [Tue, 22 Nov 2011 02:14:25 +0000 (18:14 -0800)]
rbd: return errors for mapped but deleted snapshot
When a snapshot is deleted, the OSD will return ENOENT when reading
from it. This is normally interpreted as a hole by rbd, which will
return zeroes. To minimize the time in which this can happen, stop
requests early when we are notified that our snapshot no longer
exists.
[elder@inktank.com: updated __rbd_init_snaps_header() logic]
Signed-off-by: Josh Durgin <josh.durgin@inktank.com>
Reviewed-by: Alex Elder <elder@inktank.com>
Jiaju Zhang [Fri, 20 Jul 2012 13:18:36 +0000 (08:18 -0500)]
libceph: trivial fix for the incorrect debug output
This is a trivial fix for the debug output, which is inconsistent
with the function name and so may confuse people when debugging.
[elder@inktank.com: switched to use __func__]
Signed-off-by: Jiaju Zhang <jjzhang@suse.de>
Reviewed-by: Alex Elder <elder@inktank.com>
Alan Cox [Fri, 20 Jul 2012 13:18:36 +0000 (08:18 -0500)]
ceph: fix potential double free
We re-run the loop but we don't re-set the attrs pointer back to NULL.
Signed-off-by: Alan Cox <alan@linux.intel.com>
Reviewed-by: Alex Elder <elder@inktank.com>
Sage Weil [Mon, 30 Jul 2012 23:22:05 +0000 (16:22 -0700)]
libceph: reset connection retry on successful negotiation
We exponentially back off when we encounter connection errors. If several
errors accumulate, we will eventually wait ages before even trying to
reconnect.
Fix this by resetting the backoff counter after a successful negotiation/
connection with the remote node. Fixes ceph issue #2802.
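The shape of the fix is roughly as follows; this is a sketch, and names such
as BASE_DELAY and MAX_DELAY are placeholders rather than the actual constants:
    /* on a connection fault: back off exponentially before requeueing */
    if (!con->delay)
            con->delay = BASE_DELAY;
    else if (con->delay < MAX_DELAY)
            con->delay *= 2;
    queue_delayed_work(msgr_wq, &con->work, con->delay);

    /* once banner/feature negotiation with the peer succeeds */
    con->delay = 0;         /* next fault backs off from BASE_DELAY again */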
Signed-off-by: Sage Weil <sage@inktank.com>
Reviewed-by: Yehuda Sadeh <yehuda@inktank.com>
Reviewed-by: Alex Elder <elder@inktank.com>
Sage Weil [Mon, 30 Jul 2012 23:21:40 +0000 (16:21 -0700)]
libceph: protect ceph_con_open() with mutex
Take the con mutex while we are initiating a ceph open. This is necessary
because the con may have previously been in use and then closed, which could
result in a racing workqueue running con_work().
Signed-off-by: Sage Weil <sage@inktank.com>
Reviewed-by: Yehuda Sadeh <yehuda@inktank.com>
Reviewed-by: Alex Elder <elder@inktank.com>
Sage Weil [Mon, 30 Jul 2012 23:21:17 +0000 (16:21 -0700)]
ceph: close old con before reopening on mds reconnect
When we detect a mds session reset, close the old ceph_connection before
reopening it. This ensures we clean up the old socket properly and keep
the ceph_connection state correct.
Signed-off-by: Sage Weil <sage@inktank.com>
Reviewed-by: Alex Elder <elder@inktank.com>
Reviewed-by: Yehuda Sadeh <yehuda@inktank.com>
Sage Weil [Mon, 30 Jul 2012 23:20:25 +0000 (16:20 -0700)]
libceph: (re)initialize bio_iter on start of message receive
Previously, we were opportunistically initializing the bio_iter if it
appeared to be uninitialized in the middle of the read path. The problem
is that a sequence like:
- start reading message
- initialize bio_iter
- read half a message
- messenger fault, reconnect
- restart reading message
- ** bio_iter now non-NULL, not reinitialized **
- read past end of bio, crash
Instead, initialize the bio_iter unconditionally when we allocate/claim
the message for read.
Signed-off-by: Sage Weil <sage@inktank.com>
Reviewed-by: Alex Elder <elder@inktank.com>
Reviewed-by: Yehuda Sadeh <yehuda@inktank.com>
Sage Weil [Mon, 30 Jul 2012 23:19:28 +0000 (16:19 -0700)]
libceph: resubmit linger ops when pg mapping changes
The linger op registration (i.e., watch) modifies the object state. As
such, the OSD will reply with success if it has already applied without
doing the associated side-effects (setting up the watch session state).
If we lose the ACK and resubmit, we will see success but the watch will not
be correctly registered and we won't get notifies.
To fix this, always resubmit the linger op with a new tid. We accomplish
this by re-registering as a linger (i.e., 'registered') if we are not yet
registered. Then the second loop will treat this just like a normal
case of re-registering.
This mirrors a similar fix on the userland ceph.git, commit
5dd68b95, and
ceph bug #2796.
Signed-off-by: Sage Weil <sage@inktank.com>
Reviewed-by: Alex Elder <elder@inktank.com>
Reviewed-by: Yehuda Sadeh <yehuda@inktank.com>
Sage Weil [Mon, 30 Jul 2012 23:24:37 +0000 (16:24 -0700)]
libceph: fix mutex coverage for ceph_con_close
Hold the mutex while twiddling all of the state bits to avoid possible
races. While we're here, make note of why we cannot close the socket
directly.
Signed-off-by: Sage Weil <sage@inktank.com>
Reviewed-by: Alex Elder <elder@inktank.com>
Reviewed-by: Yehuda Sadeh <yehuda@inktank.com>
Sage Weil [Mon, 30 Jul 2012 23:24:21 +0000 (16:24 -0700)]
libceph: report socket read/write error message
We need to set error_msg to something useful before calling ceph_fault();
do so here for try_{read,write}(). This is more informative than
libceph: osd0 192.168.106.220:6801 (null)
Signed-off-by: Sage Weil <sage@inktank.com>
Reviewed-by: Alex Elder <elder@inktank.com>
Reviewed-by: Yehuda Sadeh <yehuda@inktank.com>
Sage Weil [Tue, 31 Jul 2012 01:15:23 +0000 (18:15 -0700)]
libceph: support crush tunables
The server side recently added support for tuning some magic
crush variables. Decode these variables if they are present, or use the
default values if they are not present.
Corresponds to ceph.git commit
89af369c25f274fe62ef730e5e8aad0c54f1e5a5.
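The decode path amounts to setting the legacy defaults and then overwriting
them only if the map contains the extra fields; a sketch, with the tunable
names taken from CRUSH but treated here as assumptions:
    /* legacy defaults, used when the encoded map ends before the tunables */
    c->choose_local_tries = 2;
    c->choose_local_fallback_tries = 5;
    c->choose_total_tries = 19;

    if (*p + 3 * sizeof(u32) <= end) {      /* tunables present */
            c->choose_local_tries = ceph_decode_32(p);
            c->choose_local_fallback_tries = ceph_decode_32(p);
            c->choose_total_tries = ceph_decode_32(p);
    }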
Signed-off-by: caleb miles <caleb.miles@inktank.com>
Reviewed-by: Sage Weil <sage@inktank.com>
Reviewed-by: Alex Elder <elder@inktank.com>
Reviewed-by: Yehuda Sadeh <yehuda@inktank.com>
Sage Weil [Mon, 30 Jul 2012 23:23:22 +0000 (16:23 -0700)]
libceph: move feature bits to separate header
This is simply cleanup that will keep things more closely synced with the
userland code.
Signed-off-by: Sage Weil <sage@inktank.com>
Reviewed-by: Alex Elder <elder@inktank.com>
Reviewed-by: Yehuda Sadeh <yehuda@inktank.com>
Alex Elder [Tue, 26 Jun 2012 19:57:03 +0000 (12:57 -0700)]
rbd: kill num_reply parameters
Several functions include a num_reply parameter, but it is never
used. Just get rid of it everywhere--it seems to be something
that never got fully implemented.
Signed-off-by: Alex Elder <elder@inktank.com>
Reviewed-by: Josh Durgin <josh.durgin@inktank.com>
Alex Elder [Tue, 3 Jul 2012 21:01:18 +0000 (16:01 -0500)]
rbd: option symbol renames
Use the name "ceph_opts" consistently (rather than just "opt") for
pointers to a ceph_options structure.
Change the few spots that don't use "rbd_opts" for a rbd_options
pointer to match the rest.
Signed-off-by: Alex Elder <elder@inktank.com>
Reviewed-by: Josh Durgin <josh.durgin@inktank.com>
Alex Elder [Tue, 3 Jul 2012 21:01:18 +0000 (16:01 -0500)]
rbd: more symbol renames
Rename variables named "obj" which represent object names so they're
consistently named "object_name".
Rename the "cls" and "method" parameters in rbd_req_sync_exec()
to be "class_name" and "method_name", and make similar changes
to the names of local variables in that function representing
the lengths of those names.
Signed-off-by: Alex Elder <elder@inktank.com>
Reviewed-by: Josh Durgin <josh.durgin@inktank.com>
Alex Elder [Tue, 3 Jul 2012 21:01:18 +0000 (16:01 -0500)]
rbd: rename some fields in struct rbd_dev
An rbd image is not a single object, but a logical construct made up
of an aggregation of objects.
Rename some fields in struct rbd_dev, in hopes of reinforcing this.
obj --> image_name
obj_len --> image_name_len
obj_md_name --> header_name
Signed-off-by: Alex Elder <elder@inktank.com>
Reviewed-by: Josh Durgin <josh.durgin@inktank.com>
Alex Elder [Tue, 3 Jul 2012 21:01:18 +0000 (16:01 -0500)]
rbd: use rbd_dev consistently
Most variables that represent a struct rbd_device are named
"rbd_dev", but in some cases "dev" is used instead. Change all the
"dev" references so they use "rbd_dev" consistently, to make it
clear from the name that we're working with an RBD device (as
opposed to, for example, a struct device). Similarly, change the
name of the "dev" field in struct rbd_notify_info to be "rbd_dev".
Signed-off-by: Alex Elder <elder@inktank.com>
Reviewed-by: Josh Durgin <josh.durgin@inktank.com>
Alex Elder [Tue, 10 Jul 2012 02:04:24 +0000 (21:04 -0500)]
rbd: dynamically allocate snapshot name
There is no need to impose a small limit on the length of the snapshot
name recorded for an rbd image in a struct rbd_dev. Remove the
limitation by allocating space for the snapshot name dynamically.
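This patch, and the similar ones that follow for the image name, header name,
object prefix, and pool name, all boil down to replacing a fixed-size array
with an allocated copy; schematically (a sketch, not the literal diff):
    /* was: a fixed char snap_name[] array plus a bounded copy */
    rbd_dev->snap_name = kstrdup(snap_name, GFP_KERNEL);
    if (!rbd_dev->snap_name)
            return -ENOMEM;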
Signed-off-by: Alex Elder <elder@inktank.com>
Reviewed-by: Josh Durgin <josh.durgin@inktank.com>
Alex Elder [Tue, 10 Jul 2012 02:04:23 +0000 (21:04 -0500)]
rbd: dynamically allocate image name
There is no need to impose a small limit on the length of the rbd image
name recorded in a struct rbd_dev. Remove the limitation by
allocating space for the image name dynamically.
Signed-off-by: Alex Elder <elder@inktank.com>
Reviewed-by: Josh Durgin <josh.durgin@inktank.com>
Alex Elder [Tue, 10 Jul 2012 02:04:23 +0000 (21:04 -0500)]
rbd: dynamically allocate image header name
There is no need to impose a small limit on the length of the header
name recorded for an rbd image in a struct rbd_dev. Remove the
limitation by allocating space for the header name dynamically.
Signed-off-by: Alex Elder <elder@inktank.com>
Reviewed-by: Josh Durgin <josh.durgin@inktank.com>
Alex Elder [Tue, 10 Jul 2012 02:04:24 +0000 (21:04 -0500)]
rbd: dynamically allocate object prefix
There is no need to impose a small limit on the length of the object
prefix recorded for an rbd image in a struct rbd_image_header.
Remove the limitation by allocating space for the object prefix
dynamically.
Signed-off-by: Alex Elder <elder@inktank.com>
Reviewed-by: Josh Durgin <josh.durgin@inktank.com>
Alex Elder [Thu, 12 Jul 2012 15:46:35 +0000 (10:46 -0500)]
rbd: dynamically allocate pool name
There is no need to impose a small limit on the length of the pool name
recorded for an rbd image in a struct rbd_device. Remove the
limitation by allocating space for the pool name dynamically.
Signed-off-by: Alex Elder <elder@inktank.com>
Reviewed-by: Josh Durgin <josh.durgin@inktank.com>
Alex Elder [Thu, 12 Jul 2012 15:46:35 +0000 (10:46 -0500)]
rbd: create pool_id device attribute
Add an entry under /sys/bus/rbd/devices/<N>/ named "pool_id" that
provides the id for the pool the rbd image is associated with. This
is in addition to the pool name already provided.
Rename the "poolid" field in struct rbd_device to be "pool_id".
Update the documentation to reflect the addition of this new entry.
Signed-off-by: Alex Elder <elder@inktank.com>
Reviewed-by: Josh Durgin <josh.durgin@inktank.com>
Alex Elder [Wed, 11 Jul 2012 01:30:09 +0000 (20:30 -0500)]
rbd: rename rbd_dev->block_name
Each rbd image has a name that forms the basis of all data objects
backing the device. Old (format 1) images refer to this name as the
"block name," while new (format 2) images use the term "object
prefix" for this.
Change the field name in the in-core rbd image header structure to
reflect the more modern usage. We intentionally keep the name
"block_name" in the on-disk definition for format 1 image headers.
Signed-off-by: Alex Elder <elder@inktank.com>
Reviewed-by: Yehuda Sadeh <yehuda@inktank.com>
Reviewed-by: Josh Durgin <josh.durgin@inktank.com>
Alex Elder [Tue, 10 Jul 2012 02:04:23 +0000 (21:04 -0500)]
rbd: define dup_token()
Define a new function dup_token(), to be used during argument
parsing for making dynamically-allocated copies of tokens being
parsed.
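A plausible shape for such a helper (a sketch; next_token() is assumed to
return the length of the next whitespace-delimited token at *buf):
    static char *dup_token(const char **buf, size_t *lenp)
    {
            size_t len = next_token(buf);
            char *dup;

            dup = kmalloc(len + 1, GFP_KERNEL);
            if (!dup)
                    return NULL;

            memcpy(dup, *buf, len);
            dup[len] = '\0';        /* tokens are not NUL-terminated in place */
            *buf += len;

            if (lenp)
                    *lenp = len;
            return dup;
    }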
Signed-off-by: Alex Elder <elder@inktank.com>
Reviewed-by: Yehuda Sadeh <yehuda@inktank.com>
Reviewed-by: Josh Durgin <josh.durgin@inktank.com>
Alex Elder [Wed, 11 Jul 2012 13:24:45 +0000 (08:24 -0500)]
libceph: define ceph_extract_encoded_string()
This adds a new utility routine which will return a dynamically-
allocated buffer containing a string that has been decoded from ceph
over-the-wire format. It also returns the length of the string
if the address of a size variable is supplied to receive it.
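The over-the-wire encoding is a 32-bit length followed by that many bytes; a
sketch of such a routine (details such as the exact error returns are
assumptions):
    char *ceph_extract_encoded_string(void **p, void *end,
                                      size_t *lenp, gfp_t gfp)
    {
            u32 len;
            char *buf;

            if (*p + sizeof(u32) > end)
                    return ERR_PTR(-ERANGE);
            len = ceph_decode_32(p);
            if (*p + len > end)
                    return ERR_PTR(-ERANGE);

            buf = kmalloc(len + 1, gfp);
            if (!buf)
                    return ERR_PTR(-ENOMEM);
            memcpy(buf, *p, len);
            buf[len] = '\0';
            *p += len;

            if (lenp)
                    *lenp = len;
            return buf;
    }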
Signed-off-by: Alex Elder <elder@inktank.com>
Reviewed-by: Sage Weil <sage@inktank.com>
Alex Elder [Tue, 3 Jul 2012 21:01:19 +0000 (16:01 -0500)]
rbd: drop a useless local variable
In rbd_req_sync_notify_ack(), a local variable was needlessly being
used to hold a null pointer. Just pass NULL instead.
Signed-off-by: Alex Elder <elder@inktank.com>
Reviewed-by: Yehuda Sadeh <yehuda@inktank.com>
Reviewed-by: Josh Durgin <josh.durgin@inktank.com>
Alex Elder [Tue, 3 Jul 2012 21:01:18 +0000 (16:01 -0500)]
libceph: fix off-by-one bug in ceph_encode_filepath()
There is a BUG_ON() call that doesn't account for the single byte
structure version at the start of an encoded filepath in
ceph_encode_filepath(). Fix that.
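The encoding starts with a one-byte structure version, and that byte is what
the bound check was missing; condensed and paraphrased (a sketch of the
helper, not the exact diff):
    u32 len = path ? strlen(path) : 0;

    /* 1-byte version + ino + length word + path bytes must all fit */
    BUG_ON(*p + 1 + sizeof(ino) + sizeof(len) + len > end);
    ceph_encode_8(p, 1);                    /* structure version */
    ceph_encode_64(p, ino);
    ceph_encode_32(p, len);
    if (len)
            memcpy(*p, path, len);
    *p += len;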
Signed-off-by: Alex Elder <elder@inktank.com>
Reviewed-by: Yehuda Sadeh <yehuda@inktank.com>
Reviewed-by: Josh Durgin <josh.durgin@inktank.com>
Sage Weil [Thu, 7 Jun 2012 20:43:35 +0000 (13:43 -0700)]
ceph: clean up useless d_parent checks
d_parent is never NULL, and IS_ROOT() is the proper way to check for a
(non-self-referential) parent.
Reported-by: Al Viro <viro@ZenIV.linux.org.uk>
Signed-off-by: Sage Weil <sage@inktank.com>
Guanjun He [Mon, 9 Jul 2012 02:50:33 +0000 (19:50 -0700)]
libceph: prevent the race of incoming work during teardown
Add an atomic variable 'stopping' as a flag in struct ceph_messenger,
set it to 1 in ceph_destroy_client(), and test it in
ceph_data_ready(), returning immediately if it is set.
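In outline, the change looks like this (a sketch; exact field placement and
function names may differ):
    /* in struct ceph_messenger */
    atomic_t stopping;              /* set once the client is being torn down */

    /* in ceph_destroy_client() */
    atomic_set(&client->msgr.stopping, 1);

    /* in the data-ready socket callback */
    if (atomic_read(&con->msgr->stopping))
            return;                 /* ignore incoming work during teardown */
    queue_con(con);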
Signed-off-by: Guanjun He <gjhe@suse.com>
Reviewed-by: Sage Weil <sage@inktank.com>
Sage Weil [Tue, 10 Jul 2012 18:53:34 +0000 (11:53 -0700)]
libceph: fix messenger retry
In ancient times, the messenger could both initiate and accept connections.
An artifact of that was data structures to store/process an incoming
ceph_msg_connect request and send an outgoing ceph_msg_connect_reply.
Sadly, the negotiation code was referencing those structures and ignoring
important information (like the peer's connect_seq) from the correct ones.
Among other things, this fixes tight reconnect loops where the server sends
RETRY_SESSION and we (the client) retry with the same connect_seq as last
time. This bug is pretty easily triggered by injecting socket failures on the
MDS and running some fs workload like workunits/direct_io/test_sync_io.
Signed-off-by: Sage Weil <sage@inktank.com>
Sage Weil [Mon, 9 Jul 2012 21:31:41 +0000 (14:31 -0700)]
libceph: initialize rb, list nodes in ceph_osd_request
These don't strictly need to be initialized based on how they are used, but
it is good practice to do so.
Reported-by: Alex Elder <elder@inktank.com>
Signed-off-by: Sage Weil <sage@inktank.com>
Sage Weil [Mon, 9 Jul 2012 21:22:34 +0000 (14:22 -0700)]
libceph: initialize msgpool message types
Initialize the type field for messages in a msgpool. The caller was doing
this for osd ops, but not for the reply messages.
Reported-by: Alex Elder <elder@inktank.com>
Signed-off-by: Sage Weil <sage@inktank.com>
Sage Weil [Wed, 27 Jun 2012 19:31:02 +0000 (12:31 -0700)]
libceph: allow sock transition from CONNECTING to CLOSED
It is possible to close a socket that is in the OPENING state. For
example, it can happen if ceph_con_close() is called on the con before
the TCP connection is established. con_work() will come around and shut
down the socket.
Signed-off-by: Sage Weil <sage@inktank.com>
Sage Weil [Wed, 27 Jun 2012 19:24:34 +0000 (12:24 -0700)]
libceph: initialize mon_client con only once
Do not re-initialize the con on every connection attempt. When we
ceph_con_close, there may still be work queued on the socket (e.g., to
close it), and re-initializing will clobber the work_struct state.
Signed-off-by: Sage Weil <sage@inktank.com>
Sage Weil [Wed, 27 Jun 2012 19:24:08 +0000 (12:24 -0700)]
libceph: set peer name on con_open, not init
The peer name may change on each open attempt, even when the connection is
reused.
Signed-off-by: Sage Weil <sage@inktank.com>
Alex Elder [Thu, 21 Jun 2012 19:49:23 +0000 (12:49 -0700)]
libceph: drop declaration of ceph_con_get()
For some reason the declaration of ceph_con_get() and
ceph_con_put() did not get deleted in this commit:
d59315ca libceph: drop ceph_con_get/put helpers and nref member
Clean that up.
Signed-off-by: Alex Elder <elder@inktank.com>
Alex Elder [Thu, 21 Jun 2012 02:53:53 +0000 (21:53 -0500)]
libceph: add some fine ASCII art
Sage liked the state diagram I put in my commit description so
I'm putting it in with the code.
Signed-off-by: Alex Elder <elder@inktank.com>
Reviewed-by: Sage Weil <sage@inktank.com>
Alex Elder [Mon, 11 Jun 2012 19:57:13 +0000 (14:57 -0500)]
libceph: small changes to messenger.c
This patch gathers a few small changes in "net/ceph/messenger.c":
out_msg_pos_next()
- small logic change that mostly affects indentation
write_partial_msg_pages()
- use a local variable trail_off to represent the offset into
a message of the trail portion of the data (if present)
- once we are in the trail portion we will always be there, so we
don't always need to check against our data position
- avoid computing len twice after we've reached the trail
- get rid of the variable tmpcrc, which is not needed
- trail_off and trail_len never change so mark them const
- update some comments
read_partial_message_bio()
- bio_iovec_idx() will never return an error, so don't bother
checking for it
Signed-off-by: Alex Elder <elder@inktank.com>
Reviewed-by: Sage Weil <sage@inktank.com>
Alex Elder [Thu, 24 May 2012 16:55:03 +0000 (11:55 -0500)]
libceph: distinguish two phases of connect sequence
Currently a ceph connection enters a "CONNECTING" state when it
begins the process of (re-)connecting with its peer. Once the two
ends have successfully exchanged their banner and addresses, an
additional NEGOTIATING bit is set in the ceph connection's state to
indicate the connection information exchange has begun. The
CONNECTING bit/state continues to be set during this phase.
Rather than have the CONNECTING state continue while the NEGOTIATING
bit is set, interpret these two phases as distinct states. In other
words, when NEGOTIATING is set, clear CONNECTING. That way only
one of them will be active at a time.
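In terms of the existing state bits, the transition point simply becomes
(a sketch):
    /* banner and addresses exchanged successfully */
    clear_bit(CONNECTING, &con->state);
    set_bit(NEGOTIATING, &con->state);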
Signed-off-by: Alex Elder <elder@inktank.com>
Reviewed-by: Sage Weil <sage@inktank.com>
Alex Elder [Thu, 31 May 2012 16:37:29 +0000 (11:37 -0500)]
libceph: separate banner and connect writes
There are two phases in the process of linking together the two ends
of a ceph connection. The first involves exchanging a banner and
IP addresses, and if that is successful a second phase exchanges
some detail about each side's connection capabilities.
When initiating a connection, the client side now queues to send
its information for both phases of this process at the same time.
This is probably a bit more efficient, but it is slightly messier
from a layering perspective in the code.
So rearrange things so that the client doesn't send the connection
information until it has received and processed the response in the
initial banner phase (in process_banner()).
Move the code (in the (con->sock == NULL) case in try_write()) that
prepares for writing the connection information, delaying doing that
until the banner exchange has completed. Move the code that begins
the transition to this second "NEGOTIATING" phase out of
process_banner() and into its caller, so preparing to write the
connection information and preparing to read the response are
adjacent to each other.
Finally, preparing to write the connection information now requires
the output kvec to be reset in all cases, so move that into the
prepare_write_connect() and delete it from all callers.
Signed-off-by: Alex Elder <elder@inktank.com>
Reviewed-by: Sage Weil <sage@inktank.com>
Alex Elder [Wed, 23 May 2012 19:35:23 +0000 (14:35 -0500)]
libceph: define and use an explicit CONNECTED state
There is no state explicitly defined when a ceph connection is fully
operational. So define one.
It's set when the connection sequence completes successfully, and is
cleared when the connection gets closed.
Be a little more careful when examining the old state when a socket
disconnect event is reported.
Signed-off-by: Alex Elder <elder@inktank.com>
Reviewed-by: Sage Weil <sage@inktank.com>
Alex Elder [Wed, 23 May 2012 19:35:23 +0000 (14:35 -0500)]
libceph: clear NEGOTIATING when done
A connection state's NEGOTIATING bit gets set while in CONNECTING
state after we have successfully exchanged a ceph banner and IP
addresses with the connection's peer (the server). But that bit
is not cleared again--at least not until another connection attempt
is initiated.
Instead, clear it as soon as the connection is fully established.
Also, clear it when a socket connection gets prematurely closed
in the midst of establishing a ceph connection (in case we had
reached the point where it was set).
Signed-off-by: Alex Elder <elder@inktank.com>
Reviewed-by: Sage Weil <sage@inktank.com>
Alex Elder [Thu, 21 Jun 2012 02:53:53 +0000 (21:53 -0500)]
libceph: clear CONNECTING in ceph_con_close()
A connection that is closed will no longer be connecting. So
clear the CONNECTING state bit in ceph_con_close(). Similarly,
if the socket has been closed we no longer are in connecting
state (a new connect sequence will need to be initiated).
Signed-off-by: Alex Elder <elder@inktank.com>
Reviewed-by: Sage Weil <sage@inktank.com>
Alex Elder [Thu, 21 Jun 2012 02:53:53 +0000 (21:53 -0500)]
libceph: don't touch con state in con_close_socket()
In con_close_socket(), a connection's SOCK_CLOSED flag gets set and
then cleared while its shutdown method is called and its reference
gets dropped.
Previously, that flag got set only if it had not already been set,
so setting it in con_close_socket() might have prevented additional
processing being done on a socket being shut down. We no longer set
SOCK_CLOSED in the socket event routine conditionally, so setting
that bit here no longer provides whatever benefit it might have
provided before.
A race condition could still leave the SOCK_CLOSED bit set even
after we've issued the call to con_close_socket(), so we still clear
that bit after shutting the socket down. Add a comment explaining
the reason for this.
Signed-off-by: Alex Elder <elder@inktank.com>
Reviewed-by: Sage Weil <sage@inktank.com>
Alex Elder [Thu, 21 Jun 2012 02:53:53 +0000 (21:53 -0500)]
libceph: just set SOCK_CLOSED when state changes
When a TCP_CLOSE or TCP_CLOSE_WAIT event occurs, the SOCK_CLOSED
connection flag bit is set, and if it had not been previously set
queue_con() is called to ensure con_work() will get a chance to
handle the changed state.
con_work() atomically checks--and if set, clears--the SOCK_CLOSED
bit. This means that even if the bit were set
repeatedly, the related processing in con_work() only gets called
once per transition of the bit from 0 to 1.
What's important then is that we ensure con_work() gets called *at
least* once when a socket close event occurs, not that it gets
called *exactly* once.
The work queue mechanism already takes care of queueing work
only if it is not already queued, so there's no need for us
to call queue_con() conditionally.
So this patch just makes it so the SOCK_CLOSED flag gets set
unconditionally in ceph_sock_state_change().
Signed-off-by: Alex Elder <elder@inktank.com>
Reviewed-by: Sage Weil <sage@inktank.com>
Alex Elder [Thu, 21 Jun 2012 02:53:53 +0000 (21:53 -0500)]
libceph: don't change socket state on sock event
Currently the socket state change event handler records an error
message on a connection to distinguish a close while connecting from
a close while a connection was already established.
Changing connection information during handling of a socket event is
not very clean, so instead move this assignment inside con_work(),
where it can be done during normal connection-level processing (and
under protection of the connection mutex as well).
Move the handling of a socket closed event up to the top of the
processing loop in con_work(); there's no point in handling backoff
etc. if we have a newly-closed socket to take care of.
Signed-off-by: Alex Elder <elder@inktank.com>
Reviewed-by: Sage Weil <sage@inktank.com>
Alex Elder [Thu, 21 Jun 2012 02:53:53 +0000 (21:53 -0500)]
libceph: SOCK_CLOSED is a flag, not a state
The following commit changed it so the SOCK_CLOSED bit was stored in
a connection's new "flags" field rather than its "state" field.
libceph: start separating connection flags from state
commit
928443cd
That bit is used in con_close_socket() to protect against setting an
error message more than once in the socket event handler function.
Unfortunately, the field being operated on in that function was not
updated to be "flags" as it should have been. This fixes that
error.
Signed-off-by: Alex Elder <elder@inktank.com>
Reviewed-by: Sage Weil <sage@inktank.com>
Alex Elder [Mon, 11 Jun 2012 19:57:13 +0000 (14:57 -0500)]
libceph: don't use bio_iter as a flag
Recently a bug was fixed in which the bio_iter field in a ceph
message was not being properly re-initialized when a message got
re-transmitted:
commit
43643528cce60ca184fe8197efa8e8da7c89a037
Author: Yan, Zheng <zheng.z.yan@intel.com>
rbd: Clear ceph_msg->bio_iter for retransmitted message
We are now only initializing the bio_iter field when we are about to
start to write message data (in prepare_write_message_data()),
rather than every time we are attempting to write any portion of the
message data (in write_partial_msg_pages()). This means we no
longer need to use the msg->bio_iter field as a flag.
So just don't do that any more. Trust prepare_write_message_data()
to ensure msg->bio_iter is properly initialized, every time we are
about to begin writing (or re-writing) a message's bio data.
Signed-off-by: Alex Elder <elder@inktank.com>
Reviewed-by: Sage Weil <sage@inktank.com>
Alex Elder [Mon, 11 Jun 2012 19:57:13 +0000 (14:57 -0500)]
libceph: move init of bio_iter
If a message has a non-null bio pointer, its bio_iter field is
initialized in write_partial_msg_pages() if this has not been done
already. This is really a one-time setup operation for sending a
message's (bio) data, so move that initialization code into
prepare_write_message_data() which serves that purpose.
Signed-off-by: Alex Elder <elder@inktank.com>
Reviewed-by: Sage Weil <sage@inktank.com>
Alex Elder [Mon, 11 Jun 2012 19:57:13 +0000 (14:57 -0500)]
libceph: move init_bio_*() functions up
Move init_bio_iter() and iter_bio_next() up in their source file so
they'll be defined before they're needed.
Signed-off-by: Alex Elder <elder@inktank.com>
Reviewed-by: Sage Weil <sage@inktank.com>
Alex Elder [Mon, 11 Jun 2012 19:57:13 +0000 (14:57 -0500)]
libceph: don't mark footer complete before it is
This is a nit, but prepare_write_message() sets the FOOTER_COMPLETE
flag before the CRC for the data portion (recorded in the footer)
has been completely computed. Hold off setting the complete flag
until we've decided it's ready to send.
Signed-off-by: Alex Elder <elder@inktank.com>
Reviewed-by: Sage Weil <sage@inktank.com>
Alex Elder [Mon, 11 Jun 2012 19:57:13 +0000 (14:57 -0500)]
libceph: encapsulate advancing msg page
In write_partial_msg_pages(), once all the data from a page has been
sent we advance to the next one. Put the code that takes care of
this into its own function.
While modifying write_partial_msg_pages(), make its local variable
"in_trail" be Boolean, and use the local variable "msg" (which is
just the connection's current out_msg pointer) consistently.
Signed-off-by: Alex Elder <elder@inktank.com>
Reviewed-by: Sage Weil <sage@inktank.com>
Alex Elder [Mon, 11 Jun 2012 19:57:13 +0000 (14:57 -0500)]
libceph: encapsulate out message data setup
Move the code that prepares to write the data portion of a message
into its own function.
Signed-off-by: Alex Elder <elder@inktank.com>
Reviewed-by: Sage Weil <sage@inktank.com>
Sage Weil [Thu, 21 Jun 2012 19:49:23 +0000 (12:49 -0700)]
libceph: drop ceph_con_get/put helpers and nref member
These are no longer used. Every ceph_connection instance is embedded in
another structure, and refcounts are manipulated via the get/put ops.
Signed-off-by: Sage Weil <sage@inktank.com>
Sage Weil [Thu, 21 Jun 2012 19:47:08 +0000 (12:47 -0700)]
libceph: use con get/put methods
The ceph_con_get/put() helpers manipulate the embedded con ref
count, which isn't used now that ceph_connections are embedded in
other structures.
Signed-off-by: Sage Weil <sage@inktank.com>
Reviewed-by: Alex Elder <elder@inktank.com>
Dan Carpenter [Tue, 19 Jun 2012 13:52:33 +0000 (08:52 -0500)]
libceph: fix NULL dereference in reset_connection()
We dereference "con->in_msg" on the line after it was set to NULL.
Signed-off-by: Dan Carpenter <dan.carpenter@oracle.com>
Reviewed-by: Alex Elder <elder@inktank.com>
Sage Weil [Fri, 15 Jun 2012 19:32:04 +0000 (12:32 -0700)]
Merge tag 'v3.5-rc1'
Linux 3.5-rc1
Conflicts:
net/ceph/messenger.c
Sage Weil [Mon, 11 Jun 2012 03:43:56 +0000 (20:43 -0700)]
libceph: flush msgr queue during mon_client shutdown
We need to flush the msgr workqueue during mon_client shutdown to
ensure that any work affecting our embedded ceph_connection is
finished so that we can be safely destroyed.
Previously, we were flushing the work queue after osd_client
shutdown and before mon_client shutdown to ensure that any osd
connection refs to authorizers are flushed. Remove the redundant
flush, and document in the comment that the mon_client flush is
needed to cover that case as well.
Signed-off-by: Sage Weil <sage@inktank.com>
Reviewed-by: Alex Elder <elder@inktank.com>
Sage Weil [Sat, 9 Jun 2012 21:19:21 +0000 (14:19 -0700)]
libceph: transition socket state prior to actual connect
Once we call ->connect(), we are racing against the actual
connection, and a subsequent transition from CONNECTING ->
CONNECTED. Set the state to CONNECTING before that, under the
protection of the mutex, to avoid the race.
This was introduced in
928443cd9644e7cfd46f687dbeffda2d1a357ff9,
with the original socket state code.
Signed-off-by: Sage Weil <sage@inktank.com>
Reviewed-by: Alex Elder <elder@inktank.com>
Xi Wang [Thu, 7 Jun 2012 00:35:55 +0000 (19:35 -0500)]
libceph: fix overflow in osdmap_apply_incremental()
On 32-bit systems, a large `pglen' would overflow `pglen*sizeof(u32)'
and bypass the check ceph_decode_need(p, end, pglen*sizeof(u32), bad).
It would also overflow the subsequent kmalloc() size, leading to
out-of-bounds write.
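Bounding pglen itself before any multiplication keeps both the decode check
and the allocation honest; roughly (a sketch of the kind of guard added, not
the literal patch):
    /* reject counts that would make pglen * sizeof(u32) wrap on 32-bit */
    if (pglen > (UINT_MAX - sizeof(*pg)) / sizeof(u32))
            goto bad;
    ceph_decode_need(p, end, pglen * sizeof(u32), bad);
    pg = kmalloc(sizeof(*pg) + pglen * sizeof(u32), GFP_NOFS);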
Signed-off-by: Xi Wang <xi.wang@gmail.com>
Reviewed-by: Alex Elder <elder@inktank.com>
Xi Wang [Thu, 7 Jun 2012 00:35:55 +0000 (19:35 -0500)]
libceph: fix overflow in osdmap_decode()
On 32-bit systems, a large `n' would overflow `n * sizeof(u32)' and bypass
the check ceph_decode_need(p, end, n * sizeof(u32), bad). It would also
overflow the subsequent kmalloc() size, leading to out-of-bounds write.
Signed-off-by: Xi Wang <xi.wang@gmail.com>
Reviewed-by: Alex Elder <elder@inktank.com>
Xi Wang [Thu, 7 Jun 2012 00:35:55 +0000 (19:35 -0500)]
libceph: fix overflow in __decode_pool_names()
`len' is read from network and thus needs validation. Otherwise a
large `len' would cause out-of-bounds access via the memcpy() call.
In addition, len = 0xffffffff would overflow the kmalloc() size,
leading to out-of-bounds write.
This patch adds a check of `len' via ceph_decode_need(). Also use
kstrndup rather than kmalloc/memcpy.
[elder@inktank.com: added -ENOMEM return for null kstrndup() result]
Signed-off-by: Xi Wang <xi.wang@gmail.com>
Reviewed-by: Alex Elder <elder@inktank.com>
Yan, Zheng [Thu, 7 Jun 2012 00:35:55 +0000 (19:35 -0500)]
rbd: Clear ceph_msg->bio_iter for retransmitted message
The bug can cause a NULL pointer dereference in write_partial_msg_pages().
Signed-off-by: Zheng Yan <zheng.z.yan@intel.com>
Reviewed-by: Alex Elder <elder@inktank.com>
Alex Elder [Fri, 1 Jun 2012 19:56:43 +0000 (14:56 -0500)]
libceph: make ceph_con_revoke_message() a msg op
ceph_con_revoke_message() is passed both a message and a ceph
connection. A ceph_msg allocated for incoming messages on a
connection always has a pointer to that connection, so there's no
need to provide the connection when revoking such a message.
Note that the existing logic does not preclude the message supplied
being a null/bogus message pointer. The only user of this interface
is the OSD client, and the only value an osd client passes is a
request's r_reply field. That is always non-null (except briefly in
an error path in ceph_osdc_alloc_request(), and that drops the
only reference so the request won't ever have a reply to revoke).
So we can safely assume the passed-in message is non-null, but add a
BUG_ON() to make it very obvious we are imposing this restriction.
Rename the function ceph_msg_revoke_incoming() to reflect that it is
really an operation on an incoming message.
Signed-off-by: Alex Elder <elder@inktank.com>
Reviewed-by: Sage Weil <sage@inktank.com>
Alex Elder [Fri, 1 Jun 2012 19:56:43 +0000 (14:56 -0500)]
libceph: make ceph_con_revoke() a msg operation
ceph_con_revoke() is passed both a message and a ceph connection.
Now that any message associated with a connection holds a pointer
to that connection, there's no need to provide the connection when
revoking a message.
This has the added benefit of precluding the possibility of
providing the wrong connection pointer. If the message's connection
pointer is null, it is not being tracked by any connection, so
revoking it is a no-op. This is supported as a convenience for
upper layers, so they can revoke a message that is not actually
"in flight."
Rename the function ceph_msg_revoke() to reflect that it is really
an operation on a message, not a connection.
Signed-off-by: Alex Elder <elder@inktank.com>
Reviewed-by: Sage Weil <sage@inktank.com>