How to mount replicated share

Hi,

I was able to set up 2 Rockstor appliances and replicate from rockstor1 -> 2, but I would like to know how to mount the replicated folder from a client using Samba?

I tried to add a Samba export on [bunch-of-hexa-characters]-my_share-name, but the folder is empty.
I tried to add a Samba export on .snapshots/[bunch-of-hexa-characters]-my_share-name, but it makes Samba crash and I can’t browse it from the client.

My goal is to set up two-way replication between rockstor1 and 2, and share the same folder name via Samba. I will put haproxy or keepalived in front, so that if I lose one Rockstor the service will stay up (minus the replication delta).

I tried to trick the system by making filesystem links, but it doesn’t work.

I don’t understand why Rockstor allows replication but no access to the replicated data. What’s the point of replication then? :?

Thanks !

@tntteam Welcome to the Rockstor community.

This is an artefact of how the replication system works. This share will not be populated until a number of scheduled replication tasks have taken place; pretty sure it’s after the initial full transfer and the 3 subsequent tasks, which are all incremental. At that point the oldest background snapshot associated with these replication tasks is promoted to ‘supplant’ the originally empty share that begins with the appliance name of the sender: “[bunch-of-hexa-characters]-my_share-name”. In fact it’s not until after the 5th replication task event (counting the first) that the final, and hopefully perpetual, system state is achieved.
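If you want to watch that progression on the receiver, something along these lines will list the subvolumes/snapshots as they accumulate (run as root; the /mnt2/ mount point is Rockstor’s usual layout, but treat the exact pool name as a placeholder):

```
# List every subvolume on the receiving pool; the replication snapshots
# appear alongside the [bunch-of-hexa-characters]-my_share-name subvolume
# until the oldest of them is promoted.
btrfs subvolume list /mnt2/your_pool_name
```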

You could try initially setting a smaller interval for the replication task timing so that you can more quickly achieve the populated share state, given transfer times of course.

Please note however that it is strongly advisable not to share/use this specially named replication share as rw, since after each replication task it is overwritten/supplanted by the oldest background receiver snapshot, so any saves will be lost. I.e. treat it as a read-only, self-updating, live (with delay) archive volume.

In other words it acts as a form of backup to another machine/location that is ‘live updated’ at every scheduled replication task, only with a bit of hysteresis, and redundancy, built in: the lagging background snapshots.
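If you do export it over Samba in the meantime, a hand-written smb.conf fragment roughly equivalent to a read-only export would look something like the following (the section name is just a placeholder, the bracketed part stands for the actual generated share name, and Rockstor normally generates this config for you from the Web-UI):

```
# Read-only Samba export of the replication target (sketch only).
[my_share-replica]
    path = /mnt2/[bunch-of-hexa-characters]-my_share-name
    read only = yes
    browseable = yes
    guest ok = no
```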

It’s also worth noting that replication did suffer from a regression bug for a while, but as of the stable update channel release version 3.9.12-13 it should now be working again:

Related issue:
https://github.com/rockstor/rockstor-core/issues/1853

and its subsequent pull request fix:
https://github.com/rockstor/rockstor-core/pull/1885

I have a technical write-up of how the replication system works in my ‘to do’ queue and intend to post it in the wiki area of the forum when completed. We also have some improvements/updates to be done on the regular user documentation for replication, as what’s there currently is fairly old; there is an open issue to cover this work in our docs repo.

Hope that helps.


Hi,

Thanks for your answer. I guess I did not wait long enough to see that. I’ll try to check that.

I know the major issues with having r/w on both sides; that’s why I was planning to work with an active/passive node scenario.

I think you have a great system here. I know how hard it is (and time consuming) to build such a project, but I also think you could really shine here: there is no NAS appliance that offers redundancy.

That’s just a suggestion, but you could bundle keepalived and an active/passive role into Rockstor, with automatic role switchover based on which node currently owns the keepalived VIP. That implies a mechanism which replicates share configuration to the passive node when you configure the active node, and also one-way replication with failover that replicates in the other direction when there is a switchover on the VIP.
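For the VIP part, a minimal keepalived sketch on the active node could look like this (interface name, router id and address are only placeholders; the passive node would use state BACKUP and a lower priority):

```
# keepalived.conf sketch for the active node (all values are examples).
vrrp_instance NAS_VIP {
    state MASTER
    interface eth0
    virtual_router_id 51
    priority 150
    advert_int 1
    virtual_ipaddress {
        192.168.1.100/24
    }
}
```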

Alternatively, you could offer a way to have the same Samba share name on master and replica, so we (the users) can put haproxy in front to handle the situation.
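Something like this haproxy.cfg sketch is what I have in mind: plain TCP pass-through of SMB to whichever appliance is alive, preferring rockstor1 (the addresses are examples only):

```
# haproxy.cfg sketch: forward SMB (port 445) to the active appliance,
# falling back to the replica when the primary is down.
frontend smb_in
    bind *:445
    mode tcp
    default_backend rockstor_smb

backend rockstor_smb
    mode tcp
    server rockstor1 192.168.1.11:445 check
    server rockstor2 192.168.1.12:445 check backup
```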

I did not mention it, but I’m a teacher for IT students and I’m showing them various ways to achieve HA at different levels (hardware, VMs, services, …), and I’d like to promote Rockstor because I think it’s a nice piece of software :slight_smile:

Good luck! And sorry for my poor English :smiley:


Hi there,

I can confirm this worked after a few syncs, so you were absolutely right. But that behavior is obscure, and I think when you are “testing” the product you can, like me, just conclude “it doesn’t work”. I can’t suggest the best way to make users understand this behavior … :frowning:

Also, I now have a replication problem: without touching anything, replication worked, then stopped working for no reason, with “failed to promote the oldest snapshot to share” or something like that. Today, it’s a 500 internal server error.

I can’t really tell what I could have done wrong, as it was working for a few syncs and then stopped on its own.

I’ll try to handle things with rsync with the students for the moment.
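Probably something simple like this, run from rockstor1 on a schedule (the /mnt2/ paths are only my assumption of where the shares are mounted, so adjust as needed):

```
# One-way sync of the share to the second appliance over SSH
# (archive mode, preserving hard links, ACLs and xattrs).
rsync -aHAX --delete /mnt2/my_share/ root@rockstor2:/mnt2/my_share/
```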

@tntteam Thanks for the encouragement, much appreciated.

As per the more exciting features such as you mention I’m personally quite looking forward to getting around to the more ‘exotic’ stuff and have in mind implementing a nice friendly and simple integration for CTDB so we can gain pCIFS and some HA stuff such as IP takeover. We already have the basics of the inter-Rockstor appliance communication in place that the replication feature uses, which is based on zeromq, and I’m imagining leveraging that to allow non technical users to be able to bootstrap their way up into hosting a simple GlusterFS arrangement, potentially even within a single node but across multiple pools via brick/pool say (obviously lacking some redundancy elements here of course). The Glusterfs + CTDB combo looks to be a well tested and very importantly well documented option. However as previously mentioned:

My personal preference for HA is more in the direction of a clustered fs than a master/slave type arrangement, as it’s more expandable and seems to address the issue at a lower level rather than trying to bend older technologies to fit newer paradigms. I.e. I want to get to the stage where one could deploy/destroy Rockstor instances in a cluster trivially.

Looks like we have a message cross-over here, so I’ll address your more recent post as well:

Agreed, hence the need for the documentation improvements referenced earlier. If you fancy, you could take a look at our Community Contributions and, more specifically, the Contributing to Rockstor documentation doc sections.

Make sure you are running the latest stable release (3.9.12-13 or later, as mentioned above).

Prior to that, for a while, all releases showed the failure you seem to be having, which was down to some major API refactoring for shares and pools that we unfortunately had to do, but which will in the end benefit the project going forward.

Also note that there are still some fragile elements to the replication code as it stands. Looking at the first reported failure in the replication history should help; make sure to look at both the sender’s and the receiver’s reports, as sometimes one is more informative than the other. I’ll get around to some more maintenance/improvements in that area soon, but I’m currently occupied elsewhere in the project, for better or worse :slight_smile:

Do feel free to open a dedicated thread to track down / help debug any issues you find, as Rockstor is fully open source and greatly benefits from contributions of all sorts; most notably constructive forum participation, plus we seem to have quite a lot of knowledge among our members.

Hope that helps.