Write lastlog failed; No space left on device

Please help, I can’t find any reference to this issue anywhere.
When I boot my machine it seems to start fine, except for the following lines after login:

Web-ui is accessible with the following links:
https://127.0.0.1
https://192.168.86.192
rockstorage login: ****
Password:
Last Login: Sat Jan 20 11:42:49 on tty1
Login: Write lastlog failed; No space left on device

[***@rockstorage ~]$ [ 58.680074] systemd-readahead[563]: Failed to write pack file.

Device not accessible from the web-page. I have no idea where to start.

This is installed on a USB drive and has been running and working well for about 2 months.
I copied files to a share and then tried to access the share and it was offline.

Any suggestions or directions would be appreciated.

Chas

@kysdaddy Hello again.

This very much looks like your system drive (the one the Rockstor system is installed on) has run out of space.
Could you paste the output of the following commands:

btrfs fi show

and if that one shows a rockstor_rockstor labelled volume (pool), then the following should help to see what's going on (if not, just substitute your rockstor_* name):

btrfs fi usage /mnt2/rockstor_rockstor

Many parts of an operating system need free space to work and will often fail, as the systemd-readahead element has done here, when that space is not available. It is likely that other elements associated with the Web-UI and share management have also failed for the same reason.

It may just be that this USB drive is too small, a situation that will depend on whether you have also created and used shares on the system pool. It's better not to use this pool for shares really, as keeping all data on separate devices helps to keep things simple.

Hope that helps.

Here it is, thank you for the help.
Chas

[root@rockstorage ~]# ^C
[root@rockstorage ~]# btrfs fi show
Label: 'rockstor_rockstor' uuid: 205e2461-67f8-45f7-9ebd-333da7fb433d
Total devices 1 FS bytes used 10.45GiB
devid 1 size 12.40GiB used 12.40GiB path /dev/sde3

Label: 'Rock_Storage' uuid: 2d1070f6-b156-4aae-8741-af00001e3763
Total devices 3 FS bytes used 196.67GiB
devid 1 size 931.51GiB used 100.00GiB path /dev/sda
devid 2 size 931.51GiB used 99.00GiB path /dev/sdb
devid 3 size 465.76GiB used 16.00MiB path /dev/sdc

[root@rockstorage ~]#

[root@rockstorage ~]# btrfs fi usage /mnt2/rockstor_rockstor
Overall:
Device size: 12.40GiB
Device allocated: 12.40GiB
Device unallocated: 0.00B
Device missing: 0.00B
Used: 10.53GiB
Free (estimated): 0.00B (min: 0.00B)
Data ratio: 1.00
Metadata ratio: 1.99
Global reserve: 20.62MiB (used: 0.00B)

Data,single: Size:10.37GiB, Used:10.37GiB
/dev/sde3 10.37GiB

Metadata,single: Size:8.00MiB, Used:0.00B
/dev/sde3 8.00MiB

Metadata,DUP: Size:1.00GiB, Used:81.73MiB
/dev/sde3 2.00GiB

System,single: Size:4.00MiB, Used:0.00B
/dev/sde3 4.00MiB

System,DUP: Size:8.00MiB, Used:16.00KiB
/dev/sde3 16.00MiB

Unallocated:
/dev/sde3 0.00B
[root@rockstorage ~]#

@kysdaddy

This indicates that you are essentially out of data space (metadata has some space left, however) on this pool, so that would appear to be your problem. Copy-on-write filesystems can be a bit tricky once completely full; you may be able to do some progressively more aggressive filtered balances and free some space up. You could also try deleting some logs in /var/log, i.e. the older ones starting with "messages-", to free up space. There are also logs in /opt/rockstor/var/log/, i.e. the ones named rockstor.log.#.
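If it helps to see where that space has gone first, a quick tally of the biggest log directories from the command line may be useful; a minimal sketch using standard GNU tools (adjust the paths if yours differ):

du -xh --max-depth=1 /var/log | sort -h | tail -n 10
du -xh --max-depth=1 /opt/rockstor/var/log | sort -h | tail -n 10

The -x flag keeps du on the one filesystem, and sort -h orders the human-readable sizes so the largest entries end up at the bottom.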

And to see if you have, maybe inadvertently, used this small system disk to create a share, you could execute the following, which should list all the subvols on the system pool (btrfs volume):

btrfs subvol list /mnt2/rockstor_rockstor

This should show only 'root', 'home', and maybe 'root/var/lib/machines'.

Also note that your pool has a mixture of single and DUP metadata chunks; from https://btrfs.wiki.kernel.org/index.php/FAQ, subsection "Why do I have "single" chunks in my RAID filesystem?":

"The single chunks are perfectly normal, and are a result of the way that mkfs works. They are small, harmless, and will remain unused as the FS grows, so you won’t risk any unreplicated data. You can get rid of them with a balance:

btrfs balance start -dusage=0 -musage=0 /mountpoint

"
This might be the first place to start, as those single chunks may then be made available as initial working space for further balances.

Thereafter you could try a balance of the metadata only, in the hope that some near-empty metadata chunks can be freed up for data use:

btrfs fi balance start -musage=5 /mnt2/rockstor_rockstor

i.e. balance any metadata chunks that are at most 5% full. If none are freed, you could increase this number by 10 and try again. The aim is to free up a complete chunk so that it can be re-assigned as a data chunk, where currently there is no 'room to breathe'.
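If you'd rather not retype the command at each step, the escalation can be scripted; a minimal sketch assuming bash and the same mount point (watch the "had to relocate" output of each pass, and note a pass may still fail with "No space left on device" if no working space exists yet):

for u in 5 15 25 35 45; do
    echo "Trying metadata balance with usage filter ${u}%"
    btrfs fi balance start -musage=$u /mnt2/rockstor_rockstor
done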

See how you get on first, as additional 'tricks', like temporarily adding another device to the pool, are somewhat fiddly and may not be necessary in this case.

Essentially, to save anything new there has to be space available in an existing chunk of the right type (data/metadata), or unallocated disk space remaining from which a chunk of the required type can be made. In your case there is no unallocated disk space remaining (see the quoted output from your last post), hence the suggested balance advice: first to free up the space initially taken by the single chunks, and then, by balancing (repacking) the metadata chunks progressively, to hopefully free some more space for the now-needed data chunks, as the existing data chunks have no remaining space.
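If you want to watch the chunk picture as you go, the standard per-type allocation summary from btrfs-progs shows Size (allocated) against Used for each of Data, Metadata, and System:

btrfs fi df /mnt2/rockstor_rockstor

A large gap between Size and Used on a chunk type is exactly what a filtered balance can repack and return to the unallocated pool.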

Hope that helps.

I am pasting the responses; either I am not understanding your directions (really good probability) or this doesn't help.
I am attaching JPGs of the folder trees; perhaps you can ID something that isn't right!
thanks again
Chas

[screenshot: root folder tree]

Last login: Sat Jan 6 00:30:44 2018 from testwifi.here
[root@rockstorage ~]# btrfs subvol list /mnt2/rockstor_rockstor
ID 257 gen 46709 top level 5 path home
ID 258 gen 441972 top level 5 path root
ID 260 gen 34 top level 258 path var/lib/machines
[root@rockstorage ~]# btrfs balance start -dusage=0 -musage=0 /mountpoint
ERROR: cannot access '/mountpoint': No such file or directory
[root@rockstorage ~]# btrfs fi balance start -musage=5 /mnt2/rockstor_rockstor
ERROR: error during balancing '/mnt2/rockstor_rockstor': No space left on device
There may be more info in syslog - try dmesg | tail
[root@rockstorage ~]#


@kysdaddy

OK, this is good as it tells us that there are only the default subvolumes on your system disk, i.e. no additional shares.

Sorry, my fault; I should have clarified that quote. You need to change the /mountpoint indicated to reflect the actual mount point of your system volume (pool), which is /mnt2/rockstor_rockstor in this case, so try instead:

btrfs balance start -dusage=0 -musage=0 /mnt2/rockstor_rockstor

The hope here is that this 'corrected' command will at least give us a little wiggle room for the next balance command to give us some more.

Your "No space left on device" error is just the same problem: there is no working space, so that balance command can't work. The hope is that the previous balance command (the one that never executed because of the "/mountpoint" mistake) will generate enough working space for this one to succeed, once the corrected version above has hopefully done something.
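To avoid whatever space the first balance frees being consumed before the second one runs, the two can simply be chained, e.g.:

btrfs balance start -dusage=0 -musage=0 /mnt2/rockstor_rockstor && btrfs fi balance start -musage=5 /mnt2/rockstor_rockstor

The && only runs the second balance if the first one succeeded.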

This is a bit of a long shot, but if it works you may be up and running again. Also remember the advice on removing what files you can from the cited log directories, although you may not be able to do this until after some wiggle room has been created.

Before all of the above, though, it would be nice to see what is taking the space here; I suspect it's down to runaway logs. It would help if you could show a directory screen grab of the log directories I indicated before (or the command-line listing sketched just below), i.e.:

/var/log
/opt/rockstor/var/log
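If a screen grab is awkward, an equivalent listing from the command line (largest files first) would be:

ls -lhS /var/log | head -n 15
ls -lhS /opt/rockstor/var/log | head -n 15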

For reference, I run a 16GB system disk here and it has not filled up as yours has. All the same, I'm tending towards recommending a larger system disk, as our minimum system requirement is still 8GB.

Hope that gets us a little further along.

Here are the images that you suggested; I did both graphical and cmd, hoping it would help.

[screenshot: /opt/rockstor/var/log directory listing]

This is the result of the corrected command:

Last login: Sat Feb 24 08:27:02 2018 from chas2015.lan
[root@rockstorage ~]# btrfs balance start -dusage=0 -musage=0 /mnt2/rockstor_rockstor
Done, had to relocate 0 out of 15 chunks
[root@rockstorage ~]#

@kysdaddy OK great.
Your drive-filling problem does appear to be the logs, and in this case it's the ones beginning with "messages" in /var/log.
They are rotated by default, but each is around 2GB, which is crazy big. Best have a look at what is filling them up, i.e.:

tail -f /var/log/messages

Ctrl + c to exit that command, but while running it should show what is filling them up. Given that your system disk is full, though, you may see nothing. You can see history via:

less /var/log/messages

Press g to jump to the beginning and G (capitalised) to jump to the end; Space and b page down and up; q quits.
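If the file is too big to eyeball, a quick count of which program is writing the most lines can point at the culprit; a sketch assuming the standard syslog line format, where the fifth field is the program tag:

awk '{print $5}' /var/log/messages | sort | uniq -c | sort -rn | head

The output is a ranked tally of tags, e.g. "systemd:" or "nmbd[2695]:", most frequent first.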

Could we have another output from:

btrfs fi usage /mnt2/rockstor_rockstor

It may be that any space created has already been filled.

As previously indicated, a jam-packed-full copy-on-write fs is a little stuffed at this stage. But let's try a little longer with the re-balance.

It may already be too late: the intention was, as previously indicated, for you to run the second balance directly after the first (I should have stressed this), followed by the log deletes. By now, whatever system problem is filling the logs may have already taken whatever space was freed by removing those single unused system chunks for our 'wiggle room'. But you could just try the second balance anyway:

btrfs fi balance start -musage=5 /mnt2/rockstor_rockstor

It's going to be far easier at this stage to just reinstall, I think; it's only the system disk. But it would be nice to see what was filling the logs. Also, to free up space, if the filesystem will allow it, you need to delete those logs, as they are massive: almost 2GB each, so the old rotated copies of messages come to around 7.5GB, roughly twice the size of the entire install:

rm /var/log/messages-20180*

Answer y to each. But again, to delete you need a little space first; tricky.
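If rm itself fails for want of space (deletes on btrfs need a little metadata room of their own), truncating the files in place sometimes succeeds where unlinking does not; this is a general workaround rather than anything Rockstor specific:

truncate -s 0 /var/log/messages-20180*

That zeroes each file's length, releasing its data extents, after which the now-empty files can usually be removed normally.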

The more robust way to address this is to temporarily add a device to this pool, then balance, then remove the problem files via the 'wiggle room' afforded by the temporary device. That is a lot of messing about, and it is simpler to just reinstall really, especially if you are unfamiliar with the command line. But if you take that route, check your logs upon re-install, as otherwise the same will happen again: something is spamming them.

Let's see what's filling the logs first, then try deleting them; the balance commands may help with this, but I suspect not now.

See how you go and let us know.

I followed the steps and pasted the replies.

I'll reboot and see what happens.

Last failed login: Sat Feb 24 12:58:51 EST 2018 from 27.200.83.139 on ssh:notty
There were 74 failed login attempts since the last successful login.
Last login: Sat Feb 24 08:27:54 2018 from 192.168.86.25
[root@rockstorage ~]# tail -f /var/log/messages
Feb 24 11:01:01 rockstorage systemd: Starting Session 51 of user root.
Feb 24 12:01:01 rockstorage systemd: Started Session 52 of user root.
Feb 24 12:01:01 rockstorage systemd: Starting Session 52 of user root.
Feb 24 13:01:01 rockstorage systemd: Started Session 53 of user root.
Feb 24 13:01:01 rockstorage systemd: Starting Session 53 of user root.
Feb 24 14:01:01 rockstorage systemd: Started Session 54 of user root.
Feb 24 14:01:01 rockstorage systemd: Starting Session 54 of user root.
Feb 24 14:12:33 rockstorage systemd-logind: New session 55 of user root.
Feb 24 14:12:33 rockstorage systemd: Started Session 55 of user root.
Feb 24 14:12:33 rockstorage systemd: Starting Session 55 of user root.

[root@rockstorage ~]# btrfs fi balance start -musage=5 /mnt2/rockstor_rockstor
Done, had to relocate 1 out of 15 chunks
[root@rockstorage ~]#

[root@rockstorage ~]# rm /var/log/messages-20180*
rm: remove regular file '/var/log/messages-20180128'? y
rm: remove regular file '/var/log/messages-20180204'? y
rm: remove regular file '/var/log/messages-20180212'? y
rm: remove regular file '/var/log/messages-20180218'? y
[root@rockstorage ~]#

OK, the system launched and I have the website up. What do I look at changing?

Thanks
Chas

I think I was storing my snapshots in the root somehow. I deleted all of my snapshots and deleted all scheduled snapshots.
I'm still open to any other suggestions.
Oh, and BTW, thank you again for getting this working again.

Chas

Last login: Sat Feb 24 14:12:33 2018 from chas2015.lan
[root@rockstorage ~]# btrfs fi show
Label: 'rockstor_rockstor' uuid: 205e2461-67f8-45f7-9ebd-333da7fb433d
Total devices 1 FS bytes used 3.23GiB
devid 1 size 12.40GiB used 12.38GiB path /dev/sdb3

Label: 'Rock_Storage' uuid: 2d1070f6-b156-4aae-8741-af00001e3763
Total devices 3 FS bytes used 131.08GiB
devid 1 size 931.51GiB used 102.00GiB path /dev/sdf
devid 2 size 931.51GiB used 105.00GiB path /dev/sdg
devid 3 size 465.76GiB used 16.00MiB path /dev/sdh

[root@rockstorage ~]#

@kysdaddy Great. So before the logs fill all the space again, it would be good to see what's filling them. Look to the contents of:

/var/log/messages

either via a tail, as indicated earlier, i.e.:

tail -f /var/log/messages

You should see stuff scrolling almost constantly, as those logs were massive, so something is hitting them constantly; if it's not constant, you may have to wait a while to see what's filling them.

or via a less command which doesn’t update but can go back in time:

less /var/log/messages

Apparently our fancy built-in Log Manager seems to have skipped /var/log/messages. @Flyer, have I missed something (probably) re viewing /var/log/messages from within the Web-UI?

Paste what you can here if you don’t see what the problem is.

That doesn't look good: is your Rockstor accessible from the Internet, and consequently exposed to bots attempting logins?
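A quick way to gauge the scale of those attempts (lastb reads the bad-login records in /var/log/btmp, so run it as root):

lastb | head -n 20

If the attempts are coming from the Internet, it would be worth removing any router port-forward for SSH, or at least restricting it.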

Also pop in another:

btrfs fi usage /mnt2/rockstor_rockstor
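And since you mentioned snapshots on the system disk, a quick way to confirm none remain on the system pool (the -s flag lists only snapshot subvolumes):

btrfs subvol list -s /mnt2/rockstor_rockstor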

Cheers.

Well, I've screwed something up now; the root password is failing.

Last login: Sat Feb 24 15:45:37 2018 from chas2015.lan
[root@rockstorage ~]#
[root@rockstorage ~]# tail -f /var/log/messages
Feb 24 15:46:57 rockstorage systemd: Started Hostname Service.
Feb 24 15:48:01 rockstorage nmbd[2695]: [2018/02/24 15:48:01.668365, 0] …/source3/nmbd/nmbd_become_lmb.c:397(become_local_master_stage2)
Feb 24 15:48:01 rockstorage nmbd[2695]: *****
Feb 24 15:48:01 rockstorage nmbd[2695]:
Feb 24 15:48:01 rockstorage nmbd[2695]: Samba name server ROCKSTORAGE is now a local master browser for workgroup SAMBA on subnet 172.17.0.1
Feb 24 15:48:01 rockstorage nmbd[2695]:
Feb 24 15:48:01 rockstorage nmbd[2695]: *****
Feb 24 15:51:15 rockstorage systemd-logind: New session 14 of user root.
Feb 24 15:51:15 rockstorage systemd: Started Session 14 of user root.
Feb 24 15:51:15 rockstorage systemd: Starting Session 14 of user root.

Sorry, the permissions are only denied on /var/log/messages;
they seem to work everywhere else.

[root@rockstorage ~]# btrfs fi usage /mnt2/rockstor_rockstor
Overall:
Device size: 12.40GiB
Device allocated: 12.38GiB
Device unallocated: 17.00MiB
Device missing: 0.00B
Used: 3.30GiB
Free (estimated): 7.23GiB (min: 7.22GiB)
Data ratio: 1.00
Metadata ratio: 2.00
Global reserve: 16.00MiB (used: 0.00B)

Data,single: Size:10.38GiB, Used:3.17GiB
/dev/sdh3 10.38GiB

Metadata,DUP: Size:1.00GiB, Used:71.38MiB
/dev/sdh3 2.00GiB

System,DUP: Size:1.50MiB, Used:16.00KiB
/dev/sdh3 3.00MiB

Unallocated:
/dev/sdh3 17.00MiB
[root@rockstorage ~]#

@kysdaddy So it looks like the log delete worked to reclaim the 7+ GB:

and the initial balance removed the redundant single metadata chunks (metadata defaults to DUP on a single-device pool).

It might be as well to balance this pool (volume) now; that can be done via the Web-UI, now that it's back up and running. It may well take a few minutes to complete, however.
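If you prefer the command line, a filtered data balance along the same lines as before should do it; with only around 3GiB used of the 10GiB+ allocated to data chunks, something like the following ought to repack them and return the rest to unallocated space:

btrfs balance start -dusage=50 /mnt2/rockstor_rockstor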

Keep an eye on that log file as we still need to find what’s filling it up.
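Longer term, if the culprit can't be silenced, logrotate can at least cap the damage. On CentOS-based systems such as Rockstor's, /var/log/messages is normally covered by the stanza in /etc/logrotate.d/syslog; you could add a size trigger to it, along these lines (a hedged sketch with a hypothetical threshold, not your exact file):

/var/log/messages {
    size 200M      # rotate as soon as the file exceeds 200MB (hypothetical cap)
    rotate 4       # keep four rotated copies
    compress       # gzip old copies to save space
    missingok
    # keep whatever sharedscripts/postrotate section your existing stanza already has
}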

The root user has access to everything, so I'm not sure what's going on there!

But in the output above you are receiving access just fine.

Pleased you're up and running again. That was a bit of a close one. Let us know if you manage to track down the log filler.

:open_mouth: @phillxnet you’re right, I missed it!
M.