Soooo, is it dead? LOL, whatever

You said lots more stuff now; I am still on your statement of,

I like this idea. I have done nothing, as you also stated, so today I am trying the update function.
The ROCKSTOR 4.5.8-0 label at top right has an arrow next to it, so the system recognizes an update is available on its track.

About an hour ago I clicked that arrow and was sent to the update page with the testing track activated. I scrolled to the bottom, clicked start update, then walked away. When I returned, the GUI was back and responding, still showing ROCKSTOR 4.5.8-0. So I rebooted, and when it reloaded I still have ROCKSTOR 4.5.8-0 with the arrow showing. So I'm trying a second time now while I read a bit of the plethora of information you have provided. I'll post once more tonight with the final attempt's details, or a message of success in getting it to update.

OK, it updated and is now giving an error on the dashboard.

Unknown internal error doing a GET to /api/pools?page=1&format=json&page_size=32000&count=

@ScottPSilver OK, nice. Glad you're now on the latest testing.

Our installers were built some time ago now, so there are many OS updates that also get installed when doing that method of update. That means it can take ages. But future updates should be far faster now that your base OS has been caught up on all the available updates.

From your Web-UI pic you are currently running a 6.5.1 kernel (top right) on Leap 15.4. That's good, as it's required to enable read-write use of your raid6 pool on the Leap 15.4 base OS. The default kernel on 15.4 is around 5.14.
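To double-check that from a shell rather than the Web-UI top right (a standard check, nothing Rockstor-specific):

uname -r
# expect a 6.x backport version here, not the stock 5.14.x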

Did you also install the filesystem backports as indicated in our:

https://rockstor.com/docs/howtos/stable_kernel_backport.html

Regarding the red error message in your last post: 4.5.8-0 was RC5 in our last testing channel phase, heading towards our current stable of 4.6.1-0. But 5.0.8-0 is the latest in the current testing channel. This means that almost every back-end dependency is now years newer, and we had to make a lot of changes. Try your web browser's developer tools: More tools - Developer tools - Network (tab), and ensure "Disable cache" is ticked. Then, with that panel still open, do a normal browser refresh. This will force the newer browser side of things into place, just in case what we see here is some old stuff lingering in the browser cache. You can also try logging out and back in.

Let us know if that top-of-page error then disappears. But otherwise you do now look to be on 5.0.8-0, which is nice. The error may be related to an inability to interpret the pool, but let's take this a step at a time.

Also familiarise yourself with our Web-UI log reader: System - Logs Manager. We don't yet have a populated doc entry for this unfortunately, but pull requests are always welcome; and we have the following rockstor-doc repo issue for this:

It may help with seeing relevant log entries to assist with the back and forth here on the forum.

I have extremely limited time for the forum as I'm working towards our next stable release: with consequent installer / installer repo / website / Doc / back-end / server / Appman updates etc, so I may not be of much service here. But I just wanted to nudge this along a little.

Glad you are at least now running what is our 3rd Stable Release Candidate (RC3). I hope to be releasing RC4 quite soon.

Can you also test the Pool overview and detail page for the problem pool? Pics if possible. One of our aims here is to address any malfunction on our side: your basic issue is a poorly pool, and our Web-UI behaviour in such circumstances is more our remit here.

Hope that helps others here help @ScottPSilver further, as I'm afraid I will find it difficult to 'own' this community support request currently.


OK, I ran the first line from the howto,

zypper --non-interactive addrepo --refresh https://download.opensuse.org/repositories/Kernel:/stable:/Backport/standard/ Kernel_stable_Backport

and got,
Adding repository 'Kernel_stable_Backport' ...[error]
Repository named 'Kernel_stable_Backport' already exists. Please use another alias.

SOOOO, from that I gather I have it already, so on to step two.

I run,
zypper --non-interactive addrepo --refresh https://download.opensuse.org/repositories/filesystems/15.X/ filesystems
And get,
Adding repository 'filesystems' ...[error]
Repository named 'filesystems' already exists. Please use another alias.

SOOOO, must have that too!
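A quick way to double-check that both repos really are in place (a standard zypper listing, not a step from the howto):

# list configured repositories and pick out the two in question
zypper lr | grep -Ei 'backport|filesystems'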

Last step, three.

I run,
zypper --non-interactive --gpg-auto-import-keys refresh
and get,
Repository 'Kernel_stable_Backport' is up to date.
Repository 'Leap_15_4' is up to date.
Repository 'Leap_15_4_Updates' is up to date.
Repository 'Rockstor-Testing' is up to date.
Repository 'filesystems' is up to date.
Repository 'home_rockstor' is up to date.
Repository 'home_rockstor_branches_Base_System' is up to date.
Repository 'Update repository of openSUSE Backports' is up to date.
Repository 'Update repository with updates from SUSE Linux Enterprise 15' is up to date.
All repositories have been refreshed.

So, good to go on to the final step: installing the updates.
I run,
zypper up --no-recommends --allow-vendor-change
and get,
Loading repository data...
Warning: Repository 'Update repository of openSUSE Backports' metadata expired since 2024-02-05 04:25:26 AKST.

Warning: Repository metadata expired: Check if 'autorefresh' is turned on (zypper lr), otherwise
manually refresh the repository (zypper ref). If this does not solve the issue, it could be that
you are using a broken mirror or the server has actually discontinued to support the repository.

Reading installed packages...

The following 197 packages are going to be upgraded:
aaa_base apparmor-parser avahi bind-utils binutils btrfsmaintenance btrfsprogs btrfsprogs-udev-rules containerd cpp7 crypto-policies cups-config curl device-mapper
docker dracut dracut-kiwi-lib dracut-kiwi-oem-dump dracut-kiwi-oem-repart dracut-mkinitrd-deprecated e2fsprogs gcc7 gcc7-c++ glibc glibc-devel glibc-locale
glibc-locale-base gpg2 gptfdisk grub2 grub2-branding-upstream grub2-i386-pc grub2-i386-pc-extras grub2-snapper-plugin grub2-x86_64-efi grub2-x86_64-efi-extras
kernel-firmware-bnx2 kernel-firmware-chelsio kernel-firmware-intel kernel-firmware-marvell kernel-firmware-network kernel-firmware-platform kernel-firmware-qlogic
libapparmor1 libasan4 libatomic1 libavahi-client3 libavahi-common3 libavahi-core7 libbtrfs0 libcilkrts5 libcom_err2 libcom_err-devel libcrypt1 libctf0 libctf-nobfd0
libcups2 libcurl4 libdevmapper1_03 libdevmapper-event1_03 libdns_sd libeconf0 libecpg6 libext2fs2 libfreebl3 libfuse2 libgcc_s1 libgnutls30 libgomp1 libhidapi-hidraw0
libicu65_1-ledata libicu-suse65_1 libitm1 libjansson4 libjbig2 liblsan0 liblvm2cmd2_03 libncurses6 libnghttp2-14 libntfs-3g89 libopenssl1_1 libopenssl-1_1-devel
libpolkit0 libpq5 libpython2_7-1_0 libpython3_6m1_0 libsnmp40 libsoftokn3 libsqlite3-0 libsss_certmap0 libsss_idmap0 libsss_nss_idmap0 libstdc++6 libstdc++6-devel-gcc7
libsystemd0 libteamdctl0 libtiff5 libtirpc3 libtirpc-netconfig libubsan0 libudev1 libwebp7 libX11-6 libX11-data libxcrypt-devel libxml2-2 libxml2-devel libxml2-tools
libXpm4 libz1 libzck1 libzypp login_defs lvm2 mdadm mozilla-nss mozilla-nss-certs ncurses-devel ncurses-utils net-snmp ntfs-3g openslp openssh openssh-clients
openssh-common openssh-server openssl-1_1 perl-Bootloader perl-SNMP pesign polkit postfix postgresql postgresql13 postgresql13-contrib postgresql13-devel
postgresql13-server postgresql13-server-devel postgresql-contrib postgresql-devel postgresql-llvmjit postgresql-server postgresql-server-devel ppp procps python python3
python3-base python3-bind python3-curses python3-ply python3-rpm python3-sssd-config python-base python-devel python-xml rpm runc samba samba-client samba-client-libs
samba-libs samba-winbind samba-winbind-libs shadow snmp-mibs sqlite3-tcl sssd sssd-ad sssd-common sssd-dbus sssd-krb5-common sssd-ldap sssd-tools suse-module-tools
systemd systemd-rpm-macros systemd-sysvinit system-group-hardware system-group-kvm system-group-wheel system-user-daemon system-user-lp system-user-mail
system-user-nobody system-user-upsd sysuser-shadow tack tar terminfo-base udev vim vim-data-common xen-libs xfsprogs zlib-devel zypper

The following 3 NEW packages are going to be installed:
kernel-default-6.8.6-lp155.3.1.g114e4b9 libprocps8 pesign-systemd

The following package requires a system reboot:
kernel-default-6.8.6-lp155.3.1.g114e4b9

197 packages to upgrade, 3 new.
Overall download size: 424.7 MiB. Already cached: 0 B. After the operation, additional 253.8 MiB will be used.

Note: System reboot required.

Continue? [y/n/v/...? shows all options] (y):
So I hit Y and Enter, and it went on retrieving and applying deltas for quite a while. Once completed, it checked for file conflicts, then started to install the 200 packages.
On package 60 I got,
Installation of glibc-locale-2.31-150300.63.1.x86_64 failed:
Error: Subprocess failed. Error: RPM failed: Command exited with status 2.
Then it offered me choices, but then I got the UPS @localhost broadcast (due to changing my UPS and not changing it in the OS, because I forgot how, LOL), which wiped the options off my screen. The only option I saw and remembered was i for ignore, so I did that,
and it went on installing package 61.
I'm guessing that one was important, as it showed a size of 100MB, so about 1/3 of the whole update.
Should I run the last command again to get the errored file installed?
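For what it's worth, a re-run should be safe, as zypper skips packages that are already installed. A minimal sketch, reusing the failed package name from above:

# re-run the full upgrade; completed packages are skipped
zypper up --no-recommends --allow-vendor-change

# or force a reinstall of just the failed package
zypper in -f glibc-locale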

Second fail.
installing package libstdc++6-devel-gcc7-7.5.0+r278197-150000.4.35.1.x86_64 needs 23MB on the / filesystem

( 98/200) Installing: libstdc++6-devel-gcc7-7.5.0+r278197-150000.4.35.1.x86_64 ...[error]

Installation of libstdc++6-devel-gcc7-7.5.0+r278197-150000.4.35.1.x86_64 failed:

Error: Subprocess failed. Error: RPM failed: Command exited with status 2.
Retry just yields the same error.
Ignored and continued.

@ScottPSilver Re:

You may have an insufficiently large system Pool/drive here. The following command

btrfs fi usage /

should help to identify this and guide folks on the forum in further assisting. There are journalctl and snapper clean-up commands that can help in a tight spot such as you may be in here. Take a look at the following:

https://en.opensuse.org/SDB:Disk_space
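For example, a minimal sketch of those clean-up commands (the journal size cap is just a placeholder):

journalctl --vacuum-size=100M   # trim the systemd journal to ~100 MB
zypper clean --all              # drop cached packages and repo metadata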

Hope that helps. And as always tread carefully in these corners. A partial update due to disk space failure is not something you want to be rebooting into: as it can often be a no-win situation. Plus btrfs can get grumpy when it's entirely out of space.


Yeah, not too sure. It continues installing the next update. I am on 118 of 200 now.
32GB drive. Perhaps it got filled with snapshots or something; I never turned anything called snapshots off. I had seen the term before, but have no clue what they do. For now I'm just keeping the process going. Looks like out of 200 I have had an issue with 5, and ignored them to continue installing the next update.

Interesting, it thinks it is only 5GB??? wtf? LOL

Device size:                   5.14GiB
Device allocated:              5.14GiB
Device unallocated:            1.05MiB
Device missing:                  0.00B
Device slack:                  1.50KiB
Used:                          4.97GiB
Free (estimated):              2.96MiB      (min: 2.96MiB)
Free (statfs, df):             2.96MiB
Data ratio:                       1.00
Metadata ratio:                   1.00
Global reserve:               13.72MiB      (used: 0.00B)
Multiple profiles:                  no

Data,single: Size:4.85GiB, Used:4.85GiB (99.94%)
/dev/sdi4 4.85GiB

Metadata,single: Size:264.00MiB, Used:128.48MiB (48.67%)
/dev/sdi4 264.00MiB

System,single: Size:32.00MiB, Used:16.00KiB (0.05%)
/dev/sdi4 32.00MiB

Unallocated:
/dev/sdi4 1.05MiB
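Since that output covers only the /dev/sdi4 partition, comparing it against the whole device should show where the rest of the 32 GB went (a standard check, not part of the quoted output):

lsblk -o NAME,SIZE,TYPE,MOUNTPOINT /dev/sdi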

Thinking about it, I do recall a prompt to create a pool for something. I thought it was for add-ons, the rock-ons things I guess. I remember it suggesting no less than 5GB... I thought it was talking about creating it on the storage drives, not the 32GB flash drive. Looks like it can't be resized, so it's more and more starting to look like I've painted myself into a corner on this.
Any ideas, feel free to chime in, but if nothing pops up soon I may cut my losses, chalk it all up to a crash-course lesson in the use of Rockstor, and start fresh with a few months of swapping disks at my entertainment center PC.

@ScottPSilver The full output of:

Should show what drive/partition is actually hosting the ROOT Pool, and the expected size. Our installer expands this Pool to take up the entire size of the chosen OS drive. So it's surprising that you only have 5GB, which is entirely not enough: given our Minimum System Requirements state 15 GB for this very reason. On every significant zypper update, new snapshots are created to allow for boot-to-snapshot, which we enable by default (hence 15 GB as minimum spec). Upstream installers do not enable this facility unless the chosen drive is at least 15GB, also for this reason. Many updates mean many snapshots.
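To see, and with care prune, those snapshots, something like the following should work (the snapshot number range is purely illustrative):

snapper list            # show existing snapshots and their numbers
snapper delete 42-57    # delete a contiguous range (numbers are examples only)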

That has many ways to clear the working space taken up by, for example, logs and zypper/snapper. Also note that skipping installs may well break your OS. They need to be installed, as they are dependencies of other packages. And a full OS drive is a broken one - mostly.

Hope that helps. And let's see the full btrfs fi show output, as we can then advise on potential in-place options. There is no installer selection of the size on the OS drive: only, as you suspect, shares created after the install. So I'm curious how you ended up with an entirely insufficient 5 GB system Pool.
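For clarity, the requested command is just:

btrfs fi show     # all pools and their member devices
btrfs fi show /   # only the pool mounted at /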


The unit is still running, so when I get home from work I can run the fi show.


LOL, it must have rebooted. All I get when trying to log in from multiple PCs is

Page not found

Sorry, an unexpected internal error has occurred.

If you need more help on this issue, email us at support@rockstor.com with the following information:

  • A step by step description of the actions leading up to the error.
  • Download the log.tgz file (containing the tarred and zipped server logs) available at error.tgz, and attach it to the email.

So I download the .tgz file and it's 0kb! LOL, and further down the rabbit hole I tumble!!!

I'm dying here!!! LOL, every step I make adds another layer of issue! LOL, I think we are now 5 layers of problems deep here. It looks like to continue I have to extract the desktop from its cove and set it up on my dinner table with a Cat7 cable across my place, get a monitor and peripherals, and, and, and,
LOL, I am quickly losing interest. I was not trying to learn about every issue one could possibly have with an operating system. Not trying to bug Phill, so if anyone else has a path forward please feel free to join in. All I know for sure is it is not going to sit on my table for more than a week or two. If nothing happens in the correct direction soon, I'm wiping it, then installing on a 128GB drive and remembering to choose 64GB for data storage. I'll be watching, and I'll post again once I have the system pulled out and set up.

Can you access the server via terminal (or using PuTTY or the like) so you don't have to dig up the box?

Still need to get to the:

to have clarity on that part of the equation.


I tried to find detail on how to access the terminal, but no such luck.

User Interface

Rockstor supports three main user interface methods.

  1. A browser based interface (Web-UI) for most users.
  2. A RESTful API for developers.
  3. root access via terminal and SSH for advanced users.

The Web-UI is the main way to interact with Rockstor, and each one of its parts will be described in the pages below:

No elaboration on the other two... Stuck on that, I'll search outside Rockstor and the forum.
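For reference, the SSH route is just the standard client from another machine on the LAN (PuTTY on Windows, plain ssh elsewhere); the address is a placeholder:

ssh root@<rockstor-ip-or-hostname>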

See where it says "System Shell", just below the top menu? That should get you what you need.


OH, I know system shell. Yeah, none of the Web-UI works. I log in and all I see is.

The only two elements on the page that have any interactivity are the Rockstor logo and the System Shell. When I click the logo I get,


I click shell and it just changes the address bar, just like the logo element does, but nothing happens, even after leaving it open for hours.

I looked this up and downloaded what looked like a PC-to-PC terminal access program and installed it. Looks like it does connect to the Rockstor terminal, but my login just gets denied...
PuTTY

I don't get a prompt to log in when I access the Web-UI any more either. Not that there is anything in the Web-UI.

Even though I am getting the same response from the NAS on multiple PCs, I still tried clearing all browser data and connecting. Still the same response.

@ScottPSilver Re:

You need to use the 'root' user: the password for which was set up by you at this stage of our installer:

https://rockstor.com/docs/installation/installer-howto.html#enter-desired-root-user-password

The 'root' user is also the one required to execute many btrfs commands.
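So, putting the two together (address placeholder again), getting the long-requested output would look like:

ssh root@<rockstor-ip-or-hostname>
btrfs fi show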

There is no choice within our installer to 'define' or choose the partition/size: only the OS drive to use. See:
"Select Installation Disk": Rockstor's "Built on openSUSE" installer - Rockstor documentation

So I don't know what you mean there. We have a dedicated system drive with a suggested minimum size of 16 GB, and 32 GB recommended:

Minimum system requirements: Quick start - Rockstor documentation

  • 16 GB drive dedicated to the Operating System (32 GB+ SSD recommended, 5 TB+ ignored by installer). See USB advisory.

Yours looks to be 1/3 of the minimum, and 1/6 of the recommended.

Hence you now have < 3 MB of free space!!! And:

This would have been apparent to all here on the forum if you had reported a full output of btrfs fi show, which was requested by me on the very first day of this thread, and for several responses thereafter:

Your response was as follows:

Others can see more than you sometimes!

With an incomplete output of just your Data Pool: using a non-production raid6 redundancy level, which requires lots of hoops to jump through just to write to, as by default it is read-only with our standard kernels.

So we know you have a broken OS drive: a 5 GB, not 32 GB, OS Pool/drive.

Your OS Pool is likely toast, as it was filled and unable to complete a full update: ergo half new / half old, as it were. You must clear space as per:

That's for our Rock-ons 'rock-ons-root' share:
https://rockstor.com/docs/interface/overview.html#the-rock-ons-root
Unrelated.

So, given you have hosed your OS by not following recommended, let alone minimum, system guidelines, and are now in a tight spot, but want assistance with a broken pool that is only writable via your advanced use of:

Installing the Stable Kernel Backport:
https://rockstor.com/docs/howtos/stable_kernel_backport.html

This How-to is intended for advanced users only. Its contents are likely irrelevant unless you require capabilities beyond our default openSUSE base. We include these instructions with the proviso that they will significantly modify your system from our upstream base. As such you will be running a far less tested system, and consequently may face more system stability/reliability risks. N.B. Pools created with this newer kernel have the newer free space tree, i.e. (space_cache=v2). Future imports require kernels which are equally new/capable (at least ideally).

You are way out on a limb here, and likely over-reaching re your existing knowledge to support this configuration: i.e. inability to use ssh without hand-holding. But hey, we are a DIY community here, and experimentation is all good, and you acknowledge re:

So my assumption here is that you have not used the system drive to store anything; if you have, then use another and do a fresh install: ensuring you meet at least the minimum system requirements as per our doc referenced above. You will then at least have a non-toast OS Pool with which to execute the requests that have yet to be answered from day one. And if you could also indicate if this is running inside of a VM, as that could also explain why a 32 GB OS drive becomes a 5 GB 'ROOT' Pool, leaving you entirely unable to install anything, including maintenance updates, let alone advanced options such as backported stable kernels and the like.

So in short, you might want to re-install: detaching all Data drives first to be sure you don't accidentally select them. However, we have a safeguard where drives of 5 TB+ will not be shown as install options anyway - but all yours look to be less than that. Also note I'm working on new installers currently: so I had better get back to that.

I know this is frustrating: but learning often is - and your feedback here has helped to show where folks go with what we have, and is in part why I've introduced such additional restrictions out-of-the-box as not concerning folks with the OS drive at all: as from 5.0.9-0 (but not on update) and all our future installers, the ROOT pool is no longer imported by default. But you would still have broken your system by having a 5 GB ROOT pool. Or is this a bug: if so, it would have been nice to have had the evidence as to what led to it. And no recommendation to update would then have been given. Your efforts to enforce the use of the un-recommended btrfs-raid6 (default read-only currently in our installers) indicate you are game, which is great: but there is much time taken in hand-holding, and we are a small community trying to do a lot. So I will have to duck out from this thread now, as I have yet to receive the output requested on the first day: as it was edited/incomplete. I am also the support email, incidentally.

So consider saving everyone else's time, and your own time, and re-do your install as per our recommendations. And if it's still 5 GB then fantastic: we have a reproducer for a bug report. You then have an OS at least as per our recommendations. But you will only have read-only capability, if that, to your existing Pool. And will have to re-tread your prior steps re the stable kernel backports install, but at least you will be more familiar with what you are attempting to do: which is go-it-alone. We inherited btrfs-raid6 ro from openSUSE. I wrote the stable kernel backport doc with the quoted proviso. If that is ignored by folks, and subsequent requests are also ignored, then more learning can take place; but folks can't help you here if you do not answer with complete outputs: and use advanced procedures to do what you think is correct, and then have no knowledge of having done them:


Not so much; we have all played around here: it's part of the DIY scene and can be fun, but if you are out of your depth you need to reset/re-install so you at least have an OS to work with. 5 GB does not cut it any longer for a boot-to-snapshot ROOT. And advice (not followed) was given to clear up space at that critical time - by me - twice. We enforce boot to snapshot to give folks a way to boot into a prior system if all fails on the one they have - but that breaks also if the ROOT pool is filled up. And it could have dug you out from many issues if you had only had at least the minimum ROOT pool size. I'm wondering if we are looking at an as-yet unseen installer bug actually: speaking of which, I had better get back to that development.

Try re-installing so you have a working OS: then follow:

"Import unwell Pool": Disks - Rockstor documentation

But as you have already gone so far off track to get your parity raid, you may also need the backported kernel and filesystem howto first.

Hope that helps, and I know you are trying here, but it's a few minutes to re-install, and you have nothing like our recommendations in any element of your system:

  • 5 GB ROOT pool
  • Parity btrfs-raid6
  • Backported kernels and filesystems, with warnings against this in the HowTo.
  • Testing channel (but we are now on RC4 as of yesterday :) )

And so you are out on a limb. But without the expertise to contribute fixes / enhancements to improve things for yourself and others here. However you may be perfectly positioned to contribute to our docs re:

Contributions here are always welcome:
Contributing to Rockstor documentation - Rockstor documentation

P.S.:

Starting with a 5 GB ROOT (OS Pool), which would have been evident from day one with a full btrfs fi show output :). Always difficult starting out: but you now have quite the perspective. And if you really need parity raid: accepting the known repair problems, use, say, a mixed profile so you have equal redundancy on the metadata. [EDIT] And don't enable compression.
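A minimal sketch of that mixed-profile idea, with placeholder device names (raid1c3 metadata keeps two-device-loss tolerance to match raid6 data, and needs a reasonably recent kernel, which the backport provides):

# create: raid6 for data, raid1c3 for metadata
mkfs.btrfs -d raid6 -m raid1c3 /dev/sdX /dev/sdY /dev/sdZ /dev/sdW

# or convert an existing pool's metadata in place (mount point is a placeholder)
btrfs balance start -mconvert=raid1c3 /mnt2/mypool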