
    Full Storage issue on MangoES

    • mohd
      last edited by

      Hi,
      I received the following alert from my MangoES about a full storage issue. Below is the output of the df command. I am wondering what caused this; I would appreciate your help in letting me know where I should check, or whether I need to remove any files.

      [Attachment: Mango Alert.PNG]
      mango@mangoES$ df -a
      Filesystem Size Used Avail Use% Mounted on
      rootfs - - - - /
      /dev/root 7.0G 7.0G 0 100% /
      devtmpfs 746M 0 746M 0% /dev
      sysfs 0 0 0 - /sys
      proc 0 0 0 - /proc
      tmpfs 996M 0 996M 0% /dev/shm
      devpts 0 0 0 - /dev/pts
      tmpfs 996M 26M 971M 3% /run
      tmpfs 5.0M 0 5.0M 0% /run/lock
      tmpfs 996M 0 996M 0% /sys/fs/cgroup
      cgroup 0 0 0 - /sys/fs/cgroup/systemd
      cgroup 0 0 0 - /sys/fs/cgroup/cpuset
      cgroup 0 0 0 - /sys/fs/cgroup/debug
      cgroup 0 0 0 - /sys/fs/cgroup/cpu,cpuacct
      cgroup 0 0 0 - /sys/fs/cgroup/memory
      cgroup 0 0 0 - /sys/fs/cgroup/devices
      cgroup 0 0 0 - /sys/fs/cgroup/freezer
      cgroup 0 0 0 - /sys/fs/cgroup/perf_event
      systemd-1 0 0 0 - /proc/sys/fs/binfmt_misc
      mqueue 0 0 0 - /dev/mqueue
      debugfs 0 0 0 - /sys/kernel/debug
      configfs 0 0 0 - /sys/kernel/config
      tmpfs 996M 2.0M 994M 1% /tmp
      /dev/mmcblk0p1 127M 3.2M 124M 3% /media/boot
      tmpfs 200M 0 200M 0% /run/user/0
      tmpfs 200M 0 200M 0% /run/user/1000

      • phildunlap
        last edited by phildunlap

        Hi mohd,

        The first thing to do is to figure out where the space has gone. If it's all in the Mango/databases, your best bet will be to move some data off the device and purge your events table.
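        If you do need to move data off first, a rough sketch would be something like the following (this assumes the bulk is old backup archives under /opt/mango/backup; the destination host and path are just placeholders):

        # Copy the backup archives to another machine, then free the space locally.
        # Check that the copies arrived intact before deleting anything.
        scp /opt/mango/backup/* user@archive-host:/srv/mango-archive/
        rm /opt/mango/backup/*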

        I would bet the disk has probably filled up some other way, though. Things to check...

        du -hs /opt/mango/logs
        du -hs /opt/mango
        

        The vast majority of the logging in Mango rolls and limits its file size, but some of it, like the Persistent TCP publisher's, does not have this implemented yet. If you are debug logging a Persistent TCP publisher, it can very quickly fill space. The second command will show whether it's Mango that has filled the disk or something else. You can also search for the largest individual files in the current directory by running:

        du -ma . | sort -n -r | head -n 10
        

        You can change . to / to search the whole filesystem (but if your disk is full, that could be slow, so I would investigate /opt first). File sizes from this command will be in megabytes; change -ma to just -a to get raw sizes. It is possible that logs from other processes have grown, so if /opt/mango is not large, go ahead and search the whole filesystem for large files.
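        For example, if the biggest file turns out to be a runaway publisher debug log, you can empty it in place rather than deleting it, so anything still writing to it keeps a valid file handle (the filename below is only an illustration; use whatever the du output actually shows):

        # Zero out the oversized log without removing the file itself
        truncate -s 0 /opt/mango/logs/persistent-publisher-debug.log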

        • mohd
          last edited by

          Hi Phildunlap,
          Thanks for the response. According to the following output of the commands, it looks like the databases directory is the largest. I can delete the old backup files to free up some space, but I would appreciate your thoughts on that, and I would also like to know how to purge the events table.

          [Attachment: du output.PNG]

          Thanks,
          Mohammad

          • phildunlap
            last edited by

            Hi Mohammad,

            There are a few ways to do it.

            Through Mango:
             There is a "Purge all events" button in the "Purge Settings" section of the system settings page. If your events table is large, this could take a while.

            Through the SQL console:
            delete from userEvents;
            delete from events;

             My suggestion, though, would be to do it from the database shell, since this will be very fast regardless of table size:

            1. Navigate to /opt/mango/bin/
             2. Edit h2-web-console.sh to add the -webAllowOthers argument, as instructed by the comment near the bottom, so that you can access it from another computer.
            3. sudo ./h2-web-console.sh
            4. In your browser, navigate to the port defined in h2-web-console.sh, probably <Mango IP>:8081
             5. Log in; your connection string will be /opt/mango/databases/mah2, and the username and password are probably blank. If not, they will be explicit in the H2 section of your env.properties, at either Mango/overrides/properties/ or Mango/classes.
             6. Run the following:
             SET FOREIGN_KEY_CHECKS=0;
             CREATE TABLE eventsNewTable LIKE events;
             DROP TABLE events;
             RENAME TABLE eventsNewTable TO events;
             CREATE TABLE userEventsNewTable LIKE userEvents;
             DROP TABLE userEvents;
             RENAME TABLE userEventsNewTable TO userEvents;
             SET FOREIGN_KEY_CHECKS=1;
            

             It's good to have Mango off while doing that, but it's not required since the operations are so fast (although Mango will probably throw some database errors right when the command is run).
             7. Press Ctrl+C to stop h2-web-console.sh.

            And you're done!
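             For reference, the terminal side of those steps might look roughly like this (the paths, port and blank credentials are the defaults mentioned above; adjust to match your env.properties):

             cd /opt/mango/bin
             # Step 2: add -webAllowOthers where the comment near the bottom indicates
             nano h2-web-console.sh            # or whatever editor you prefer
             # Step 3: start the console and leave it running while you use the browser
             sudo ./h2-web-console.sh
             # Steps 4-6 happen in the browser at <Mango IP>:8081 with the
             # connection string /opt/mango/databases/mah2
             # Step 7: back in this terminal, stop the console with Ctrl+C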
