
    Please Note This forum exists for community support for the Mango product family and the Radix IoT Platform. Although Radix IoT employees participate in this forum from time to time, there is no guarantee of a response to anything posted here, nor can Radix IoT, LLC guarantee the accuracy of any information expressed or conveyed. Specific project questions from customers with active support contracts are asked to send requests to support@radixiot.com.


    Publisher queue discarded - Task Queue Full

    MangoES Hardware
    • M
      mihairosu
      last edited by

It is in fact the same publisher as in the previous thread we were discussing. The path prefix is as you said it should be.

      See here from the Publishers export:

          },
               "connectionCheckPeriod":60000,
               "historyCutoffPeriods":1,
               "host":"10.4.0.20",
               "logCount":1,
               "logSize":1.0,
               "maxPointValuesToSend":5000,
               "parallelSyncTasks":100,
               "pathPrefix":[
               ],
               "port":9901,
               "reconnectSyncs":true,
      

      From the graphs, it seems communication was lost around exactly midnight.

      Here are logs from the data source side:

      Also, I can't upload my logs to this forum. The error is: "I do not have enough privileges for this action"

      • phildunlapP
        phildunlap
        last edited by

        Were you attempting to upload a zip? An ma.log file should upload...

        No matter, you can email them in.

        I wonder, does your data source have any large number of work items or point values waiting to be written? These questions should be answerable from the internal metrics page (/internal/status.shtm).

        • M
          mihairosu
          last edited by

          Nope, I was trying to upload the txt files themselves. I've had trouble uploading files forever, and never noticed that error pop up at the top right before.

          I have emailed them to you. I hope you can receive up to 25MB in logs haha.

          Here's the status page:

          0_1513703383224_status.png

          • M
            mihairosu
            last edited by

            Oh, I just realized my memory is full, now that you mention that point. Swapping is happening.

            This is no good. I will increase the memory.

            I will go from 8GB to 16GB and reboot, though I wonder what is using up all this memory.

            Thankfully we have spare RAM on our hosts!

            0_1513703514809_memory.png

            • phildunlapP
              phildunlap
              last edited by phildunlap

              Hmm. I do see database issues in the first log I checked. I think you'll see benefits from converting your database to MySQL. Did you try the backup / restore method of shrinking the H2 database?

              • M
                mihairosu
                last edited by

                Okay the Persistent TCP syncs are working again.

                Man, we really need to get our Grafana up and running again so we can monitor our VMs... such a simple thing could have been caught much earlier.

                I'm not sure if this makes a difference, but when I check the Runtime status on the Data Source, sometimes there are no connections, sometimes both are connected, and sometimes only one is connected (we have 2 TCP syncs).

                • M
                  mihairosu
                  last edited by mihairosu

                  So I am running the Backups regularly. Are you saying I should just attempt to restore the latest backup?

                  Also, I thought H2 was the ideal database for our use case. Why would we consider going with MySQL or MariaDB?

                  • phildunlapP
                    phildunlap
                    last edited by

                    That could definitely result in a smaller mah2, which may alleviate some of the memory strain. I would do that by renaming your existing mah2 (so as not to lose it if something isn't right) with Mango off, then starting clean and running the SQL restore.

                    The smaller the system, the more it makes sense to favor H2 over MySQL. But if your database is large, MySQL and MariaDB are both very capable of handling it. H2 should be as well, and as I mentioned in the other thread, there were some major improvements in the recent version of H2, which will be bundled in our next release.
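                    For anyone weighing the switch: Mango's database backend is selected in the env.properties overrides file. A minimal sketch for MySQL, assuming a local instance and a pre-created `mango` schema (host, schema name, and credentials here are illustrative assumptions, not values from this thread):

                    ```properties
                    # overrides/properties/env.properties (sketch; adjust for your environment)
                    db.type=mysql
                    db.url=jdbc:mysql://localhost:3306/mango
                    db.username=mango
                    db.password=changeme
                    ```

                    Stop Mango before changing this, and keep the old H2 file around until the migration is verified.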

                      • M
                        mihairosu
                        last edited by mihairosu

                        By the way, I did try backup and restore database on the MangoES and it did not help with the Publisher errors.

                        For now I have to turn off logging on the publisher.

                        • phildunlapP
                          phildunlap
                          last edited by

                          Whoa, I just took a closer look at your publisher settings. I would try adjusting those if it's a performance problem on the publisher's side.

                          If you're using NoSQL on both sides, increase the 'minimum overlap when syncing blocks of data' to 1000.

                          Regardless of the database setup, lower your sync threads to between 2 and 6 I'd say. I'd probably go with 3, personally.
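                          Against the export posted earlier in the thread, the sync-thread change is a one-key edit. A sketch (only the affected key shown; the 'minimum overlap when syncing blocks of data' setting lives on the publisher edit page, and its export key isn't visible in the snippet above, so it's omitted here):

                          ```json
                          {
                               "parallelSyncTasks": 3
                          }
                          ```

                          After re-importing (or editing the publisher in the UI), the publisher runs far fewer concurrent sync tasks, which should relieve pressure on the task queue.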

                          • M
                            mihairosu
                            last edited by

                            Okay great, thanks Phil, I will try that.

                            • phildunlapP
                              phildunlap
                              last edited by

                              I would say resolution was found in increasing the data source's timeout from 5000 to 45000. Many other things also transpired, but I believe the source of the issues was the data source timing out during the connection. That then led to an enormous audit table due to this issue: https://github.com/infiniteautomation/ma-core-public/issues/1188, which in turn led to circuitous troubleshooting. Thanks for your patience and for letting me look into it!
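                              For anyone hitting this thread later: the fix amounts to raising one timeout value from 5000 to 45000 ms in the persistent TCP data source configuration. Sketched below with a hypothetical key name, since the actual key isn't quoted anywhere in this thread; check your own data source export for the exact name:

                              ```json
                              {
                                   "socketTimeout": 45000
                              }
                              ```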

                              • M
                                mihairosu
                                last edited by

                                Wow, yeah, thank you so much for spending your time troubleshooting our problems.

                                Everything is running smoothly again, and with the new settings you recommended (such as purging the events at intervals less than 1 year hahah) we should be pretty well set.
