ON_CHANGE_INTERVAL stops logging
-
Hello, I just recently noticed that some of our points that are configured as ON_CHANGE_INTERVAL are not logging properly on Mango 3.7.4
I am at the beginning stages of looking into this, so I don't know exactly how widespread the problem is or all of the variables, but we have a couple dozen points set up to log when the point value changes and also every minute regardless of whether there is a change. The every-minute logging appears to just stop but will eventually start again. Sometimes it seems to start logging again when a change occurs, but other times it appears to be random. I do not see any errors in ma.log, and it doesn't seem to be happening on every point configured as OCI.
We poll more frequently than we log, and I can see the values from each poll in the cached list, but the point just doesn't seem to log even though it is polling fine.
I can see that the points having the issue DO NOT have an Interval Logging job entry in the high priority queue. Disabling and re-enabling the point does seem to work, but I am hoping someone could offer a little insight as to why this happens and how we can fix it.
Thanks,
Ed -
There is a Mango internal point tracking the point values to be written. Is that stable, or does it keep climbing?
It sounds like you might be running out of threads or your system can't keep up. What is the size of the Mango system, and how powerful is your host machine?
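If you don't already have that point, it lives on the built-in Mango internal metrics data source. As a rough sketch only, a JSON export of such a point can look something like the snippet below; the xids are placeholders and the monitorId value is deliberately a stand-in, so pick the actual "point values to be written" monitor from the point locator dropdown on your own instance rather than copying this:
{
  "dataPoints":[
    {
      "xid":"internal_point_values_to_be_written",
      "name":"Point values to be written",
      "enabled":true,
      "loggingType":"INTERVAL",
      "intervalLoggingType":"MAXIMUM",
      "intervalLoggingPeriodType":"MINUTES",
      "intervalLoggingPeriod":1,
      "dataSourceXid":"internal_mango_monitoring_ds",
      "pointLocator":{
        "monitorId":"SELECT_FROM_THE_INTERNAL_METRICS_DROPDOWN"
      }
    }
  ]
}
-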
Hi Craig, unfortunately that metric was behaving as I would expect; climbing a bit and then dropping to 0.
This instance has 501 points and is running on the same hardware as an RD121C, except we only have 4 GB of RAM. The instance had its threads set to 100, and I increased that to 200, but it made no difference at the time. We have since restarted the data points, so we will see if that prevents the issue in the future.
In the meantime, we have a lot of sites to go through to look for this issue, but we'll keep one in this state to continue troubleshooting.
-
@Xof1986 OK, that seems like more than enough resources for 500 points. I have never seen that behavior before. I will see if I can replicate it on 3.7.4, but if it's not happening on every point, I doubt I can. I would recommend upgrading the server if you're going to tinker with the threads. There was a bug fix in 3.7.8 regarding the threads:
Version 3.7.8
Upgrade Jetty webserver to 9.4.43.v20210629 to fix performance issues
Remove ssl.alpn.debug env property, this is now enabled by configuring log4j logger for the org.eclipse.jetty.alpn.server package
Fix bug where setting the high priority core thread pool greater than 100 would cause Mango to fail to start
Update store.url env property to new location of store.mango-os.com -
@Xof1986 What data source are these points from, and what data type are they?
Actually, could you export the whole point and paste it here? -
We do have a few instances of Mango with thread counts higher than 100, but I've never run into that bug as of yet. If it comes to it, we can do an upgrade to see if that helps.
The data source is Modbus, set to poll every second.
{ "dataPoints":[ { "xid":"Lorem_IPSUM", "name":"IPSUM", "enabled":true, "loggingType":"ON_CHANGE_INTERVAL", "intervalLoggingPeriodType":"MINUTES", "intervalLoggingType":"AVERAGE", "purgeType":"YEARS", "pointLocator":{ "range":"HOLDING_REGISTER", "modbusDataType":"TWO_BYTE_INT_UNSIGNED", "writeType":"NOT_SETTABLE", "additive":0.0, "bit":0, "charset":"ASCII", "multiplier":1.0, "multistateNumeric":false, "offset":1000, "registerCount":0, "slaveId":1, "slaveMonitor":false }, "eventDetectors":[ ], "plotType":"SPLINE", "rollup":"NONE", "unit":"", "simplifyType":"NONE", "chartColour":"", "chartRenderer":{ "type":"IMAGE", "timePeriodType":"DAYS", "numberOfPeriods":1 }, "dataSourceXid":"Lorem_DS", "defaultCacheSize":1, "deviceName":"Lorem", "discardExtremeValues":false, "discardHighLimit":1.7976931348623157E308, "discardLowLimit":-1.7976931348623157E308, "intervalLoggingPeriod":1, "intervalLoggingSampleWindowSize":0, "overrideIntervalLoggingSamples":false, "preventSetExtremeValues":false, "purgeOverride":false, "purgePeriod":1, "readPermission":"dolor,sit", "setExtremeHighLimit":1.7976931348623157E308, "setExtremeLowLimit":-1.7976931348623157E308, "setPermission":"", "tags":{ }, "textRenderer":{ "type":"ANALOG", "useUnitAsSuffix":true, "format":"0.00" }, "tolerance":0.0 } ] }
-
I'm not able to replicate this. My only suggestion would be to change the interval logging type to INSTANT and see if that fixes the problem.
You'll need to select interval logging on the dropdown to change that setting, but you can save it as OCI.
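For illustration, assuming the same point export you pasted above, the logging-related fields after that change would look roughly like this (only those keys are shown; the rest of the point stays as in your export):
{
  "dataPoints":[
    {
      "xid":"Lorem_IPSUM",
      "loggingType":"ON_CHANGE_INTERVAL",
      "intervalLoggingType":"INSTANT",
      "intervalLoggingPeriodType":"MINUTES",
      "intervalLoggingPeriod":1
    }
  ]
}
With INSTANT, the interval sample is the current value at the top of each minute rather than an AVERAGE computed over the window. -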
@CraigWeb wow, I didn't even notice that. I'll let you know how it goes.
-
@CraigWeb We just tried to make this change and I noticed a couple of things.
There is no option for setting intervalLoggingType once you've set loggingType to "Interval and when point value changes" via the GUI. We ended up just changing the JSON and pushing that out, and it seems to have taken the changes.
Also interesting: when hitting the API endpoint for data point details, if the point is configured as ON_CHANGE_INTERVAL it does not return a key for "intervalLoggingType", nor is the value of any key set to "AVERAGE", but I certainly do see those for points configured as "INTERVAL".
I honestly always assumed the way ON_CHANGE_INTERVAL worked was that it logs on changes and also logs the current value once per minute; I had no idea it would be averaging the polled values over that minute and then recording the average (so a minute of polls averaging, say, 15.2 gets logged as 15.2 rather than whatever the value happened to be at the top of the minute).
Anyway...I'll let you know the results, just wanted to give that info.