History Generation Still Running or Finished??
-
No change is required in Mango to switch to the 64 bit JVM, you just need to install that Java.
Both the absence of the purge between checkbox and the
my.getDataPointWrapper().getId()
function were released in 3.3, so you would need to update to use those. In 3.2 you have to get the ID through the DataPointDao from my.getDataPointWrapper().getXid()
You are correct in renaming "source" in what I posted.
It may be a minor hijack, but I think Phillip's issues were looked into. I tried to keep an eye on that.
-
I think you already had the 64 bit version?
Java VM: Java HotSpot(TM) 64-Bit Server VM (25.151-b12 mixed mode linux-amd64 compressed oops)
^from the hs_err file
-
@phildunlap said in History Generation Still Running or Finished??:
I think you already had the 64 bit version?
Java VM: Java HotSpot(TM) 64-Bit Server VM (25.151-b12 mixed mode linux-amd64 compressed oops)
^from the hs_err file
java -version
java version "1.8.0_151"
Java(TM) SE Runtime Environment (build 1.8.0_151-b12)
Java HotSpot(TM) 64-Bit Server VM (build 25.151-b12, mixed mode)
Any idea what's going on?
I'm going to upgrade the java version to the newest (162) 64 bit...
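For anyone following along: the hs_err header already names the architecture, but you can also confirm which JVM is on the PATH from a shell. A small sketch (note Java 8 only accepts the single-dash -version flag; --version arrived in Java 9):

```shell
# Print the JVM description and flag whether it is a 64-bit VM.
# Java 8 writes version info to stderr, hence the 2>&1 redirect.
if java -version 2>&1 | grep -q '64-Bit'; then
  echo "64-bit JVM on PATH"
else
  echo "not a 64-bit JVM (or java not found)"
fi
```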
-
@phildunlap said in History Generation Still Running or Finished??:
No change is required in Mango to switch to the 64 bit JVM, you just need to install that Java.
Both the absence of the purge between checkbox and the
my.getDataPointWrapper().getId()
function were released in 3.3, so you would need to update to use those. In 3.2 you have to get the ID through the DataPointDao from my.getDataPointWrapper().getXid()
You are correct in renaming "source" in what I posted.
It may be a minor hijack, but I think Phillip's issues were looked into. I tried to keep an eye on that.
I cannot update:
I could not find any module named 'dashboards'. Is that something I need to remove, or can I replace it somehow?
-
There should be a module named 'dashboards' in the modules list. You can mark it for deletion and restart, then update. After 3.3, marking it for deletion would be sufficient, but alas. If it isn't there, there will most likely be a Mango/web/modules/dashboards directory that you can delete while Mango is off.
My suspicion on being OOM-killed is that you may have allocated too much memory to Java, and the operating system had another customer it wished to supply memory to, so it picked the giant, low-hanging chunk of memory that was your Mango and killed it for its memory. It would be good to identify what that other process was and what may have triggered it. I would guess you could lower your memory allocation to Mango since you're on MySQL now (I think), which will have its own process handling requests. With H2, the memory available to it is the same as the memory available to Mango.
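As a sketch of that suggestion: heap sizing is done with standard HotSpot flags, and in Mango 3.x installs these typically live in a small script under bin/ext-enabled/ (the exact file name, and the 8g figure here, are assumptions for illustration, not a recommendation for this particular server):

```shell
# Hypothetical memory-script fragment: cap Mango's heap below total RAM so the
# OS and the MySQL process keep headroom, reducing the chance of an OOM kill.
JAVA_OPTS="$JAVA_OPTS -Xms2g -Xmx8g"
export JAVA_OPTS
```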
-
@mihairosu dashboards has been replaced with a module named mangoUI. Remove dashboards and install mangoUI.
-
@jared-wiltshire said in History Generation Still Running or Finished??:
@mihairosu dashboards has been replaced with a module named mangoUI. Remove dashboards and install mangoUI.
Success! I thought I had done this once before, which is what confused me in the back of my mind. Anyway, thank you.
As far as the Java issues are concerned, I'll spend more time diagnosing later; I don't have much time at the moment.
-
Hi Phil, I am tasked with regenerating 144 meta points over the past 3 months (approx. 10,000 values on each point). I purge the data range beforehand, Nov 1 2017 - Jan 22 2018 (most current)...
I tried the two ways of purging the data beforehand.
First, through the point definition window: the time frame purged successfully, and regeneration works perfectly every time.
However, when using the new method of purging before the regeneration
(the "delete existing data in range" checkbox), it always causes my 12 GB heap server to run out of memory, and when I restart, the process has created 4 million records back to 1970. I purge these values back out in the point definition window and restart the history script, leaving "delete existing data" unchecked this time, and it works. It only creates these erroneous records and runs out of memory if I try to zap the data via this checkbox method.
Log excerpt:
ERROR 2018-01-23T09:33:07,929 (com.infiniteautomation.nosql.MangoNoSqlBatchWriteBehindManager$StatusProvider.scheduleTimeout:728) - 1 BWB Task Failures, first is: Task Queue Full
ERROR 2018-01-23T09:33:42,398 (com.infiniteautomation.nosql.MangoNoSqlBatchWriteBehindManager$StatusProvider.scheduleTimeout:728) - 1 BWB Task Failures, first is: Task Queue Full
Java HotSpot(TM) 64-Bit Server VM warning: INFO: os::commit_memory(0x00000007b7480000, 5242880, 0) failed; error='Cannot allocate memory' (errno=12)
There is insufficient memory for the Java Runtime Environment to continue.
Native memory allocation (mmap) failed to map 5242880 bytes for committing reserved memory.
An error report file with more information is saved as:
/opt/mango/hs_err_pid15338.log
ma-start: no restart flag found, not restarting MA
ma-start: MA done
-
So it does. I was able to reproduce. It's odd to me that it depends on the delete, since the line that is causing the interval logging task to fire for those 30+ years has been in the generate history code for ages, but I too saw that. I would certainly say that's a bug, and the fix looks incredibly simple so I would think we will release a new Meta module pretty soon. The issue for this bug is here: https://github.com/infiniteautomation/ma-core-public/issues/1206
Thanks for bringing this to our attention!
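For anyone wondering where "back to 1970" comes from: the generate-history calls in this thread start at epoch millisecond 0, which is the Unix epoch, so a runaway interval-logging loop seeded there backfills from 1970 onward. A one-line check in plain JavaScript:

```javascript
// Epoch millisecond 0 is 1970-01-01T00:00:00Z -- the "1970" in the erroneous records.
var epochStart = new Date(0);
var epochYear = epochStart.getUTCFullYear();
```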
-
Well, glad to help. While you are on this, it would also be great if the "delete existing data" checkbox remembered the last date range set up, rather than the calculated range, making it easier to batch runs; and of course it would be good to change it to the same date format as the point edit window, for consistency.
...and our thanks goes to you and the Mango team for listening and improving this important tool.
-
Did you see the function I provided Mihai in the script? You could write a loop to call that function and set values out to some alphanumeric point along the way (just not between generations). Something like,
var pointsList = DataPointQuery.query("dataSourceXid=DS_XID");
var metaDwr = new com.serotonin.m2m2.meta.MetaEditDwr();
var now = new Date().getTime();
//var pvd = com.serotonin.m2m2.Common.databaseProxy.newPointValueDao();
for(var k = 0; k < pointsList.length; k+=1) {
    //Given the issue discussed here with the delete before checkbox, you may want to call,
    //pvd.deletePointValuesBetween(pointsList[k].getId(), 0, now);
    metaDwr.generateMetaPointHistory(pointsList[k].getId(), 0, now, true);
    //alphanum.set("Finished with point: " + pointsList[k].getId());
}
This will regenerate all meta points on data source "DS_XID" from 0 (1970) to now, deleting what exists already, even when you only press validate in the script window.
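Outside Mango this script can't run as-is, since DataPointQuery and MetaEditDwr exist only in the script context. A minimal sketch with those two objects stubbed (the stubs are mine, not real Mango API) shows the loop mechanics and the epoch-millisecond arguments being passed:

```javascript
// Stub standing in for Mango's DataPointQuery -- NOT real Mango API.
var DataPointQuery = {
  query: function (rql) {
    // Pretend the RQL matched three points; each wrapper exposes getId().
    return [1, 2, 3].map(function (id) {
      return { getId: function () { return id; } };
    });
  }
};

// Stub standing in for MetaEditDwr; it just records the calls it receives.
var generated = [];
var metaDwr = {
  generateMetaPointHistory: function (id, from, to, deleteExisting) {
    generated.push({ id: id, from: from, to: to, deleteExisting: deleteExisting });
  }
};

// The same loop shape as the script above: regenerate every matched point
// from epoch 0 (1970) to now, deleting existing history first.
var pointsList = DataPointQuery.query("dataSourceXid=DS_XID");
var now = new Date().getTime();
for (var k = 0; k < pointsList.length; k += 1) {
  metaDwr.generateMetaPointHistory(pointsList[k].getId(), 0, now, true);
}
```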
-
OK, sounds great. So can I replace the 0, now) with periodBegin, periodEnd, right?
I guess I could also use pvd.deletePointValuesBetween in other scripts where I want to delete before I manipulate originally logged values in the same way, before setting a new value, correct? Not only for meta points, I mean. So, for Jan 3 - 5 2018:
var periodBegin = new Date(2018, 0, 3);
periodBegin.setHours(0);
periodBegin.setMinutes(0);
periodBegin.setSeconds(0);
periodBegin.setMilliseconds(0);
var periodEnd = new Date(2018, 0, 5);
periodEnd.setHours(12);
periodEnd.setMinutes(0);
periodEnd.setSeconds(0);
periodEnd.setMilliseconds(0);
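One thing worth double-checking in these calls: JavaScript's Date constructor takes a zero-based month, so new Date(2018, 0, 3) really is January 3rd, and getTime() yields the epoch milliseconds the history call expects. A standalone sanity check in plain JavaScript (no Mango objects involved):

```javascript
// Month is 0-indexed: 0 = January. These mirror periodBegin/periodEnd above,
// using the constructor's extra arguments instead of the setter calls.
var begin = new Date(2018, 0, 3, 0, 0, 0, 0);   // Jan 3 2018, 00:00 local time
var end   = new Date(2018, 0, 5, 12, 0, 0, 0);  // Jan 5 2018, 12:00 local time
var beginMs = begin.getTime();                  // epoch milliseconds (a Number)
var endMs   = end.getTime();
```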
-
Yes, all sounds good. I do believe Java will cast a Date to a long by calling getTime(), but you may as well call it yourself, i.e.
periodBegin.getTime()
and
periodEnd.getTime()
in the actual call to generateMetaPointHistory. To invoke the method on
pvd
you will need to create it as shown in that script.
-
So where do I run this? I tried it in a scripting source, but it complains about the meta data source being cast to the scripting data source... Do I create another meta DS for this to run inside of? OK, that worked in another meta source. Thanks
-
That complaint means you have multiple tabs open and you more recently edited the meta data source than the scripting one, so it is confused about which you are editing. It should work in any script environment, and it will happen when the validation button is pressed, not only at normal runtime.
-
Phil, is there some limitation on the number of points returned into pointsList by this query? I have 216 meta points in this data source and only 100 are being returned.
-
Yes, there is a default limit of 100 on that query. You can add a
&limit(1000)
to the RQL to increase it.
-
Ah, thanks.
So if the RQL is eq(dataSourceXid,DS_537506)&limit(500),
what goes in here: pointsList = DataPointQuery.query("dataSourceXid=DS_537506")?
I got it now... I had one ) too many, but it is working now.
-
Nice! I hope that proves helpful!
I updated the Mango JavaScript context help to make the default limit of 100 known, and noted that
&limit(-1)
is the way to have no limit. But that note in the help won't appear until a later core version.
-
So I set the history loop running in the meta window, because I could not get it to validate in the scripting source. It kept complaining about not being able to cast meta to scripting, even with only one browser window open.
I set it running about four hours ago. Mango still refreshes the browser but reports server timeouts when retrieving any data, so I guess it's running, because it hasn't crashed yet, lol. I'll give it overnight to run and check it in the AM. If it's still unresponsive I may have to kill it and see how far it has gotten. So it looks like it crashed again; this time it created 318 million records back to 1969.
Further diagnostics...
I ran the script enabling the range deletion over the entire data source first, as a separate run from metaDwr.generateMetaPointHistory, and this worked perfectly. All data points had the correct range deleted. I then ran the script again, commenting out pvd.deletePointValuesBetween and enabling only metaDwr.generateMetaPointHistory, since the purge was already done. The process did not complete the first point's history: it spent 4 hours creating 318 million points and then ran out of memory several hours later, while still on the first point.
So the issue with purging the data beforehand seems unrelated to the generation of the erroneous points. I ran the history recalc for the first several points from the dialog box without deleting existing data, as it was still zapped from before, and they all recalculated correctly.