History Generation Still Running or Finished??
-
@jared-wiltshire said in History Generation Still Running or Finished??:
@mihairosu dashboards has been replaced with a module named mangoUI. Remove dashboards and install mangoUI,
Success. I thought I had done this once before, which is what confused me in the back of my mind. Anyway, thank you.
As far as the java issues are concerned, I'll spend more time later diagnosing, I don't have much time at the moment.
-
Hi Phil, I am tasked with regenerating 144 meta points over the past 3 months (approx. 10,000 values on each point). I purge the data range beforehand, Nov 1 2017 - Jan 22 2018 (most current) ...
I have tried the two ways of purging the data beforehand ..
First, through the point definition window: the time frame purges successfully and regeneration works perfectly every time.
However, when using the new method of purging before the regeneration
(the "delete existing data in range" checkbox), my 12 GB heap server always runs out of memory, and when I restart, the process has created 4 million records going back to 1970. I purge these values back out in the point definition window and restart the history script, this time leaving "delete existing data" unchecked, and it works. It only creates these erroneous records and runs out of memory when I try to zap the data via the checkbox method.
The log shows:
ERROR 2018-01-23T09:33:07,929 (com.infiniteautomation.nosql.MangoNoSqlBatchWriteBehindManager$StatusProvider.scheduleTimeout:728) - 1 BWB Task Failures, first is: Task Queue Full
ERROR 2018-01-23T09:33:42,398 (com.infiniteautomation.nosql.MangoNoSqlBatchWriteBehindManager$StatusProvider.scheduleTimeout:728) - 1 BWB Task Failures, first is: Task Queue Full
Java HotSpot(TM) 64-Bit Server VM warning: INFO: os::commit_memory(0x00000007b7480000, 5242880, 0) failed; error='Cannot allocate memory' (errno=12)
There is insufficient memory for the Java Runtime Environment to continue.
Native memory allocation (mmap) failed to map 5242880 bytes for committing reserved memory.
An error report file with more information is saved as:
/opt/mango/hs_err_pid15338.log
ma-start: no restart flag found, not restarting MA
ma-start: MA done
-
So it does. I was able to reproduce. It's odd to me that it depends on the delete, since the line that is causing the interval logging task to fire for those 30+ years has been in the generate history code for ages, but I too saw that. I would certainly say that's a bug, and the fix looks incredibly simple so I would think we will release a new Meta module pretty soon. The issue for this bug is here: https://github.com/infiniteautomation/ma-core-public/issues/1206
Thanks for bringing this to our attention!
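For a sense of scale (an illustrative calculation, not a figure from the bug report): a meta point interval-logging every 5 minutes from time 0 in 1970 up to January 2018 would fire on the order of five million times per point, which is consistent with the millions of erroneous records described above.

```javascript
// Rough count of interval-logging firings from epoch 0 to Jan 22 2018.
// The 5-minute period is an assumed example; the thread does not state the actual rate.
var epochStart = 0;                   // Jan 1 1970 00:00 UTC, i.e. "time 0"
var end = Date.UTC(2018, 0, 22);      // Jan 22 2018 00:00 UTC
var periodMs = 5 * 60 * 1000;         // one logging interval in milliseconds
var firings = Math.floor((end - epochStart) / periodMs);
// firings is on the order of five million
```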
-
Well, glad to help. While you are on this, it would also be great if the "delete existing data" checkbox remembered the last date range set up, rather than the calculated range, making it easier to batch runs; and of course it would help to change it to the same date format as the point edit window, for consistency.
..and our thanks to you and the Mango team for listening and improving this important tool.
-
Did you see the function I provided Mihai in the script? You could write a loop to call that function and set values out to some alphanumeric point along the way (just not between generations). Something like,
var pointsList = DataPointQuery.query("dataSourceXid=DS_XID");
var metaDwr = new com.serotonin.m2m2.meta.MetaEditDwr();
var now = new Date().getTime();
//var pvd = com.serotonin.m2m2.Common.databaseProxy.newPointValueDao();
for(var k = 0; k < pointsList.length; k+=1) {
    //Given the issue discussed here with the delete before checkbox, you may want to call,
    //pvd.deletePointValuesBetween(pointsList[k].getId(), 0, now);
    metaDwr.generateMetaPointHistory(pointsList[k].getId(), 0, now, true);
    //alphanum.set("Finished with point: " + pointsList[k].getId());
}
This will regenerate all meta points on data source "DS_XID" from 0 (1970) to now, deleting what exists already, even when you only press validate in the script window.
-
OK, sounds great. So can I replace the 0, now) with periodBegin, periodEnd, right?
I guess I could also use pvd.deletePointValuesBetween in other scripts where I want to delete before I manipulate originally logged values in the same way before setting a new value, correct? Not only for meta points, I mean. So, for Jan 3 - 5 2018:
var periodBegin = new Date(2018, 0, 3);
periodBegin.setHours(0);
periodBegin.setMinutes(0);
periodBegin.setSeconds(0);
periodBegin.setMilliseconds(0);
var periodEnd = new Date(2018, 0, 5);
periodEnd.setHours(12);
periodEnd.setMinutes(0);
periodEnd.setSeconds(0);
periodEnd.setMilliseconds(0);
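Putting the boundaries together, here is a compact sketch of the same range (local time, as in the constructor calls above) converted to the epoch milliseconds that generateMetaPointHistory takes:

```javascript
// Jan 3 2018 00:00:00.000 to Jan 5 2018 12:00:00.000, local time.
// The extra constructor arguments replace the setHours/setMinutes/... calls.
var periodBegin = new Date(2018, 0, 3, 0, 0, 0, 0);
var periodEnd = new Date(2018, 0, 5, 12, 0, 0, 0);
var from = periodBegin.getTime();  // long epoch milliseconds
var to = periodEnd.getTime();
// the span is exactly 2.5 days = 216,000,000 ms (no DST change falls in early January)
```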
-
Yes, that all sounds good. I do believe Java will cast a Date to a long by using getTime(), but you may as well call it yourself, i.e.
periodBegin.getTime()
and
periodEnd.getTime()
in the actual call to generateMetaPointHistory. To invoke the method on
pvd
you will need to create it as shown in that script.
-
So where do I run this? I tried it in a scripting data source, but it complains about the meta data source being cast to the scripting data source ... do I create another meta DS for this to run inside of? OK, that worked in another meta source. Thanks.
-
That complaint means you have multiple tabs open and have more recently edited the meta data source than the scripting one, so it is confused about which you are editing. It should work in any script environment, and it will happen when the validation button is pressed, not only at normal runtime.
-
Phil is there some limitation on the number of points returned into the pointsList by this query because I have 216 meta points in this data source and only 100 are being returned?
-
Yes, there is a default limit of 100 on that query. You can add
&limit(1000)
to the RQL to increase it.
-
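For reference, the full query string then looks like this (a sketch; the helper function is illustrative, only the RQL string itself comes from the thread):

```javascript
// Builds the RQL used with DataPointQuery.query: all points on one data
// source, with the default result limit of 100 raised to the given value.
function dataSourceRql(xid, limit) {
    return "eq(dataSourceXid," + xid + ")&limit(" + limit + ")";
}
var rql = dataSourceRql("DS_XID", 1000);
// rql === "eq(dataSourceXid,DS_XID)&limit(1000)"
// in a Mango script this string would be passed as DataPointQuery.query(rql)
```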
Thanks, ah.
So if the RQL is eq(dataSourceXid,DS_537506)&limit(500),
what goes in pointsList = DataPointQuery.query("dataSourceXid=DS_537506")?
I got it now .. I had one ) too many, but it is working now.
-
Nice! I hope that proves helpful!
I updated the Mango JavaScript context help to make the default limit of 100 known, and noted that
&limit(-1)
is the way to have no limit. But that note in the help won't appear until a later core version.
-
So I set the history loop running in the meta window because I could not get it to validate in the scripting source. It kept complaining about not being able to cast meta to scripting, with only one browser window open.
I set it running about four hours ago. Mango still refreshes the browser but reports server timeouts when retrieving any data, so I guess it's running, because it hasn't crashed yet, lol. I'll give it overnight to run and check it in the AM. If it's still unresponsive I may have to kill it and see how far it has gotten. So it looks like it crashed again; this time it created 318 million records going back to 1969.
Further diagnostics...
I ran the script with the range deletion enabled over the entire data source first, as a separate run from metaDwr.generateMetaPointHistory, and this worked perfectly. All data points had the correct range deleted. I then ran the script again, commenting out pvd.deletePointValuesBetween and enabling only metaDwr.generateMetaPointHistory, since the purge was already done. The process did not complete the first point's history: it spent 4 hours creating 318 million points and then ran out of memory several hours later, while still on the first point.
So the issue with purging the data beforehand seems unrelated to the generation of the erroneous points. I ran the history recalc for the first several points from the dialog box without deleting existing data (as it is still zapped from before) and they all recalculated correctly.
-
The fourth argument to that function is whether or not to invoke the delete code. If you have left it true (admittedly, you would have had to read the post where I supplied it to Mihai carefully to see the note on argument 4), then it's the same as having checked the box. I will try to get the updated Meta module out today, which won't generate records prior to the start time.
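One defensive pattern (purely illustrative; the wrapper below is hypothetical, only generateMetaPointHistory and its four arguments come from this thread) is to make the delete flag explicit at the call site so it cannot be left true by accident:

```javascript
// Hypothetical wrapper: regeneration never deletes unless the caller opts in.
function regenerate(dwr, pointId, from, to, opts) {
    var deleteExisting = !!(opts && opts.deleteExisting); // defaults to false
    return dwr.generateMetaPointHistory(pointId, from, to, deleteExisting);
}

// Stand-in for MetaEditDwr that just records the arguments it receives.
var recorded = null;
var fakeDwr = {
    generateMetaPointHistory: function (id, from, to, del) {
        recorded = [id, from, to, del];
    }
};
regenerate(fakeDwr, 42, 0, 1000);                           // delete flag stays false
regenerate(fakeDwr, 42, 0, 1000, { deleteExisting: true }); // deletion is a deliberate opt-in
```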
-
Shoot, I believe I forgot to set that to false. That must be what happened.
-
Version 3.3.1 is released. If you update, you can use the checkbox or the true argument without generating history back to time 0.
-
Thanks Phil will update now.
If I reference the meta point variable from within the script to set a prior timestamp in its history while the script is running, or if I do this in a loop like
backcounter=5;
myvar.set(xval,myvar.time - backcounter * 1000);
backcounter+=5;
could this cause memory problems?
-
I would always generate data oldest to newest, but offhand I wouldn't think so. You may need to throttle your script if there are too many point values waiting to be written. Surely there are cases where you can generate enough data to crash the boat. You could have throttling in there by reading the internal data source's "point values to be written" point, for instance,
if(waitingValues.value > 1000000) RuntimeManager.sleep(5000);
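On the ordering point: rather than stepping backwards from myvar.time with backcounter, the same back-fill can precompute ascending timestamps and write oldest first (a sketch; the 5-second spacing mirrors the backcounter loop above, and the end time is an arbitrary example):

```javascript
// Build N timestamps ending at 'end', spaced 5 s apart, oldest first,
// so values are written in chronological order as recommended.
var end = Date.UTC(2018, 0, 5);   // newest sample (example time)
var stepMs = 5 * 1000;
var count = 5;
var stamps = [];
for (var i = count - 1; i >= 0; i -= 1) {
    stamps.push(end - i * stepMs);
}
// in the script you would then loop: myvar.set(xval, stamps[j]);
```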
-
Any obvious suggestions offhand? I have tried a few variations, as you suggested before, about tuning the database to the mission. This is working, but I haven't stress tested it either. It's running one history right now, about 130k values.