History Generation Still Running or Finished??
-
That complaint means you have multiple tabs open and more recently edited the meta data source than the scripting one, so it is confused about which you are editing. The script should work in any script environment, and the error can appear when the validation button is pressed, not only at normal runtime.
-
Phil, is there some limit on the number of points returned into pointsList by this query? I have 216 meta points in this data source and only 100 are being returned.
-
Yes, there is a default limit on that query of 100. You can add a
&limit(1000)
to the RQL to increase that. -
Thanks. So if the RQL is
eq(dataSourceXid,DS_537506)&limit(500)
what goes in here?
pointsList = DataPointQuery.query("dataSourceXid=DS_537506")
I've got it now .. I had one ) too many, but it is working now. -
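In other words, the working call presumably ended up looking something like this (same XID and limit as in the posts above; the exact form is an assumption):
// the whole RQL, including the limit, goes inside the query string
var pointsList = DataPointQuery.query("eq(dataSourceXid,DS_537506)&limit(500)");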
Nice! I hope that proves helpful!
I updated the Mango JavaScript context help to document the default limit of 100, and noted that
&limit(-1)
is the way to have no limit. But that note in the help won't appear until a later core version. -
So I set the history loop running in the meta window because I could not get it to validate in the scripting source. It kept complaining about not being able to cast meta to scripting, even with only one browser window open.
I set it running about four hours ago. Mango still refreshes the browser but reports server timeouts when retrieving any data, so I guess it's running because it hasn't crashed yet lol. I'll give it overnight to run and check it in the am. If it's still unresponsive I may have to kill it and see how far it has gotten. So it looks like it crashed again; this time it created 318 million records back to 1969.
Further diagnostics...
I ran the script enabling the range deletion over the entire data source first, as a separate run from the metaDwr.generateMetaPointHistory, and this worked perfectly. All data points had the correct range deleted. I then ran the script again, commenting out the pvd.deletePointValuesBetween and enabling only the metaDwr.generateMetaPointHistory, since the purge was already done. The process did not complete the first point history: it spent 4 hours creating 318 million points and then ran out of memory several hours later while still on the first point.
So the issue with purging the data beforehand seems unrelated to the generation of the erroneous points. I ran the history recalc for the first several points from the dialog box without deleting existing data, as it is still zapped from before, and they all recalculated correctly. -
The fourth argument to that function is whether or not to invoke the delete code. If you left that true (admittedly, you would have had to read the post where I supplied it to Mihai carefully to see the note on argument 4), then it's the same as having checked the box. I will try to get the updated Meta out today, which won't generate records prior to the start time.
-
Shoot, I believe I forgot to set that to false. That must be what happened.
-
Version 3.3.1 is released. If you update, you can use the checkbox or the true argument without generating history back to time 0.
-
Thanks Phil, will update now.
If I reference the meta point variable from within the script to set a value at a prior timestamp in its history while the script is running, or if I do something like this in a loop,
backcounter=5;
myvar.set(xval,myvar.time - backcounter * 1000);
backcounter+=5;
could this cause memory problems? -
I would always generate data oldest to newest, but offhand I wouldn't think so. You may need to throttle your script if there are too many point values waiting to be written; there are surely cases where you can generate enough data to crash the boat. You could add throttling by reading the Internal data source's "point values to be written" point, for instance:
if(waitingValues.value > 1000000) RuntimeManager.sleep(5000);
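A minimal sketch of how that throttle might sit inside the kind of backfill loop described above, assuming waitingValues is a script context variable bound to the Internal data source's "point values to be written" point (the variable names, loop bound, and threshold are illustrative):
var baseTime = myvar.time; // timestamp of the point's latest value
for (var backcounter = 5; backcounter <= 2880; backcounter += 5) {
    myvar.set(xval, baseTime - backcounter * 1000); // write xval backcounter seconds earlier
    if (waitingValues.value > 1000000)
        RuntimeManager.sleep(5000); // pause while the write queue drains
}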
-
Any obvious suggestions offhand? I have tried a few variations, as you suggested before, about tuning the database to the mission. This is working, but I haven't stress tested it either. It's working on one history right now, about 130k values.
-
Coincidentally, this just crashed about halfway through.
When the console says the thread also had an error, does that mean the script has an error in it?
var increment = 5;
var backcounter = 5;
var LITERS_LAST_5MIN = 0;
// Step back in 5-minute increments until a usable delta is found.
// isNaN() is needed here: NaN never compares equal to itself, so
// (delta === Number.NaN) would always be false.
while (isNaN(CW.past(MINUTE, backcounter).delta) && (backcounter < 2880)) {
    backcounter += increment;
}
if ((backcounter >= 2880) || (CW.past(MINUTE, backcounter).delta < 0)) {
    LITERS_LAST_5MIN = 0;
} else {
    LITERS_LAST_5MIN = CW.past(MINUTE, backcounter).delta;
}
return LITERS_LAST_5MIN; -
@phildunlap said in History Generation Still Running or Finished??:
if(waitingValues.value > 1000000) RuntimeManager.sleep(5000);
Can't find this waitingValues in the Mango system. -
-
Did you mean adding this sleep statement to the meta point script, or to the example you suggested for controlling the point history loop? The history fails on the very first point, so putting it in the loop would not alleviate this, and putting a pointstowrite variable in the meta point affects the history date range, since both points get considered even though this new pointstowrite variable does not update the meta point context.
-
3.3.1 update done .. It appears that the delete beforehand is now working correctly.
However, the history regeneration is no longer storing the script's return value and now stores only 0's as point values.
The meta point script is working in real time, calculating and storing meta point values correctly as the source point updates, but the historical values are not being saved during the history regeneration process, and this started with the update to 3.3.1.
This is regenerating a point from June 1 to now and only 0's are being stored.
And just to show the script is working, these are the latest values.
Re-running this same meta point without deleting before the run, it seems to be working now, so I will try the delete beforehand again and see if this replicates the zeros.
-
Notice the points-to-be-written history on this process.
Around 1am was the 260000 value, and that's when it crashed. I have been re-running the point for the past 30 mins and values hover between 10 and 60.
So what would you guess is happening here, considering that the crashed run was a delete-before run and it is now running with no delete before? lol, it has not finished yet, so I shouldn't speculate.
Around the same time frame it had just calculated a big change requiring many point values to be written in a short period, and we can see that shortly after this it failed. Is it possible these events are a factor in the running out of memory? If I put a sleep in the loop after a big calculation, I do not need to bring a pointstobewritten variable into the context, thereby avoiding the issue history regen has with not ignoring non-updating points (see the sketch below).
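A minimal sketch of that idea, counting the script's own set() calls and sleeping on a fixed interval instead of reading a points-to-be-written point (the counter, threshold, and sleep length are illustrative assumptions):
var written = 0; // values queued by this run so far
var baseTime = myvar.time; // timestamp of the point's latest value
for (var back = 5; back <= 2880; back += 5) {
    myvar.set(xval, baseTime - back * 1000);
    written++;
    if (written % 500 === 0)
        RuntimeManager.sleep(2000); // fixed pause, no extra context point required
}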
-
And this is what all the fuss is about... This is a complete data set for the entire period. It had a spike where we lost real data at the Modbus device level from power outages etc., and the system just calculated a larger value. Ideally I want to push the fractioned values back into consecutive previous timestamps to smooth out the consumption. Since we don't have the real values it is an approximation, but it won't reduce the real consumption the way clipping the value would, since it is a summation rather than a delta as it would be with actual meter values. This resolves our issue with untimely meter resets cleanly.
I also confirm my earlier suspicion: I ran another point with this option checked for Jan 1/18 to now, with the result below, so the "Delete existing data in range" option causes the 0 issue and no results are being saved to the point values, only 0's.
Nevertheless, this is good news and I will try the script control loop again to automate the 140 I have left. The purge loop is instant lol ... regen is a 2-day process serially :) -
I have a loop running for the first 10 meta points.. I wonder if I could run multiple simultaneous meta script loops, say 3 loops of 45 points each, through validate in different windows, or will having them open in different windows cause Mango issues? I have run up to 6 simultaneous point histories in the past, although not on v3.