Meta Point - Timestamp Not Constant
-
Hi Felicia,
Interesting question!
I wonder a few things:
- Do any of the context points of this meta point have a value at 16:51:11?
- Is the meta point executed on a cron or on Context update?
I don't think you need to set
TIMESTAMP=point.time
The point value that triggers the execution will give the meta point's new value its time if TIMESTAMP is not set. Possibly that's what occurred? That is, the event for one of the points in the context fired before the update to the point you take the timestamp from.
The odd thing to me is that the interval is shorter. If it were something like a garbage collection pausing something, I would expect the interval to be longer (and the meta point would still have used the timestamp of the value that triggered its execution, so I would expect that to be invisible). The thing to do would be to provide the JSON of the meta point and check the virtual data for these odd timestamps. What's the polling rate of your virtual data source?
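To illustrate the default behaviour described above, here is a plain Node.js simulation (not the Mango API; the function name, update object, and timestamp are all made up for the example):

```javascript
// Simulation of how a meta point picks its timestamp: if the script never
// sets TIMESTAMP, the new value inherits the time of the context update
// that triggered the execution.
function runMetaPoint(triggerUpdate, script) {
  var result = script(triggerUpdate); // script returns { value, TIMESTAMP? }
  var time = result.TIMESTAMP !== undefined
    ? result.TIMESTAMP     // the script explicitly overrode the time
    : triggerUpdate.time;  // default: use the triggering event's time
  return { value: result.value, time: time };
}

var update = { value: 42, time: 1526021471000 }; // hypothetical trigger event
// A script body that does not set TIMESTAMP:
var out = runMetaPoint(update, function (u) { return { value: u.value * 2 }; });
console.log(out.value, out.time); // 84 1526021471000
```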
-
Hi Phil,
I don't think any of the context points of this meta point have a value at 16:51:11. The meta point is executed on context update, so most of the meta point's timestamps follow the virtual points. The polling rate for the virtual data source is 3 seconds. Below is an example of the meta point:
{
"dataPoints":[
{
"xid":"PlantOperation2",
"name":"Condenser Heat Reject RT",
"enabled":true,
"loggingType":"ON_TS_CHANGE",
"intervalLoggingPeriodType":"MINUTES",
"intervalLoggingType":"INSTANT",
"purgeType":"YEARS",
"pointLocator":{
"dataType":"NUMERIC",
"updateEvent":"NONE",
"contextUpdateEvent":"CONTEXT_LOGGED",
"context":[
{
"varName":"MainHeader1",
"dataPointXid":"MainHeader1",
"updateContext":true
},
{
"varName":"MainHeader2",
"dataPointXid":"MainHeader2",
"updateContext":false
},
{
"varName":"MainHeader3",
"dataPointXid":"MainHeader3",
"updateContext":false
},
{
"varName":"MainHeader4",
"dataPointXid":"MainHeader4",
"updateContext":false
},
{
"varName":"MainHeader5",
"dataPointXid":"MainHeader5",
"updateContext":false
},
{
"varName":"MainHeader6",
"dataPointXid":"MainHeader6",
"updateContext":false
},
{
"varName":"PowerInput1",
"dataPointXid":"PowerInput1",
"updateContext":false
},
{
"varName":"PowerInput2",
"dataPointXid":"PowerInput2",
"updateContext":false
},
{
"varName":"PowerInput3",
"dataPointXid":"PowerInput3",
"updateContext":false
},
{
"varName":"PowerInput4",
"dataPointXid":"PowerInput4",
"updateContext":false
}
],
"logLevel":"NONE",
"variableName":"PlantOperation2",
"executionDelaySeconds":0,
"script":"TIMESTAMP = MainHeader1.time; return (MainHeader6.value*4.19*(MainHeader5.value-MainHeader4.value)/3.517);",
"scriptPermissions":{
"customPermissions":"",
"dataPointReadPermissions":"superadmin",
"dataPointSetPermissions":"superadmin",
"dataSourcePermissions":"superadmin"
},
"settable":false,
"updateCronPattern":""
},
"eventDetectors":[
],
"plotType":"SPLINE",
"rollup":"NONE",
"unit":"",
"simplifyType":"NONE",
"chartColour":"",
"chartRenderer":null,
"dataSourceXid":"MS_Building Cooling",
"defaultCacheSize":1,
"deviceName":"PlantOperation",
"discardExtremeValues":false,
"discardHighLimit":1.7976931348623157E308,
"discardLowLimit":-1.7976931348623157E308,
"intervalLoggingPeriod":1,
"intervalLoggingSampleWindowSize":0,
"overrideIntervalLoggingSamples":false,
"preventSetExtremeValues":false,
"purgeOverride":false,
"purgePeriod":1,
"readPermission":"",
"setExtremeHighLimit":1.7976931348623157E308,
"setExtremeLowLimit":-1.7976931348623157E308,
"setPermission":"",
"tags":{
},
"textRenderer":{
"type":"ANALOG",
"useUnitAsSuffix":true,
"unit":"",
"renderedUnit":"",
"format":"0.00"
},
"tolerance":0.0
}
]
}

Please let me know if my setup is not correct. My requirement is to have the same rows for the virtual and meta points without rollup, and the meta point calculation should be correct when compared to my manual calculation.
Thank you.
Regards,
Felicia
-
Your setup looks fine. From what I see, I really expect there to be a value in MainHeader1 at the time you are asking about. That's the only way you would have gotten this result.
-
Hi Phil,
Definitely there is a value in MainHeader1 as the polling is every 3 minutes. However, the timestamps are at 1-minute intervals, so the timestamp of the meta point should be constant. I do not understand why the timestamp of the meta point is not constant, since it should be based on the timestamp of the virtual point.
Is there any other setup that makes sure the virtual and meta timestamps are the same? The meta point value should be based on the value of the virtual point.
Thanks.
Regards,
Felicia
-
Definitely there is a value in MainHeader1 as the polling is every 3 minutes.
You said 3 seconds the first time. And, I wanted to know if there was a value at that exact time that you're asking about. I think in the original post you said there was.
You can remove the
TIMESTAMP=MainHeader1.time;
portion of the script, as it will only introduce cache considerations, when the meta point's default action would be to use the timestamp that generated the event. Since your update events are CONTEXT_LOGGED, this will be the timestamp recorded for that series. So, I would say remove that and regenerate the history, and you won't see this.
-
Hi Phil,
Thanks for your prompt reply. Sorry for the typo; it should be every 3 seconds instead of every 3 minutes.
I removed the timestamp and it seems to resolve the current issue I was facing. However, it doesn't resolve the other issue I am having with the manual calculation: the result of the meta point is not the same as the manually calculated one. The result is occasionally off. Below are my results for the calculation:
Time              Manual Calc      From Mango       Variance
10 05 2018 13:27 543.23495708 543.23495708 0.00000000
10 05 2018 13:28 549.48000893 549.48000893 0.00000000
10 05 2018 13:29 535.12357546 535.12357546 0.00000000
10 05 2018 13:30 539.55963913 539.55963913 0.00000000
10 05 2018 13:31 532.14960735 532.14960735 0.00000000
10 05 2018 13:32 554.78660457 554.78660457 0.00000000
10 05 2018 13:33 532.78220782 532.78220782 0.00000000
10 05 2018 13:34 561.12353174 553.76835358 7.35517816
10 05 2018 13:35 551.65939993 551.65939993 0.00000000
10 05 2018 13:36 540.21521998 540.21521998 0.00000000
10 05 2018 13:37 540.04708387 540.04708387 0.00000000
10 05 2018 13:38 529.39945998 529.39945998 0.00000000
10 05 2018 13:39 530.50840625 536.65520416 -6.14679791
10 05 2018 13:40 539.35237634 539.35237634 0.00000000
10 05 2018 13:41 585.95006258 585.95006258 0.00000000
10 05 2018 13:42 572.85304086 572.85304086 0.00000000
10 05 2018 13:43 579.10346793 579.10346793 0.00000000
10 05 2018 13:44 566.22326928 561.96256136 4.26070792
10 05 2018 13:45 545.42791329 545.42791329 0.00000000
10 05 2018 13:46 579.51836544 579.51836544 0.00000000
10 05 2018 13:47 562.71974977 562.71974977 0.00000000
10 05 2018 13:48 596.57930691 596.57930691 0.00000000
10 05 2018 13:49 606.39143280 606.39143280 0.00000000
10 05 2018 13:50 610.66484066 610.66484066 0.00000000
10 05 2018 13:51 588.74662065 588.74662065 0.00000000
10 05 2018 13:52 561.65815802 561.65815802 0.00000000
10 05 2018 13:53 541.38559756 541.38559756 0.00000000
10 05 2018 13:54 552.09730558 552.09730558 0.00000000
10 05 2018 13:55 535.06272974 535.06272974 0.00000000
10 05 2018 13:56 527.84615008 527.84615008 0.00000000
10 05 2018 13:57 531.92463244 531.92463244 0.00000000
10 05 2018 13:58 551.87285383 551.87285383 0.00000000
10 05 2018 13:59 569.47098284 569.47098284 0.00000000
10 05 2018 14:00 585.64340645 585.64340645 0.00000000
10 05 2018 14:01 588.84734714 588.84734714 0.00000000
10 05 2018 14:02 600.76407692 600.76407692 0.00000000
10 05 2018 14:03 606.30053705 606.30053705 0.00000000
10 05 2018 14:04 605.83226834 605.56736162 0.26490672
10 05 2018 14:05 603.11035797 602.20232108 0.90803689

Any idea how to fix this issue?
Felicia
-
Are these values that were computed live, or from a history that has been regenerated? If live, and the manual values are computed after the fact, then it could be that the meta point is processing the update before all the context points have been updated. You could introduce a 1-second execution delay in that situation. I am certain the arithmetic is doing what it has been configured to do.
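For reference, that delay is the `executionDelaySeconds` field already present in the point locator JSON posted above (a sketch showing only the relevant field):

```json
"pointLocator": {
    "executionDelaySeconds": 1
}
```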
-
Hi Phil,
I don't quite understand what you mean by computed live or a history that has been regenerated. Basically, I set up some virtual points and meta points, where the meta point is calculated from the virtual points. I am checking whether the meta calculation in Mango is correct by doing a manual calculation after exporting the points.
For example:
Virtual points:
- Point 1
- Point 2
Meta point:
- (Point 1 + Point 2) x 2
If Point 1 is 10 and Point 2 is 20, the meta point should be (10 + 20) x 2 = 60.
I extracted the data using Excel reports for the past 30 minutes and then did a manual calculation for comparison. I am not sure if this is what you mean by computed live.
FYI, I made the change to delay by 1 second. The result was worse than with no delay, meaning more variances.
I have been trying a lot of scenarios but without success.
Is there any better setup to achieve the expected result? This is important, as the data will be exported. I need to make sure it is correct when doing the manual calculation, otherwise the report will not be reliable.
Thanks.
Felicia
-
Hi Phil,
Now I get what you mean by computed live or a history that has been regenerated. To answer your question, the data is computed live.
Felicia
-
Part of the issue you'd experience in checking the calculation against the history is that you are interval logging on the virtual points, but .value in the meta script will access the cache of those points' values, which, you said, is updating every 3 seconds.
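A toy illustration of that mismatch, in plain Node.js (not the Mango API; the sample values and times are made up):

```javascript
// A context point is polled every 3 s, but only the sample at the logging
// boundary is recorded. The meta script's `.value` reads the latest cached
// sample, while a manual check on exported data only sees the logged one.
var samples = [
  { time: 0, value: 10 }, // the sample that logging recorded
  { time: 3, value: 11 }  // a newer poll in the cache when the meta fires
];
var logged = samples[0];
var cached = samples[samples.length - 1];

var liveResult = cached.value * 2;   // what the meta point computes live
var manualResult = logged.value * 2; // what the exported data reproduces
console.log(liveResult, manualResult); // 22 20 -> the occasional variance
```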
-
Hi Phil,
Is there any way to make sure that the meta point calculation is based on the value that was logged, instead of the cached values which update every 3 seconds?
Felicia
-
var pvd = this.pvd;
if (!pvd) {
    // Cache the DAO on the script's scope so it is only created once
    pvd = this.pvd = com.serotonin.m2m2.Common.databaseProxy.newPointValueDao();
}
print(p.value);
print(p.time);
// Fetch the most recent *logged* value at or before p.time
var loggedValue = pvd.getPointValueBefore(p.getDataPointWrapper().getId(), p.time + 1);
print(loggedValue.doubleValue); // if numeric point
print(loggedValue.time);
While this could perhaps be more straightforward, I am having a tough time figuring out why one would poll a virtual data source to get an interval logging rollup and then compute something off the interval values. It would seem to make more sense to record all data from the edge data source and use the statistics functions in the scripting environment to compute the interval value you wish to use in the script body. That way, if you have some anomalies, you can still view the original data.
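A rough sketch of that last suggestion inside a meta script, assuming the scripting environment's past() statistics helper is available for the context point (this runs only inside Mango, so treat it as illustrative, not a tested implementation):

```
// Illustrative Mango meta script body: compute the value from the average
// of the last minute of raw MainHeader1 data rather than from a single
// cached or interval-logged snapshot.
var stats = MainHeader1.past(MINUTE, 1);
return stats.average;
```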