Multiple point values inserted via API all use latest timestamp inside metapoint
-
I'll look those up for you right now:
Original point:
[
  { "cached": true, "dataType": "NUMERIC", "value": 27000, "timestamp": 1582245900000, "annotation": "" },
  { "cached": true, "dataType": "NUMERIC", "value": 28000, "timestamp": 1582245000000, "annotation": "" },
  { "cached": true, "dataType": "NUMERIC", "value": 27000, "timestamp": 1582244100000, "annotation": "" },
  { "cached": true, "dataType": "NUMERIC", "value": 27000, "timestamp": 1582243200000, "annotation": "" },
  { "cached": true, "dataType": "NUMERIC", "value": 28000, "timestamp": 1582242300000, "annotation": "" },
Meta point:
[ { "cached": true, "dataType": "ALPHANUMERIC", "value": "{\"serialNo\":\"SERIAL\",\"timestamp\":1582245900000,\"water\":{\"volume\":\"205336\",\"flow\":{\"avg\":27}}}", "timestamp": 1582245900000, "annotation": null }, { "cached": false, "dataType": "ALPHANUMERIC", "value": "{\"serialNo\":\"SERIAL\",\"timestamp\":1582245900000,\"water\":{\"volume\":\"205336\",\"flow\":{\"avg\":27}}}", "timestamp": 1582245000000, "annotation": null }, { "cached": false, "dataType": "ALPHANUMERIC", "value": "{\"serialNo\":\"SERIAL\",\"timestamp\":1582245900000,\"water\":{\"volume\":\"205336\",\"flow\":{\"avg\":27}}}", "timestamp": 1582244100000, "annotation": null }, { "cached": false, "dataType": "ALPHANUMERIC", "value": "{\"serialNo\":\"SERIAL\",\"timestamp\":1582245900000,\"water\":{\"volume\":\"205336\",\"flow\":{\"avg\":27}}}", "timestamp": 1582243200000, "annotation": null }, { "cached": false, "dataType": "ALPHANUMERIC", "value": "{\"serialNo\":\"SERIAL\",\"timestamp\":1582242300000,\"water\":{\"volume\":\"205226\",\"flow\":{\"avg\":28}}}", "timestamp": 1582242300000, "annotation": null }, { "cached": false, "dataType": "ALPHANUMERIC", "value": "{\"serialNo\":\"SERIAL\",\"timestamp\":1582242300000,\"water\":{\"volume\":\"205226\",\"flow\":{\"avg\":28}}}", "timestamp": 1582241400000, "annotation": null }, { "cached": false, "dataType": "ALPHANUMERIC", "value": "{\"serialNo\":\"SERIAL\",\"timestamp\":1582242300000,\"water\":{\"volume\":\"205226\",\"flow\":{\"avg\":28}}}", "timestamp": 1582240500000, "annotation": null }, { "cached": false, "dataType": "ALPHANUMERIC", "value": "{\"serialNo\":\"SERIAL\",\"timestamp\":1582239600000,\"water\":{\"volume\":\"205143\",\"flow\":{\"avg\":28}}}", "timestamp": 1582239600000, "annotation": null },
-
Ok so we know the data is getting into Mango correctly and is being saved into the context point properly, and the point is firing the script each time it gets an update.
So the problem is that the value/time for the context point during the script is the latest value in the PUT for every execution.
This could be caused by:
-
The PUT array being inverted; but I checked the code and it expects the oldest point value to be first in the array (see the ordering sketch below, after the second possibility), which is what you have posted above. Also, I'm not even sure the meta point would fire on a backdated value, so I think we can safely ignore this one.
-
The executions of the script are happening AFTER the PUT has completed and the latest value of the point is the latest value in the PUT. This seems the most likely option.
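For reference on that first possibility, here is a minimal sketch of the oldest-first ordering, reusing only the field names and timestamps from the dump above (the xid/endpoint details are omitted since they aren't shown in this thread; this is purely illustrative, not a full request body):

// Hypothetical PUT body fragment, oldest value first, newest last.
var putBody = [
    { "dataType": "NUMERIC", "value": 28000, "timestamp": 1582242300000, "annotation": "" }, // oldest first
    { "dataType": "NUMERIC", "value": 27000, "timestamp": 1582243200000, "annotation": "" },
    { "dataType": "NUMERIC", "value": 27000, "timestamp": 1582244100000, "annotation": "" },
    { "dataType": "NUMERIC", "value": 28000, "timestamp": 1582245000000, "annotation": "" },
    { "dataType": "NUMERIC", "value": 27000, "timestamp": 1582245900000, "annotation": "" }  // newest last
];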
I would test what the values of the context point are for each execution during a PUT. Something like:
LOG.info(CONTEXT.timestamp);
LOG.info(volume.last(5, true));
If all the values are there for each execution and CONTEXT.timestamp is accurate, you should be able to get the correct value at the time you want. I am pretty confident that the context timestamp is right, because that is the timestamp the result is saved against when the meta script executes.
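As a rough sketch of what "getting the correct value at the time you want" could look like (an illustration only, using methods already shown in this thread; scanning last(n, true) is just one way to do it, and the size of 10 is an arbitrary choice):

// Find the context point's value whose time matches this execution's
// CONTEXT.timestamp, instead of relying on the point's current latest value.
var recent = volume.last(10, true);
var valueAtContextTime = null;
for (var i = 0; i < recent.size(); i++) {
    var pvt = recent.get(i);
    if (pvt.time == CONTEXT.timestamp) {
        valueAtContextTime = pvt.value;
        break;
    }
}
LOG.info("value at " + CONTEXT.timestamp + ": " + valueAtContextTime);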
-
Thanks Terry, we'll find out in the next 20 or so minutes when the next batch fires in!
EDIT: Voila... hrm, should have just left it at those logs:
LOG.info(volume.time);
LOG.info(volume.value);
//var out = buildPost(JSON.stringify(data));
LOG.info(start);
LOG.info(CONTEXT.timestamp);
LOG.info(volume.last(5, true));
INFO 2020-02-21 14:44:47,798 - 1582246800000
INFO 2020-02-21 14:44:47,877 - [PointValueTime(27000.0@2020/02/21 14:30:00.000), PointValueTime(28000.0@2020/02/21 14:15:00.000), PointValueTime(28000.0@2020/02/21 14:00:00.000), PointValueTime(27000.0@2020/02/21 13:45:00.000), PointValueTime(28000.0@2020/02/21 13:30:00.000)]
INFO 2020-02-21 14:44:48,675 - //1582248600000
INFO 2020-02-21 14:44:48,700 - //27000.0
INFO 2020-02-21 14:44:48,713 - //[Date 2019-06-30T12:00:00.000Z]
INFO 2020-02-21 14:44:48,738 - 1582247700000
INFO 2020-02-21 14:44:48,766 - [PointValueTime(27000.0@2020/02/21 14:30:00.000), PointValueTime(28000.0@2020/02/21 14:15:00.000), PointValueTime(28000.0@2020/02/21 14:00:00.000), PointValueTime(27000.0@2020/02/21 13:45:00.000), PointValueTime(28000.0@2020/02/21 13:30:00.000)]
INFO 2020-02-21 14:44:48,910 - //1582248600000
INFO 2020-02-21 14:44:48,910 - //27000.0
INFO 2020-02-21 14:44:48,910 - //[Date 2019-06-30T12:00:00.000Z]
INFO 2020-02-21 14:44:48,910 - 1582248600000
INFO 2020-02-21 14:44:48,910 - [PointValueTime(27000.0@2020/02/21 14:30:00.000), PointValueTime(28000.0@2020/02/21 14:15:00.000), PointValueTime(28000.0@2020/02/21 14:00:00.000), PointValueTime(27000.0@2020/02/21 13:45:00.000), PointValueTime(28000.0@2020/02/21 13:30:00.000)]
INFO 2020-02-21 14:45:52,386 - //1582249500000
INFO 2020-02-21 14:45:52,386 - //28000.0
INFO 2020-02-21 14:45:52,386 - //[Date 2019-06-30T12:00:00.000Z]
INFO 2020-02-21 14:45:52,386 - 1582249500000
INFO 2020-02-21 14:45:52,386 - [PointValueTime(28000.0@2020/02/21 14:45:00.000), PointValueTime(27000.0@2020/02/21 14:30:00.000), PointValueTime(28000.0@2020/02/21 14:15:00.000), PointValueTime(28000.0@2020/02/21 14:00:00.000), PointValueTime(27000.0@2020/02/21 13:45:00.000)]
It honestly looks like it's not updating the way it should be...
-
If I'm reading this right, it looks like you posted the end of the 1st execution's logs and then the next 3 full executions.
I'll discard the first partial log and focus on the last 3.
So the first execution captured is for the PUT value at 3:15 (1582247700000), which is likely the 6th latest value.
INFO 2020-02-21 14:44:48,675 - //1582248600000
INFO 2020-02-21 14:44:48,700 - //27000.0
INFO 2020-02-21 14:44:48,713 - //[Date 2019-06-30T12:00:00.000Z]
INFO 2020-02-21 14:44:48,738 - 1582247700000
INFO 2020-02-21 14:44:48,766 - [PointValueTime(27000.0@2020/02/21 14:30:00.000), PointValueTime(28000.0@2020/02/21 14:15:00.000), PointValueTime(28000.0@2020/02/21 14:00:00.000), PointValueTime(27000.0@2020/02/21 13:45:00.000), PointValueTime(28000.0@2020/02/21 13:30:00.000)]
The second execution is for the PUT value at 3:30 (1582248600000), which at this point is the 5th latest value.
INFO 2020-02-21 14:44:48,910 - //1582248600000
INFO 2020-02-21 14:44:48,910 - //27000.0
INFO 2020-02-21 14:44:48,910 - //[Date 2019-06-30T12:00:00.000Z]
INFO 2020-02-21 14:44:48,910 - 1582248600000
INFO 2020-02-21 14:44:48,910 - [PointValueTime(27000.0@2020/02/21 14:30:00.000), PointValueTime(28000.0@2020/02/21 14:15:00.000), PointValueTime(28000.0@2020/02/21 14:00:00.000), PointValueTime(27000.0@2020/02/21 13:45:00.000), PointValueTime(28000.0@2020/02/21 13:30:00.000)]
The 3rd execution is for the PUT value at 3:45 (1582249500000), which at this point is the 5th latest value. This also shows that the latest value for the point was updated, which can happen if you are running scripts from oldest to newest and inserting newer values during that time.
INFO 2020-02-21 14:45:52,386 - //1582249500000
INFO 2020-02-21 14:45:52,386 - //28000.0
INFO 2020-02-21 14:45:52,386 - //[Date 2019-06-30T12:00:00.000Z]
INFO 2020-02-21 14:45:52,386 - 1582249500000
INFO 2020-02-21 14:45:52,386 - [PointValueTime(28000.0@2020/02/21 14:45:00.000), PointValueTime(27000.0@2020/02/21 14:30:00.000), PointValueTime(28000.0@2020/02/21 14:15:00.000), PointValueTime(28000.0@2020/02/21 14:00:00.000), PointValueTime(27000.0@2020/02/21 13:45:00.000)]
The script executions should happen in the order in which they are submitted to our ordered executor, so this tells me the executions are lagging behind the data insertion. Think of it like a queue of script executions running in parallel with values being added to the point: the values being set on the point will happen faster than the scripts can execute.
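Purely as a conceptual sketch (not Mango's actual executor code), something like this shows why every queued execution can end up seeing the final value of the PUT; it is standalone JavaScript you could run with jjs, for example:

// Stand-in "point history" and stand-in ordered queue of script executions.
// The PUT inserts all of its values before the queued executions run,
// so each execution sees the same (newest) latest value.
var pointHistory = [];
var executionQueue = [];

// The PUT inserts values oldest-to-newest; each insert queues an execution.
[1582247700000, 1582248600000, 1582249500000].forEach(function (ts) {
    pointHistory.push(ts);
    executionQueue.push(ts);   // this becomes CONTEXT.timestamp for that run
});

// The executor drains the queue afterwards; the "latest value" it sees is
// always the last value of the whole PUT.
executionQueue.forEach(function (contextTs) {
    var latestTs = pointHistory[pointHistory.length - 1];
    print("execution for " + contextTs + " sees latest timestamp " + latestTs);
});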
I still think you should be able to use this code to get the value you want to return:
volume.pointValueAt(CONTEXT.timestamp, true);
It seems to me that in every execution the value that would be returned will/must exist, even if it isn't the latest value of the volume point.

UPDATE: I just remembered that even if you are using a cache it may not be large enough to hold what you are inserting. When inserting a lot of values, if they don't go into the cache immediately they can get put into the Batch Queue for writing to the DB. If that happens you won't get them when your script executes. If I am correct, then increasing the queue size on the context point will allow you to retain enough values in memory while they burst in.
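While testing, a small defensive sketch around that lookup might help (assumptions: the same pointValueAt call already used above; the null check and log message are only illustrative):

// Look up the value for this execution's timestamp and log whether it was
// actually found. If it comes back empty during a burst, that points at the
// cache being too small / the value still sitting in the batch write queue.
var pvt = volume.pointValueAt(CONTEXT.timestamp, true);
if (pvt == null) {
    LOG.info("no value found at " + CONTEXT.timestamp + " - cache may be too small");
} else {
    LOG.info("value at " + CONTEXT.timestamp + ": " + pvt.value);
}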
-
Sorry Terry, but I had already done as you instructed to make sure it would all go in:
var ts = CONTEXT.timestamp;
var start = getSeasonStart(Number(ts));
var end = new Date(Number(ts));
var value = volume.pointValueAt(ts, true);
// var xid = volume.getDataPointWrapper().xid;
// var tags = volume.getDataPointWrapper().tags;
var stats = volume.getStats(start.getTime(), end.getTime(), true);
data = {
    "serialNo": "serial",   // id to identify the unit that reports the data
    "timestamp": ts,        // timestamp in UTC ISO!!!
    "water": {
        "volume": String(stats.sum / 1000),   // volume in m3
        "flow": {
            "avg": value.doubleValue ? value.doubleValue * (1 / (15 * 60)).toFixed(3) : 0
        }
    }
};
LOG.info(volume.time);
LOG.info(volume.value);
//var out = buildPost(JSON.stringify(data));
LOG.info(start);
LOG.info(CONTEXT.timestamp);
LOG.info(volume.last(5, true));
return JSON.stringify(data);
-
@terrypacker said in Multiple point values inserted via API all use latest timestamp inside metapoint:
UPDATE: I just remembered that even if you are using a cache it may not be large enough to hold what you are inserting. When inserting a lot of values, if they don't go into the cache immediately they can get put into the Batch Queue for writing to the DB. If that happens you won't get them when your script executes. If I am correct, then increasing the queue size on the context point will allow you to retain enough values in memory while they burst in.
Oh, is that referring to the Default cache size?
-
Yes, what is that set to on the context point?
-
Shoot, that was only set to 1... man oh man.
-
Ok, if possible try setting it to the max size of your PUT batches. It behaves in a semi-complex way and I don't have the code in front of me, but from memory it will expand if asked to in a query, i.e. latest(x, true). I'm not sure when it contracts, but if you set the default size it will never be smaller than that once it fills up.
Regarding memory, point values don't take up a lot of space, so I doubt it will be an issue.
-
Done. I'll monitor this when it comes through in the next 30 minutes. Thanks for your support here!
-
SUCCESS!
[ { "cached": true, "dataType": "ALPHANUMERIC", "value": "{\"serialNo\":\"SERIAL\",\"timestamp\":1582254900000,\"water\":{\"volume\":\"205612\",\"flow\":{\"avg\":27}}}", "timestamp": 1582254900000, "annotation": null }, { "cached": true, "dataType": "ALPHANUMERIC", "value": "{\"serialNo\":\"SERIAL\",\"timestamp\":1582254000000,\"water\":{\"volume\":\"205584\",\"flow\":{\"avg\":28}}}", "timestamp": 1582254000000, "annotation": null }, { "cached": true, "dataType": "ALPHANUMERIC", "value": "{\"serialNo\":\"SERIAL\",\"timestamp\":1582253100000,\"water\":{\"volume\":\"205556\",\"flow\":{\"avg\":28}}}", "timestamp": 1582253100000, "annotation": null }, { "cached": true, "dataType": "ALPHANUMERIC", "value": "{\"serialNo\":\"SERIAL\",\"timestamp\":1582252200000,\"water\":{\"volume\":\"205529\",\"flow\":{\"avg\":27}}}", "timestamp": 1582252200000, "annotation": null }, { "cached": true, "dataType": "ALPHANUMERIC", "value": "{\"serialNo\":\"SERIAL\",\"timestamp\":1582251300000,\"water\":{\"volume\":\"205501\",\"flow\":{\"avg\":28}}}", "timestamp": 1582251300000, "annotation": null }, { "cached": true, "dataType": "ALPHANUMERIC", "value": "{\"serialNo\":\"SERIAL\",\"timestamp\":1582250400000,\"water\":{\"volume\":\"205474\",\"flow\":{\"avg\":27}}}", "timestamp": 1582250400000, "annotation": null }, { "cached": true, "dataType": "ALPHANUMERIC", "value": "{\"serialNo\":\"SERIAL\",\"timestamp\":1582249500000,\"water\":{\"volume\":\"205446\",\"flow\":{\"avg\":28}}}", "timestamp": 1582249500000, "annotation": null }, { "cached": true, "dataType": "ALPHANUMERIC", "value": "{\"serialNo\":\"SERIAL\",\"timestamp\":1582248600000,\"water\":{\"volume\":\"205419\",\"flow\":{\"avg\":27}}}", "timestamp": 1582248600000, "annotation": null }, { "cached": true, "dataType": "ALPHANUMERIC", "value": "{\"serialNo\":\"SERIAL\",\"timestamp\":1582247700000,\"water\":{\"volume\":\"205391\",\"flow\":{\"avg\":28}}}", "timestamp": 1582247700000, "annotation": null }, { "cached": true, "dataType": "ALPHANUMERIC", "value": "{\"serialNo\":\"SERIAL\",\"timestamp\":1582246800000,\"water\":{\"volume\":\"205363\",\"flow\":{\"avg\":28}}}", "timestamp": 1582246800000, "annotation": null } ]
If you're out my way, I'll buy you a drink!
-
Great news. Also no worries, we all really appreciate the help you provide on the forum, so it's good to be able to pay you back.