Multiple point values inserted via API all use latest timestamp inside metapoint
-
@MattFox a few questions to clarify what you are asking.
It seems you have a meta point with many context points and are expecting it to fire many times at different timestamps? Or is this 1 context point that you are expecting to fire the script many times?
The meta point is alphanumeric and saving its value as JSON?
What are the type of data points doing the context update? (i.e. data source type)
What are the logging types of the data points doing the context update? If possible, share your script or pseudocode to explain what is happening.
Just one thought after making some assumptions:
- You have a single context point that is being bulk updated via a 3rd party API
- That API is updating the point in such a way that it is not inserting data in time order (inverted order, I suspect)
- The data point is only firing an update event for the 1st value, as the rest of the values are being inserted as backdates.
-
Thanks Terry, sorry, I'll explain further. There are 300 points doing this, as below:
1) A single numeric point is updated via the API using the PUT method for multiple values.
2) This numeric point is a context point on a meta data point, which takes the current value and timestamp and inserts them into a JSON string to be stored with other information.
3) Usually there are 3 or 4 point values (an hour's worth of data in 15-minute increments) pushed into the system at once.
4) The meta data point fires once for each individual point value, but appears to store only the latest value and timestamp each time.
5) Your suggestion of inverted order is a valid one and I shall investigate.
Code:
var data = {};
var start = getSeasonStart(Number(volume.time));
var end = new Date(Number(volume.time));
var value = volume.pointValueAt(volume.time, false);
var stats = volume.getStats(start.getTime(), end.getTime(), true);
data = {
    "serialNo": "SERIALCODEHERE", // id to identify the unit that reports the data
    "timestamp": volume.time,     // timestamp in UTC!!!
    "water": {
        "volume": String(stats.sum/1000), // volume in m3
        "flow": {
            "avg": volume.value ? volume.value * (1/(15*60)).toFixed(3) : 0
        }
    }
};
LOG.info(volume.time);
LOG.info(volume.value);
return JSON.stringify(data);
EDIT: Have checked the incoming data in the logs and can confirm everything is in chronological order. It looks like all the values come in, then the meta script fires once for each time the point was updated...
Different point data, but same 3rd party provider:
{ xid: 'RainGauge[110100612]', timestamp: 1574362800000, value: 0, dataType: 'NUMERIC', annotation: '' },
{ xid: 'RainGauge[110100612]', timestamp: 1574366400000, value: 0, dataType: 'NUMERIC', annotation: '' },
{ xid: 'RainGauge[110100612]', timestamp: 1574370000000, value: 0, dataType: 'NUMERIC', annotation: '' },
{ xid: 'RainGauge[110100612]', timestamp: 1574373600000, value: 0, dataType: 'NUMERIC', annotation: '' }
-
@MattFox thanks for the info. Starting to make sense now. A few more questions:
-
What is the REST endpoint you are using?
PUT /rest/v1/point-values
I can't remember if there was a v2 one for that version of Mango.
-
What is the data source type of the points you are setting? i.e. what type of data source does
RainGauge[110100612]
belong to?
-
What are the logging properties of that point? I presume they are not log all data.
-
-
- correct
PUT /v1/point-values
- DataFile; the system was using FTP but they dropped it in favour of web-based data, in the hope of getting data from their system faster. It meant their site gear could push directly to my API
Log All Data
- correct
-
I'm going to assume you know all this, but just to confirm:
To have a data point get its value set via a DataFile data source you must have the poll class's setPointValueImpl() method overridden to return true. If you don't have a poll class on that data source, you would be seeing events raised instead.
I see you using this:
var value = volume.pointValueAt(volume.time,false);
Which indicates not to use the cache. When data is being pushed into the database quickly, the value you are looking for might not be in the database yet but will be in the cache. From the rest of the script, though, it seems you aren't using that value anyway. I would expect this:
LOG.info(volume.time); LOG.info(volume.value);
To return the exact latest value for the point at the time it is used, which is where I think your problem is. It is possible that the meta scripts are firing at a much slower rate than the import is happening. Basically, by the time your first context point value fires the script, the entire import is done and the latest value for that point is the last one in the imported PUT list. I would try something like this to get the value from the cache at the timestamp at which the context point fired the script. This will work assuming that volume is the only point that can update this context.
var value = volume.pointValueAt(CONTEXT.timestamp, true);
Basically the value of
CONTEXT.timestamp
contains the timestamp of the value from the context point that caused the execution. There is also a runtime
member which contains the wall-clock time at which the script executed. I believe CONTEXT exists in your version of Mango, but I'm tracing code in 3.7+. I would start by logging the value of CONTEXT in an actual execution and see if it is there. If not, I can dig through the 3.5.x branch's code and figure out how the timestamp of the context update is set.
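For example, a minimal diagnostic sketch (it assumes CONTEXT exposes timestamp and runtime members in your version, which is exactly what this check is meant to confirm):
LOG.info("CONTEXT: " + CONTEXT);                       // does the object exist at all?
LOG.info("context timestamp: " + CONTEXT.timestamp);   // time of the value that triggered this run
LOG.info("context runtime: " + CONTEXT.runtime);       // wall-clock time the script actually executed
LOG.info("latest volume: " + volume.value + " @ " + volume.time);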
-
@terrypacker said in Multiple point values inserted via API all use latest timestamp inside metapoint:
class's setPointValueImpl() method overridden to return true.
import com.infiniteautomation.datafilesource.rt.AbstractPollClass;
import com.infiniteautomation.datafilesource.rt.PollClassInvoker;
import com.serotonin.m2m2.rt.dataImage.DataPointRT;
import com.serotonin.m2m2.rt.dataImage.PointValueTime;
import com.serotonin.m2m2.rt.dataImage.SetPointSource;

public class PutPollClass extends AbstractPollClass {
    @Override
    public void run(long time, PollClassInvoker invoker, String[] arguments) {}

    @Override
    public boolean setPointValueImpl(DataPointRT dataPoint, PointValueTime valueTime, SetPointSource source) {
        return true;
    }
}
is my poll class method.
I would try to use something like this, to get the value in the cache at the timestamp the context point fired the script. This will work assuming that volume is the only point that can update this context.
var value = volume.pointValueAt(CONTEXT.timestamp, true);
I'll give it a go. If I recall correctly, this is an older version of my code. I'm just trying to get my head around where I'm going wrong - it still produces the same issue mentioned above:
var data = {};
var ts = CONTEXT.getTimestamp();
var start = getSeasonStart(ts);
var end = new Date(Number(ts));
var value = volume.pointValueAt(ts, true);
var xid = volume.getDataPointWrapper().xid;
var tags = volume.getDataPointWrapper().tags;
var stats = volume.getStats(start.getTime(), end.getTime(), true);
data = {
    "serialNo": "******", // id to identify the unit that reports the data
    "timestamp": ts,      // timestamp in UTC ISO!!!
    "water": {
        "volume": String(stats.sum/1000), // volume in m3
        "flow": {
            "avg": value ? value.doubleValue * (1/(15*60)).toFixed(3) : 0
        }
    }
};
//var out = buildPost(JSON.stringify(data));
LOG.info(start);
return JSON.stringify(data);
-
@MattFox I forgot to ask. Have you confirmed that the script is executing every time your context point is updating?
I know there is some logic in the backend that only lets it execute once if it is fired multiple times at the same millisecond timestamp. We do that to stop the script running repeatedly at the same timestamp when many context points fire at the same time. There could be a bug around that where it isn't using the correct timestamp as a reference for when to fire.
-
This is what swagger shows from a latest point values export:
[ { "cached": true, "dataType": "ALPHANUMERIC", "value": "{\"serialNo\":\"serialHERE\",\"timestamp\":1582242300000,\"water\":{\"volume\":\"205226\",\"flow\":{\"avg\":28}}}", "timestamp": 1582242300000, "annotation": null }, { "cached": false, "dataType": "ALPHANUMERIC", "value": "{\"serialNo\":\"serialHERE\",\"timestamp\":1582242300000,\"water\":{\"volume\":\"205226\",\"flow\":{\"avg\":28}}}", "timestamp": 1582241400000, "annotation": null }, { "cached": false, "dataType": "ALPHANUMERIC", "value": "{\"serialNo\":\"serialHERE\",\"timestamp\":1582242300000,\"water\":{\"volume\":\"205226\",\"flow\":{\"avg\":28}}}", "timestamp": 1582240500000, "annotation": null }, { "cached": false, "dataType": "ALPHANUMERIC", "value": "{\"serialNo\":\"serialHERE\",\"timestamp\":1582239600000,\"water\":{\"volume\":\"205143\",\"flow\":{\"avg\":28}}}", "timestamp": 1582239600000, "annotation": null },
The timestamp the meta point saves each value at is definitely correct, but the rest (the JSON payload) doesn't line up...
EDIT: I'm half wondering if I have to go back to making it do individual PUTs... but that seems like murder on this system...
-
That's interesting. I'd like to know what the latest point values export from swagger is for the context point. Does that line up with the PUT requests?
I've traced most of the logic path and can't see how this is happening. There is a bit of code in the REST controller where if it can't find the timestamp in the model it will use the current time. This would cause all those context events to be at the same timestamp. But I'm looking at your PUT JSON and it looks right.
To confirm whether this is happening: in an SQL database you would see the point being updated having big blocks of values at the same timestamp, with only one entry for each PUT, stamped at the time of the PUT request.
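A rough way to watch for that from the script side too (a sketch only; it assumes volume.last(n, true) returns a list of PointValueTime objects whose time getter is reachable from the script, as in the logs further down):
// Log any values stacking up at the same timestamp, which would indicate the
// REST controller fell back to "current time" for every value in the PUT.
var recent = volume.last(20, true); // include the cache
var seen = {};
for (var i = 0; i < recent.size(); i++) {
    var t = recent.get(i).time;
    if (seen[t]) LOG.info("duplicate timestamp in history: " + t);
    seen[t] = true;
}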
-
I'll look for you right now:
Original point:
[
  { "cached": true, "dataType": "NUMERIC", "value": 27000, "timestamp": 1582245900000, "annotation": "" },
  { "cached": true, "dataType": "NUMERIC", "value": 28000, "timestamp": 1582245000000, "annotation": "" },
  { "cached": true, "dataType": "NUMERIC", "value": 27000, "timestamp": 1582244100000, "annotation": "" },
  { "cached": true, "dataType": "NUMERIC", "value": 27000, "timestamp": 1582243200000, "annotation": "" },
  { "cached": true, "dataType": "NUMERIC", "value": 28000, "timestamp": 1582242300000, "annotation": "" },
Meta point:
[ { "cached": true, "dataType": "ALPHANUMERIC", "value": "{\"serialNo\":\"SERIAL\",\"timestamp\":1582245900000,\"water\":{\"volume\":\"205336\",\"flow\":{\"avg\":27}}}", "timestamp": 1582245900000, "annotation": null }, { "cached": false, "dataType": "ALPHANUMERIC", "value": "{\"serialNo\":\"SERIAL\",\"timestamp\":1582245900000,\"water\":{\"volume\":\"205336\",\"flow\":{\"avg\":27}}}", "timestamp": 1582245000000, "annotation": null }, { "cached": false, "dataType": "ALPHANUMERIC", "value": "{\"serialNo\":\"SERIAL\",\"timestamp\":1582245900000,\"water\":{\"volume\":\"205336\",\"flow\":{\"avg\":27}}}", "timestamp": 1582244100000, "annotation": null }, { "cached": false, "dataType": "ALPHANUMERIC", "value": "{\"serialNo\":\"SERIAL\",\"timestamp\":1582245900000,\"water\":{\"volume\":\"205336\",\"flow\":{\"avg\":27}}}", "timestamp": 1582243200000, "annotation": null }, { "cached": false, "dataType": "ALPHANUMERIC", "value": "{\"serialNo\":\"SERIAL\",\"timestamp\":1582242300000,\"water\":{\"volume\":\"205226\",\"flow\":{\"avg\":28}}}", "timestamp": 1582242300000, "annotation": null }, { "cached": false, "dataType": "ALPHANUMERIC", "value": "{\"serialNo\":\"SERIAL\",\"timestamp\":1582242300000,\"water\":{\"volume\":\"205226\",\"flow\":{\"avg\":28}}}", "timestamp": 1582241400000, "annotation": null }, { "cached": false, "dataType": "ALPHANUMERIC", "value": "{\"serialNo\":\"SERIAL\",\"timestamp\":1582242300000,\"water\":{\"volume\":\"205226\",\"flow\":{\"avg\":28}}}", "timestamp": 1582240500000, "annotation": null }, { "cached": false, "dataType": "ALPHANUMERIC", "value": "{\"serialNo\":\"SERIAL\",\"timestamp\":1582239600000,\"water\":{\"volume\":\"205143\",\"flow\":{\"avg\":28}}}", "timestamp": 1582239600000, "annotation": null },
-
Ok so we know the data is getting into Mango correctly and is being saved into the context point properly, and the point is firing the script each time it gets an update.
So the problem is that the value/time for the context point during the script is the latest value in the PUT for every execution.
This could be caused by:
-
The PUT array being inverted. But I checked the code and it expects the oldest point value to be first in the array, which is what you have posted above. Also, I'm not even sure the meta point would fire on a backdated value. So I think we can safely ignore this one.
-
The executions of the script are happening AFTER the PUT has completed and the latest value of the point is the latest value in the PUT. This seems the most likely option.
I would test to see what the values of the context point are for each execution during a PUT. Something like:
LOG.info(CONTEXT.timestamp); LOG.info(volume.last(5, true));
If all the values are there for each execution and the
CONTEXT.timestamp
is accurate, you should be able to get the correct value at the time you want. I am pretty confident that the context timestamp is right, because that is what is used to save the value against when the meta script executes.
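Putting those two pieces together, roughly (a sketch only, assuming volume is the only context point driving this script):
var ts = CONTEXT.timestamp;                  // time of the triggering context update
LOG.info("fired for: " + ts);
LOG.info("last 5 (incl. cache): " + volume.last(5, true));
var pvt = volume.pointValueAt(ts, true);     // look in the cache as well as the database
if (pvt != null) {
    LOG.info("value at context time: " + pvt.doubleValue + " @ " + pvt.time);
}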
-
Thanks Terry, we'll find out in the next 20 or so minutes when the next batch fires in!
EDIT: Voila. Hrm, should have just let it be those logs...
LOG.info(volume.time);
LOG.info(volume.value);
//var out = buildPost(JSON.stringify(data));
LOG.info(start);
LOG.info(CONTEXT.timestamp);
LOG.info(volume.last(5, true));
INFO 2020-02-21 14:44:47,798 - 1582246800000
INFO 2020-02-21 14:44:47,877 - [PointValueTime(27000.0@2020/02/21 14:30:00.000), PointValueTime(28000.0@2020/02/21 14:15:00.000), PointValueTime(28000.0@2020/02/21 14:00:00.000), PointValueTime(27000.0@2020/02/21 13:45:00.000), PointValueTime(28000.0@2020/02/21 13:30:00.000)]
INFO 2020-02-21 14:44:48,675 - //1582248600000
INFO 2020-02-21 14:44:48,700 - //27000.0
INFO 2020-02-21 14:44:48,713 - //[Date 2019-06-30T12:00:00.000Z]
INFO 2020-02-21 14:44:48,738 - 1582247700000
INFO 2020-02-21 14:44:48,766 - [PointValueTime(27000.0@2020/02/21 14:30:00.000), PointValueTime(28000.0@2020/02/21 14:15:00.000), PointValueTime(28000.0@2020/02/21 14:00:00.000), PointValueTime(27000.0@2020/02/21 13:45:00.000), PointValueTime(28000.0@2020/02/21 13:30:00.000)]
INFO 2020-02-21 14:44:48,910 - //1582248600000
INFO 2020-02-21 14:44:48,910 - //27000.0
INFO 2020-02-21 14:44:48,910 - //[Date 2019-06-30T12:00:00.000Z]
INFO 2020-02-21 14:44:48,910 - 1582248600000
INFO 2020-02-21 14:44:48,910 - [PointValueTime(27000.0@2020/02/21 14:30:00.000), PointValueTime(28000.0@2020/02/21 14:15:00.000), PointValueTime(28000.0@2020/02/21 14:00:00.000), PointValueTime(27000.0@2020/02/21 13:45:00.000), PointValueTime(28000.0@2020/02/21 13:30:00.000)]
INFO 2020-02-21 14:45:52,386 - //1582249500000
INFO 2020-02-21 14:45:52,386 - //28000.0
INFO 2020-02-21 14:45:52,386 - //[Date 2019-06-30T12:00:00.000Z]
INFO 2020-02-21 14:45:52,386 - 1582249500000
INFO 2020-02-21 14:45:52,386 - [PointValueTime(28000.0@2020/02/21 14:45:00.000), PointValueTime(27000.0@2020/02/21 14:30:00.000), PointValueTime(28000.0@2020/02/21 14:15:00.000), PointValueTime(28000.0@2020/02/21 14:00:00.000), PointValueTime(27000.0@2020/02/21 13:45:00.000)]
It honestly looks like it's not updating the way it should be...
-
If I'm reading this right, it looks like you posted the end of the 1st execution's logs and then the next 3 full executions.
I'll discard the first partial log and focus on the last 3.
So the first execution captured is for the PUT value at 3:15 (1582247700000) which is likely the 6th latest value.
INFO 2020-02-21 14:44:48,675 - //1582248600000
INFO 2020-02-21 14:44:48,700 - //27000.0
INFO 2020-02-21 14:44:48,713 - //[Date 2019-06-30T12:00:00.000Z]
INFO 2020-02-21 14:44:48,738 - 1582247700000
INFO 2020-02-21 14:44:48,766 - [PointValueTime(27000.0@2020/02/21 14:30:00.000), PointValueTime(28000.0@2020/02/21 14:15:00.000), PointValueTime(28000.0@2020/02/21 14:00:00.000), PointValueTime(27000.0@2020/02/21 13:45:00.000), PointValueTime(28000.0@2020/02/21 13:30:00.000)]
The second execution is for the PUT value at 3:30 (1582248600000), which at this point is the 5th latest value.
INFO 2020-02-21 14:44:48,910 - //1582248600000
INFO 2020-02-21 14:44:48,910 - //27000.0
INFO 2020-02-21 14:44:48,910 - //[Date 2019-06-30T12:00:00.000Z]
INFO 2020-02-21 14:44:48,910 - 1582248600000
INFO 2020-02-21 14:44:48,910 - [PointValueTime(27000.0@2020/02/21 14:30:00.000), PointValueTime(28000.0@2020/02/21 14:15:00.000), PointValueTime(28000.0@2020/02/21 14:00:00.000), PointValueTime(27000.0@2020/02/21 13:45:00.000), PointValueTime(28000.0@2020/02/21 13:30:00.000)]
The 3rd execution is for the PUT value at 3:45 (1582249500000) which at this point is the 5th latest value. This also shows that the latest value for the point was updated, which can happen if you are running scripts from oldest to newest and inserting newer values during this time.
INFO 2020-02-21 14:45:52,386 - //1582249500000
INFO 2020-02-21 14:45:52,386 - //28000.0
INFO 2020-02-21 14:45:52,386 - //[Date 2019-06-30T12:00:00.000Z]
INFO 2020-02-21 14:45:52,386 - 1582249500000
INFO 2020-02-21 14:45:52,386 - [PointValueTime(28000.0@2020/02/21 14:45:00.000), PointValueTime(27000.0@2020/02/21 14:30:00.000), PointValueTime(28000.0@2020/02/21 14:15:00.000), PointValueTime(28000.0@2020/02/21 14:00:00.000), PointValueTime(27000.0@2020/02/21 13:45:00.000)]
The script executions should happen in the order in which they are submitted to our ordered executor, so this tells me that the executions are lagging behind the data insertion. Think of it like a queue of script executions that are happening in parallel to values being added to the point. And the values being set on the point will happen faster than the scripts can execute.
I still think you could use this code to get the value you want to return:
volume.pointValueAt(CONTEXT.timestamp, true);
It seems to me that in every execution the value that would be returned will/must exist, even if it isn't the latest value of the volume point.
UPDATE: I just remembered that even if you are using a cache it may not be large enough to hold what you are inserting. When inserting a lot of values if they don't go into the cache immediately they can get put into the Batch Queue for writing to the DB. If that happens you won't get them if your script is executing. If I am correct then increasing the queue size on the context point will allow you to retain enough values in memory while they burst in.
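In script terms the symptom would be pointValueAt returning null for a timestamp you know was just PUT, so a guard along these lines (a sketch only) would make that visible rather than silent:
var pvt = volume.pointValueAt(CONTEXT.timestamp, true);
if (pvt == null) {
    // Not in the cache and not yet flushed from the batch write queue to the database.
    // Increasing the context point's cache size should stop this happening.
    LOG.info("no value found at context time " + CONTEXT.timestamp + " - cache too small?");
}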
-
Sorry Terry, but I did as you instructed earlier, to ensure it would all go in:
var ts = CONTEXT.timestamp;
var start = getSeasonStart(Number(ts));
var end = new Date(Number(ts));
var value = volume.pointValueAt(ts, true);
// var xid = volume.getDataPointWrapper().xid;
// var tags = volume.getDataPointWrapper().tags;
var stats = volume.getStats(start.getTime(), end.getTime(), true);
data = {
    "serialNo": "serial", // id to identify the unit that reports the data
    "timestamp": ts,      // timestamp in UTC ISO!!!
    "water": {
        "volume": String(stats.sum/1000), // volume in m3
        "flow": {
            "avg": value.doubleValue ? value.doubleValue * (1/(15*60)).toFixed(3) : 0
        }
    }
};
LOG.info(volume.time);
LOG.info(volume.value);
//var out = buildPost(JSON.stringify(data));
LOG.info(start);
LOG.info(CONTEXT.timestamp);
LOG.info(volume.last(5, true));
return JSON.stringify(data);
-
@terrypacker said in Multiple point values inserted via API all use latest timestamp inside metapoint:
UPDATE: I just remembered that even if you are using a cache it may not be large enough to hold what you are inserting. When inserting a lot of values if they don't go into the cache immediately they can get put into the Batch Queue for writing to the DB. If that happens you won't get them if your script is executing. If I am correct then increasing the queue size on the context point will allow you to retain enough values in memory while they burst in.
Oh, is that referring to the Default cache size?
-
Yes. What is that set to on the context point?
-
Shoot, that was only 1... man oh man.
-
OK, if possible try setting it to the max size of your PUT batches. It behaves in a semi-complex way and I don't have the code in front of me, but from memory it will expand if asked to in a query, i.e. latest(x, true). I'm not sure when it contracts, but if you set the default size it will never be smaller than that once it fills up.
Regarding memory: point values don't take up a lot of space, so I doubt it will be an issue.
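If raising the default cache size weren't an option, asking for a larger window at the top of the script should also expand the cache, per the behaviour described above (a sketch only; the batch size here is a placeholder for the largest number of values in one PUT):
// Request more history than the default cache size so the cache grows to hold the burst.
var BATCH_SIZE = 8; // placeholder: largest number of values expected in a single PUT
volume.last(BATCH_SIZE, true);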
-
Done. I'll monitor this when the next batch comes through in the next 30 minutes. Thanks for your support here!
-
SUCCESS!
[ { "cached": true, "dataType": "ALPHANUMERIC", "value": "{\"serialNo\":\"SERIAL\",\"timestamp\":1582254900000,\"water\":{\"volume\":\"205612\",\"flow\":{\"avg\":27}}}", "timestamp": 1582254900000, "annotation": null }, { "cached": true, "dataType": "ALPHANUMERIC", "value": "{\"serialNo\":\"SERIAL\",\"timestamp\":1582254000000,\"water\":{\"volume\":\"205584\",\"flow\":{\"avg\":28}}}", "timestamp": 1582254000000, "annotation": null }, { "cached": true, "dataType": "ALPHANUMERIC", "value": "{\"serialNo\":\"SERIAL\",\"timestamp\":1582253100000,\"water\":{\"volume\":\"205556\",\"flow\":{\"avg\":28}}}", "timestamp": 1582253100000, "annotation": null }, { "cached": true, "dataType": "ALPHANUMERIC", "value": "{\"serialNo\":\"SERIAL\",\"timestamp\":1582252200000,\"water\":{\"volume\":\"205529\",\"flow\":{\"avg\":27}}}", "timestamp": 1582252200000, "annotation": null }, { "cached": true, "dataType": "ALPHANUMERIC", "value": "{\"serialNo\":\"SERIAL\",\"timestamp\":1582251300000,\"water\":{\"volume\":\"205501\",\"flow\":{\"avg\":28}}}", "timestamp": 1582251300000, "annotation": null }, { "cached": true, "dataType": "ALPHANUMERIC", "value": "{\"serialNo\":\"SERIAL\",\"timestamp\":1582250400000,\"water\":{\"volume\":\"205474\",\"flow\":{\"avg\":27}}}", "timestamp": 1582250400000, "annotation": null }, { "cached": true, "dataType": "ALPHANUMERIC", "value": "{\"serialNo\":\"SERIAL\",\"timestamp\":1582249500000,\"water\":{\"volume\":\"205446\",\"flow\":{\"avg\":28}}}", "timestamp": 1582249500000, "annotation": null }, { "cached": true, "dataType": "ALPHANUMERIC", "value": "{\"serialNo\":\"SERIAL\",\"timestamp\":1582248600000,\"water\":{\"volume\":\"205419\",\"flow\":{\"avg\":27}}}", "timestamp": 1582248600000, "annotation": null }, { "cached": true, "dataType": "ALPHANUMERIC", "value": "{\"serialNo\":\"SERIAL\",\"timestamp\":1582247700000,\"water\":{\"volume\":\"205391\",\"flow\":{\"avg\":28}}}", "timestamp": 1582247700000, "annotation": null }, { "cached": true, "dataType": "ALPHANUMERIC", "value": "{\"serialNo\":\"SERIAL\",\"timestamp\":1582246800000,\"water\":{\"volume\":\"205363\",\"flow\":{\"avg\":28}}}", "timestamp": 1582246800000, "annotation": null } ]
If you're out my way, i'll buy you a drink!