@etienne-ott glad you got it sorted. The API can indeed be a very good way to do this sort of update. For future reference, you can also export the data points as JSON, manipulate the file, and reimport it.
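A minimal sketch of that export/edit/reimport workflow, run against a hypothetical exported JSON file. The key names here ("dataPoints", "name") are assumptions for illustration; keep whatever keys your export actually contains.

```python
# Sketch of bulk-editing an exported Mango JSON config before reimport.
# Key names ("dataPoints", "name") are assumptions -- match your export.
import json

def rename_points(config, prefix):
    """Prepend a prefix to every data point name in an exported config."""
    for point in config.get("dataPoints", []):
        point["name"] = prefix + point["name"]
    return config

# In practice you would json.load() the exported file, edit it, and
# json.dump() it back out before reimporting through the UI:
cfg = {"dataPoints": [{"name": "temp1"}, {"name": "temp2"}]}
edited = rename_points(cfg, "plant1-")
print(json.dumps(edited))
```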
You cannot do this on the query, but you could do it in the update statement for the point. Setting the SQL data point's value could be done via a Meta or Scripting data source; the help docs for those data sources should get you going.
Thank you very much for taking the time to respond! I had suspected this might be the case, and BACnet is a possibility; it may just need to be done in phases, or we go S4. Either way, this is good intel to bring back to the table to support looking at Mango more seriously!
For another option, the strategy I took was that I created global scripts to create the data points (MQTT/Modbus, virtual, meta) for my devices.
The scripts I have are:
--> This script contains the minimal set of values I need to set for each type of data point.
--> This script contains the list of points I need for each of the devices. It loops through each list of points and calls the appropriate function below.
--> This script creates the appropriate data points, event detectors, and event handlers.
The intention was to detect the error generated when an unknown MQTT topic is subscribed to and create the required data points from the device id in the topic. I haven't gotten that working yet but it is a lower priority item at the moment.
For now, I have a scripting data source to call the function:
I have created multiple 'createDataPointFunctions' scripts for the different devices I need. The same pattern is applied.
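To make the shape of that pattern concrete: the real global scripts are Mango server-side JavaScript, so the Python below only mirrors the structure described above, and every name in it is hypothetical. One table holds the minimal per-type defaults, one table lists the points each device needs, and a creator merges the two.

```python
# Structural illustration only (the actual scripts are Mango JavaScript).
# All names, types, and fields here are hypothetical.

POINT_DEFAULTS = {                      # "minimal set of values" per point type
    "modbus":  {"loggingType": "INTERVAL", "enabled": True},
    "virtual": {"loggingType": "ON_CHANGE", "enabled": True},
}

DEVICE_POINTS = {                       # points needed for each device
    "pump-01": [("flow", "modbus"), ("setpoint", "virtual")],
}

def create_data_point(name, point_type):
    """Stand-in for the real create function (point + detectors + handlers)."""
    return dict(POINT_DEFAULTS[point_type], name=name, type=point_type)

def create_points_for_device(device_id):
    """Loop through a device's point list, calling the creator for each."""
    return [create_data_point(f"{device_id}-{n}", t)
            for n, t in DEVICE_POINTS[device_id]]

print(create_points_for_device("pump-01"))
```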
You may still be able to mitigate this by lowering the max open files setting in your Mango/overrides/properties/env.properties file. Specifically, the
#Set the number of files the database can have open at one time
property. The comment is somewhat misleading: what this number really means is the maximum number of output streams that can be held open. Setting it to 0 would tell the database to close files after every write, which would encourage flushing to disk (I believe MAPPED_BYTE_BUFFER streams may not be flushed until a subsequent garbage collection).
The downside is that the buffering exists for a reason, namely efficiency, especially when points are receiving fairly rapid writes. So performance can be affected.
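For reference, the relevant fragment of env.properties would look something like the following. The key name below is an assumption from my install and may differ between Mango versions, so check the comments shipped in your own file:

```
# From overrides/properties/env.properties (key name is an assumption;
# verify against your version's file)

#Set the number of files the database can have open at one time
db.nosql.maxOpenFiles=0
```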
Fox's suggestion of the API is the right one. Without knowing the idiosyncrasies of how you need the data exposed, it is difficult to make a more specific recommendation than the API, as exposing data is definitely part of its purpose.
OK! As promised, here we are. I'm using 3.5.6, so I'm not entirely certain how well this will work. I've stuck to Mango components to the best of my ability to allow the best compatibility, except for the colour picker. If it doesn't work you may need to alter the HTML to use <ma-color-picker> instead.
You'll need to set up a userModule file to incorporate this.
I've saved my files in /opt/mango/overrides/web/modules/mangoUI/web/dev/directives/chartProfile
the userModule.js is in /opt/mango/overrides/web/modules/mangoUI/web/dev
For the mangoUI settings, the url for the userModule.js file is:
Yes, I was considering whether to split my post into different topics. Thank you for all your answers.
The POST /rest/v2/script/run endpoint allows you to submit a script and get the results of its run.
That sounds very interesting. I just completed a Python script using an XSRF token. Nice: I like how it eliminated the need to log in through the UI.
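A hedged sketch of that pattern: a requests session that grabs the XSRF token and then calls POST /rest/v2/script/run. The cookie and header names (XSRF-TOKEN / X-XSRF-TOKEN) follow the common Spring Security convention rather than anything confirmed in this thread, and the request body schema is an assumption; verify both against the Swagger docs on your instance.

```python
# Sketch: XSRF-token session + script run. Cookie/header names and the
# request body schema are assumptions -- check your instance's API docs.

def xsrf_headers(cookies):
    """Build the CSRF header from the token cookie (names are assumptions)."""
    return {"X-XSRF-TOKEN": cookies.get("XSRF-TOKEN", "")}

if __name__ == "__main__":
    import requests  # third-party; only needed for the live call

    base = "http://localhost:8080"
    s = requests.Session()
    s.get(base + "/login")                  # server sets the XSRF-TOKEN cookie
    s.headers.update(xsrf_headers(s.cookies))
    resp = s.post(base + "/rest/v2/script/run",
                  json={"script": "return 1 + 1;"})  # body schema assumed
    print(resp.status_code, resp.text)
```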
One could easily downsample their data via script; I can provide a simple or a more efficient example if desired.
If you already have an existing script, it would be nice if you could post it under its own forum topic, as I'm sure many of us would like to downsample old data. Besides old data, I also have numerous points that were logged much too often because I initially set their log tolerance threshold too small. However, I won't have time to run such a script right away due to my other project.
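A simple downsampler along the lines offered above might look like the following: average (timestamp, value) samples into fixed-width time buckets. This is a generic standalone sketch, not tied to any Mango API; in practice you would feed it values pulled via the REST API and write the reduced series back.

```python
# Downsample (timestamp_ms, value) samples by averaging into fixed buckets.
from collections import defaultdict

def downsample(samples, interval_ms):
    """Return one (bucket_start_ms, mean_value) pair per non-empty bucket,
    in ascending time order."""
    buckets = defaultdict(list)
    for ts, value in samples:
        buckets[ts - ts % interval_ms].append(value)
    return [(start, sum(vs) / len(vs)) for start, vs in sorted(buckets.items())]

# e.g. high-rate data averaged into 1-minute buckets:
raw = [(0, 1.0), (30_000, 3.0), (60_000, 10.0)]
print(downsample(raw, 60_000))  # [(0, 2.0), (60000, 10.0)]
```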
There is no "array of numeric" data type that is handled in Mango as such.
You understood me correctly. I'm looking to store an array of readings (as in multiple channels for each timestamp). Basically, a 2D numerical array where the rows are for different timestamps and the columns are the same type of data type but from different sources (channels). If it were stored in a CSV or spreadsheet, it would look like this:
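For illustration (the channel names and values here are my own hypothetical examples), a three-channel layout would be:

```
timestamp,ch1,ch2,ch3
2019-01-01T00:00:00Z,1.20,0.98,4.51
2019-01-01T00:00:01Z,1.21,0.97,4.49
2019-01-01T00:00:02Z,1.19,0.99,4.50
```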
It seems to me that in order to reduce data redundancy (by not storing the same timestamp multiple times) I could store the data in HDF5 format. HDF5 includes the metadata for the stored information, so the data can be retrieved into a meaningful format using generic tools, even without the source code that stored it. Additionally, it can efficiently compress and decompress binary data such as numerical arrays. My array elements could be any number of bytes. HDF5 is also extremely fast.
Self-Describing: The datasets within an HDF5 file are self-describing, which allows us to efficiently extract metadata without needing an additional metadata document.
Supports Heterogeneous Data: Different types of datasets can be contained within one HDF5 file.
Supports Large, Complex Data: HDF5 is a compressed format that is designed to support large, heterogeneous, and complex datasets.
Supports Data Slicing: "Data slicing", or extracting portions of the dataset as needed for analysis, means large files don't need to be read completely into the computer's memory or RAM.
Open Format with Wide Tool Support: Because the HDF5 format is open, it is supported by a host of programming languages and tools, including open-source languages like R and Python and open GIS tools like QGIS.
I also found TsTables, a PyTables wrapper that enables storing timestamped arrays in HDF5 files in daily shards and seamlessly stitches them together during queries. Appends are also efficient. The HDF5 tools will also help with debugging whether any inconsistency occurs during reads or writes.
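To show the ideas above in miniature (not the poster's actual code, and using plain h5py rather than TsTables for brevity): write a 2-D readings array with gzip compression, attach channel names as self-describing metadata, then slice a few rows back out without reading the whole dataset. Requires the third-party h5py and numpy packages.

```python
# Hedged sketch: 2-D timestamped readings in HDF5 via h5py (not TsTables).
import os
import tempfile

import h5py
import numpy as np

path = os.path.join(tempfile.mkdtemp(), "readings.h5")
data = np.arange(12, dtype=np.float64).reshape(4, 3)  # 4 timestamps x 3 channels

with h5py.File(path, "w") as f:
    dset = f.create_dataset("readings", data=data,
                            compression="gzip", chunks=True)
    dset.attrs["channels"] = ["ch1", "ch2", "ch3"]    # self-describing metadata

with h5py.File(path, "r") as f:
    first_two = f["readings"][:2, :]                  # data slicing: partial read
print(first_two)
```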
This is probably only a display issue. I would guess that the page above is showing the value for hour 23 as the current value, and that if you chart the point for the day you will see the values in the chart. Thanks for bringing this to our attention!