File Datasource breaks on file upload
-
Hi all, I've got an issue where a third party is providing data in CSV files, but their timestamps don't include seconds (the data is updated on the minute). I tried implementing a bit of code based upon the abstract class to rectify this, so the data is in the correct format prior to being inserted into the corresponding data points.
Annoyingly, execution never even reaches my parsing code. The file data source importer breaks directly after uploading the designated files and complains about the time format. Has anyone else had this issue, or do you have any further suggestions? I am unable to receive the data in a different format. -
Hi Matt, a couple of things:
- Posting the error you got would be helpful.
- Posting the code you're talking about would be useful.
Using Joda's DateTimeFormat will be somewhat strict. You can always
+ ":00"
to the date string before you try to parse it, if you like. Or you can use SimpleDateFormat instead, like:

```java
import java.text.SimpleDateFormat;
import java.text.ParseException;
...
private SimpleDateFormat sdf = new SimpleDateFormat("yyyy/MM/dd HH:mm");
...
long dt;
try {
    dt = sdf.parse(time).getTime();
} catch(ParseException e) {
    return;
}
...
```
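For the Joda route, the append looks something like this (a sketch; the pattern here is a hypothetical yyyy/MM/dd HH:mm input, swap in whatever your strings actually use):

```java
import org.joda.time.format.DateTimeFormat;
import org.joda.time.format.DateTimeFormatter;
...
private DateTimeFormatter dtf = DateTimeFormat.forPattern("yyyy/MM/dd HH:mm:ss");
...
long dt;
try {
    // Append the missing seconds before parsing
    dt = dtf.parseDateTime(time + ":00").getMillis();
} catch(IllegalArgumentException e) {
    return;
}
...
```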
-
Tried uploading to get the error; am now only receiving an HTTP 500 server error... >_<
Sorry Phil, please take a gander at this
Here's a sample of the data:

```
Site=Site,Location=141206,Name=Name,Version=11
Date/Time,M[141206],T[141206]
01/10/2017 12:00 a.m.,36.83,11.60
01/10/2017 01:00 a.m.,36.72,11.30
01/10/2017 02:00 a.m.,36.73,11.10
01/10/2017 03:00 a.m.,36.63,11.10
01/10/2017 04:00 a.m.,36.64,11.00
01/10/2017 05:00 a.m.,36.43,11.00
01/10/2017 06:00 a.m.,36.43,10.90
01/10/2017 08:00 a.m.,36.44,10.60
```
And here's the facepalm of code I'm using to parse the CSV:

```java
import java.util.HashMap;
import java.util.Map;
import org.joda.time.format.DateTimeFormat;
import org.joda.time.format.DateTimeFormatter;
import com.infiniteautomation.datafilesource.contexts.AbstractCSVDataSource;
import com.infiniteautomation.datafilesource.dataimage.NumericImportPoint;

public class StreatsCsvImporter extends AbstractCSVDataSource {
    private DateTimeFormatter dtf = DateTimeFormat.forPattern("dd/MM/yyyy HH:mm:ss a");
    private Map<Integer, String> headerMap = new HashMap<Integer, String>();

    @Override
    public void importRow(String[] row, int rowNum) {
        //Strip out the header row, it does not contain our data
        if(rowNum == 1) {
            for(int k = 0; k < row.length; ++k) {
                this.headerMap.put(k, row[k]);
            }
        }
        if(rowNum > 1) {
            //Column 0 is the time
            long time = dtf.parseDateTime(row[0]).getMillis();
            //Empty additional parameters
            Map<String, String> extraParams = new HashMap<String,String>();
            //For each additional column we will create an Import Point
            for(int i = 1; i < row.length; i++) {
                String identifier = headerMap.get(i); //Get the identifier from our header map
                double value = Double.parseDouble(row[i]); //Create the value
                NumericImportPoint point = new NumericImportPoint(identifier, value, time, extraParams);
                this.parsedPoints.add(point);
            }
        }
    }
}
```
-
Gotta love formatters. I hacked it up; this should work:
```java
import java.util.HashMap;
import java.util.Map;
import org.joda.time.format.DateTimeFormat;
import org.joda.time.format.DateTimeFormatter;
import com.infiniteautomation.datafilesource.contexts.AbstractCSVDataSource;
import com.infiniteautomation.datafilesource.dataimage.NumericImportPoint;

public class StreatsCsvImporter extends AbstractCSVDataSource {
    private DateTimeFormatter dtf = DateTimeFormat.forPattern("MM/dd/yyyy HH:mm:ss a");

    @Override
    public void importRow(String[] row, int rowNum) {
        if(rowNum <= 1 || row.length < 3)
            return;

        long dt;
        String timeString = row[0].replace(":00", ":00:00").replace("a.m.", "AM").replace("p.m.", "PM");
        //System.out.print("Timestring: " + timeString + "\n");
        try {
            dt = dtf.parseDateTime(timeString).getMillis();
        } catch(Exception e) {
            //Gobble
            //e.printStackTrace();
            return;
        }

        Map<String, String> extraParams = new HashMap<String,String>();
        this.parsedPoints.add(new NumericImportPoint("DP_M", Double.parseDouble(row[1]), dt, extraParams));
        this.parsedPoints.add(new NumericImportPoint("DP_T", Double.parseDouble(row[2]), dt, extraParams));
    }
}
```
-
Tried it again, this time I received the following response:
{
"resourceId" : "DF_0b81ae23-c5f4-4e12-bb5d-a3efbd1d63cb",
"progress" : 100,
"finished" : true,
"cancelled" : false,
"errors" : [ ],
"unfoundIdentifiers" : [ ],
"createdPoints" : [ ],
"failedPoints" : [ ],
"totalImported" : 0,
"priority" : 2,
"queueSize" : 2,
"taskId" : "TR_DF_0b81ae23-c5f4-4e12-bb5d-a3efbd1d63cb",
"expires" : 1512521151744,
"threadName" : "Temporary Resource Timeout for : DF_0b81ae23-c5f4-4e12-bb5d-a3efbd1d63cb"
} -
Are you submitting the Java file through the uploader on the data source page? That is not for templates; that is for a data file that the data source should process immediately. I was actually screensharing with someone earlier, and they too assumed there would be a direct way to upload their templates from the data file data source page, but there currently is not. You have to manually place the .java file into Mango/web/modules/dataFile/web/CompilingGrounds/CSV and then hit the compile button on the data source.
The resource ID you got back, were it a data file to import that had been submitted, would enable you to get the status of that import, via navigating to
https://yourmango.extension/rest/v2/data-file/import/{resourceId}
I have made an issue for submitting files to the data file data source, as admittedly it is confusing, and it would be desirable to upload from there. Part of the reason it may not have been enabled is that data source permissions would enable someone to do arbitrary code execution in Mango - but that's basically true already with what's possible through the scripting data source, so that concern would have been subsumed. https://github.com/infiniteautomation/ma-core-public/issues/1162
-
No, I'm using the template and have placed it in the required templates CSV directory, then clicked compile. After a while I noticed the data source was not parsing the directory I had set for reading the data. I manually tried uploading the same file I showed you earlier to see if it would actually insert some data points and update them, and yet nothing has gone in.
Something's definitely off... -
Eh, have to ask :D. Similarly, is create points enabled for your data source?
Hmm. I did test that one some...
You can try uncommenting the print statements if you're running in a console, or you can
throw new java.lang.RuntimeException("What's happening and where?!");
which should convey whatever insight can be gleaned to the logs.
Is it possible your file begins with the import prefix? This would prevent it from importing through REST, as well.
The ClassLoader for the importing class is associated with the DataSourceRT, so if you restart the data source it will reload the class if you have modified/recompiled it. Otherwise, it may reload it at its discretion.
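For example, a throwaway tracer at the top of importRow (a hypothetical placement; remove it once you've seen it fire):

```java
@Override
public void importRow(String[] row, int rowNum) {
    // Temporary tracer: if this message reaches the logs, importRow is being invoked
    throw new java.lang.RuntimeException("importRow reached, rowNum=" + rowNum
            + ", first cell=" + (row.length > 0 ? row[0] : "<empty row>"));
}
```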
-
Ah, the "Create Missing Points" tickbox did the job. I never thought to tick that, as to me it meant it was adding additional point values into the mix. Maybe if it was named "Generate datapoints from file" or something to that effect I'd have been alright. No matter, looks like all is working and we're at the end of the proverbial slide!
Thanks for your patience, Phil. Will keep you updated if I run into any other bizarre behaviour.
-
Always good to check the help for data sources! Glad we got it resolved!
I wonder if I'll do anything about reloading the class whenever the compile button is hit... That seems like it could be more intuitive. Normally I find myself renaming my test file and hitting save to get a poll to happen, though!
-
Hi again Phil, would you be willing to explain in further detail how to run this in the console? I need to do some further programming and debugging, and it would be good to have this working for files that have a different number of logging devices - some have two, others one or even three...
-
Would also need to have a way to remember/recall the first few rows to designate the deviceName, name, and XID values if possible...
-
Running it in the console means starting Mango on the command line, using either

```
ma-start.bat
```

or

```
./ma.sh start
```

from the Mango/bin/ directory.

It is possible to store and recall them. In the code you posted, the "headerMap" is storing an integer (position in the row) as a key for the column header. You can do something like that (declare a member variable on your class, assign to it, and then refer to it in subsequent calls to importRow).
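A sketch of that pattern (the class name, fields, and row layout here are hypothetical, modelled on your sample file):

```java
import java.util.HashMap;
import java.util.Map;
import com.infiniteautomation.datafilesource.contexts.AbstractCSVDataSource;

public class HeaderAwareCsvImporter extends AbstractCSVDataSource {
    // Member variables persist across calls to importRow for the whole import
    private String deviceName;
    private Map<Integer, String> headerMap = new HashMap<Integer, String>();

    @Override
    public void importRow(String[] row, int rowNum) {
        if(rowNum == 0) {
            // First line: stash whatever you need for later rows,
            // e.g. "Location=141206" -> "141206"
            this.deviceName = row[1].split("=")[1];
            return;
        }
        if(rowNum == 1) {
            // Second line: remember the column headers to use as point identifiers
            for(int i = 1; i < row.length; i++)
                this.headerMap.put(i, row[i]);
            return;
        }
        // Data rows can now refer to this.deviceName and this.headerMap.get(i)
    }
}
```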
It is possible to pass in "deviceName" / "xid" / "name" for a given "identifier" (the data file data point property that is the first argument in the ImportPoints we are adding to parsedPoints) by putting those attributes in the "extraParams" map for the first import point with that identifier. So, had you never run the importer before, and you put,
extraParams.put("deviceName", "Streats");
then the points would have been created with that device name.
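In context inside importRow, that looks something like this (a fragment; value and dt are placeholders from your row parsing, "Streats" is made up):

```java
Map<String, String> extraParams = new HashMap<String, String>();
// "deviceName" (likewise "name" / "xid") is only consulted the first time
// a point with this identifier is created
extraParams.put("deviceName", "Streats");
this.parsedPoints.add(new NumericImportPoint("DP_M", value, dt, extraParams));
```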
-
Excellent, I'll give it a crack. Thanks Phil!
-
A little confused with an error in the system alarms. Not sure if I should start a new thread or can continue in this one since it relates to the same data and importing code...
I receive Urgent-level errors when importing data that say "Failed to find all expected points". I don't know if this is because it expects all data points to be in a single file when it parses them (each file hosts a different sensor and its readings) or if it's my code. Will add it below:
```java
import java.util.HashMap;
import java.util.Map;
import org.joda.time.format.DateTimeFormat;
import org.joda.time.format.DateTimeFormatter;
import com.infiniteautomation.datafilesource.contexts.AbstractCSVDataSource;
import com.infiniteautomation.datafilesource.dataimage.NumericImportPoint;

public class StreatsCsvImporter extends AbstractCSVDataSource {
    private DateTimeFormatter dtf = DateTimeFormat.forPattern("dd/MM/yyyy hh:mm:ss a");
    private Map<Integer, String> headerMap = new HashMap<Integer, String>();

    @Override
    public void importRow(String[] row, int rowNum) {
        if(row.length <= 1)
            return;

        /* Extract devicename from first line in file */
        if(rowNum == 0) {
            /*
            for(int i = 0; i < row.length; i++) {
                System.out.println(row[i]);
            }
            return;
            */
            String loc = row[0].split("=")[1];
            /* System.out.println(loc); */
            String deviceName = row[2].split("=")[1];
            /* System.out.println(deviceName); */
            this.headerMap.put(0, loc.concat(" - ").concat(deviceName));
            /* System.out.println(this.headerMap.get(0)); */
            return;
        }

        /* Extract point names from 2nd column onwards in 2nd line of file */
        if(rowNum == 1) {
            for(int i = 1; i < row.length; i++) {
                headerMap.put(i, row[i]);
            }
            return;
        }

        long dt;
        String timeString = row[0].replace(":00", ":00:00").replace("a.m.", "AM").replace("p.m.", "PM");
        /* System.out.print("Timestring: " + timeString + "\n"); */
        try {
            dt = dtf.parseDateTime(timeString).getMillis();
        } catch(Exception e) {
            //Gobble
            //e.printStackTrace();
            return;
        }

        /* Extra params option to set deviceName property */
        Map<String, String> extraParams = new HashMap<String,String>();
        extraParams.put("deviceName", headerMap.get(0));
        for(int i = 1; i < row.length; i++) {
            this.parsedPoints.add(new NumericImportPoint(headerMap.get(i), Double.parseDouble(row[i]), dt, extraParams));
            /* this.parsedPoints.add(new NumericImportPoint("DP_T", Double.parseDouble(row[2]), dt, extraParams)); */
        }
    }
}
```
-
You are correct that it expects all data points in every file. It's raised as a data source error but it probably should be more specific, so that you can turn it off. I'll bring that up, as it would be fairly easy to break that out into its own event type.
-
If you could do that I'd be very grateful. Would be good to be able to ignore it if I could.
-
This was added and should be released fairly soon.