Dear all,
Is it possible to use the flexmonster-compressor module with Node.js and SQL Server?
Thanks
Hello, Franz,
Thank you for writing to us.
Yes, Flexmonster Compressor can be used with Node.js and SQL Server.
Flexmonster Compressor is available for Node.js.
The approach is to run the query against SQL Server, receive the result, and pass it to Flexmonster Compressor.
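As a sketch of that flow (the SQL Server call is stubbed out here; in a real setup the rows would come from a driver such as `mssql`, and the serialized result would then be handed to Flexmonster Compressor):

```javascript
// Sketch of the server-side flow, with the SQL Server query stubbed.
// In a real setup, `fetchRows` would be something like:
//   const result = await sql.query("SELECT statut, Price, ... FROM facturation");
//   return result.recordset;
// using the `mssql` driver. The JSON string produced at the end is what
// would be passed to Flexmonster Compressor before sending it to the client.
function fetchRows() {
  // Illustrative sample records (made up for this sketch):
  return [
    { Code_dossier: "D001", Num_pv: 12, statut: "open", Price: 99.5 },
    { Code_dossier: "D002", Num_pv: 7, statut: "closed", Price: 42.0 },
  ];
}

const rows = fetchRows();
const payload = JSON.stringify(rows);
console.log(payload.length > 0); // true: this payload goes to the compressor
```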
Please let us know if this helps and if you have further questions.
Best Regards,
Vera
Thanks for your fast response. I asked the question because of performance issues and Chrome crashes. I want to create a "classic" report:
rows: [
{
uniqueName: "Code_dossier",
},
{
uniqueName: "Num_pv",
},
{
uniqueName: "Mail_id",
},
{
uniqueName: "statut",
},
{
uniqueName: "Date_evt",
},
],
columns: [
{
uniqueName: "[Measures]"
},
],
measures: [
  {
    uniqueName: "Price",
    formula: "sum('Price')"
  }
],
with the following data:
facturationDetail: {
"statut": {type: "string"},
"Date_evt": {type: "date string"},
"Code_dossier": {type: "string"},
"Code_societe": {type: "string"},
"Nom_transporteur": {type: "string"},
"Mail_id": {type: "string"},
"Num_pv": {type: "number"},
"Price": {type: "number"}
}
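(For reference, when loading JSON into Flexmonster, field types like the ones above can be given as the first element of the data array, followed by the records. A minimal sketch with made-up sample values:)

```javascript
// First element: field types, as in the schema above.
// Remaining elements: data records. Sample values are made up.
const facturationDetail = [
  {
    "statut": { type: "string" },
    "Date_evt": { type: "date string" },
    "Code_dossier": { type: "string" },
    "Code_societe": { type: "string" },
    "Nom_transporteur": { type: "string" },
    "Mail_id": { type: "string" },
    "Num_pv": { type: "number" },
    "Price": { type: "number" },
  },
  {
    statut: "open",
    Date_evt: "2019-05-14",
    Code_dossier: "D001",
    Code_societe: "S01",
    Nom_transporteur: "Transporteur A",
    Mail_id: "M-100",
    Num_pv: 12,
    Price: 99.5,
  },
];
console.log(facturationDetail.length); // 2
```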
Starting at about 40,000 lines, performance gets worse and worse and Chrome starts to crash. Is this "expected" behaviour (the JSON file is only about 20 MB for 100,000 lines), and would using Flexmonster Compressor solve the problem?
Or is there likely another problem?
Thanks
Hello,
So you recommend making the request on the server, saving the response as a JSON file, and using that JSON file with the Flexmonster Compressor? Does the Flexmonster Compressor improve the handling of data on the client side, or does it only speed up the transfer time of the request (which is not a big issue here)?
Hello, Franz,
Thank you for your reply and for sending examples of your configurations.
We would like to point out that the behavior you described is not the expected one for your amount of data. In general, Flexmonster can handle datasets much larger than that.
On this note, could you please provide us with sample data that you try to load for testing purposes? This will help us a lot to solve this issue.
One thing to take into consideration is that Flexmonster is a fully client-side component, which means that all aggregations and calculations are performed solely in the browser.
This means that the performance depends on the capabilities of the client's machine.
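Concretely, a measure such as `sum('Price')` grouped by `statut` amounts to a reduction over all loaded rows in the browser's memory (a rough conceptual sketch, not Flexmonster's actual implementation):

```javascript
// Conceptual sketch of a client-side sum aggregation grouped by "statut".
// With 100,000 rows, this kind of work happens entirely in the browser.
const rows = [
  { statut: "open", Price: 10 },
  { statut: "open", Price: 5 },
  { statut: "closed", Price: 7 },
];

const totals = rows.reduce((acc, r) => {
  acc[r.statut] = (acc[r.statut] || 0) + r.Price;
  return acc;
}, {});

console.log(totals); // { open: 15, closed: 7 }
```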
Regarding Flexmonster Compressor, you are right: it only speeds up the transfer time; it does not change how the data is handled on the client side.
We are looking forward to hearing from you.
Please let us know if you have further questions.
Best Regards,
Vera
Dear Vera,
Thanks for your answer. Attached is a test file. Depending on the machine and the number of open tabs, Chrome crashes.
In the DevTools console, an out-of-memory warning is shown.
Hello, Franz,
Thank you for providing a sample file for testing purposes.
Our team tried to reproduce the issue; nevertheless, everything works fine on our side.
We tried loading the data from the file using our demos as well as through the Flexmonster Data Compressor.
Please consider that Flexmonster relies on the resources available to the browser, which limits the maximum size of the data that can be handled on any particular computer. Therefore, if the machine's CPU and RAM are limited, there is a chance that the browser does not have enough memory to manage a dataset of this size. This point is worth checking.
Our team kindly advises considering using Elasticsearch with Flexmonster. This way only the part of the data needed for the requested representation is sent to the client side. Every time more data is needed, for example when expanding a row or applying a filter, a different data portion is sent over. This approach works best for large data sets since memory is used with moderation.
Here is a JSFiddle example of connecting to Elasticsearch with Flexmonster: https://jsfiddle.net/flexmonster/Ly47pq5b/
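(The connection itself is just a different `dataSource` in the report. A sketch following the JSFiddle above; the node URL and index name are placeholders, and exact option names may vary by Flexmonster version:)

```javascript
// Report configuration pointing Flexmonster at an Elasticsearch index
// instead of a local JSON array. Host and index name are placeholders.
const report = {
  dataSource: {
    type: "elasticsearch",
    node: "https://your-elasticsearch-host:9200", // Elasticsearch endpoint
    index: "facturation",                         // index holding the records
  },
  // The slice (rows/columns/measures) can stay the same as before.
};
console.log(report.dataSource.type); // "elasticsearch"
```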
Could you please let us know if such a solution would work for you?
We are looking forward to hearing from you.
Best Regards,
Vera