
Inquiry about handling large files in Flexmonster (.NET Server) and client-side optimization

Answered
Marcelo Alejandro Gallardo asked on August 14, 2025

Hello Flexmonster team,

At Nubimetrics, we have been using Flexmonster for over 6 years as a core component of our Pivots functionality.
We currently work with Flexmonster Data Server in .NET, downloaded from your official repository:
https://github.com/flexmonster/api-data-source

In our case, we only use the CSV loading functionality on the server, with additional logic to customize the data before sending it to the client (aliases, date formats, data types by column name).

Current flow

  1. The front end requests the data → Data Server downloads the entire CSV (which can exceed 3 GB).

  2. During in-memory loading on the server, we apply customizations (aliases, formats, data types).

  3. The processed dataset is sent to the front end, where Flexmonster Pivot renders it using the user’s saved template.

The issue we are facing

  • On PCs with 32 GB of RAM, we can render CSV files of around ~4 GB without errors.

  • On PCs with 8 GB of RAM (very common among our customers), the browser runs out of memory (OOM) or freezes when trying to render datasets of this size.

  • The main bottleneck seems to be in the client-side rendering, even when the CSV is already filtered and processed by the .NET Server.

What we need

We want to ensure that any file, regardless of size, can be processed and either rendered in the platform or delivered to the client while preserving their template configuration (column order, filters, aliases, calculated fields, etc.).

We would appreciate your recommendations on:

  1. Optimization in Flexmonster Data Server (.NET):

    • Are there parameters, patterns, or best practices to reduce memory usage when loading and transforming large CSV files?

    • Is it possible to enable some form of lazy loading, streaming, or column reduction before rendering?

    • Are there any recent or upcoming updates in the api-data-source project that improve large file handling?

  2. Client-side optimization:

    • Are there configurations or techniques to improve memory management in Flexmonster Pivot in the browser?

    • Would you recommend any paging, partitioning, or segmented loading patterns to avoid loading the full dataset into client memory?

  3. Alternatives recommended by Flexmonster:

    • Have you seen implementations that handle scenarios of 4 GB+ datasets more efficiently for customers with limited hardware?

    • Do you consider it viable to integrate an external preprocessing step to reduce the dataset before sending it to Flexmonster Pivot, and if so, how would this best fit with the current Data Server architecture?

We would greatly appreciate any guidance or examples you could share to help us continue delivering the best possible experience to our customers when working with Flexmonster, even in high-volume data scenarios.

Looking forward to your feedback and suggestions.

 

Best regards,
Alejandro Gallardo
Agile Delivery Manager – Nubimetrics

6 answers

Public
Maksym Diachenko (Flexmonster) August 18, 2025

Hello, Alejandro!

Thank you for reaching out to us.

Our custom API was designed to optimize performance by storing all the data in RAM on the server, performing aggregations on the server (which has more computational resources than the browser), and sending only the part of the data required for visualization to the client. This works best when the portion of data displayed in the table is much smaller than the original dataset.

However, when the result sent to the client is still large (for example, a flat view with all rows shown), the browser still needs to download and keep gigabytes of data in memory. In such cases, out-of-memory exceptions and crashes may occur, especially on client machines with less RAM.

Currently, the only way to avoid such scenarios is to reduce the volume of data sent to the client by redefining filters before loading data.
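As a rough illustration of this idea, a report can predefine an include filter in its slice so that the component only ever requests a filtered subset from the server. The field names below come from the report shared later in this thread; the member value is hypothetical:

```javascript
// Sketch: predefine a member filter in the report slice so the client
// requests only a subset of the data. Field names are taken from the
// thread's report JSON; the member value "marca.[acme]" is hypothetical.
const report = {
  dataSource: {
    type: "api",
    url: "https://nubimetrics.com/api/cube",
    index: "SAS CSV"
  },
  slice: {
    rows: [
      {
        uniqueName: "Marca",
        // Include filter: only this member is requested from the server.
        filter: { members: ["marca.[acme]"] }
      },
      { uniqueName: "Categoria_Nivel_2" }
    ],
    columns: [{ uniqueName: "[Measures]" }],
    measures: [{ uniqueName: "Unidades_Vendidas", aggregation: "sum" }]
  }
};

// In the browser this report would be applied with pivot.setReport(report).
console.log(report.slice.rows[0].filter.members.length);
```

The narrower the predefined filter, the smaller the response the browser has to hold in memory.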

To better understand your case and see if we can suggest further optimizations, could you please let us know:

  1. Is pivot view or flat view used when the issue occurs?
  2. If in pivot view, how many columns and rows are in the resulting dataset?
  3. If in flat view, is the entire dataset shown, or are filters applied before rendering?
  4. Could you send us an example report configuration (JSON) used in one of these large scenarios?

Looking forward to hearing from you.

Best Regards,
Maksym

Public
Marcelo Alejandro Gallardo August 18, 2025

Hello Maksym, thank you for your prompt reply!

Here is the additional context you requested:

  1. View used when the issue occurs

  • We use both pivot and flat views.

  • The problem happens more often in flat view, due to the large volume of data being rendered on the client side.

  2. Concrete scenarios (same CSV ~4.1 GB)
    All three cases use the same CSV (~4.1 GB). In every case, the load takes a long time and often results in Out of Memory errors in the browser or server errors.

    a. Flat view with row and column filters

    • Expected result: ~6,000 rows × 29 columns (both row and column filters applied).

    • Symptom: very long load times and OOM errors on 8 GB RAM machines.

    b. Flat view with high row count

    • Expected result: ~3,350,000 rows × 5 columns (only column filters applied).

    • Symptom: very long load times and frequent OOM/server errors on 8 GB RAM machines.

    c. Compact (pivot) view with hierarchies

    • Even in compact view, the same dataset still loads slowly and fails in many attempts.

    Note: On 32 GB RAM PCs these scenarios usually load, while on 8 GB RAM PCs (very common among our customers) they lead to OOM or crashes.

  3. Are all rows shown in flat view or are filters applied first?

  • In (a) and (b), filters are applied before rendering (always column projection; in (a) also row filters).

  • Still, in (b) the resulting dataset is very large (~3.35M rows).

  4. Example report configuration (JSON)
    Below is an actual example of a report template that reproduces one of the problematic scenarios:

 

{
  "dataSource": {
    "type": "api",
    "url": "https://nubimetrics.com/api/cube",
    "index": "SAS CSV",
    "singleEndpoint": false,
    "mapping": {
      "Periodo": { "visible": false },
      "Periodo.Year": { "caption": "Período.Ano" },
      "Periodo.Month": { "caption": "Período.Mês" },
      "Periodo.Day": { "caption": "Período.Dia" },
      "Period.Year": { "caption": "Período.Ano" },
      "Period.Month": { "caption": "Período.Mês" },
      "Period.Day": { "caption": "Período.Dia" },
      ...
    },
    "withCredentials": false,
    "concurrentRequests": false
  },
  "slice": {
    "reportFilters": [
      { "uniqueName": "Categoria_Nivel_3", "sort": "asc" },
      { "uniqueName": "Categoria_Nivel_4", "sort": "asc" },
      { "uniqueName": "Categoria_Nivel_5", "sort": "asc" },
      { "uniqueName": "Categoria_Nivel_6", "sort": "asc" }
    ],
    "rows": [
      { "uniqueName": "Nombre_Tienda_Oficial", "filter": { "members": ["nombre_tienda_oficial.[loja.oficial.15289]"] }, "sort": "asc" },
      { "uniqueName": "Marca", "sort": "asc" },
      { "uniqueName": "Nickname_Vendedor", "sort": "asc" },
      { "uniqueName": "Categoria_Nivel_2", "sort": "asc" },
      ...
    ],
    "columns": [{ "uniqueName": "[Measures]" }],
    "measures": [
      { "uniqueName": "Unidades_Vendidas", "aggregation": "sum", "active": true },
      { "uniqueName": "Monto_Vendido_Moneda_Local", "aggregation": "sum", "active": true, "format": "31spc8p5" },
      { "uniqueName": "Mes.Day", "aggregation": "sum", "active": true },
      ...
    ],
    "flatOrder": [
      "Nombre_Tienda_Oficial",
      "Marca",
      "Nickname_Vendedor",
      "Categoria_Nivel_2",
      ...
    ],
    "flatSort": [
      { "uniqueName": "Unidades_Vendidas", "sort": "desc" }
    ]
  },
  "options": {
    "viewType": "grid",
    "grid": {
      "type": "flat",
      "showFilter": true,
      "showHeaders": true,
      "showTotals": "off",
      "showGrandTotals": "off",
      "drillThroughMaxRows": 1000
    }
  },
  "formats": [
    {
      "name": "31c9dq7y",
      "thousandsSeparator": ",",
      "decimalSeparator": ".",
      "decimalPlaces": 0,
      "currencySymbol": " USD"
    },
    {
      "name": "31c9i9md",
      "thousandsSeparator": ".",
      "decimalSeparator": ",",
      "decimalPlaces": 2,
      "currencySymbol": "$"
    },
    {
      "name": "31spc8p5",
      "thousandsSeparator": ".",
      "decimalSeparator": ",",
      "decimalPlaces": 0
    }
  ],
  "localization": "/Locales/pt/pivotTable.json?v.0.1.0",
  "version": "2.9.92",
  "creationDate": "2025-01-15T11:22:45.052Z"
}

Additionally, I confirm that we are using the Flexmonster Data Server from your official GitHub repository (.NET), and in our workflow we only load CSV files, with some customizations applied server-side (aliases, date formats, data types).

We would highly appreciate your recommendations regarding:

  • Possible Data Server adjustments to reduce memory usage in these scenarios (parameters, patterns).

  • Suggestions for client-side optimizations to handle memory in Flat view with large datasets (e.g., segmented/paged loading strategies, recommended limits).

If needed, we can also share more report JSONs or detailed logs/error traces.

Thank you again for your support!

 

Best regards,
Alejandro Gallardo

Public
Maksym Diachenko (Flexmonster) August 19, 2025

Hello, Alejandro!

Thank you for sharing more details with us.

For this case, Flexmonster provides an additional optimization step: disabling member filters. By default, Flexmonster sends /members requests for every field to populate the filter dialogs. This means extra network calls, additional loading time, and more memory used in the browser to store all unique members.

You can prevent the component from making these requests by disabling the member filter for some or all fields in the mapping via the filters.members property. The obvious trade-off is that member filters become unavailable for those fields; however, you can still use query filters to filter by conditions.

We have prepared an example illustrating this approach: https://jsfiddle.net/flexmonster/1a5xjbgL/

In summary, disabling member requests (filters: { members: false }) can noticeably help in scenarios where the dataset has few rows but many fields, since the /members responses may create an even larger memory load than the data itself, especially when many members are unique. For cases with millions of rows but only a few fields, this approach is still worth trying, as it reduces the client-side load.
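A minimal mapping sketch of the override described above, using field names from the shared report (treat it as illustrative, not a drop-in config):

```javascript
// Sketch: disable /members requests per field in the dataSource mapping,
// as recommended above for high-cardinality fields. Field names come from
// the thread's report JSON; captions/types here are illustrative.
const mapping = {
  Nickname_Vendedor: {
    // No /members request will be sent for this high-cardinality field.
    filters: { members: false }
  },
  Marca: {
    filters: { members: false }
  },
  // Fields without the override keep the default member-filter behavior.
  Unidades_Vendidas: { type: "number" }
};

console.log(Object.keys(mapping).length);
```

The mapping object would then be placed under dataSource.mapping in the report, as in the configuration shared earlier in this thread.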

Please let us know if our recommendation helped you.

Best Regards,
Maksym

Public
Marcelo Alejandro Gallardo August 19, 2025

Hello Maksym,

Thank you for your suggestion and for sharing the example with filters: { members: false }.

We applied this optimization on our side (disabled member filters for the high-cardinality fields). However, when testing with a ~500 MB CSV, the browser still shows the following error:

“The file is too large and cannot be loaded completely due to the browser limitations.”
(Screenshot attached)

This happened in flat view, even after disabling member filters.

Please let us know if there are any additional optimizations we can try (either server-side or client-side) to prevent this browser limitation, or if you recommend a specific upper bound on rows/columns for flat view to avoid such errors.

Thanks again for your support and guidance.

 

Best regards

Public
Maksym Diachenko (Flexmonster) August 20, 2025

Hello, Alejandro!

Thank you for your reply.

Just to clarify, our previous recommendation regarding filters: { members: false } was specific to scenarios with the custom API data source, where disabling member requests helps reduce unnecessary queries and memory usage. The error message you provided indicates that you are connecting to a CSV file with the filename property, where Flexmonster loads the entire file, so the previous recommendation does not apply to this case.

When optimizing file loading via the filename property, it is important to know that the maximum CSV size supported by Flexmonster is 256 MB. This limit was set because of the payload size restriction for XHR loading in some browsers. As a workaround, we suggest switching from CSV to JSON and enabling the useStreamLoader data source parameter. This method bypasses the limit by streaming the data, allowing large files to be loaded completely in chunks. We recommend the array-of-arrays JSON format because it is structurally similar to CSV, so the switch should be relatively simple.
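The CSV-to-JSON switch mentioned above can be sketched as a simple conversion to the array-of-arrays layout (header row first, then data rows). The parser below is a naive split-based sketch for illustration; a real converter should handle quoted fields and embedded commas:

```javascript
// Sketch: convert CSV text into the "array of arrays" JSON layout
// (first array = header row, remaining arrays = data rows).
// Naive split-based parsing; does not handle quoted fields.
function csvToArrayOfArrays(csvText) {
  return csvText
    .trim()
    .split("\n")
    .map(line => line.split(","));
}

// Tiny hypothetical sample using field names from the thread's report.
const csv = "Marca,Unidades_Vendidas\nAcme,10\nGlobex,25";
const rows = csvToArrayOfArrays(csv);
console.log(JSON.stringify(rows));
// The resulting JSON file would then be referenced with a dataSource like
// { filename: "data.json", useStreamLoader: true } so it loads in chunks.
```

For a multi-gigabyte file, the conversion itself should of course be done in a streaming fashion on the server rather than in memory as shown here.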

Also, reducing the dataset size with an external filter is one of the most efficient ways to improve performance with JSON and CSV data sources. Such a filter is configured outside Flexmonster so that only part of the data is loaded: the filter query can be passed through request parameters in the filename URL, which the server processes to return only the matching portion of the data.
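One way to sketch this external-filter idea is to build the filename URL with a query string the server understands. The endpoint and parameter names below are hypothetical; the server side must interpret them and return only the matching slice:

```javascript
// Sketch: pass an external filter through the data URL's query string so
// the server returns only a slice of the file. Endpoint and parameter
// names ("store", "period") are hypothetical.
function buildDataUrl(baseUrl, filters) {
  const url = new URL(baseUrl);
  for (const [key, value] of Object.entries(filters)) {
    url.searchParams.set(key, value);
  }
  return url.toString();
}

const dataUrl = buildDataUrl("https://example.com/data/sales.csv", {
  store: "loja.oficial.15289",
  period: "2025-01"
});
console.log(dataUrl);
// Flexmonster would then load it via { dataSource: { filename: dataUrl } },
// and the server decides which rows to emit based on the parameters.
```

This keeps all heavy filtering on the server, so the browser only ever sees the reduced dataset.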

Please let us know if our answer helped you.

Best Regards,
Maksym

Public
Maksym Diachenko (Flexmonster) 9 hours ago

Hello, Alejandro!

Hope you are doing well.
We would like to know if you tried switching from CSV to JSON with the stream loader to load big datasets.
Please let us know if it helped resolve the issue of files larger than 256 MB not being loaded.

Best Regards,
Maksym
