I’m planning to use the Flexmonster pivot table, but I have an extremely large dataset, on the order of hundreds of gigabytes, that I want to analyze. According to the documentation, the Data Source API loads all the data into memory for processing, which doesn’t seem feasible for such large volumes. I looked at the custom API example in your GitHub repository and noticed that it requires users to write their own SQL queries to connect. Is there a direct way to connect Flexmonster to a database without having to craft every query by hand, or would users have no choice but to write the SQL themselves for each scenario?
Hello,
Thank you for contacting us.
With the custom data source API, you can implement a server that generates responses dynamically, without storing all the data in RAM. The custom data source API is our protocol for passing data from your server implementation to Flexmonster in an already processed format.
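To give an idea of the overall flow, below is a minimal sketch of such a server in TypeScript with Express. The endpoint path, the simplified request and response shapes, and the hard-coded sample data are assumptions made for illustration only; the exact protocol is described in the documentation linked below.

```typescript
// Minimal sketch of a custom data source API endpoint (illustrative only).
import express from "express";

const app = express();
app.use(express.json());

// Flexmonster posts small JSON requests describing what it needs next;
// the server answers with the schema, field members, or aggregated data.
app.post("/api/cube", (req, res) => {
  switch (req.body.type) {
    case "fields":
      // Schema only: no data rows are loaded into memory here.
      res.json({
        fields: [
          { uniqueName: "country", type: "string" },
          { uniqueName: "price", type: "number" },
        ],
      });
      break;

    case "members":
      // In a real server, this list would come from the database,
      // e.g. SELECT DISTINCT <field> FROM <table>.
      res.json({ members: [{ value: "France" }, { value: "Italy" }] });
      break;

    case "select":
      // In a real server, this is the result of an aggregation query
      // (GROUP BY + SUM/COUNT/...) executed by the database, so only
      // the aggregated result travels over the wire.
      res.json({
        aggs: [
          { keys: { country: "France" }, values: { price: { sum: 100 } } },
          { keys: { country: "Italy" }, values: { price: { sum: 200 } } },
        ],
      });
      break;

    default:
      res.status(400).json({ message: "Unsupported request type" });
  }
});

app.listen(3000, () => console.log("Custom data source API sketch on :3000"));
```

Since every request asks only for the schema, one field's members, or an already aggregated slice of data, the full dataset never needs to be held in the server's memory.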
Please note that our sample servers only demonstrate possible ways of using the custom data source API; they are not ready-to-use solutions. Developing your own server lets you tailor the logic to your specific use case, but it also requires more time and effort from your developers.
Kindly note that end users are not expected to write SQL queries themselves. However, since Flexmonster delegates data processing to your backend, your server implementation is responsible for generating the necessary SQL queries from the pivot configuration (e.g., rows, columns, measures, and filters). How those queries are built depends on your server logic; a simplified example follows below.
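As a simplified illustration, here is one way a server might translate the pivot configuration into a SQL aggregation query. The query shape used here (rows and measures with uniqueName and aggregation) is an assumption for the sake of the example, not the exact format defined by the API:

```typescript
// Illustrative sketch: build a GROUP BY query from a simplified pivot query.
interface PivotQuery {
  rows?: { uniqueName: string }[];
  measures?: { uniqueName: string; aggregation: string }[];
}

function buildSql(table: string, query: PivotQuery): string {
  // Fields placed in rows become GROUP BY columns.
  const groupBy = (query.rows ?? []).map((r) => r.uniqueName);

  // Measures become aggregate expressions, e.g. SUM(price) AS price_sum.
  const measures = (query.measures ?? []).map(
    (m) =>
      `${m.aggregation.toUpperCase()}(${m.uniqueName}) AS ${m.uniqueName}_${m.aggregation}`
  );

  const select = [...groupBy, ...measures].join(", ") || "COUNT(*)";
  const groupClause = groupBy.length ? ` GROUP BY ${groupBy.join(", ")}` : "";

  return `SELECT ${select} FROM ${table}${groupClause}`;
}

// Example: "country" in rows and sum of "price" as a measure produces
// SELECT country, SUM(price) AS price_sum FROM sales GROUP BY country
console.log(
  buildSql("sales", {
    rows: [{ uniqueName: "country" }],
    measures: [{ uniqueName: "price", aggregation: "sum" }],
  })
);
```

In a production server, the field names received from the client should be validated against a known schema before being placed into SQL, and the generated query would typically also handle columns, filters, and paging.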
You are welcome to refer to our documentation for more details: https://www.flexmonster.com/doc/implement-custom-data-source-api/
Please let us know if it works for you. Feel free to contact us if other questions arise.
Kind regards,
Nadia