The purpose of this feature is to give you access to the data your users generate so that you can analyse it in any way that you want.
This feature exports data from the Magine Pro Data warehouse into your AWS S3 bucket, so you can download it and import it into a data tool of your own choice. The files consist mainly of user-generated data such as viewing data, user payments, consumed promotions, and active Entitlements for Offers. It is the same data that we at Magine Pro use to build the template and custom dashboards available in the Analytics section of the Magine Pro Console.
Once a day, between 10:00 am and 12:00 pm, jobs run to create these exports. The exports are divided into one report per table in our Data warehouse. The reports are placed in the Amazon S3 bucket you already have access to, in a folder named Internal. Each report has its own subfolder to make navigation easier.
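As a minimal sketch of how you might navigate that layout, the helper below groups object keys under the Internal folder by report subfolder. The key pattern `Internal/<report>/<file>.csv` and the report names are assumptions for illustration; check your bucket for the actual layout.

```python
from collections import defaultdict

def group_keys_by_report(keys):
    """Group S3 object keys under Internal/ by their report subfolder."""
    reports = defaultdict(list)
    for key in keys:
        parts = key.split("/")
        # Assumed key shape: Internal/<report>/<file>.csv
        if len(parts) >= 3 and parts[0] == "Internal":
            reports[parts[1]].append(key)
    return dict(reports)

# Hypothetical key names, for illustration only
keys = [
    "Internal/viewing_data/2023-05-01.csv",
    "Internal/viewing_data/2023-05-02.csv",
    "Internal/payments/2023-05-01.csv",
]
print(group_keys_by_report(keys))
```

In practice you would feed this function the keys returned by an S3 listing call and then download each report's files into its own local folder.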
Most of the reports have one file per date. To make sure all data is correct, past report files are updated for 3-5 days after they are initially created. Some other reports consist of a single file that is completely replaced each day.
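Because recent files can still change, an importer should re-download the files for the last few dates on every run. A small sketch of computing that correction window, using a 5-day default to match the upper bound above:

```python
from datetime import date, timedelta

def dates_to_refresh(today, window_days=5):
    """Return the past dates whose report files may still be updated,
    most recent first. These should be re-downloaded on each import."""
    return [today - timedelta(days=n) for n in range(1, window_days + 1)]

print(dates_to_refresh(date(2023, 5, 10)))
```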
These CSV files can either be opened directly in e.g. Excel for instant access to the data, or you can set up a process that downloads the reports, imports them into a data warehouse of your own choice, and then connects a data visualisation tool of your own choice.
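For a scripted import, the files can be parsed with a standard CSV reader. The column names below (`user_id`, `asset_id`, `seconds_viewed`) are hypothetical; the real columns are listed in each report's description.

```python
import csv
import io

# Hypothetical sample of a viewing-data report; actual column
# names will differ, see the individual report descriptions.
sample = """user_id,asset_id,seconds_viewed
u1,a9,120
u2,a9,45
"""

rows = list(csv.DictReader(io.StringIO(sample)))
total = sum(int(r["seconds_viewed"]) for r in rows)
print(total)  # 165
```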
The Internal subfolder on S3 can only be accessed with the main access keys. This means you can prevent any subcontractor from getting access to these reports.
Files that are uploaded daily are kept in the S3 bucket for 90 days.
Before our system can export the data to your S3 bucket folder, it gathers the information from different sources, e.g. payment providers or AWS.
An export can fail on a given day, e.g. due to connection issues within AWS or something else outside our control. Instead of waiting for the issue to be fixed and re-triggering the exports manually for those "missed" dates, the data is automatically included the next time the job runs successfully. The updates are written to the same filenames, and your system will not be notified. The best approach is to implement the same logic on your side, i.e. overwrite all data for the last X days each time you import. Alternatively, you can choose not to finalise data in your system until X days have passed.
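The overwrite approach above can be sketched as an idempotent import: drop whatever was previously stored for the dates being refreshed, then insert the incoming rows. The `store` dict here is a stand-in for your own warehouse, and the row shape is hypothetical.

```python
def import_report(store, incoming_rows, refresh_dates):
    """Idempotent import: delete previously stored rows for the dates
    being refreshed, then insert the incoming rows. Re-running with
    corrected files simply replaces the old data for those dates."""
    for d in refresh_dates:
        store.pop(d, None)
    for row in incoming_rows:
        store.setdefault(row["date"], []).append(row)
    return store

# Previously imported data, later corrected by a re-exported file
store = {"2023-05-01": [{"date": "2023-05-01", "views": 10}]}
incoming = [
    {"date": "2023-05-01", "views": 12},
    {"date": "2023-05-02", "views": 7},
]
import_report(store, incoming, refresh_dates=["2023-05-01", "2023-05-02"])
print(store["2023-05-01"][0]["views"])  # 12
```

Running the import repeatedly with the same or corrected files leaves the store in the same final state, which is exactly the property you want when past files can be silently updated.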
For a detailed description of each report, please read these individual cards: