Forum Discussion

AaronB
Qrew Trainee
2 years ago

Download to Multiple CSVs based on field value

I searched the board and didn't see anything that fell into this category, so here is what I am trying to do:

I have a table that gets wiped out and then repopulated once a month by a .csv upload.  One field in that table is called "Category".  The number of distinct values in the "Category" field can vary between each upload, depending on that particular data set.  One month, I may have distinct values of:  'Cat 1', 'Cat 2', 'Cat 3'.  The next month, it may go all the way up to 'Cat 10'. 

Each month, I also want to generate .csv downloads for all records in each distinct "Category" value. 

If the number of distinct "Category" values were the same each month, I could hard-code a pipeline to do this.  But since the number of distinct values varies each month, I am unsure how to make this happen within Quickbase.

Any suggestions will be appreciated. 
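
For illustration, here is a minimal local sketch of what I am after, assuming the monthly upload is available as a single CSV with a "Category" column (the file name and column header here are just placeholders):

```python
import csv
from collections import defaultdict

# Group rows from the monthly upload by their "Category" value.
# "monthly_upload.csv" and the "Category" header are placeholders.
rows_by_category = defaultdict(list)
with open("monthly_upload.csv", newline="") as f:
    reader = csv.DictReader(f)
    fieldnames = reader.fieldnames
    for row in reader:
        rows_by_category[row["Category"]].append(row)

# Write one CSV per distinct category, however many there are this month.
for category, rows in rows_by_category.items():
    with open(f"{category}.csv", "w", newline="") as out:
        writer = csv.DictWriter(out, fieldnames=fieldnames)
        writer.writeheader()
        writer.writerows(rows)
```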



------------------------------
Aaron B
ab1692@att.com
------------------------------
  • Is there a known master list of the possible unique categories, or do they change each month?



    ------------------------------
    Mark Shnier (Your Quickbase Coach)
    mark.shnier@gmail.com
    ------------------------------
    • AaronB
      Qrew Trainee

      Hi Mark, 

      I always look forward to your recommendations so I am glad you chimed in. 

      To answer your question, the list of categories can change each month.  This month, I have 43 categories.  The categories actually revolve around 'buckets' for cybersecurity scanning purposes.  We categorize each IT asset by how it needs to be scanned (OS/IOS version, asset type, etc.).  In addition, each scanner 'bucket' (category) can only handle 50 assets at a time.  So, I may have 75 assets that should go into an IOS-15 bucket, but since I can only handle 50 at a time, I need to divide them into two buckets (IOS-15-1, IOS-15-2).
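
      Just to make that splitting rule concrete, here is a rough sketch of how I think of it (the asset names and helper function are purely illustrative):

      ```python
      # Split one category's assets into numbered buckets of at most 50,
      # e.g. 75 "IOS-15" assets -> IOS-15-1 (50 assets) and IOS-15-2 (25).
      BUCKET_SIZE = 50

      def split_into_buckets(category, assets):
          buckets = {}
          for i in range(0, len(assets), BUCKET_SIZE):
              buckets[f"{category}-{i // BUCKET_SIZE + 1}"] = assets[i:i + BUCKET_SIZE]
          return buckets

      print(list(split_into_buckets("IOS-15", [f"asset-{n}" for n in range(75)])))
      # ['IOS-15-1', 'IOS-15-2']
      ```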

      I've created a report to show me all the distinct categories (scan buckets).  Here are a couple of snips from the top and bottom of that list for this month:

      I'd like to dynamically create a .csv file for each 'Scan Bucket' (category) each month.  Each individual .csv file would contain the records for that category.  So this month, I would have 43 .csv files.

      If I can't dynamically calculate all the different 'Scan Buckets' each month, my fallback would be to create a hard-coded master list & adjust each month if need be (since I don't foresee a large number of changes each month). 

      Either way, I am still not certain of the most elegant way to do either option (pipeline??).

      Let me know if I am making any sense or not.

      Thanks.



      ------------------------------
      Aaron B
      ab1692@att.com
      ------------------------------

      • MarkShnier__You
        Qrew Legend

        So I think you need a self-maintaining table of Scan Buckets where the Key field is the text value [Scan Bucket], with a relationship back down to the details.  Then add a summary checkbox field for "any" details.

        Then the pipeline would search for Scan Buckets which have any details and, in a For Each loop, make the CSV export for each one.
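
        If you ever wanted to script that export step outside of Pipelines, a rough sketch against the Quickbase RESTful API would look something like this.  Treat it as a sketch only: the realm, token, table IDs, and field IDs are all placeholders you would swap for your own.

        ```python
        import csv
        import requests

        HEADERS = {
            "QB-Realm-Hostname": "myrealm.quickbase.com",  # placeholder
            "Authorization": "QB-USER-TOKEN xxxx-xxxx",    # placeholder
        }
        BUCKETS_TABLE = "bbbbbbbbb"  # Scan Buckets table ID (placeholder)
        DETAILS_TABLE = "ddddddddd"  # details table ID (placeholder)
        BUCKET_KEY_FID = 6           # [Scan Bucket] Key field (placeholder)
        HAS_DETAILS_FID = 7          # summary checkbox "any details" (placeholder)
        DETAIL_BUCKET_FID = 8        # [Scan Bucket] on the details table (placeholder)
        EXPORT_FIDS = [3, 9, 10]     # detail fields to export (placeholder)

        def query(table, where, select):
            r = requests.post(
                "https://api.quickbase.com/v1/records/query",
                headers=HEADERS,
                json={"from": table, "where": where, "select": select},
            )
            r.raise_for_status()
            return r.json()

        # Find only the Scan Buckets that have details this month...
        buckets = query(BUCKETS_TABLE, f"{{{HAS_DETAILS_FID}.EX.'true'}}",
                        [BUCKET_KEY_FID])

        # ...then, for each one, pull its detail records and write a CSV.
        for rec in buckets["data"]:
            bucket = rec[str(BUCKET_KEY_FID)]["value"]
            details = query(DETAILS_TABLE,
                            f"{{{DETAIL_BUCKET_FID}.EX.'{bucket}'}}", EXPORT_FIDS)
            labels = {str(fld["id"]): fld["label"] for fld in details["fields"]}
            with open(f"{bucket}.csv", "w", newline="") as out:
                writer = csv.writer(out)
                writer.writerow(labels.values())
                for row in details["data"]:
                    writer.writerow([row[fid]["value"] for fid in labels])
        ```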

        OK, so how do you get a self-maintaining table of the unique scan buckets?

        Make a formula checkbox field on the Scan Buckets table called [Scan Bucket Exists?] with a formula of simply true, and look that value up down to the details table.

        Then have a pipeline trigger when a detail record is added and the Scan Bucket does not exist (i.e., the looked-up [Scan Bucket Exists?] is unchecked).  Now, in order to avoid getting an inbox full of pipeline error messages about a duplicate Key field, you need to do the following, which is counterintuitive.

        Within the For Each loop, you will set up a Bulk Upsert, add the missing Scan Bucket to the Upsert, and then commit the Upsert.
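
        For reference, outside of Pipelines the same one-record upsert can be sketched against the RESTful API.  Merging on the Key field is exactly what makes a losing writer harmless; the table ID, field ID, and credentials below are placeholders.

        ```python
        import requests

        # Upsert a single Scan Bucket, merging on the Key field so that two
        # concurrent writers creating the same bucket cannot cause an error.
        r = requests.post(
            "https://api.quickbase.com/v1/records",
            headers={
                "QB-Realm-Hostname": "myrealm.quickbase.com",  # placeholder
                "Authorization": "QB-USER-TOKEN xxxx-xxxx",    # placeholder
            },
            json={
                "to": "bbbbbbbbb",   # Scan Buckets table ID (placeholder)
                "mergeFieldId": 6,   # the [Scan Bucket] Key field (placeholder)
                "data": [{"6": {"value": "IOS-15-2"}}],  # the missing bucket
            },
        )
        r.raise_for_status()
        ```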

        While it seems incredibly inefficient to upsert only one record at a time, the issue is that if you upload hundreds or thousands of details, there could be, say, 20 entries all needing the same nonexistent scan bucket to be created.  The For Each loop runs asynchronously, which means the pipeline may process many of these details concurrently, and they can get into a race condition where two different detail records both cause the same Key field to be created in the table of unique scan buckets.

        This way, whoever loses the race will just merge in, doing no damage and causing no error messages.

        It would be nice if Quickbase gave us a native way to upsert a single record into a table without going through the hoops of setting up a Bulk Upsert, but we are where we are.



        ------------------------------
        Mark Shnier (Your Quickbase Coach)
        mark.shnier@gmail.com
        ------------------------------