Re: Download to Multiple CSVs based on field value
So I think you need a self-maintaining table of Scan Buckets where the Key field is the text value [Scan Bucket], with a relationship down to the details and a summary checkbox of "any" details.
Then the pipeline would search for Scan Buckets which have any details and, in a For Each loop, make the CSV export for each one.
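To make the shape of that loop concrete, here is a minimal Python sketch of the same grouping-and-export logic. This is an illustration, not a Pipelines step: the field names ([Scan Bucket], [Serial]) are placeholders for whatever your detail table actually has.

```python
import csv
import io
from collections import defaultdict

def csvs_per_bucket(detail_records, bucket_key="Scan Bucket"):
    """Group detail records by their [Scan Bucket] value and render
    one CSV (as a string) per bucket -- the same result the For Each
    loop produces with one CSV export per Scan Bucket record."""
    buckets = defaultdict(list)
    for rec in detail_records:
        buckets[rec[bucket_key]].append(rec)

    exports = {}
    for bucket, rows in buckets.items():
        buf = io.StringIO()
        writer = csv.DictWriter(buf, fieldnames=list(rows[0].keys()))
        writer.writeheader()
        writer.writerows(rows)
        exports[bucket] = buf.getvalue()
    return exports

# Example: three details spread across two scan buckets -> two CSVs
details = [
    {"Scan Bucket": "A", "Serial": "001"},
    {"Scan Bucket": "B", "Serial": "002"},
    {"Scan Bucket": "A", "Serial": "003"},
]
exports = csvs_per_bucket(details)
```

The point of searching only Scan Buckets that "have any details" is visible here too: a bucket with no rows never gets an entry in `exports`, so you never emit an empty CSV.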
OK, so how do you get a self-maintaining table of the unique scan buckets?
Make a formula checkbox field on the Scan Buckets table called [Scan Bucket Exists?] with a formula of true, and look that value up down to the details.
Then have a pipeline trigger when a detail record is added and the Scan Bucket does not exist. Now, in order to avoid getting an inbox full of pipeline error messages about duplicate Key field values, you need to do something counterintuitive.
Within the For Each loop you will set up a Bulk Upsert, add the missing Scan Bucket to the Upsert, and then commit the Upsert.
While it seems incredibly inefficient to upsert only one record at a time, the issue is that if you upload hundreds or thousands of details, there could be, say, 20 entries all needing the same nonexistent Scan Bucket to be created. The For Each loop runs asynchronously, which means the pipeline may process many of these details concurrently, and two different detail records can get into a race condition where both try to create the same Key field value in the table of unique scan buckets.
This way, whichever record loses the race will simply merge in, doing no damage and causing no error messages.
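For anyone doing this outside Pipelines, Quickbase's REST API exposes the same merge behavior directly: POST /v1/records with a mergeFieldId pointing at the Key field turns the insert into an upsert, so a race-losing duplicate merges rather than erroring. A minimal sketch follows; the table ID, Key field ID (6), realm, and token are placeholders you would swap for your own app's values.

```python
import json
import urllib.request

def build_upsert_payload(table_id, key_field_id, bucket_value):
    """Body for POST https://api.quickbase.com/v1/records.
    mergeFieldId makes this an upsert on the Key field: an existing
    [Scan Bucket] merges in place instead of raising a duplicate-key
    error, which is exactly the race-safe behavior described above."""
    return {
        "to": table_id,
        "data": [{str(key_field_id): {"value": bucket_value}}],
        "mergeFieldId": key_field_id,
    }

def upsert_scan_bucket(realm, token, table_id, key_field_id, bucket_value):
    """Send a single-record upsert to Quickbase (not executed here)."""
    body = json.dumps(
        build_upsert_payload(table_id, key_field_id, bucket_value)
    ).encode()
    req = urllib.request.Request(
        "https://api.quickbase.com/v1/records",
        data=body,
        headers={
            "QB-Realm-Hostname": realm,  # e.g. yourco.quickbase.com
            "Authorization": f"QB-USER-TOKEN {token}",
            "Content-Type": "application/json",
        },
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())

# Placeholder table ID and field ID, for illustration only
payload = build_upsert_payload("bxxxxxxxx", 6, "Bucket-42")
```

Twenty concurrent calls with the same bucket value all resolve to one Scan Bucket record, which is the merge-and-do-no-damage outcome the Bulk Upsert step gives you inside the pipeline.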
It would be nice if Quickbase gave us a native way to upsert a single record into a table without jumping through the hoops of setting up a Bulk Upsert, but we are where we are.
------------------------------
Mark Shnier (Your Quickbase Coach)
mark.shnier@gmail.com
------------------------------