Forum Discussion
Hi Mark,
I always look forward to your recommendations so I am glad you chimed in.
To answer your question, the list of categories can change each month. This month, I have 43 categories. The categories actually revolve around 'buckets' for cybersecurity scanning purposes. We categorize each IT asset by how it needs to be scanned (OS/IOS version, asset type, etc). In addition, each scanner 'bucket' (category) can only handle 50 assets at a time. So, I may have 75 assets that should go into an IOS-15 bucket but I can only handle 50 at a time so I need to divide them into two buckets (IOS-15-1, IOS-15-2).
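(For what it's worth, the 50-per-bucket split described above is easy to express outside Quickbase. A minimal Python sketch, using the IOS-15 / 75-asset figures from this example; names are illustrative only:)

```python
def split_into_buckets(assets, bucket_name, max_size=50):
    """Split a list of assets into numbered buckets of at most max_size each."""
    buckets = {}
    for i in range(0, len(assets), max_size):
        # Buckets are numbered bucket_name-1, bucket_name-2, ... as in the post
        buckets[f"{bucket_name}-{i // max_size + 1}"] = assets[i:i + max_size]
    return buckets

# 75 assets -> IOS-15-1 holds 50, IOS-15-2 holds the remaining 25
buckets = split_into_buckets([f"asset-{n}" for n in range(75)], "IOS-15")
```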
I've created a report to show me all the distinct categories (scan buckets). Here are a couple snips from the top and bottom of that list for this month:
I'd like to dynamically create a .csv file for each 'Scan Bucket' (category) each month. Each individual .csv file would contain the records for that category. So this month, I would have 43 .csv files.
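(The end goal, one .csv per distinct 'Scan Bucket' value, can be sketched in plain Python for reference. This assumes each record is a dict with a 'Scan Bucket' field; field and file names are illustrative, not Quickbase-specific:)

```python
import csv
import os
from collections import defaultdict

def export_by_bucket(records, out_dir="."):
    """Write one .csv per distinct 'Scan Bucket' value; return the bucket names."""
    groups = defaultdict(list)
    for rec in records:
        groups[rec["Scan Bucket"]].append(rec)
    for bucket, rows in groups.items():
        with open(os.path.join(out_dir, f"{bucket}.csv"), "w", newline="") as f:
            writer = csv.DictWriter(f, fieldnames=rows[0].keys())
            writer.writeheader()
            writer.writerows(rows)
    return sorted(groups)
```

So a month with 43 distinct bucket values would produce 43 files, one per category.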
If I can't dynamically calculate all the different 'Scan Buckets' each month, my fallback would be to create a hard-coded master list & adjust each month if need be (since I don't foresee a large number of changes each month).
Either way, I am still not certain of the most elegant way to do either option (pipeline??).
Let me know if I am making any sense or not.
Thanks.
------------------------------
Aaron B
ab1692@att.com
------------------------------
So I think you need a self-maintaining table of Scan Buckets where the Key field is the text value [Scan Bucket], with a relationship back down to the details and a summary checkbox for "any" details. The pipeline would then search for Scan Buckets which have any details and, in a For Each loop, make the csv export for each one.
OK, so how do you make a self-maintaining table of the unique scan buckets?
Make a formula checkbox field on the Scan Buckets table called [Scan Bucket Exists?] with a formula of true, and look that up down to the details.
Then have a pipeline trigger when a detail record is added and its Scan Bucket does not exist. Now, in order to avoid getting an inbox full of pipeline error messages on duplicate Key fields, you need to do something counterintuitive.
Within the For Each loop, set up a Bulk Upsert, add the missing Scan Bucket to the Upsert, and then commit the Upsert.
While it seems incredibly inefficient to upsert one record at a time, the issue is that if you upload hundreds or thousands of details, there could be, say, 20 detail records all pointing at the same nonexistent scan bucket. The For Each loop runs asynchronously, which means the pipeline may process many of these details concurrently, and they can hit a race condition where two different detail records both try to create the same Key field in the table of unique scan buckets.
Done this way, whoever loses the race will simply merge in, doing no damage and causing no error messages.
It would be nice if Quickbase gave us a native way to upsert a single record into a table without jumping through the hoops of setting up a Bulk Upsert, but we are where we are.
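(The reason the merge-style upsert is race-safe can be shown in miniature: an upsert keyed on [Scan Bucket] is idempotent, so the loser of the race just re-merges the same key. A toy Python sketch, with a plain dict standing in for the Scan Buckets table:)

```python
def upsert(table, record, key="Scan Bucket"):
    """Merge a record by its key value: an existing key is updated, never duplicated."""
    table[record[key]] = {**table.get(record[key], {}), **record}

scan_buckets = {}
# Two detail records "race" to create the same bucket; the second merge
# is harmless -- no duplicate-key error, and only one row remains.
upsert(scan_buckets, {"Scan Bucket": "IOS-15-1"})
upsert(scan_buckets, {"Scan Bucket": "IOS-15-1"})
```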
------------------------------
Mark Shnier (Your Quickbase Coach)
mark.shnier@gmail.com
------------------------------
Thanks for putting so much thought into it. Much to digest here so I'll make some notes and give it a try.
Much appreciated.
------------------------------
Aaron B
ab1692@att.com
------------------------------
Mark,
After thinking about this a bit more, I would like to try a simpler approach. I have the logic figured out; I just don't know if Quickbase can make it happen.
Logic steps:
1. Find all distinct values within the 'bucket' field from Table 1. Accomplished via a Summary Report: Complete.
2. Automatically populate Table 2 with the Summary Report data from step 1. Table 2 would now hold 1 record for each distinct 'bucket' value in Table 1.
3. Run a pipeline: for each record in Table 2, find all the records in Table 1 with that same 'Bucket' value and kick out a .csv file for that specific bucket name.
I just don't know how to accomplish step 2. Any suggestions?
Thanks.
------------------------------
Aaron B
ab1692@att.com
------------------------------
Right, finding the distinct values is the obstacle. Hence my suggested method. A Pipeline cannot see a Summary report.
Mark
------------------------------
Mark Shnier (Your Quickbase Coach)
mark.shnier@gmail.com
------------------------------