Forum Discussion

EdwardHefter
Qrew Cadet
3 years ago

Speeding up a pipeline that creates 15K records

I need help speeding up a pipeline.

I have an app that tracks Printed Circuit Board Assemblies (PCBAs) by serial number, along with the components that are on them. The system lets the user barcode-scan each PCBA serial number into a field, and the scanner adds a linefeed after each scan, so at the end of a run there may be 100 serial numbers in the field. The system also has a list of which components go on the PCBA, and there may be as many as 150 of them.

When the pipeline executes, it:
1) Splits apart the serial numbers
2) Starts a loop for each serial number that:
    2a) Creates a "top level" record for the PCBA with some master data as well as that specific serial number
    2b) Searches for all of the component records in the master data table associated with that PCBA, then creates a record in the history table with the component info and the PCBA serial number tied to it
3) Loops back for the next serial number and does it all again

With 100 serial numbers and 150 components each, that is 15,000 history records (plus the 100 top-level PCBA records). Needless to say, it takes a long time for the pipeline to run (3 hours the last time I tried). I am using a bulk upsert for step 2b, which I am sure is helping, but I am looking for other ways to speed this up.
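
For anyone who wants to picture the volume involved, here is a rough sketch of what that loop amounts to if the same work were done straight against the Quickbase JSON RESTful API in a few bulk calls. The realm, user token, table IDs, and field IDs below are all made-up placeholders, not my actual app; the idea is just that the 100 x 150 cross-product gets built in memory and pushed in a handful of bulk upserts rather than one create per record.

```python
import requests

# Sketch only: realm, token, table IDs, and field IDs are placeholders.
HEADERS = {
    "QB-Realm-Hostname": "myrealm.quickbase.com",
    "Authorization": "QB-USER-TOKEN b1234_xxxx_placeholder",
    "Content-Type": "application/json",
}
MASTER_TABLE = "bqmasterxx"    # master data table (component list)
HISTORY_TABLE = "bqhistoryxx"  # history table (one record per component per serial)

def fetch_components(pcba_id):
    """One query for the ~150 component records tied to this PCBA."""
    body = {
        "from": MASTER_TABLE,
        "select": [3, 6, 7],               # record ID, component #, description (placeholder field IDs)
        "where": f"{{8.EX.'{pcba_id}'}}",  # field 8 = related PCBA (placeholder)
    }
    r = requests.post("https://api.quickbase.com/v1/records/query",
                      headers=HEADERS, json=body)
    r.raise_for_status()
    return r.json()["data"]

def build_history_rows(serials, components):
    """Cross the serial numbers with the component list in memory: 100 x 150 = 15,000 rows."""
    return [
        {
            "9":  {"value": sn},                    # field 9 = PCBA serial number (placeholder)
            "10": {"value": comp["6"]["value"]},    # field 10 = component number (placeholder)
        }
        for sn in serials
        for comp in components
    ]

def bulk_upsert(rows, chunk=5000):
    """A few large upserts instead of 15,000 individual record creates."""
    for i in range(0, len(rows), chunk):
        r = requests.post("https://api.quickbase.com/v1/records",
                          headers=HEADERS,
                          json={"to": HISTORY_TABLE, "data": rows[i:i + chunk]})
        r.raise_for_status()

# The scanned field: one serial number per line, linefeed added by the scanner.
raw_scan = "SN0001\nSN0002\nSN0003\n"
serials = [s.strip() for s in raw_scan.splitlines() if s.strip()]
bulk_upsert(build_history_rows(serials, fetch_components(pcba_id=42)))
```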

There is a step in the Bulk Record Sets category called Copy Records, but I can't find any documentation on it, and it looks like it just makes a 1-for-1 copy of the records without allowing changes like adding a serial number.

Any suggestions?

------------------------------
Edward Hefter
www.Sutubra.com
------------------------------
  • Not sure if this is possible in your case, but for something that big I always get creative and use table to table imports.

    This may mean using a pipeline earlier, while users are entering info, that somehow builds up a dummy set of the information ahead of time, I'm not sure. But to quickly create 15K records, I would look into somehow using T2T.

    ------------------------------
    Mike Tamoush
    ------------------------------
  • I have heard that the Copy Records step is very, very fast, way faster than normal Pipeline speeds. However, when they introduced it, I think their design goal was to replicate Table to Table Copy and not what we had in Automations. Various QSPs expressed this to the Product Manager, and I put in this User Voice suggestion here; my suggestion is that you upvote it.


    ------------------------------
    Mark Shnier (Your Quickbase Coach)
    mark.shnier@gmail.com
    ------------------------------
    • MarkShnier__You
      Qrew Legend
      I do agree with Mike: if there is any way to leverage table-to-table imports, they are super fast.

      ------------------------------
      Mark Shnier (Your Quickbase Coach)
      mark.shnier@gmail.com
      ------------------------------
      • EdwardHefter
        Qrew Cadet
        I don't think I can do a table-to-table import because I need the 100 different serial numbers attached to the 150 components (well, 15,000 components I suppose). But by using a temp table to hold just the 150 relevant components pulled from the larger Master Data table, the pipeline sped up a lot.

        Since the pipeline may get triggered more than once during the long duration of a run, I made sure the temp table records carry the PCBA ID as well as the timestamp of when the triggering record was updated. The pipeline's search on the temp table filters on both the PCBA ID and the timestamp, so even if there are multiple triggering events while the first run is still going, each "instance" of the pipeline pulls the right data from the temp table. The pipeline also deletes its temp table data using the same PCBA ID and timestamp.
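
        In case it helps anyone else, here is a rough sketch of that keying idea expressed as Quickbase JSON API calls (the realm, token, table ID, and field IDs are placeholders, not my actual app): the read and the cleanup both filter on the same PCBA ID + trigger-timestamp pair, so overlapping pipeline runs never touch each other's temp rows.

        ```python
        import requests

        # Sketch only: realm, token, table ID, and field IDs are placeholders.
        HEADERS = {
            "QB-Realm-Hostname": "myrealm.quickbase.com",
            "Authorization": "QB-USER-TOKEN b1234_xxxx_placeholder",
            "Content-Type": "application/json",
        }
        TEMP_TABLE = "bqtempxx"

        def temp_key_filter(pcba_id, trigger_ts):
            """Filter on BOTH the PCBA ID and the trigger timestamp (fields 6 and 7, placeholders)."""
            return f"{{6.EX.'{pcba_id}'}}AND{{7.EX.'{trigger_ts}'}}"

        def read_temp_rows(pcba_id, trigger_ts):
            """Each pipeline 'instance' only sees the temp rows written for its own trigger."""
            r = requests.post("https://api.quickbase.com/v1/records/query",
                              headers=HEADERS,
                              json={"from": TEMP_TABLE,
                                    "select": [3, 6, 7, 8],
                                    "where": temp_key_filter(pcba_id, trigger_ts)})
            r.raise_for_status()
            return r.json()["data"]

        def purge_temp_rows(pcba_id, trigger_ts):
            """Cleanup deletes only the rows belonging to this run, using the same key."""
            r = requests.delete("https://api.quickbase.com/v1/records",
                                headers=HEADERS,
                                json={"from": TEMP_TABLE,
                                      "where": temp_key_filter(pcba_id, trigger_ts)})
            r.raise_for_status()
        ```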

        This is the biggest set of data manipulation I've done in Quickbase and it definitely made me think about multiple users and/or pipeline instances, giving a sense of "ready to do the next thing" to the user, and managing large sets of data!

        ------------------------------
        Edward Hefter
        www.Sutubra.com
        ------------------------------