Building Table-to-Table (T2T) Imports using Pipelines

By Brian Cafferelli posted 06-25-2021 16:20

  


Introduction

You may have used automations to trigger a Quickbase table-to-table (T2T) import. You can achieve a similar outcome in a pipeline using the actions in the Quickbase channel's Bulk Record Sets category. This method has some advantages over triggering an import with an automation, such as the ability to choose values in a multiple-choice field.


Overview

[Image: Overview of example pipeline]

The basic T2T pipeline has the following steps:

  • Step A: Prepare Bulk Record Upsert – identifies your destination table.
    • "Upsert" is short for "update or insert". The behavior is similar to that of an import: based on a matching unique identifier, records that already exist are updated and new records are created (see the sketch after this list).
  • Step B: Search Records – identifies the source table.
    • Then, for each record found:
      • Step C: Add a Bulk Upsert Row – prepares the found record to be upserted
  • Step D: Commit Upsert – imports the records from Step B into the table identified in Step A
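To make the upsert behavior concrete, here is a minimal Python sketch of the decision made for each row. It illustrates the merge logic only; the field names and values are hypothetical, not Quickbase code.

    # Rows whose merge-field value already exists become updates;
    # all other rows become new records.
    existing = {"a@example.com": 1, "b@example.com": 2}  # merge value -> Record ID#

    incoming = [
        {"email": "a@example.com", "status": "Active"},  # match -> update record 1
        {"email": "c@example.com", "status": "New"},     # no match -> insert
    ]

    for row in incoming:
        if row["email"] in existing:
            print("update record", existing[row["email"]], "with", row)
        else:
            print("insert new record", row)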


 

 

Guided Instructions

 

Pipeline Step A: Prepare Bulk Record Upsert

Step A identifies the destination table into which you want to import.

 

 

Setup:

  1. Create a new pipeline and give it a name
  2. Open the Quickbase channel, then open the Bulk Record Sets category
  3. Drag the Prepare Bulk Record Upsert action onto the canvas
  4. In the Account input field, select your user token or enter a new one
  5. In the Table input field, select the table into which you want to import
  6. In the Fields input field, identify the fields you want to populate
  7. In the Merge Field input field, select a unique field for the import to reference. The pipeline uses this field to determine whether a record should be added or updated

    NOTE: The merge field must be marked as unique within Quickbase to appear as an option in this input.
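For readers who also script against Quickbase's JSON RESTful API, Step A's inputs map onto the upsert endpoint's parameters: the destination table is "to", the merge field is "mergeFieldId", and the rows themselves go in "data". A minimal payload sketch in Python, where the table ID and field ID are placeholders:

    # Step A equivalent: describe the destination of the upsert.
    # "bxxxxxxxx" and field ID 6 are placeholders for your own IDs.
    payload = {
        "to": "bxxxxxxxx",   # destination table (Table input)
        "mergeFieldId": 6,   # unique merge field (Merge Field input)
        "data": [],          # upsert rows, appended in the Step C sketch below
    }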


Pipeline Step B: Search Records

Step B identifies the source data that you will import into the table defined in Step A.

 

 

 

Setup:

  1. Drag the Search Records action onto the canvas as Step B
  2. Select your user token again
  3. Select the source table that stores the data you want to import
  4. Under Fields, choose the fields that store the data you want to import. These fields will be used in Step C

Note: You can add a query to limit which records are searched, or leave it blank to include all records in the table. If you use a query, make sure any field it references is also included in the Fields input.
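The Search Records step corresponds to the RESTful API's record query endpoint. A hedged sketch, assuming a placeholder realm hostname, user token, source table ID, and field IDs (3 = Record ID#; 6 and 7 are the fields being imported):

    import requests

    headers = {
        "QB-Realm-Hostname": "yourrealm.quickbase.com",    # placeholder realm
        "Authorization": "QB-USER-TOKEN your_token_here",  # placeholder token
    }

    # Step B equivalent: query the source table. The optional "where"
    # clause plays the same role as the query input described above.
    resp = requests.post(
        "https://api.quickbase.com/v1/records/query",
        headers=headers,
        json={
            "from": "byyyyyyyy",         # source table (placeholder ID)
            "select": [3, 6, 7],         # fields to be used in Step C
            "where": "{7.EX.'Active'}",  # note: field 7 is also in "select"
        },
    )
    found = resp.json()["data"]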



Pipeline Step C: For each Step B result, add a Bulk Upsert Row

Step C prepares the records found in Step B to be upserted.



Setup:

  1. Drag the Add a Bulk Upsert Row action onto the canvas under the Do branch as Step C
  2. In the Bulk Record Set input field, choose the bulk record set prepared in Step A
  3. Populate the field inputs with the appropriate data values from Step B by dragging the field values from the available fields panel

Note: Source field types must match the destination field types.
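Continuing the API sketch from Steps A and B, Step C corresponds to appending one row per found record, with destination field IDs as keys. As the note above says, the value types must line up:

    # Step C equivalent: one upsert row per record found in Step B.
    # "payload" and "found" come from the Step A and Step B sketches.
    for rec in found:
        payload["data"].append({
            "6": {"value": rec["6"]["value"]},  # merge field value
            "7": {"value": rec["7"]["value"]},  # type must match destination field
        })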

Pipeline Step D: Commit Upsert

Step D performs the Bulk Upsert. This step occurs on the main branch and not within the Do branch.   

 

Setup:

  1. Drag the Commit Upsert action onto the canvas as Step D, on the main branch (not inside the Do branch)
  2. In the Bulk Record Set input field, choose the Bulk Record Upsert you want to perform

Note: This step shows two rows in the Activity log, one for input and one for output. It still counts as a single step-run; the display is split that way to illustrate how the pipeline runs.
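Step D then corresponds to sending the accumulated payload in a single call, which is also why the whole upsert counts as one step-run regardless of how many rows it carries. Continuing the sketch, with the same placeholder headers:

    # Step D equivalent: commit the upsert in one request.
    resp = requests.post(
        "https://api.quickbase.com/v1/records",
        headers=headers,
        json=payload,
    )
    meta = resp.json()["metadata"]
    print("created:", meta["createdRecordIds"])
    print("updated:", meta["updatedRecordIds"])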


Comments

08-03-2021 13:47

Something to note is that this solution does not achieve full parity, and as far as I'm aware the following use case is only achievable with a table-to-table import: you cannot merge records on the built-in Record ID# field using Pipelines if the Record ID# does not exist in the destination app. If you want to do this, you'll have to use some other field as the key field, and then you lose the automatic incremental ID capability and will need to come up with a solution of your own. I'd love for this use case to be supported in bulk Pipelines or the API. I've even got a UserVoice post open about it.