Forum Discussion

DanLocks's avatar
DanLocks
Qrew Trainee
2 years ago

Undocumented Pipeline yaml functionality

Pipeline YAML files can apparently use at least some of YAML's advanced features. I was able to use YAML references (anchors and aliases) in my pipeline spec to avoid repeating long strings that might be prone to typos. Specifically:

- IF:
  - AND:
    - a<>body_params.partner starts c1
  - THEN:
    - ACTION quickbase[abcdefg] bulk_record_set csv_import -> c:
        inputs-meta:
          csv_url: '{{b.file_transfer_handle}}'
          header_row: &headers A,REALLY,LONG,LIST,OF,HEADERS

- IF:
  - AND:
    - a<>body_params.partner starts shi
  - THEN:
    - ACTION quickbase[bcdefgh] bulk_record_set csv_import -> e:
        inputs-meta:
          csv_url: '{{b.file_transfer_handle}}'
          header_row: *headers

The `&headers` creates a YAML reference (an anchor) to the value that follows it. `*headers` (an alias) uses that reference.

YAML references are kind of obscure and a bit tricky to use, in my past experience, but they do work. Unfortunately, the references are not preserved: when you export the pipeline after using them, the export will have A,REALLY,LONG,LIST,OF,HEADERS hard-coded wherever the `*headers` reference was used.
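To illustrate what that round trip looks like, here is a minimal before/after sketch (the keys and values are placeholders, not a real pipeline):

```yaml
# What you upload: the value is written once (&headers) and aliased (*headers)
first_import:
  header_row: &headers A,REALLY,LONG,LIST,OF,HEADERS
second_import:
  header_row: *headers

# What the export gives back: the alias has been resolved,
# so the value is duplicated everywhere it was used
first_import:
  header_row: A,REALLY,LONG,LIST,OF,HEADERS
second_import:
  header_row: A,REALLY,LONG,LIST,OF,HEADERS
```

This is expected behavior for any YAML consumer that parses the document and re-serializes it: anchors and aliases are resolved at load time, and nothing obliges the exporter to reconstruct them.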

It would be nice if QB would publish a spec for these YAML files along the lines of what GitLab, GitHub, AWS CodeBuild, etc. provide.



------------------------------
Dan Locks
------------------------------

5 Replies

  • Nice find!
    I'm curious if the references are preserved in the "Pipeline Summary" in the Activity Log.


I also fear that they might be lost if you use the Refresh Schemas button.




    ------------------------------
    Justin Torrence
    Quickbase Expert, Jaybird Technologies
    jtorrence@jaybirdtechnologies.com
    https://www.jaybirdtechnologies.com/#community-post
    ------------------------------
  • DonLarson's avatar
    DonLarson
    Qrew Commander

    Dan,

Would you please expand on the functionality this provides?

    I think you are using it to define the fields that you are importing to with the CSV.



    ------------------------------
    Don Larson
    ------------------------------
    • DanLocks's avatar
      DanLocks
      Qrew Trainee

A best practice across all programming disciplines is to avoid repeating identical lines or values when writing code. In my case, I have three tables with identical columns but differing data, and the "import with csv" action looks *almost* the same for each table. This YAML syntax allows me to write the value for `header_row:` once and define a reference to that value in-line. Any time I need that same value, I can use the reference instead of repeating the actual value.

Keep in mind this is a purely textual modification to the YAML: after the YAML is loaded, the resulting values are hard-coded again. So after loading a pipeline that uses references, you could still edit the pipeline in the UI to give each step a different `header_row` value.

I am using YAML references to define the fields I import, but this simple reference alone is insufficient to create a valid "import with csv" block. Each field must also have a mapping of the form `qb_field_for_a`, then `qb_field_for_really`, then `qb_field_for_long`... one for each field listed in `header_row`. The value of each mapping is the field ID in the destination table. For me, the destination field IDs are not consistent across tables, so I can't simply repeat all the `qb_field_for` lines using YAML references, because the values differ.
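For context, here is a hedged sketch of what I mean (the `qb_field_for_*` keys follow the pattern above, but the field IDs and app IDs are made up for illustration): the `header_row` value can be shared via an alias, while the per-table field mappings have to be written out each time because the IDs differ:

```yaml
- ACTION quickbase[abcdefg] bulk_record_set csv_import -> c:
    inputs-meta:
      csv_url: '{{b.file_transfer_handle}}'
      header_row: &headers A,REALLY,LONG,LIST,OF,HEADERS
      qb_field_for_a: '6'        # field IDs here are hypothetical
      qb_field_for_really: '7'
      qb_field_for_long: '8'

- ACTION quickbase[bcdefgh] bulk_record_set csv_import -> e:
    inputs-meta:
      csv_url: '{{b.file_transfer_handle}}'
      header_row: *headers       # shared via the alias
      qb_field_for_a: '9'        # different destination table, different IDs,
      qb_field_for_really: '10'  # so these lines can't be aliased
      qb_field_for_long: '11'
```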

Honestly, it's not clear how useful this is, but I figured someone might get some use out of it.



      ------------------------------
      Dan Locks
      ------------------------------
      • DonLarson's avatar
        DonLarson
        Qrew Commander

        Dan,

        Thanks for introducing something new.  You have me thinking about it.

        Don



        ------------------------------
        Don Larson
        ------------------------------