Records, Pipelines, and Callable Pipelines clarification
Hello! I am a new user, and I'm looking for clarification on how records, pipelines, and the Callable Pipelines action relate. I have a complex pipeline triggered by user record creation. The pipeline runs through a substantial decision tree and updates records and sends notifications depending on the path the record takes through the tree. However, I have run out of actions and need to set up a callable pipeline to do some record validation (instead of adding those steps to the main pipeline). So, I am working on setting up the callable pipeline, but I want to make sure that the pipelines only update a given record, and not ALL records.

Update: before you read the next part of the post, consider that I may be overthinking this whole thing and I just need to pass through the record ID(s) for the records I want my called pipeline to validate.

I assume that when a pipeline runs with the "record created" trigger, it runs only in relation to a given record; i.e., the context of the pipeline run is a single record. Is that correct, or does it evaluate the whole table of records? Or do I need to set a record limit on the trigger?

When I use a callable pipeline, does the calling pipeline maintain the context of the call, i.e., a single record or a batch of records? Depending on that answer, does the called pipeline evaluate each of those records individually, or does it evaluate the entire table with respect to the called field(s)?

For example, here are my two actions. I thought I had them set up correctly, but I can't reference the calling pipeline's fields in the called pipeline (third screenshot). Update: this may be because neither of the pipelines is turned on, and I may be using Jinja expressions where I should instead be using aliases, which I can set to the calling pipeline's fields (see this video). Do I need to bother with passing the record IDs ("a.id"), or are those "implicit"?

Call action:
Called trigger:
Not finding called fields:

BUT let's assume that I get these calls to work. Will the changes to the records made in the calling pipeline have been written to the table when the called pipeline is called? That pipeline will need to do evaluations based on the records' data, which depends on the actions in the calling pipeline! Thanks for all of your help!
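Record IDs are not implicit: a callable pipeline only receives the inputs you define on its callable trigger, so the usual pattern is to pass the record ID explicitly and have the called pipeline look up or validate that one record. A minimal sketch, assuming an input named record_id defined on the called pipeline's trigger (the input name is illustrative, not a built-in):

    In the calling pipeline's call step, map the input:
        record_id: {{a.id}}

    In the called pipeline, reference the trigger's input in later steps,
    e.g. a Search Records step filtered on Record ID#:
        {{a.record_id}}

Used this way, each call carries single-record context; calling the pipeline from inside a loop gives one run per record in the batch.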
Date / Field stamp through pipeline
I have a step in my pipeline that creates line items and inserts the required data into each line item. The completion status of a line item depends on the Completed Date field. How do I include the completed date in the step, so that the line item is complete as soon as the data is added?
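A hedged sketch for stamping the date in the create step itself, assuming your Pipelines Jinja environment exposes the time helpers (the same time module used for time.parse in the un-pivot post later in this thread). In the Completed Date field of the step that creates the line item:

    {{time.now|date_mdy}}

If Completed Date is a date/time field rather than a date field, drop the date_mdy filter or swap in the formatting filter your field expects.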
Pipeline formula for user names
I need a pipeline to populate a field in a new record with the names from another field. Many users did not update their profile with a 'Screen Name', so I can't use that field in the pipeline. I am having trouble getting it to populate using the First and Last Names. What formula can I use in Pipelines to create a list of multiple names? I used this field to create a text field, and this is what is returned:

Using this text field is not ideal, as Barb does not have a screen name, so it defaults to her email address. If there are several users, this field will be harder to read with emails mixed in. I want the pipeline to take the Fundraisers Assigned values and populate like this, preferably comma separated, though semi-colons are fine too. Example: Amy Gosz, Barbara Burns

I can get the First Name and Last Name to combine, but then it only returns one of the names. How do I write the formula to not only combine the First Name and Last Name, but also list multiple users' names? I need all the names (there can be several) that appear in the Fundraisers Assigned field to populate the new field.
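A hedged Jinja sketch, assuming the Fundraisers Assigned field reaches your step as a list of user objects exposing first_name and last_name attributes (worth confirming in the step's available fields, since multi-user fields sometimes surface only as email addresses, in which case you would first need a query against a table that stores the names):

    {% for u in a.fundraisers_assigned %}{{u.first_name}} {{u.last_name}}{% if not loop.last %}, {% endif %}{% endfor %}

The loop.last test adds the separator only between names, so two users render as: Amy Gosz, Barbara Burns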
Do Bulk Records Pipelines Make Simultaneous Updates?
When bulk records are handled in a pipeline, are all of the applicable records updated simultaneously? Will they have the same modified date and time?

I have notifications set up to be triggered when a date field is updated, and a formula checkbox field used to identify the most recent record associated with an email address field value. The most recent record may not be the one that gets updated along with other records sharing the same email address value, and I don't want the email notification sent before the other associated records are updated.

I am looking at using Bulk Record steps to reduce the number of pipeline steps I am currently using, as well as to handle everything more efficiently. I just need to know whether I would be safe having the date field updated on the most recent record along with the other potential changes, without worrying about the email being triggered before all applicable updates are made.

Thank you in advance for any input. Let me know if I need to provide a visual and/or more clarification on what I am looking to accomplish.
Pipeline - split multiselect and create records from it
I can't seem to wrap my head around this. I have a record with a multiselect field. When conditions are met, I want a pipeline to take data from the record and create new records in another table, one record for every value in the multi-select field. I can't seem to figure out where I would put the Jinja to split those values and then loop through them. Googling turned up the following Jinja:

{% set selected_values = a.your_multi_select_field | split(';') %}

But where would I put this? It seems like I need a step after my trigger to do this, but there isn't a "jinja" channel.
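One hedged approach, rather than a set statement: feed the split values to the JSON channel's Iterate over JSON step and create a record inside the loop it generates. A sketch, assuming the split filter from your snippet plus a JSON-serializing filter (written here as to_json; the exact name in your environment may differ, so test it first). As the JSON source of the Iterate over JSON step:

    {{ a.your_multi_select_field | split(';') | to_json }}

Then, inside the resulting loop, add a Create Record step on the destination table and map the loop's current item into the target field, along with any other values copied from record a.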
Could not parse XML input
Hi, I'm getting errors on my pipelines for a multi-line field:

Quickbase reported an error: 11 : Could not parse XML input : XML Parsing Error. not well-formed (invalid token) at line 3 column 353 (which is byte 700)

When I look through the activity log and the original db, I'm seeing these characters: † and “. When I talked with the users, they said they did not put them in the field. After a little research, it looks like it was a copy-and-paste issue. My question is: how can I prevent this from being sent through the pipeline? Thank you
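As a hedged stopgap using only standard Jinja filters, you can strip or substitute the offending characters in whatever step writes the field, before Quickbase has to parse it (the field name here is illustrative):

    {{ a.notes | replace('†', '') | replace('“', '"') | replace('”', '"') }}

This only covers characters you have already seen. A longer-term fix is to clean the field at the source, for example with a pipeline that rewrites the field using the same expression whenever it changes.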
Error listing for Pipelines?
Is there any kind of troubleshooting listing somewhere of common issues with Pipelines? I have literally copied a working pipeline line by line, instruction by instruction, and the one I am working on keeps getting an error. What I have noticed is that, for some reason, the "Commit Upsert" runs twice, which I don't believe it ever used to do when I look at older runs similar to this one. It is very frustrating to build an exact copy of a functioning pipeline in Pipelines (the only difference being a different app) and have the copy not function. I've added a screenshot of the error I'm getting. If anyone can either help me or guide me to an answer, I would appreciate it.
Pipeline help
I have a parent table called Principal Investigators (PI) that has a child table called Applications. This is for external users to apply to a grant we are hosting; all the information comes through a form in the Applications table. However, I want to pull information from the Applications table into the PI table for future querying. Very simply, users will put in their name, email address, and departmental affiliations, from which I want to create a record in the PI table if it does not exist. If it does exist, I want to update the Applications table with the related PI record ID. I cannot get this to trigger properly, and I have tried many times. It fails at the IF/ELSE statement. Here is my YAML file below, if it helps (I removed identifying information):

# Add PI to table
#
# Account slugs:
# - quickbase[DB]: Realm Default Account <None>
---
- META:
    name: Add PI to table
    enabled: false

- TRIGGER quickbase[DB] record on_create -> a:
    inputs-meta:
      allow_triggers: Any
      export_fields: '"Applicant Name, Applicant Email, Departmental Affiliations, Related Investigator" <6, 9, 81, 234>'
      table: '"Awards Portal: Applications" <##>'

- QUERY quickbase[DB] record search -> b:
    inputs-meta:
      export_fields: '"Email Address, Full Name" <7, 6>'
      table: '" Awards Portal: Principal Investigators" <##>'
    name: Search for records in PI table
    note: Search for records in PI table that match the applicant email

- b<>LOOP:
  - DO:
    - IF:
      - AND:
        - a<>applicant_email equals {{b.email_address}}
      - THEN:
        - a<>ACTION quickbase record update -> c:
            inputs:
              related_investigator: '{{b.id}}'
            name: Update the record
            note: This step updates a record in the table
      - ELSE:
        - ACTION quickbase[DB] record create -> d:
            inputs-meta:
              export_fields: '"Affiliations, Email Address, Full Name" <8, 7, 6>'
              table: '"Awards Portal: Principal Investigators" <DB>'
            inputs:
              affiliations: '{{a.departmental_affiliations}}'
              email_address: '{{a.applicant_email}}'
              full_name: '{{a.applicant_name}}'
            name: Create the record
            note: This step creates a record in the table
        - a<>ACTION quickbase record update -> e:
            inputs:
              related_investigator: '{{d.id}}'
            name: Update the record
            note: This step updates a record in the table
      - metadata:
          name: If a condition is met, do something
          note: Check if condition is true
  - metadata:
      name: Iterate through the records
      note: Iterate through the records found in the previous step
...
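Two hedged guesses rather than a definitive diagnosis. First, the QUERY step's table value carries a leading space (" Awards Portal: Principal Investigators"), which is worth cleaning up. Second, and more likely the real problem: because the IF/ELSE lives inside the b loop, the ELSE branch can only run when the search returned at least one record; if no matching PI exists, the loop body never executes at all, so the create step never fires. The create logic needs to live outside the loop, gated on whether anything was found. Also note that equals comparisons are case and whitespace sensitive; a normalized comparison in an Expression (advanced) condition, using standard Jinja filters, is a common workaround:

    {{ a.applicant_email|trim|lower == b.email_address|trim|lower }}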
Join multi-line text field to one line
Hi all. I'm trying to find a way (either using a formula or in Pipelines) to join a multi-line text field into a single line. Use case: the multi-line text field will contain text that goes into the JSON request body of an API call in Pipelines, so the field needs to be converted to a single line. The field has to be a multi-line text field, because values can contain several paragraphs and app users can easily customize it. Thanks!
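A hedged sketch using only standard Jinja filters, placed where the field value lands in the Fetch JSON request body (the field name is illustrative): collapse the line breaks and escape double quotes so the value stays valid JSON:

    {{ a.long_text | replace('\r\n', ' ') | replace('\n', ' ') | replace('"', '\\"') }}

If your Pipelines environment offers a JSON-encoding filter (often named to_json or tojson), prefer it, since it handles all escaping in one pass; the replace chain above only covers the common cases.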
A solution to un-pivot table data
If this topic has been discussed and there are well-known solutions, I apologize; I didn't come across any before I came up with the solution below.

Background: I was working on a request from my team to build out some functionality that required a lot of checkboxes. I didn't want to take a lot of time building a custom form to ensure that the checkbox data was stored with a row for each box. There are 28 checkboxes representing various issues (oil leak, lights, forklift forks, etc.). While not totally necessary, I hoped I could somehow un-pivot these columns into a table where only the issues that were checked had a record.

TLDR: Using Fetch JSON and the Quickbase API, you create a list of the field ids you want to un-pivot; then, for each record in a second set of JSON data, you loop through that field list and create a single new record in a second table with the values you are un-pivoting, plus any other data you wish to repeat in each new row. This process leaves the original table completely intact.

Solution: API Urls
You'll need two API request urls (hedged examples of both finished requests are sketched at the end of this section):
1. One request for the field information for the table with the data you wish to un-pivot (e.g., field id, field label, etc.).
2. One request to a report that has the data you'll need for the records in the new table (e.g., values in fields, checkbox data, etc.). Don't forget to include columns that contain the foreign key data you'll use in the new table.
If you know how to build those URLs, skip ahead to the next section. To learn how to build them:
3. Go to developer.quickbase.com.
4. On the left side of your window you'll see a list of topics related to the Quickbase API. Click the arrow next to the 'Fields' label.
5. Click the first item on the list, "Get fields for a table".
6. Enter the table id of the table you want to un-pivot and fill out the other information (realm, authorization, etc.). I suggest also testing it and reviewing the data that comes through.
7. Copy the url produced in the top right corner. This is your field data URL.
8. Back on the list on the left, click 'Run a report', then repeat steps 6-7 to get the record data URL.

Pipeline | Fields Data
1. Create a new pipeline and add a 'Fetch JSON' step.
2. Populate the Fetch JSON step with the field data url and the required headers. This is a GET request.
3. Following that step, create an 'Iterate over JSON' step.
4. The step should automatically select the prior step as its source; if it doesn't, set the 'JSON Source' field to the previous step.
5. In this step it is helpful to have a sample of the JSON schema so you can reference specific items in later steps, so include a sample data dump from the developer.quickbase.com API page. This makes references to specific field data much simpler.
6. In the 'Iterate over JSON' step, go to the bottom and filter the field list to ONLY include the fields you want to un-pivot. In my case this was simple, as all of my fields were checkboxes, so I chose 'Field Type' as my filter, conditioned on "checkbox". If you aren't as fortunate, and you don't otherwise use the 'Field Help' field in your applications, you could filter on 'Field Help' instead: populate it with the text 'unpivot' on each field you want to un-pivot, then filter on that. This step is absolutely vital to the process: if you can't get the field list down to exactly what you want, you'll get unwanted rows.
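Before moving on to the record data, here is a hedged sketch of the two finished requests from the API Urls section above, using the standard Quickbase RESTful API endpoints documented at developer.quickbase.com (table id, report id, realm, and token are placeholders):

    Field data URL (GET):
        https://api.quickbase.com/v1/fields?tableId=bxxxxxxxx

    Record data URL (POST):
        https://api.quickbase.com/v1/reports/{reportId}/run?tableId=bxxxxxxxx

    Headers for both:
        QB-Realm-Hostname: yourrealm.quickbase.com
        Authorization: QB-USER-TOKEN your_user_token
        Content-Type: application/json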
Pipeline | Record Data
1. The 'Iterate over JSON' step from above will have created a 'loop' step. IN BETWEEN the 'Iterate over JSON' step and its corresponding loop, repeat steps 1-4 from the Pipeline | Fields Data section for the record data url.
2. A second loop will have been created as part of this new 'Iterate over JSON' step.
3. Move the field data loop (from the first part) INTO the new loop you just created. The field data (the information with field ids, field labels, etc.) should be nested INSIDE the loop that has the table data, i.e., the records with the actual information you want to un-pivot.

Testing for Field Data
1. In the nested loop (the field data loop), create a 'Condition' step.
2. In the dropdown where you select the field to evaluate, go to the bottom of the list and choose "Expression (advanced)". In the dropdown to the right, leave it as "evaluates to True".
3. In the criteria field below the two dropdowns, include the following jinja:
    {{d.raw_record[(b.id|string)]['value']}}
To explain: we're taking the table data we downloaded (d.raw_record) and using the current field id of our loop (b.id) to grab the data ['value'] for just that specific field. In my case, the value was either going to be 'true' or 'false' because it was a checkbox. You may need to adjust the logic to test whether the field has data or not. If you don't test for this, you'll get empty rows of data.

Creating a Record
1. If you haven't already, create the table where the data is going to go.
2. In the pipeline, in the 'If condition is met' branch, add a 'Create Record' step.
3. Choose the table you created.
4. Add all the fields from the new table that you are going to populate with data from the un-pivot.
5. For each of the fields in the NEW table that you'll use to group on, use jinja to select the source values. I've included the jinja I used to populate the fields that get the same data for every record. This was the record id of the report that has all the checkbox issues:
    {{(d.raw_record['3']['value'])|int}}
Below is the date of the report, which I wanted to include in each new record:
    {{time.parse(d.raw_record['6']['value'])|date_mdy}}
6. For the field that contains the value you want to un-pivot, use the same jinja statement you used in the condition step to check for valid data:
    {{d.raw_record[(b.id|string)]['value']}}

Cross your fingers, say a few prayers, call your mother, and then hit run on the pipeline.
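To recap the final shape, a hedged outline reconstructed from the steps above, with step letters matching the jinja (b is the current field from the fields loop, d is the current record from the records loop):

    a: Fetch JSON  (field data URL)
    b: Iterate over JSON  (fields, filtered to the un-pivot set)
    c: Fetch JSON  (record data URL, placed between step b and its loop)
    d: Iterate over JSON  (report records)
    Records loop (d)
        Fields loop (b), moved inside
            Condition: {{d.raw_record[(b.id|string)]['value']}} evaluates to True
                Create Record in the new table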