A solution to un-pivot table data
If this topic has been discussed and there are well-known solutions, I apologize; I didn't come across any before I came up with the solution below.

Background: I was working on a request from my team to build out some functionality that required a lot of checkboxes. I didn't want to spend a lot of time building a custom form to ensure that the checkbox data was stored with a row for each box. There are 28 checkboxes representing various issues (oil leak, lights, forklift forks, etc.). While not strictly necessary, I hoped I could somehow un-pivot these columns into a table where only the issues that were checked had a record.

TL;DR: Using Fetch JSON and the Quickbase API, you create a list of the field ids you want to un-pivot; then, for each record in a second set of JSON data, you loop through that field list and create a new record in a second table containing the value you are un-pivoting, plus any other data you want repeated in each new row. This process leaves the original table completely intact.

Solution

API URLs

You'll need two API request URLs:
- One request for the field information of the table you wish to un-pivot (field id, field label, etc.)
- One request to a report that has the data you'll need for the records in the new table (field values, checkbox data, etc.). Don't forget to include columns that contain the foreign key data you'll use in the new table.

If you already know how to create those URLs, skip ahead to the Pipeline | Fields Data section. To learn how to build the API URLs:

1. Go to developer.quickbase.com.
2. On the left side of your window you'll see a list of topics related to the Quickbase API. Select the arrow next to the 'Fields' label.
3. Click on the first item in the list, "Get fields for a table".
4. From here, enter the table id of the table you want to un-pivot, and fill out the other information related to realm, authorization, etc. I suggest also testing it to see the data that comes through.
5. Copy the URL produced in the top right corner. This is your field data URL.
6. Back on the list on the left, click on 'Run a report', then repeat steps 4-5 to get the record data URL.

Pipeline | Fields Data

1. Create a new pipeline and add a 'Fetch JSON' step.
2. Populate the Fetch JSON step with the field data URL and the required headers. This is a GET request.
3. Following that step, create an 'Iterate over JSON' step. It should automatically select the prior step as its source; if it didn't, select the previous step in the 'JSON Source' field.
4. In this step it is helpful to have a sample of the JSON schema so you can reference specific items in the next steps, so include a sample data dump from the developer.quickbase.com API page. This makes references to specific field data much simpler (a sample item is shown at the end of this section).
5. At the bottom of the 'Iterate over JSON' step, filter the field list to include ONLY the fields you want to un-pivot. In my case this was simple, as all of my fields were checkboxes, so I chose the 'Field Type' field as my filter and conditioned it on "checkbox". If you aren't as fortunate and you don't already use the 'Field Help' field in your applications, you could populate each field you wish to un-pivot with the text 'unpivot' in 'Field Help' and filter on that instead. This filter is absolutely vital to the process: if you can't get the field list down to exactly what you want, you'll get unwanted rows.
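For reference, here is a minimal sketch of what one item in the iterated field list might look like, assuming the standard response shape of the 'Get fields for a table' endpoint (the values here are illustrative, not from my app):

{
    "id": 42,
    "label": "Oil Leak",
    "fieldType": "checkbox",
    "fieldHelp": "unpivot"
}

With that shape, and assuming the Iterate over JSON step is step B in your pipeline, {{b.id}} returns 42 and {{b.label}} returns "Oil Leak" inside the loop.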
Pipeline | Record Data

1. The 'Iterate over JSON' step from above will have created a 'loop' step. IN BETWEEN the 'Iterate over JSON' step and its corresponding loop, repeat steps 1-4 from the Pipeline | Fields Data section, this time using the record data URL.
2. A second loop will have been created as part of this new 'Iterate over JSON' step.
3. Move the field data loop (from the first part) INTO the new loop you just created. The field data, i.e. the information that has field ids, field labels, etc., should be nested INSIDE the loop that has the table data, i.e. the records that have the actual information you want to un-pivot.

Testing for Field Data

1. In the nested loop (the field data loop), create a 'Condition' step.
2. In the dropdown where you select the field to evaluate, go to the bottom of the list and choose "Expression (advanced)". In the dropdown to the right, you can leave it as "evaluates to True".
3. In the criteria field below the two dropdowns, include the following Jinja:

{{d.raw_record[(b.id|string)]['value']}}

To explain: we're taking the table data we downloaded (d.raw_record) and using the current field id of our loop (b.id) to grab the data (['value']) for just that specific field. In my case the value was always going to be 'true' or 'false' because it was a checkbox. You may need to adjust the logic to test whether the field has data or not (see the sketch at the end of this post); if you don't test for this, you'll get empty rows of data.

Creating a Record

1. If you haven't already, create the table where the data is going to go.
2. In the pipeline, in the 'If condition is met' branch, add a 'Create Record' step and choose the table you created.
3. Add all the fields from the new table that you are going to populate with data from the un-pivot.
4. For each field in the NEW table that you'll use to group on, use Jinja to select the source field that will fill it. Below is the Jinja I used to populate the fields that get the same data for every record. This was the record id of the report that has all the checkbox issues:

{{(d.raw_record['3']['value'])|int}}

And this was the date of the report, which I wanted to include in each new record:

{{time.parse(d.raw_record['6']['value'])|date_mdy}}

5. For the field that contains the value you want to un-pivot, use the same Jinja statement you used in the Condition step to check for valid data:

{{d.raw_record[(b.id|string)]['value']}}

Cross your fingers, say a few prayers, call your mother, and then hit run on the pipeline.
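One addendum on the condition expression: if your source fields aren't checkboxes, a hedged variation that only passes when the field actually holds data might look like this (a sketch, not tested against every field type):

{{d.raw_record[(b.id|string)]['value'] not in ['', none]}}

This evaluates to True only when the value is neither an empty string nor null, so blank fields are skipped instead of producing empty rows.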
Could not parse XML input

Hi, I'm getting errors on my pipelines for a multi-line field:

Quickbase reported an error: 11 : Could not parse XML input : XML Parsing Error. not well-formed (invalid token) at line 3 column 353 (which is byte 700)

When I look through the activity log and the original db, I'm seeing these characters: † and “. When I talked with the users, they said they did not put them in the field. After a little research, it looks like it was a copy-and-paste issue. My question is: how can I prevent these from being sent through the pipeline? Thank you
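One possible mitigation, offered only as a hedged sketch: wherever the pipeline maps the multi-line field, wrap the value in Jinja replace filters to strip or normalize the offending characters before they reach the XML layer. Here a.notes is a hypothetical step/field reference; substitute your own:

{{a.notes|replace('†','')|replace('“','"')|replace('”','"')}}

This only covers characters you've already identified; a more general fix would be cleaning up the pasted text at its source.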
Upsert based on Condition

Hi All, I have three tables. The first is basic information, where people are marked as type A, B, or C. The second is the question bank table, containing all the questions for them. There are two columns for weight: if a person is type A or B, they see the column "weight - A or B", but if they are type C, they see the column "weight - C". Every time I create a new person record in the first table, I want to pull the questions and corresponding weights from the question bank into a third table. I have created a pipeline (search records - add a bulk upsert row) to pull all the questions, but when it comes to the weight, how can I pull the corresponding weight based on the person's type? Thanks!
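In case it helps frame an answer: inside the bulk upsert row mapping, a Jinja inline-if can pick the weight column based on the type. This is only a sketch, and a.type, b.weight_a_or_b, and b.weight_c are hypothetical step/field references you would replace with your own:

{{b.weight_a_or_b if a.type in ['A', 'B'] else b.weight_c}}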
Pipeline Jinja For Duration in Seconds

I would like to have a pipeline condition where nothing happens if the current time is less than or equal to 30 seconds after the value in a given field, which is a date/time field. I have tried multiple variations of Jinja code and mostly get the error message:

Validation error: Incorrect template "whatever my jinja is". ValueError: invalid literal for int() with base 10: 'my_date_time_field_name'

I have tried subtracting and converting time.now and my field to integers before comparing to 30 seconds, and I have tried using time.delta(seconds=30), adding it to the date field, and then comparing it to the current time, among other things. I get the above error each time. I would really like to do this with Jinja, if at all possible. If anyone can help, that would be greatly appreciated. I know, worst comes to worst, I can add a helper field and just work off of that, but I am doing this for learning purposes. Thank you in advance to anyone who can help.
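For reference, a hedged sketch of one shape this condition can take. The error message quoted above is consistent with the field being referenced by its bare name, which Jinja then treats as a plain string; a.my_date_time_field below is a hypothetical reference with the step prefix included, which you would swap for your own:

{{time.now <= a.my_date_time_field + time.delta(seconds=30)}}

This evaluates to True while the current time is still within 30 seconds of the field's value, so you can branch on it however your workflow needs.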
Pipeline Error trying to Create a Record

I am not a fan of Pipelines; I miss Automations. I used to be able to build an Automation in about 30 minutes and it did what I wanted. I've been working for two days to build this pipeline, even using the AI to do the framework, and I get nothing but errors or issues like this one. The intent of this pipeline is: if a record is created or updated in one app, it looks for a matching record in another app. If there is a matching record in the other app, based on a site # and a uniquely created identifier, it updates any modified changes in the second app. If there is not a matching record, it creates one in the second app. After tweaking several different things, I finally got it to create the record. At first it said it found a record that matched the criteria despite no record being present. But now, whenever it searches for a "Matching Record" based on the criteria I asked it to look at (site & unique identifier), it continues to return FALSE and just creates a duplicate instead of updating. And the activity log seems to say that it couldn't find the record based on the search criteria, yet it created duplicate records with the exact criteria it was asked to look for. (I now have 5 records with identical site #s and unique #s.) What am I missing to get it to return a TRUE finding of the record? Also, is there any way to change the "Query" to look at anything besides the Record ID?
Bulk upsert field limit in pipeline

Hi, I have a pipeline that uses the bulk upsert steps. Essentially, I'm just copying data from one table to another table, and I need to do this bi-weekly. It's my first time using bulk upsert, and now I see there is a 250-field limit. Is there any way around this limit? I checked online and the solution seems to be to use a helper table, but I'm not sure how to do this. I only have about 400 fields, so I don't want to do that if it's unnecessary. I also saw something about a loop, but I don't know how to do that either. What's the best way to go about this? I have the following steps in my pipeline:

- Prepare Bulk Record Upsert
- Search Records
- In loop: Add a Bulk Upsert Row
- End loop
- Commit Upsert
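One workaround pattern, offered as a hedged sketch only (verify the merge behavior in your own app): since an upsert merges on a key field, you may be able to split the copy into two passes of 250 fields or fewer, each with its own prepare/search/add-row/commit sequence keyed on the same merge field (for example, a field holding the source table's Record ID#). The first pass creates the rows with the first batch of fields; the second pass, merging on the same key, fills in the remaining fields on those same rows:

Pass 1: Prepare Bulk Record Upsert (merge field: source Record ID#) -> Search Records -> loop: Add a Bulk Upsert Row (first ~250 fields) -> Commit Upsert
Pass 2: Prepare Bulk Record Upsert (same merge field) -> Search Records -> loop: Add a Bulk Upsert Row (remaining fields) -> Commit Upsert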
QuickBase Pipeline Best Practices

After working with QuickBase Pipelines for years, one thing is clear: every real-world use case is unique, from API limits and Jinja logic quirks to race conditions and silent failures. That's why I believe we need more shared knowledge and real-time collaboration around building smarter Pipelines.

What I've learned:
- Pipelines are incredibly powerful but not always intuitive.
- The best solutions often come from hands-on trial, error, and teamwork.
- Every failure teaches us something about logic, structure, or scalability.

What I'd like us to do:
- Share lessons from the field
- Explore patterns that reduce manual tasks outside the system

Let's learn by doing — together.

#QuickBase #Pipelines #Automation #LowCode #WorkflowAutomation #NoCodeDevelopment #QuickBaseCommunity #APIIntegration #ProblemSolving
Document Template Save PDF to field in Record API Pipeline

I have been wracking my head all day on this. I was reading "Generate Documents from your records in your app", where it says: "However, you can also create API calls and use Pipelines, code pages, or custom integrations to generate documents. Each part of the formula represents an API parameter." I was using the RESTful "Generate a Document" endpoint and was able to make a request successfully. How do I get the output of that request, i.e. the generated document for this record, into a file attachment field on the record? I tried using:

<qdbapi>
   <rid>1700</rid>
   <field fid="22" filename="Model_T.jpg">Base64 encoded file content</field>
</qdbapi>

Thank you so much!
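For what it's worth, a hedged sketch of how the pieces might connect, with the assumptions flagged: assume the Generate a Document call is made from a request step (call it step A) and that its response exposes the file as a base64 string; the Jinja path to that string, shown here as a.response_body, is an assumption you would confirm from your own step's output. A second request step could then POST an API_EditRecord call to https://yourrealm.quickbase.com/db/yourtableid?a=API_EditRecord with a body like:

<qdbapi>
   <usertoken>your_user_token</usertoken>
   <rid>1700</rid>
   <field fid="22" filename="report.pdf">{{a.response_body}}</field>
</qdbapi>

The rid and fid values are carried over from the snippet above; the <field fid filename> element is the documented XML API shape for writing base64 content into a file attachment field.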
How to create a pipeline to count records in a table

I want a pipeline to do the following:

1. Go to a table in application A
2. Search for records meeting certain criteria in that table (specifically, the records that were completed in the previous month)
3. Count the number of records found
4. Go to application B
5. Create a record in a table in application B
6. Put the count of the records found in application A in a field on the form (fields: 1. application A name, 2. count of records found, 3. date the pipeline ran)

I want the pipeline to run monthly on a scheduled date. Can someone kindly tell me the steps I would need to include in the pipeline to make this work? High-level is all I'm looking for. Thank you!
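A hedged high-level sketch of one way to get the count without looping over every record: have a scheduled pipeline call the records query endpoint from a Fetch JSON step and read the total from the response metadata, then create the application B record. The field id 13 stands in for a hypothetical "date completed" field, and the a.json.metadata.totalRecords path assumes the Fetch JSON step is step A; verify both against a test run of your own:

POST https://api.quickbase.com/v1/records/query
{
    "from": "your_table_dbid",
    "where": "{13.IR.'last month'}",
    "options": {"top": 1}
}

In the Create Record step for the application B table, map the count field to {{a.json.metadata.totalRecords}} and the run-date field to {{time.now|date_mdy}}.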