Re: Using Pipelines to update and load a CSV into tables

Thanks Mark, as always. I think my first question to solve is: given data like this, what's the quickest way to do lookups and updates in bulk?

Rep Name    | Employee_ID | FSA   | COID | Store Code | Knock Status     | Created DateTime
Joe Smart   | MH7J        | 1012A | CRCD | ICI089     | Successful Order | 2024-11-07T22:08:01
Jim Smarter | MG2C        |       | LOND | ICI171     | Successful Order | 2024-11-07T22:08:01

In this example, my employee ID is not a key field. Long story, but my folks might have any number of IDs depending on what contract they're on. What I need to do is look up this employee ID to get the related record ID on the Person table.

Using Pipelines, mostly because I like to keep things consistent: what is the most efficient way to update this table where the employee ID from this staging table matches the related Person ID (i.e. the Record ID from the Employee table)? I do this rather a lot for various things, but it's slow: Search Records where Employee ID matches, return the Record ID, loop over the search results, update the record, and do it again. It seems to take the API 1-2 seconds to do each lookup.

I just tried a bucket: defined a Pipeline Table, did a Search Records from the Client ID table, pulled my employee ID, and added the row in memory. But looking up the bucket table takes the same steps as the Quickbase table, and it's not appreciably faster.

Using Pipelines to update and load a CSV into tables

I am having a tough time figuring out the most efficient way to handle this use case. Currently, my pipeline has been running for 12 hours on about 45,000 records, so my current method does not work!

I will regularly receive a CSV file of about 35-40,000 records, or about 5 MB of data. The data is basically a contact transaction; it has a handful of fields, including my employee's unique employee ID. Using Pipelines, I want to create a key ID by concatenating three fields together, and add a Related Employee ID from my employee table.

I've created a bulk upsert, fetched the CSV (which took two hours, as it was loading 10 rows at a time), and added a search step to pull the related employee ID. It adds a row to the upsert, then loops back. I only have about 200 employees; is there a way to loop through the unique employee IDs in the CSV to update in bulk, or is there a whole better approach I'm missing? I need non-technical folks to be able to drop the file and have it run.

Thanks!
malcolm
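For the bulk piece, one pattern that avoids both the per-row search and the 10-rows-at-a-time load is to do the heavy lifting in one script against the Quickbase JSON API: query the ~200 employees once into a dictionary, then upsert the CSV in large batches. Below is a minimal sketch of that idea, not a Pipelines recipe; the realm, token, table IDs, field IDs, file name, and CSV column names are all hypothetical placeholders.

import csv
import requests

REALM = "myrealm.quickbase.com"        # placeholder realm
TOKEN = "QB-USER-TOKEN my_token_here"  # placeholder user token
EMPLOYEE_TABLE = "bxxxxxxx1"           # hypothetical: Employee table
STAGING_TABLE = "bxxxxxxx2"            # hypothetical: staging/transaction table
HEADERS = {"QB-Realm-Hostname": REALM, "Authorization": TOKEN}

# 1) One query for all ~200 employees, building Employee ID -> Record ID.
#    Field 3 is Record ID; field 6 is an assumed Employee ID field.
resp = requests.post("https://api.quickbase.com/v1/records/query",
                     headers=HEADERS,
                     json={"from": EMPLOYEE_TABLE, "select": [3, 6]})
resp.raise_for_status()
emp_map = {r["6"]["value"]: r["3"]["value"] for r in resp.json()["data"]}

# 2) Read the CSV once, building each row's concatenated key and its
#    Related Employee value from the in-memory map (no per-row search).
rows = []
with open("transactions.csv", newline="") as f:
    for row in csv.DictReader(f):
        key = f'{row["Employee_ID"]}|{row["Store Code"]}|{row["Created DateTime"]}'
        rows.append({
            "7": {"value": key},                              # assumed key field
            "8": {"value": emp_map.get(row["Employee_ID"])},  # assumed Related Employee
            "9": {"value": row["Knock Status"]},              # assumed status field
        })

# 3) Upsert in large batches instead of row by row.
for i in range(0, len(rows), 1000):
    r = requests.post("https://api.quickbase.com/v1/records",
                      headers=HEADERS,
                      json={"to": STAGING_TABLE,
                            "data": rows[i:i + 1000],
                            "mergeFieldId": 7})
    r.raise_for_status()

The key idea carries back into Pipelines as well: search the employee table once up front, rather than once per CSV row.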
Jotforms API

Hey folks - sharing a win for future generations. I use Jotform on the website to gather simple form data, but their webhook payload is multipart/form-data rather than JSON, so it took a couple of steps to make it usable.

Getting the Submission ID

Their API lets you call form data and get a JSON response if you know the submission ID. After setting the incoming webhook back up, fetch the JSON using the following:

{% set data = a.body %}
{% set start = data.find('name="submissionID"') + 20 %}
{% set end = data.find('-', start) %}
{% set submission_id = data[start:end]|trim %}

This parses through the lengthy multipart/form-data content, searching for submissionID, pulling the value, and trimming any extra spaces for safety. It's a bit of a fragile solution because it depends on Jotform not changing their wording, but it works well enough. The API URL I can call then looks like:

https://api.jotform.com/submission/{{submission_id}}?apiKey=

Getting the file

I have a file attachment in the 12th field of the JSON, but extracting a clean name was also difficult. The JSON only gives a result like:

https://www.jotform.com/uploads/blah/blah/blah/Screenshot.png

That provides the path to download the file, but to strip the name out, Jinja's split needs to be used; however, the node the URL is on is a list. I know there will only be one file, so I used the following to work around the split limitation:

{% set dlurl = c.raw_record.answers['12'].answer[0] %}
{% set parts = dlurl.split('/') %}
{% set last_part = parts[-1] %}
{{ last_part|trim }}

where the [0] pulls index 0 from the list. Hope this helps someone in the future!

------------------------------ Malcolm McDonald ------------------------------
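If you ever need to test this parsing outside of Pipelines, the same logic is easy to mirror in a small script. This sketch assumes Jotform's usual content/answers wrapper on the submission endpoint; the webhook body and API key are placeholders.

import requests

def submission_id_from_multipart(body: str) -> str:
    # Mirror of the Jinja find/slice above: skip past the
    # name="submissionID" marker, read up to the next boundary dash.
    start = body.find('name="submissionID"') + 20
    end = body.find('-', start)
    return body[start:end].strip()

def file_name_from_answer(answers: dict, field: str = "12") -> str:
    # The answer node is a list; take the only file's URL and
    # keep just the last path segment as the file name.
    dlurl = answers[field]["answer"][0]
    return dlurl.split("/")[-1].strip()

# Usage sketch with placeholder values:
webhook_body = '...raw multipart payload from the Jotform webhook...'
sid = submission_id_from_multipart(webhook_body)
resp = requests.get(f"https://api.jotform.com/submission/{sid}",
                    params={"apiKey": "YOUR_JOTFORM_KEY"})
answers = resp.json()["content"]["answers"]  # assumes Jotform's content wrapper
print(file_name_from_answer(answers))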
Re: Record created off of a form and/or Pipeline, but then no content in the fields - why?

I've read through this a couple of times, and I'm not 100% sure I know what's expected versus what it's actually doing. Are there two issues here?

1) When just an individual workshop is selected, those fields don't populate.
2) When the whole series is selected, only the first workshop session populates?

Here are a couple of generic ideas. Are the blank fields lookup fields to a related table? If so, is the Record ID being set correctly in Pipelines? Pipelines looping can be tricky, and I fairly often realize that I'm working on the wrong branch or have forgotten to put a For Each loop after a search.

Good luck, fellow canuck!

------------------------------ Malcolm McDonald ------------------------------

Re: Filter Report Based on User Role

Chiming into an old thread for future people searching for similar solutions. Building on Mark's posts: I have a table with a couple dozen user guides. Some are for general audiences, some are restricted to admins, etc. I created a multi-select text field where the options were <Role ID #> - <Role Name>, e.g. 10 - Viewer, then a checkbox formula field using this formula:

If(Contains(ToText([Role ID]),ToText(UserRoles("ID"))),true)

It works to set the report filters to where the checkbox formula is checked, but I prefer to set it at the table permission level with a custom rule on what users can view. I don't know that I would go to this kind of trouble for a big table, but for small ones it works really nicely as a quick row-level-security-by-role workaround.

------------------------------ Malcolm McDonald ------------------------------

Display an image attachment on homepage

Hello dear QBers! I've been banging my head against this for a bit. I have an identity badge app where each user has an image of themselves in their record. On the homepage of that app, I'd like to show them their picture, if they have one.

Wrinkle: I don't want to use the single-field report approach. It's ugly, and it doesn't translate well to mobile. I've been trying to write it as an api_doquery using the current user as the filter, but I'm not getting anywhere.

Thanks as always!
M

------------------------------ Malcolm McDonald ------------------------------

Re: How do you get the full URL of attached file using an API?

Updating the link reference: Appendix 2: Managing Files

------------------------------ Malcolm McDonald ------------------------------

Re: Pipeline Bucket - Can it be used to accept files from API calls?

Doug, thank you for continuing to work this; very much appreciated. I've engaged QB Support too. I think we're an inch from the finish line, but stuck at the binary encoding part. The Jinja filter |base64_encode can only handle strings, but the image binary (I think I'm saying that right) has all kinds of characters outside the expected 128-character ASCII set. It's annoying because the source image manipulation service's API won't return a URL I can just download the file from, and annoying because QB doesn't accept the result without me finding another API to do the conversion step. I can see all kinds of local server-side solutions in JavaScript etc., but nothing I can fire off in Pipelines. If I come to a solution, I'll post it here. Thank you!

M

------------------------------ Malcolm McDonald ------------------------------

Re: Pipeline Bucket - Can it be used to accept files from API calls?

Thanks Doug - that does get me further. I can now "see" the image binary in the pipeline's output, but I still can't do much with it. The intention is for the file to be uploaded as a file attachment, but that pipeline step only accepts a URL. I tried it anyway, and it didn't take the binary. I'm stuck feeling like buckets might be an approach (the documentation is limited, but it talks about handling files?). Ideally I'm not setting up a Box account or something as temporary URL hosting; that seems like too many steps in the chain. Any other thoughts? Thanks again for your help!

M

------------------------------ Malcolm McDonald ------------------------------

Pipeline Bucket - Can it be used to accept files from API calls?

Hello fellow Quickbasers! I am using an image processing API to compress and resize images before uploading them to Quickbase. I have _most_ of it working using a Fetch JSON pipeline step: the original image goes out, and I get the correct response back from the image processing service.

The next step escapes me, though. The result is not an application/json content type; it's an image/jpeg result, e.g. the below:

HTTP/1.1 200 OK
Compression-Count: 1
Image-Width: 530
Image-Height: 300
Content-Type: image/jpeg
Content-Length: 46480
[binary]

With just Pipelines, not code pages, has anyone been able to process a result like this? The Iterate over JSON step doesn't seem to work because there's no JSON schema to reference (and it's the wrong content type). I'm thinking maybe a bucket step would work?

Thanks in advance!
M

------------------------------ Malcolm McDonald ------------------------------
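For anyone who lands here later: once the binary can reach anything that runs code, the last mile is short. Base64 over raw bytes (rather than Jinja's |base64_encode over a decoded string, which is where this thread got stuck) produces plain ASCII, and the Quickbase JSON API accepts a file inline on a file attachment field as a fileName/data object. Here is a minimal sketch of that fallback outside Pipelines; the realm, token, table ID, field IDs, record ID, and image service URL are all hypothetical placeholders.

import base64
import requests

REALM = "myrealm.quickbase.com"        # placeholder realm
TOKEN = "QB-USER-TOKEN my_token_here"  # placeholder user token
TABLE_ID = "bxxxxxxx1"                 # hypothetical table ID
FILE_FID = "10"                        # hypothetical file attachment field ID

# 1) Send the original image out and get the compressed binary back.
#    resp.content is raw bytes, so b64encode handles it cleanly.
with open("original.jpg", "rb") as f:
    resp = requests.post("https://image-service.example/compress",  # placeholder URL
                         data=f.read(),
                         headers={"Content-Type": "image/jpeg"})
resp.raise_for_status()
b64 = base64.b64encode(resp.content).decode("ascii")

# 2) Attach the result to an existing record; the JSON API takes
#    file attachment values as {"fileName": ..., "data": <base64>}.
upsert = requests.post(
    "https://api.quickbase.com/v1/records",
    headers={"QB-Realm-Hostname": REALM, "Authorization": TOKEN},
    json={
        "to": TABLE_ID,
        "data": [{
            "3": {"value": 123},  # Record ID of the record to update (placeholder)
            FILE_FID: {"value": {"fileName": "compressed.jpg", "data": b64}},
        }],
    },
)
upsert.raise_for_status()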