Re: Microsoft Graph API in a pipeline

For posterity: there is currently a bug in the webhook OAuth2 authentication. It won't work if you need to provide a scope. I ended up building a pipeline that authenticates itself. It's rather a hack, but it works, like this:

- Trigger on a new Outlook email in the shared mailbox, filter: subject starts with "new time proposed".
- Fetch JSON from the MS OAuth2 authentication service (https://login.microsoftonline.com/{tenantID}/oauth2/v2.0/token) to get a token (grant_type=client_credentials, scope=https://graph.microsoft.com/.default).
- Use the token in another Fetch JSON step against https://graph.microsoft.com/v1.0/users/{shared_mailbox_from_step_1}/messages/{{a.id}}.
- Process the JSON to get the proposedNewTime values.

Functional, but kinda ugly.

------------------------------ Dan Locks ------------------------------

Microsoft Graph API in a pipeline

Has anyone connected to the Microsoft Graph API from a pipeline? I need access to some Outlook data that doesn't come through with the methods provided by the Outlook channel, specifically the "New Time Proposed" values for calendar item updates. The Graph API provides the information I need in a straightforward way, but I'm having trouble getting authenticated. I've provisioned everything correctly and can use the client_id and client_secret values in Postman. However, when they are used in the pipeline, no token is provided. The error coming back from the pipeline is... not illuminating.

------------------------------ Dan Locks ------------------------------

Re: Undocumented Pipeline yaml functionality

There are a few other automation systems out there that use yaml files for specifications (Kubernetes, AWS CloudFormation, Terraform, GitLab, AWS CodeBuild, probably others). Most of them seem to use templating to pre-process the yaml, similar to what I suggest here; some use Jinja for their templating system. These systems publish a rigorous spec for the yaml documents and don't provide a pretty GUI. It would be great if QB would publish a spec and treat the yaml as the primary source for pipelines. I think the GUI gets in the way of managing pipelines of any complexity at all. Granted, I am quite new to QB, so take my opinions as uninformed.

------------------------------ Dan Locks ------------------------------

Re: Undocumented Pipeline yaml functionality

A best practice across all programming disciplines is to avoid repeating identical lines or values when writing code. In my case, I have 3 tables with identical columns but differing data, and the "import with csv" action looks *almost* the same for each table. This yaml syntax allows me to write the value for `header_row:` once and define a reference to that value in-line. Any time I need that same value, I can use the reference instead of the actual value. Keep in mind this is a purely textual modification to the yaml; after the yaml is loaded, the resulting values are hard coded again. After loading a pipeline with yaml references, you could edit the pipeline in the UI to have different `header_row` values.

I am using yaml references to define the fields I import; however, this simple reference is insufficient to create a valid "import with csv" block. Each field must also have a mapping in the form of `qb_field_for_a`, then `qb_field_for_really`, then `qb_field_for_long`... one for each field listed in `header_row`. The value of the mapping is the field ID for the destination table.
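To illustrate the point above about references disappearing on load, here is a minimal sketch (not a pipeline spec, just PyYAML on a stand-in document) showing that `&headers` and `*headers` resolve to plain, duplicated values as soon as the yaml is parsed:

    import yaml  # PyYAML

    # Stand-in document (hypothetical keys, not a real pipeline spec)
    # with an anchor (&headers) and an alias (*headers).
    doc = """
    step_one:
      header_row: &headers A,REALLY,LONG,LIST,OF,HEADERS
    step_two:
      header_row: *headers
    """

    loaded = yaml.safe_load(doc)

    # After parsing, the alias is gone: both steps hold the same literal string,
    # which is why an exported pipeline shows the value hard coded in both places.
    print(loaded["step_one"]["header_row"])
    print(loaded["step_two"]["header_row"])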
For me, the destination field ID is not consistent, so I can't simply repeat all the `qb_field_for` mappings using yaml references, because the values differ. Honestly, it's not clear how useful this is, but I figured someone might get some use out of it.

------------------------------ Dan Locks ------------------------------

Undocumented Pipeline yaml functionality

Pipeline yaml files can apparently use at least some of yaml's advanced features. I was able to use "yaml references" in my pipeline spec to avoid repeating long strings that might be prone to typos. Specifically:

    - IF:
      - AND:
        - a<>body_params.partner starts c1
      - THEN:
        - ACTION quickbase[abcdefg] bulk_record_set csv_import -> c:
            inputs-meta:
              csv_url: '{{b.file_transfer_handle}}'
              header_row: &headers A,REALLY,LONG,LIST,OF,HEADERS
    - IF:
      - AND:
        - a<>body_params.partner starts shi
      - THEN:
        - ACTION quickbase[bcdefgh] bulk_record_set csv_import -> e:
            inputs-meta:
              csv_url: '{{b.file_transfer_handle}}'
              header_row: *headers

The `&headers` creates a yaml reference to whatever follows; `*headers` uses that reference. Yaml references are kind of obscure and a bit tricky to use, in my past experience, but they do work. Unfortunately, the references are not preserved: when you export the pipeline after using them, the export will have A,REALLY,LONG,LIST,OF,HEADERS hard coded wherever the `*headers` reference was used. It would be nice if QB would publish a spec for these yaml files along the lines of GitLab/GitHub/AWS CodeBuild/etc.

------------------------------ Dan Locks ------------------------------

Re: Realm Admin to get Switch User option.

Thanks for confirming the situation. I also provided feedback. The system seems to have groups or roles; we just can't create new ones. Our team uses a shared account. It's a pretty terrible way to run things, but better than realm-admin. Maybe.

------------------------------ Dan Locks ------------------------------

Re: pipeline import with CSV jinja issue

It would appear that in the error display UI, jinja templates are not always expanded when values are present.

------------------------------ Dan Locks ------------------------------

Re: pipeline import with CSV jinja issue

Cart before the horse, here: I'm also having issues with the S3 lookup returning empty.

------------------------------ Dan Locks ------------------------------

pipeline import with CSV jinja issue

I have a pipeline like:

Webhook: incoming request, JSON body with

    {
      "bucket": "some_bucket",
      "key": "processed/some_data.csv",
      "partner": "c1",
      "table_id": "abcdefghi",
      "headers": "a,b,c"
    }

S3: Lookup an Object

    bucket = {{a.body_json.bucket}}
    key = {{a.body_json.key}}

Quickbase: Import with CSV

    table = {{a.body_json.table_id}}
    merge field = fixed_merge_field
    csv url = {{b.file_transfer_handle}}
    first row is headers = True
    header row = {{a.body_params.headers}}

The first two steps appear to work. The third step does not expand the jinja template and appears to insert a literal string. Is jinja prohibited in certain fields? Do I need some extra magic to make this work?

------------------------------ Dan Locks ------------------------------

unzip csv

I've processed a CSV using AWS Lambda and dumped it to an S3 bucket as a gzip archive. Is there a way to unzip the archive and load the CSV in a pipeline?

------------------------------ Dan Locks ------------------------------
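Regarding the unzip question above: if the pipeline itself can't decompress the archive, one workaround is to have the Lambda write an uncompressed copy of the CSV alongside the gzip before the pipeline is triggered, so the existing "S3: Lookup an Object" step can hand a plain CSV to "Import with CSV". A minimal sketch, assuming boto3 and placeholder bucket/key names:

    import gzip
    import boto3

    s3 = boto3.client("s3")

    # Placeholder names; the real Lambda would already know these.
    bucket = "some_bucket"
    gz_key = "processed/some_data.csv.gz"
    csv_key = "processed/some_data.csv"

    # Read the gzip archive back from S3 and decompress it in memory.
    obj = s3.get_object(Bucket=bucket, Key=gz_key)
    csv_bytes = gzip.decompress(obj["Body"].read())

    # Write the plain CSV next to the archive for the pipeline to pick up.
    s3.put_object(Bucket=bucket, Key=csv_key, Body=csv_bytes, ContentType="text/csv")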