Re: Year to Year comparison Line Graph

Hi all,

It feels like there should be a native solution to produce this kind of year-on-year comparison of cumulative data using the amazing new formula queries, but for the life of me I cannot get it to work. My base data is a simple table of sales where each record has a date and an amount. Using a formula query to calculate total cumulative sales to date for the same year is simple enough. What's foxing me is how I can then use this in a graph report that looks like Debbie's example above. What am I missing?

Thanks
David
------------------------------
System Admin
------------------------------

Re: How to trim a list to X number of records.

Hi Mark,

I'm sure you've thought of this and discounted it for some reason... If I was doing this I would:

1) Use Count summary fields to determine the number of dates which need to be deleted, and pass this back to the child via a lookup. I assume the max possible is 80 - 60 = 20?
2) Use Max summary fields (you'd need 20 of these to cover the max you'd ever need to delete) to identify the child record IDs with the highest dates, excluding your blackout dates, with each summary excluding the value of the preceding one.
3) Pass these IDs back down to the child records in 20 separate lookups.
4) Create a formula checkbox field called something like 'Record to be deleted?' to identify where the record ID of that specific child equals any of the up to 20 record IDs you've pulled from the parent via the lookups and identified as needing to be deleted. You'd use the value from step 1 in your formula to determine how many of the 20 fields to check.
5) Use a timed automation (or pipeline) to delete any child records with that flag checked.

I've done something similar to this (without step 5) where I needed to identify the latest 9 'active' records (with no limit on the potential total number of child records) in a child table, allowing for users to re-order and manually exclude / re-include records, and it works fine. In fact I'm pretty sure it was one of your posts, Mark, that inspired this approach! Lol.

Hope you can get your head around this now, and that for a change I can actually help you with something!

David
------------------------------
dmlaycock2000
------------------------------

Re: API_AddRecord - with Update

Hi Mark,

Whilst I've played with pipelines, so far we've not become good friends. )-: The sort of stuff which pipelines ought to excel at, I tend to use Zapier for - primarily because of the maturity of its UX / error handling (and I have the Zapier account anyway for other things and know the interface inside out).

D
------------------------------
dmlaycock2000
------------------------------

Re: API_AddRecord - with Update

Hi,

Glad to hear you've got a solution that works now. An alternative approach you could use is the upsert functionality in the newer JSON API: https://developer.quickbase.com/operation/upsert. What's really useful is the fact that this can be called from a Quickbase webhook. For bulk record upserts outside of Quickbase, in the past I've used the excellent Qunect ODBC connector, which has upsert capability: https://www.qunect.com/
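To make the shape of that call concrete, here's a minimal Python sketch of the upsert - the realm, token, table ID and field IDs below are hypothetical placeholders rather than anything from this thread:

```python
import requests

headers = {
    "QB-Realm-Hostname": "yourrealm.quickbase.com",      # hypothetical realm
    "Authorization": "QB-USER-TOKEN b12345_xxxx_yyyy",   # hypothetical user token
}

body = {
    "to": "bqtableid1",                      # hypothetical target table ID
    "data": [{
        "6": {"value": "ORD-1001"},          # field 6: assumed merge key (e.g. order no.)
        "7": {"value": 250.00},              # field 7: assumed amount field
    }],
    # mergeFieldId is what makes this an upsert: match on field 6,
    # update the record if a match is found, insert a new one if not.
    "mergeFieldId": 6,
}

r = requests.post("https://api.quickbase.com/v1/records", headers=headers, json=body)
r.raise_for_status()
print(r.json()["metadata"])   # createdRecordIds / updatedRecordIds / unchangedRecordIds
```

In a webhook you'd put that same JSON in the message body (with the hard-coded values swapped for field references) and set the endpoint and headers in the webhook configuration.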
Cheers
David
------------------------------
dmlaycock2000
------------------------------

Re: Javascript to execute three URLs and then do a quiet jgrowl

Thanks Mark - I'll be using this today!

David
------------------------------
dmlaycock2000
------------------------------

Re: Javascript to execute three URLs and then do a quiet jgrowl

Hi Mark,

Sorry, can I clarify - when you say they 'did not execute in sequence', do you mean they executed, but not sequentially? Or did only the first one execute, and the others fail? If the former, then I can use this where I need to call one or more URLs but sequence is not important.

D
------------------------------
dmlaycock2000
------------------------------

Re: Programmatically deselect invalid multiselect options

Thanks Blake,

I went with the multi-select with values drawn from a separate table for a whole host of reasons. It is without doubt the most elegant and easy-to-understand solution for a user when adding and editing records in the primary table - for them to simply select options from the drop-down, in what is essentially a many-to-many relationship. But I agree my quest to make it easy for the user has created no end of other problems! Anyway, many thanks for your input on this - however, my current feeling is that my best option is to sacrifice the UI usability benefits of the multi-select field and revert to a relationship-based model.

D
------------------------------
dmlaycock2000
------------------------------

Programmatically deselect invalid multiselect options

Hi,

I have a multiselect field in my primary table which draws its potential selections from a secondary table. The valid selections change over time - new ones are added, old ones are deleted and existing ones may be changed. At the moment, this is a two-step process. Step 1 - amend the valid selections in the secondary table. Step 2 - grid edit all records in the primary table which now have invalid selections (usefully highlighted in red by QB). There will always be some manual work involved with this process, but at the very least it would be very useful to be able to use an API call to automatically deselect any and all selections which are no longer valid (the ones QB highlights in red) and update the relevant records in my primary table.
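For concreteness, here's the sort of clean-up I'm imagining - a rough Python sketch against the JSON API, where the realm, token, table ID and field ID are all made-up placeholders:

```python
import requests

HEADERS = {
    "QB-Realm-Hostname": "yourrealm.quickbase.com",      # hypothetical realm
    "Authorization": "QB-USER-TOKEN b12345_xxxx_yyyy",   # hypothetical user token
}
PRIMARY_TABLE = "bqprimary1"   # hypothetical primary table ID
MULTI_FID = 8                  # hypothetical multiselect field ID

# The currently valid choices, e.g. queried from the secondary table.
valid = {"Option A", "Option B", "Option C"}

# Pull Record ID# (field 3) and the multiselect values from the primary table.
# (A real script would page through the results on a large table.)
resp = requests.post("https://api.quickbase.com/v1/records/query", headers=HEADERS,
                     json={"from": PRIMARY_TABLE, "select": [3, MULTI_FID]})
resp.raise_for_status()

# Drop any selections that are no longer valid; keep the rest untouched.
updates = []
for rec in resp.json()["data"]:
    current = rec[str(MULTI_FID)]["value"] or []
    kept = [v for v in current if v in valid]
    if kept != current:
        updates.append({"3": rec["3"],                   # Record ID# to merge on
                        str(MULTI_FID): {"value": kept}})

# Write back only the records that changed, merging on Record ID# (field 3).
if updates:
    requests.post("https://api.quickbase.com/v1/records", headers=HEADERS,
                  json={"to": PRIMARY_TABLE, "data": updates,
                        "mergeFieldId": 3}).raise_for_status()
```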
Is this possible, and if so how, please?

Thanks
David
------------------------------
dmlaycock2000
------------------------------

Re: Apps, Performance, and Best Practice

Brill, thanks Mark.

D
------------------------------
dmlaycock2000
------------------------------

Re: Apps, Performance, and Best Practice

Thanks to both Evan and Mark for your various responses on this thread. Given what I've now learned, it's clear that I need to transfer the data from my master app to my slave app differently, so that the two apps run in different threads, for optimum performance. I thought I'd share my plan here, in case there are any flaws which Mark, Evan and the community can point out for me before I replace one problem with another!

To summarise my current situation: I have 4 master tables in the master app and 4 slave tables in the slave app. The master app has lots of complex relationships (normal and reverse), summaries and lookups, and it's where we do our content management, so it's optimised for usability within the native QB UI. Lightning-fast speed isn't a priority. The slave app is currently refreshed from the master on a daily schedule (or upon manual trigger) by the import API. The slave is optimised for speed, has no relationships or formula fields, and holds only the data essential to its sole purpose, which is serving read-only content to our web app.

Given that I don't want to mess with the slave tables, and they cannot be reconfigured as connected tables to use the sync functionality, my plan is now to use a two-step process as follows:

1) Create 4 new connected tables in the slave app to act as an interim store of the data. These interim holding tables would be refreshed using the connected-tables sync functionality.
2) Refresh my original, real slave tables from the interim tables (both in the same app) using the good old import API.

One thing at the back of my mind, which I'm not sure how I will approach yet (beyond careful scheduling): I'd like to trigger the API import automatically when the sync has fully completed. Obviously a sync could involve anything from 1 to thousands of records changing, so I need to trigger the import in a way which cannot result in a single sync firing it multiple times. I also wouldn't want the import to start until the sync had finished. Presumably I'll need to resort to pipelines for this, and I'm thinking there will be no way to know the sync has finished, so it will have to be a case of building a pause into the processing that is long enough to be sure the sync couldn't still be running.

One final question for anyone who is familiar with pipelines. Presumably they run independently of QB db operations (except when they are running QB db operations themselves), so a pause in a running pipeline doesn't create a blockage in the processing of normal queries against the db? Is my assumption here correct?

David
------------------------------
dmlaycock2000
------------------------------
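For what it's worth, the 'pause then import' step could also be done outside pipelines with a small scheduled script against the legacy XML API. A rough sketch, where the realm, token, table IDs and saved import IDs are all made-up placeholders:

```python
import time
import requests

REALM = "https://yourrealm.quickbase.com"   # hypothetical realm
USERTOKEN = "b12345_xxxx_yyyy"              # hypothetical user token

# Hypothetical slave table IDs mapped to their saved table-to-table import IDs.
SAVED_IMPORTS = {"bqslave001": 1, "bqslave002": 1,
                 "bqslave003": 1, "bqslave004": 1}

# Pause long enough to be confident the connected-table sync has finished.
time.sleep(15 * 60)   # 15 minutes - tune to your own sync window

# Fire each saved import in turn via the legacy API_RunImport call.
for table_id, import_id in SAVED_IMPORTS.items():
    r = requests.get(f"{REALM}/db/{table_id}",
                     params={"a": "API_RunImport", "id": import_id,
                             "usertoken": USERTOKEN})
    r.raise_for_status()
    print(table_id, "->", r.text[:120])   # XML response; expect <errcode>0</errcode>
```

Because the script runs outside Quickbase, the sleep itself can't block anything inside the app - the same independence I'm assuming holds for a paused pipeline.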