Re: API Record Limits In Python

Thanks Matthew. Very helpful.

------------------------------
Chris Anderson
------------------------------

API Record Limits In Python

We seem to be limited to getting only 500 records at a time on our API calls. Any suggestions?

import quickbase_client as qb

api = qb.QuickBaseApiClient(
    user_token='TOKEN',
    realm_hostname='companyname.quickbase.com',
)
response = api.query(
    table_id='bmhgaf4uj',
    fields_to_select=["35", "34", "26", "3", "1", "2"],
    where_str=where_str_var,
    sort_by=[3],
)
data = response.json()

------------------------------
Chris Anderson
------------------------------

Getting App Data (Not Files) To S3: How Would You Approach?

We have a reasonably sizable QB app with tables of more than 200K records. One of the things we need to do is take incoming data from multiple sources and then, based on what we already have in QB, decide what we need to add (or modify). In short, if we could get the data into storage like Amazon S3, we know what to do from there. This transfer would run once per day, and we only need a few fields out of each table.

Is there a clean way to get the data over there without writing custom code that pulls it via the API (and has to deal with record-count limits) and then transfers it to Amazon S3? If you can point me to some top-level concepts to look at, I am all ears. Thank you.

------------------------------
Chris Anderson
------------------------------

Pipelines - Is This A Good Use Considering API Limits?

We have some fairly large tables of data in QB, and we need to update them based on results that we have collected into a CSV. We have done this externally in JavaScript, but we would love to keep all of this tied to QB instead of involving another platform (AWS, etc.).

Basically, in JavaScript we read the entire table into a local DB, then bring in the CSV data, and then for each line of the CSV we check whether we already have a match in the QB table. If we do not, we add that line to a list; once we have processed the entire CSV file, we can efficiently upload all the "need to add" records at one time and not run up API requests.

Is there an efficient way to do something similar in Pipelines/Jinja? I'm sure we could go through the CSV line by line, search the table for a match, and upload when there isn't one, but that would be a mess in terms of the number of API calls required (I think). Happy to dive in if it sounds doable; I just didn't want to waste a bunch of time on something that would obviously not work, according to people who have been there and done that.

------------------------------
Chris Anderson
------------------------------
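
[Editor's note on the record-limit thread above: one reliable way around a per-call cap is to page explicitly with the Quickbase REST API's options.skip/options.top and loop until metadata.totalRecords is exhausted. A minimal sketch follows; it bypasses the wrapper library and calls the /v1/records/query endpoint directly with requests, reusing the realm, token placeholder, table ID, and field IDs from the post.]

import requests

REALM = "companyname.quickbase.com"  # placeholder realm from the post
TOKEN = "TOKEN"                      # placeholder user token from the post
TABLE_ID = "bmhgaf4uj"

HEADERS = {
    "QB-Realm-Hostname": REALM,
    "Authorization": f"QB-USER-TOKEN {TOKEN}",
    "Content-Type": "application/json",
}

def query_all(where_str, select=(35, 34, 26, 3, 1, 2)):
    """Fetch every matching record, 500 per request, paging with skip/top."""
    records, skip = [], 0
    while True:
        body = {
            "from": TABLE_ID,
            "select": list(select),
            "where": where_str,
            "sortBy": [{"fieldId": 3, "order": "ASC"}],
            "options": {"skip": skip, "top": 500},
        }
        resp = requests.post("https://api.quickbase.com/v1/records/query",
                             headers=HEADERS, json=body, timeout=60)
        resp.raise_for_status()
        payload = resp.json()
        records.extend(payload["data"])
        skip += payload["metadata"]["numRecords"]
        if skip >= payload["metadata"]["totalRecords"]:
            return records

Each element of records is a dict keyed by field ID string, e.g. rec["3"]["value"].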
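
[Editor's note on the S3 question: Pipelines' Amazon S3 channel is worth a look for a no-code path, but if a small scheduled script is acceptable, a daily job can combine the query_all() pagination sketch above with boto3. A minimal sketch, assuming that helper plus a hypothetical bucket and key:]

import csv
import io

import boto3

def push_to_s3(records, field_ids, bucket, key):
    """Flatten Quickbase records ({field_id: {"value": ...}}) to a CSV in S3."""
    buf = io.StringIO()
    writer = csv.writer(buf)
    writer.writerow(field_ids)  # header row of field IDs
    for rec in records:
        writer.writerow([rec[str(fid)]["value"] for fid in field_ids])
    boto3.client("s3").put_object(
        Bucket=bucket, Key=key, Body=buf.getvalue().encode("utf-8"))

# Hypothetical once-per-day run (cron, Lambda, etc.):
# push_to_s3(query_all(where_str_var), [35, 34, 26], "my-qb-extracts", "daily/table.csv")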
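
[Editor's note on the Pipelines question: a Search Records step per CSV line would indeed cost one call per row. The pattern the post describes (pull the table once, diff locally, insert the misses in bulk) maps to two REST calls total: the paginated query above plus one POST to /v1/records. A sketch reusing HEADERS, TABLE_ID, and query_all() from the first block; the key/value field IDs and the CSV column names are hypothetical:]

import csv

import requests

KEY_FID = 6    # hypothetical: the table's unique/key field
VALUE_FID = 7  # hypothetical: a data field populated from the CSV

def upsert_missing(csv_path):
    # One paginated pull of just the key field; {3.GT.0} matches all records.
    existing = {r[str(KEY_FID)]["value"]
                for r in query_all("{3.GT.0}", select=[KEY_FID])}
    to_add = []
    with open(csv_path, newline="") as f:
        for row in csv.DictReader(f):  # assumes "key"/"value" CSV columns
            if row["key"] not in existing:
                to_add.append({str(KEY_FID): {"value": row["key"]},
                               str(VALUE_FID): {"value": row["value"]}})
    if to_add:
        # One bulk insert; mergeFieldId keys the upsert so reruns don't duplicate.
        resp = requests.post("https://api.quickbase.com/v1/records",
                             headers=HEADERS,
                             json={"to": TABLE_ID, "data": to_add,
                                   "mergeFieldId": KEY_FID},
                             timeout=60)
        resp.raise_for_status()

Very large batches may need to be chunked to stay under the API's request payload limit.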