Forum Discussion
MarkShnier__You
Qrew Legend
3 years ago
I have heard that the Copy Records step is very fast, way faster than normal Pipeline speeds. However, when they introduced it, I think their design goal was to replicate Table to Table Copy and not what we had in Automations. Various QSPs expressed this to the Product Manager, and I put in this User Voice suggestion here; my suggestion is that you upvote it.
------------------------------
Mark Shnier (Your Quickbase Coach)
mark.shnier@gmail.com
------------------------------
MarkShnier__You
Qrew Legend
3 years ago
I do agree with Mike that if there is any way to leverage running table-to-table imports, they are super fast.
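For anyone who wants to trigger that same mechanism from a script rather than a Pipeline step, a saved table-to-table import can be run through the legacy XML API's API_RunImport call. A minimal sketch in Python; the realm, table dbid, saved-import ID, and user token are all placeholder assumptions:

```python
import requests

# All values below are placeholders; substitute your own realm,
# table dbid, saved-import ID, and user token.
REALM = "yourrealm.quickbase.com"
TABLE_DBID = "bxxxxxxxx"   # dbid of the import's destination table
IMPORT_ID = "1"            # ID of the saved table-to-table import
USER_TOKEN = "b12345_abcde_xyz"

# API_RunImport executes a saved table-to-table import definition.
resp = requests.get(
    f"https://{REALM}/db/{TABLE_DBID}",
    params={"a": "API_RunImport", "id": IMPORT_ID, "usertoken": USER_TOKEN},
)
resp.raise_for_status()
print(resp.text)  # XML reply; it includes an import status message
```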
------------------------------
Mark Shnier (Your Quickbase Coach)
mark.shnier@gmail.com
------------------------------
EdwardHefter
Qrew Cadet
3 years ago
I don't think I can do a table-to-table import because I need the 100 different serial numbers on the 150 components (well, 15,000 components I suppose). But by using a temp table to hold just the 150 components from the larger Master Data table, the pipeline sped up a lot.
Since the pipeline may get triggered more than once during a long run, I made sure the temp table had the PCBA ID in it as well as the time the triggering record was updated. The pipeline still does a search on the temp table, but it looks for both the PCBA ID and the timestamp. That way, even if there are multiple triggering events during the first event's run, the multiple "instances" of the pipeline will get the right data from the temp table based on the timestamp. The pipeline also deletes the data from the temp table using the PCBA ID and timestamp.
This is the biggest set of data manipulation I've done in Quickbase and it definitely made me think about multiple users and/or pipeline instances, giving a sense of "ready to do the next thing" to the user, and managing large sets of data!
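For anyone curious what that search-and-clean-up looks like outside the Pipelines UI, here is a minimal sketch of the same logic against the Quickbase JSON API. The temp table ID and the field IDs for PCBA ID and the trigger timestamp are hypothetical placeholders:

```python
import requests

# Placeholders: realm, user token, temp-table ID, and field IDs are
# assumptions for illustration; substitute your own values.
HEADERS = {
    "QB-Realm-Hostname": "yourrealm.quickbase.com",
    "Authorization": "QB-USER-TOKEN b12345_abcde_xyz",
}
TEMP_TABLE_ID = "bxxxxxxxx"
FID_PCBA_ID = 6       # hypothetical field ID for PCBA ID
FID_TRIGGER_TS = 7    # hypothetical field ID for the trigger timestamp

def fetch_instance_rows(pcba_id: str, trigger_ts: str) -> list[dict]:
    """Select only the temp rows belonging to this pipeline 'instance',
    keyed by PCBA ID plus the trigger timestamp."""
    where = f"{{{FID_PCBA_ID}.EX.'{pcba_id}'}}AND{{{FID_TRIGGER_TS}.EX.'{trigger_ts}'}}"
    resp = requests.post(
        "https://api.quickbase.com/v1/records/query",
        headers=HEADERS,
        json={"from": TEMP_TABLE_ID, "where": where},
    )
    resp.raise_for_status()
    return resp.json()["data"]

def delete_instance_rows(pcba_id: str, trigger_ts: str) -> None:
    """Clean up only this instance's rows, leaving concurrent runs untouched."""
    where = f"{{{FID_PCBA_ID}.EX.'{pcba_id}'}}AND{{{FID_TRIGGER_TS}.EX.'{trigger_ts}'}}"
    resp = requests.delete(
        "https://api.quickbase.com/v1/records",
        headers=HEADERS,
        json={"from": TEMP_TABLE_ID, "where": where},
    )
    resp.raise_for_status()
```

Keying both the query and the delete on the PCBA ID plus the trigger timestamp is what keeps concurrent pipeline runs from stepping on each other's rows.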
------------------------------
Edward Hefter
www.Sutubra.com
------------------------------
EdwardHefter
Qrew Cadet
3 years ago
Does anyone know at what point tables start slowing down? If we put in 15K records at a shot, after only 65 sets (about a month) we are up to a million records, and after a year it could be over 10 million records.
------------------------------
Edward Hefter
www.Sutubra.com
------------------------------
MarkShnier__You
Qrew Legend
3 years ago
Well before you worry about speed, you should worry about the actual size of the table. The maximum size of a table is 500 MB. You should look at your record count and megabyte usage now, and then see how much the table can grow before you hit the 500 MB limit.
Go to the settings for the application, then App Management, and then Show App Statistics.
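If you would rather check this programmatically, the JSON API's Get table endpoint reports the space a table uses and has remaining. A minimal sketch; the realm, user token, app ID, and table ID are placeholders:

```python
import requests

# Placeholders: substitute your own realm, user token, app ID, and table ID.
HEADERS = {
    "QB-Realm-Hostname": "yourrealm.quickbase.com",
    "Authorization": "QB-USER-TOKEN b12345_abcde_xyz",
}

resp = requests.get(
    "https://api.quickbase.com/v1/tables/bxxxxxxxx",  # table ID
    headers=HEADERS,
    params={"appId": "byyyyyyyy"},                    # app ID
)
resp.raise_for_status()
table = resp.json()
# The response includes size accounting alongside table metadata.
print(table.get("spaceUsed"), "used,", table.get("spaceRemaining"), "remaining")
```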
The issue of performance with large record counts is not black and white. It mainly depends on the degree to which you use summary fields that aggregate across these large tables, and how often those fields are used.
------------------------------
Mark Shnier (Your Quickbase Coach)
mark.shnier@gmail.com
------------------------------