One of the most powerful aspects of Quickbase is that you can create many apps that can be deployed across your enterprise and accessed with a shared user base. The more apps you build and share, the more value you drive in your business. This article discusses the different ways you can share, replicate, and transform data in Quickbase and explores the tools Quickbase provides for moving data between apps. The aim is to make it easier for builders to decide which strategy works best for their needs.
You can use cross-app relationships to connect two tables from different applications within a given account. They are quick to set up and follow the same pattern as relating two tables in the same Quickbase app.
Key Benefit - You can create one-to-many relationships between tables in separate apps.
- Operates like a table-to-table relationship within a single application
- Real-time updates
- No additional authentication needed
- Quick setup
- You can't edit related data in the destination, like lookups and summaries
- Role setup can be tricky
- Relationships are not visible in the Parent table application
- Dependency results in shared system resources, preventing applications from scaling out (CPU)
Cross-app relationships work well when:
- You are connecting small to medium-sized applications with light-to-moderate complexity.
- You don't need to edit the data in the destination (child) table
- You do need data to be immediately available on the destination table and ready for viewing on forms and reports
Cross-app relationships considerations
- Make sure that data visibility is as you expect
- Roles and permissions can be tricky with cross-app relationships, so it is important to make sure you are securing data in the way you intend.
- The parent table must grant the child table access and assign a role for all users in the child table. Additionally, if a user has access to both applications, the role in the parent table takes precedence.
Document the purpose of the cross-app relationship
- Document why you are creating each lookup and summary field.
- Consider using field comments
Use a consistent naming convention for cross-app fields
Understanding the source of data is especially important for builders. By developing a naming convention for lookup and summary fields, a convention that indicates the data is coming from another application, you can help save a lot of time when creating or modifying reports, forms, or relationships. This convention becomes even more important as your applications grow in both size and complexity.
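As an illustration, such a convention can even be checked in a simple audit script. The "XA:" prefix and the helper below are hypothetical, a minimal sketch of the idea rather than any Quickbase feature; use whatever marker your team standardizes on.

```python
def flag_unprefixed(field_names, prefix="XA:"):
    """Return cross-app lookup/summary field names that are missing the
    agreed prefix. "XA:" is an example convention, not a Quickbase
    requirement -- substitute your own marker."""
    return [name for name in field_names if not name.startswith(prefix)]

# Fields pulled from (for example) an exported field list:
missing = flag_unprefixed(["XA: Customer Name", "Total Orders"])
# -> ["Total Orders"]
```

Running a check like this periodically keeps the convention from eroding as more builders add lookup and summary fields.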
Cross-app relationships create application dependencies
When you create a cross-app relationship, it results in the two apps sharing system resources. This sharing of system resources can become an issue when daisy-chaining apps or creating a hub-and-spoke structure with a large number of applications or a few large and complex apps. We suggest never using cross-app relationships with high-complexity apps.
Learn more about creating cross-application relationships in our table-to-table relationship help guide.
You can use table-to-table imports (TTI) to move data from one table into another, within the same app or across different apps. You accomplish this by mapping fields in a source table to fields in a destination table.
Key Benefit - You can replicate data from multiple sources into one table.
- It's easy to configure field mappings
- Imports of large data sets execute quickly
- You'll be able to merge data from multiple sources
- Supports complex filtering criteria
- You set it up once and use it when needed
- Data is not moved in real-time
- No native ability to schedule runs
- The user creating the import must have access to both tables/applications
- No warning when the source or destination field is deleted
- Application dependency results in shared system resources (CPU)
Table-to-table imports work well when:
- You need to connect small to medium-sized applications with light-to-moderate complexity.
- You need to centralize data from multiple apps into one app, e.g., for rollup reporting purposes.
- You need to create snapshots of data for trend reporting.
- You don't need the import to happen immediately when the source data changes.
Table-to-table imports considerations
Use API_RunImport to automate table-to-table imports
You configure TTIs once, but you can run them manually at any point. If you would like a TTI to run at some interval, you will need to leverage the Quickbase API (API_RunImport) using a script or an integration platform such as Workato or Zapier.
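An API_RunImport call is just an HTTP request against the destination table's dbid. The sketch below only builds the request URL; the realm, dbid, import ID, and user token are placeholder values you would substitute from your own account, and actually sending the request on a schedule is left to your script or integration platform.

```python
from urllib.parse import urlencode

def build_run_import_url(realm: str, dbid: str, import_id: int, usertoken: str) -> str:
    """Build the URL for a Quickbase API_RunImport call.

    realm, dbid, import_id, and usertoken are placeholders -- use your
    own realm, the dbid of the destination table, the ID of the saved
    table-to-table import, and a valid user token.
    """
    params = urlencode({
        "a": "API_RunImport",    # XML API action name
        "id": import_id,         # ID of the saved table-to-table import
        "usertoken": usertoken,  # or a ticket from API_Authenticate
    })
    return f"https://{realm}.quickbase.com/db/{dbid}?{params}"

url = build_run_import_url("mycompany", "bqx7k2abc", 1, "b12345_token")
# Request this URL from cron, Workato, Zapier, etc. to run the import.
```

A scheduler then only needs to issue a GET to the generated URL at the desired interval.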
Ensure data types are the same when mapping fields
When you configure the TTI, make sure that the field types for source and destination match to prevent unwanted results. The one exception is importing a numeric field into a text field, which displays the number in text form.
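If you keep your mappings in a script or a documentation file, a quick pre-check can catch mismatched types before the import runs. This is a hypothetical helper, not a Quickbase API, and the type names are illustrative:

```python
def check_mapping(source_types, dest_types, mapping):
    """Flag field mappings whose types differ.

    source_types / dest_types map field names to field types (the type
    names here are illustrative). Numeric -> text is allowed, since
    Quickbase displays the number in text form; all other pairs must
    match. Returns a list of (src_field, dest_field, src_type, dest_type).
    """
    problems = []
    for src, dst in mapping.items():
        s, d = source_types[src], dest_types[dst]
        if s == d:
            continue
        if s == "numeric" and d == "text":
            continue  # the one supported cross-type import
        problems.append((src, dst, s, d))
    return problems

issues = check_mapping(
    {"Amount": "numeric", "Due": "date"},
    {"Amount (copy)": "text", "Due (copy)": "text"},
    {"Amount": "Amount (copy)", "Due": "Due (copy)"},
)
# -> [("Due", "Due (copy)", "date", "text")]
```

Here the numeric-to-text mapping passes, while the date-to-text mapping is flagged for review.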
Ensure you have the correct permission levels set for the destination
Once you import data into the destination, it is viewed and modified under the same roles and permissions as data entered by a user. If you intend the data to be read-only, be sure to configure each role with view-only permissions.
Leverage table-to-table imports for trend reporting
Table-to-table imports allow you to configure an import to capture data over time (snapshots) for trend reporting by executing the TTI at a desired interval.
Document the purpose of the TTI and the intended field mappings
You should periodically review the TTIs and ensure field mapping and filter criteria still exist and make sense. The documentation and periodic review are especially useful when multiple application managers work with the schema because field type changes and field deletions can affect the TTI.
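One way to support such a periodic review is to pull a table's current field list from the Quickbase RESTful API (GET /v1/fields) and diff it against your documented mappings. The sketch below only constructs the request without sending it; the realm hostname, table ID, and token are placeholders.

```python
from urllib.request import Request

def build_fields_audit_request(realm_hostname: str, table_id: str, user_token: str) -> Request:
    """Build (but do not send) a GET /v1/fields request to the Quickbase
    RESTful API, which lists a table's fields. Comparing the response
    against your documented TTI mappings reveals deleted or retyped
    fields. realm_hostname, table_id, and user_token are placeholders."""
    return Request(
        f"https://api.quickbase.com/v1/fields?tableId={table_id}",
        headers={
            "QB-Realm-Hostname": realm_hostname,
            "Authorization": f"QB-USER-TOKEN {user_token}",
        },
    )

req = build_fields_audit_request("mycompany.quickbase.com", "bqx7k2abc", "b12345_token")
# urlopen(req) would return JSON field metadata (id, label, fieldType, ...)
# to diff against the fields your TTIs expect to exist.
```

Running this for each table involved in a TTI, on a schedule, turns the periodic review into a mechanical diff instead of a manual walkthrough.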
TTIs create dependencies between applications when executed
When you execute TTIs, it results in the sharing of system resources. This sharing of system resources can become an issue when leveraging TTIs for roll-up reporting across a large number of applications (10 or more apps, as a rule of thumb) or a few apps that are large and complex.
Note: There may be situations where you need to transfer relatively large numbers of records (over 1,000, and potentially tens or hundreds of thousands) across your Quickbase apps and tables. See Using Copy Records in Bulk Record Sets for more information.
Learn how to create table-to-table imports.
Use Quickbase Sync to connect tables to numerous cloud services, including Quickbase itself. Sync also connects to .csv files via cloud storage services such as Box, Dropbox, and Google Drive, as well as to SFTP servers.
Key Benefit - You can easily configure and schedule data replication.
- You can easily connect Quickbase tables using the wizard configurator
- You can use light filtering
- You can schedule Sync refreshes as frequently as every hour
- You can run applications independently without sharing resources (CPU)
- Data is read-only
- No real-time updates
- Refreshes require comparing data sets, which can take a few minutes
- Cannot do nested filtering
Quickbase Sync works well when:
- You are able to schedule sync refreshes for times when there is light usage in the source and destination applications.
- You need a hub-and-spoke model where multiple destination tables pull data from the source.
- You need to keep applications decoupled.
- You don't need to edit the data in the destination table.
Quickbase Sync considerations
Don't schedule unnecessary updates
Consider the frequency of updates being made in the source and how quickly you need the updates in the destination. Your needs will help determine the refresh schedule and eliminate unnecessary updates.
- Consider not creating an endless hub and spoke model. Too many sync connections on one source can begin to cause bottlenecks.
- Do not do more than one sync an hour against the source. Consider having a master source and additional satellite sources to support multiple syncs in a given hour.
Consider alternative Quickbase functionality when real-time updates are needed
Sync updates are scheduled or manually executed. When real-time updates are needed, consider using other tools that trigger updates on changes in the source, such as Pipelines.
Use Sync to decouple application dependencies
Sync allows builders to easily connect tables between applications without sharing system resources. As such, Sync can be useful when you need to connect multiple large and complex apps with high usage.
Schedule Syncs during low use times when possible
Sync refreshes can be expensive and take minutes when the data sets being replicated are large. When using Sync, a best practice is to schedule refreshes during non-business hours or other low-usage periods.
Ensure the connection owner has the appropriate privileges in the destination
The connection to the source table is established and maintained with the credentials and permissions of the person who created the connection. The user creating the connection in the destination app must be in a role that has administrative rights. User access to the destination table is controlled the same way as any other table in the destination application, through roles and permissions.
Learn more about Sync and creating connected tables.
You can use Pipelines to perform actions on Quickbase data across applications (and realms), either triggered by data changes (adds, modifications, or deletions) or on a schedule. Pipelines can also fully integrate your data across cloud-based software tools, as well as some on-premises systems.
Key Benefit - You can create a workflow with no code and connect disparate third-party systems without always needing to use an API.
- No coding is required
- Easy to replicate data across apps
- Execute in real-time or on a schedule
- Built-in resiliency
- Perform actions against another application without sharing system resources (CPU)
- Ability to take action(s) based on responses
- Connect third party systems – Quickbase is not required to be part of a pipeline
Cons (keep in mind that some of these will be addressed in future releases):
- Visible only to the user who created the pipeline (and realm admins, via Switch User functionality)
Pipelines work well when:
- You need to transform data before upserting to another table/system
- You need to integrate with third party systems such as Salesforce, Jira, Box, or via an API
- You need to take action(s) on a response from another action (add/delete/update action)
- You want to set your applications up for maximum scale
Don’t create infinite loops
Actions, webhooks, and triggered pipelines all execute based on a trigger. Make sure the changes one makes do not, in turn, trigger the initiator.
Consider the rate at which your pipelines will fire
For any app, a maximum of 50 HTTP messages can be sent per second. If the rate is exceeded, subsequent messages are not sent, and the app manager receives an alert and email notification. The limit of 50 triggers per second becomes increasingly important with scripts involving a high frequency of adds, edits, and deletes that will trigger a pipeline.
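When a script performs bulk adds, edits, or deletes that trigger a pipeline, batching the calls keeps you under that limit. This is a generic throttling sketch, not a Quickbase API; `send` stands in for whatever function performs the actual operation.

```python
import time

def send_throttled(messages, send, per_second=50, sleep=time.sleep):
    """Send messages in batches of at most `per_second`, pausing one
    second between batches so a scripted bulk load never exceeds the
    50-messages-per-second app limit. `send` is a placeholder for the
    function that performs the actual add/edit/delete call."""
    for start in range(0, len(messages), per_second):
        for message in messages[start:start + per_second]:
            send(message)
        if start + per_second < len(messages):
            sleep(1)  # wait out the rest of the one-second window

sent = []
# sleep is stubbed out here so the example runs instantly.
send_throttled(list(range(120)), sent.append, per_second=50, sleep=lambda s: None)
# 120 messages sent in three batches of 50, 50, and 20
```

A real run would keep the default `time.sleep`, pausing after each full batch of 50 so the app manager never sees the rate-limit alert.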
Consider how real-time your changes need to be
You can have up to 90 unique triggers per Quickbase table. Scheduling pipelines or utilizing callable pipelines is a scalable and efficient building pattern. Consider consolidating pipelines that share a trigger and using logic within the pipeline to drive the workflow.
Schema changes can break pipelines
Pipelines has no control over changes made to an app's schema, so changes to field names can break a pipeline. Set up a process to audit your pipelines; Pipelines does not currently provide a notification for failures.
Learn more about building pipelines.