I am starting to use Databricks on AWS. I have a Delta table that contains KPIs, where each KPI has a KPI ID (1000, 1001, 1002, etc.). We want concurrent processes to update those KPIs at the same time, e.g. one process updates data for KPI 1000 while another simultaneously updates data for KPI 1002. On our old platform (Teradata) we had to partition the target table by KPI ID so that the process updating KPI 1000 only locked the rows for that KPI, leaving the rows for the other KPIs free to be updated at any time by the other processes.
The question is: do we need to partition Delta tables to accomplish this same outcome? I was reading an article from Databricks that mentioned Optimistic Concurrency Control, and it gave me the impression that partitions may not be required and that Delta tables will allow concurrent writes to a table whether it is partitioned or not. Is my interpretation correct? In case it helps, I am not using Unity Catalog at this time. Thanks
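For context, here is the kind of pair of concurrent statements I mean (the table and column names are made up for illustration; each process filters on a disjoint set of KPI IDs):

```sql
-- Process A: touches only rows for KPI 1000
UPDATE kpi_table
SET    kpi_value  = 42.0,
       updated_at = current_timestamp()
WHERE  kpi_id = 1000;

-- Process B: runs at the same time, touching only KPI 1002
UPDATE kpi_table
SET    kpi_value  = 17.5,
       updated_at = current_timestamp()
WHERE  kpi_id = 1002;
```

What I want to know is whether these two statements can safely commit concurrently on an unpartitioned Delta table, or whether I still need to partition by `kpi_id` to avoid write conflicts between them.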