The Performance Difference Between SQL Row-by-row Updating, Batch Updating, and Bulk Updating

Something that has been said many times, but needs constant repeating until every developer is aware of it: the performance difference between row-by-row updating and bulk updating is significant. If you cannot guess which one will be much faster, remember that row-by-row kinda rhymes with slow-by-slow (hint hint).
Disclaimer: This article will discuss only non-concurrent updates, which are much easier to reason about. In a concurrent update situation, a lot of additional factors will add complexity to the problem, including the locking strategy, transaction isolation levels, or simply how the database vendor implements things in detail. For the sake of simplicity, I’ll assume no concurrent updates are being made.
Example query
Let’s say we have a simple table for our blog posts (using Oracle syntax, but the effect is the same on all databases):

CREATE TABLE post (
  id INT NOT NULL PRIMARY KEY,
  text VARCHAR2(1000) NOT NULL,
  archived NUMBER(1) NOT NULL CHECK (archived IN (0, 1)),
  creation_date DATE NOT NULL
);

CREATE INDEX post_creation_date_i ON post (creation_date);

Now, let’s add some 10000 rows:

INSERT INTO post
SELECT
  level,
  lpad('a', 1000, 'a'),
  0 AS archived,
  DATE '2017-01-01' + (level / 100)
FROM dual
CONNECT BY level <= 10000;

Keep in mind that processing the rows one by one also means one index lookup per updated row, i.e. roughly O(N log N) work in total, whereas a single bulk update gets away with one pass over the data.

We’d get far worse results for larger tables!
What about doing this from Java?
The difference is much more drastic if each call to the SQL engine has to be made over the network, from another process. Again, the benchmark code is available from a gist, and I will paste it at the end of this blog post as well.
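For illustration, here is a minimal sketch of what such a benchmark harness can look like. This is not the actual gist code: the class name, connection URL, and credentials are hypothetical placeholders, it assumes the Oracle JDBC driver on the classpath and the post table from above, and it times only Statement 4 (the bulk update); the other three variants shown below are timed the same way. Duration.toString() produces the PT…S values you see in the results.

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.Statement;
import java.time.Duration;
import java.time.Instant;

public class UpdateBenchmark {
    public static void main(String[] args) throws Exception {
        // Hypothetical connection details; adjust URL / user / password to your setup.
        try (Connection c = DriverManager.getConnection(
                "jdbc:oracle:thin:@localhost:1521/ORCL", "test", "test")) {
            c.setAutoCommit(false);

            for (int run = 0; run < 5; run++) {
                Instant start = Instant.now();

                // Statement 4 (the bulk update) shown here; Statements 1-3 from
                // the listings below are timed in exactly the same way.
                try (Statement s = c.createStatement()) {
                    s.executeUpdate(
                        "UPDATE post SET archived = 1 " +
                        "WHERE archived = 0 AND creation_date < DATE '2018-01-01'");
                }

                // Duration.toString() yields values like PT0.028S, as in the results below.
                System.out.println("Run " + run + ", Statement 4: "
                    + Duration.between(start, Instant.now()));

                // Roll back so every run starts from the same state.
                c.rollback();
            }
        }
    }
}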
The result is (same time unit):

Run 0, Statement 1: PT4.546S
Run 0, Statement 2: PT3.52S
Run 0, Statement 3: PT0.144S
Run 0, Statement 4: PT0.028S
Run 1, Statement 1: PT3.712S
Run 1, Statement 2: PT3.185S
Run 1, Statement 3: PT0.138S
Run 1, Statement 4: PT0.025S
Run 2, Statement 1: PT3.481S
Run 2, Statement 2: PT3.007S
Run 2, Statement 3: PT0.122S
Run 2, Statement 4: PT0.026S
Run 3, Statement 1: PT3.518S
Run 3, Statement 2: PT3.077S
Run 3, Statement 3: PT0.113S
Run 3, Statement 4: PT0.027S
Run 4, Statement 1: PT3.54S
Run 4, Statement 2: PT2.94S
Run 4, Statement 3: PT0.123S
Run 4, Statement 4: PT0.03S

The difference between Statement 1 and Statement 4 is a factor of roughly 100x!
So, who’s winning? Again (by far):
Statement 4, running the bulk update
In fact, the time is not too far away from the time taken by PL/SQL. With larger data sets being updated, the two results will converge. The code is:

try (Statement s = c.createStatement()) {
    s.executeUpdate(
        "UPDATE post\n" +
        "SET archived = 1\n" +
        "WHERE archived = 0\n" +
        "AND creation_date < DATE '2018-01-01'");
}

Followed by the not that much worse (but still 3.5x worse):
Statement 3, running the batch update
Batching can be compared to PL/SQL's FORALL statement. While we're running individual row-by-row updates, we're sending all the update statements in one batch to the SQL engine. This saves a lot of time on the network and in all the layers in between. The code looks like this:

try (Statement s = c.createStatement();
    ResultSet rs = s.executeQuery(
        "SELECT id FROM post WHERE archived = 0\n" +
        "AND creation_date < DATE '2018-01-01'"
    );
    PreparedStatement u = c.prepareStatement(
        "UPDATE post SET archived = 1 WHERE id = ?"
    )) {

    while (rs.next()) {
        u.setInt(1, rs.getInt(1));
        u.addBatch();
    }

    u.executeBatch();
}

Followed by the losers:
Statement 1 and 2, running row by row updates
The difference between Statement 1 and 2 is that Statement 2 caches the PreparedStatement, which allows for reusing some resources. This can be a good thing, but it didn't have a very significant effect in our case, compared to the batch / bulk alternatives. The code is:

// Statement 1:
try (Statement s = c.createStatement();
    ResultSet rs = s.executeQuery(
        "SELECT id FROM post\n" +
        "WHERE archived = 0\n" +
        "AND creation_date < DATE '2018-01-01'"
    )) {

    while (rs.next()) {
        try (PreparedStatement u = c.prepareStatement(
            "UPDATE post SET archived = 1 WHERE id = ?"
        )) {
            u.setInt(1, rs.getInt(1));
            u.executeUpdate();
        }
    }
}

// Statement 2:
try (Statement s = c.createStatement();
    ResultSet rs = s.executeQuery(
        "SELECT id FROM post\n" +
        "WHERE archived = 0\n" +
        "AND creation_date < DATE '2018-01-01'"
    );
    PreparedStatement u = c.prepareStatement(
        "UPDATE post SET archived = 1 WHERE id = ?"
    )) {

    while (rs.next()) {
        u.setInt(1, rs.getInt(1));
        u.executeUpdate();
    }
}

Conclusion
As shown previously on this blog, there is a significant cost to JDBC server roundtrips, which can be seen in the JDBC benchmark. This cost is much more severe if we unnecessarily create many server roundtrips for a task that could be done in a single roundtrip, namely by using a SQL bulk UPDATE statement.
This is not only true for updates, but also for all the other statements, including SELECT, DELETE, INSERT, and MERGE. If doing everything in a single statement isn't possible due to the limitations of SQL, we can still save roundtrips by grouping statements in a block, either by using an anonymous block in databases that support them:

BEGIN
    statement1;
    statement2;
    statement3;
END;

(You can easily send these anonymous blocks over JDBC, as well!)
Or, by emulating anonymous blocks using the JDBC batch API (which has its limitations), or by writing stored procedures.
The performance gain is not always worth the trouble of moving logic from the client to the server, but very often (as in the above case), the move is a no-brainer and there's absolutely no reason against it.
So, remember: Stop doing row-by-row (slow-by-slow) operations when you could run the same operation in bulk, in a single SQL statement.
Hint: Always know what your ORM (if you're using one) is doing, because the ORM can help you with automatic batching / bulking in many cases. But it often cannot, or it is too difficult to make it do so, so resorting to SQL is the way to go.
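As a quick illustration of the anonymous block tip from the conclusion above (a minimal sketch, not part of the original benchmark): such a block is just one more statement string as far as JDBC is concerned, so it still costs only a single roundtrip. The sketch reuses the Connection c and java.sql.Statement from the examples above; the two statements inside the block are illustrative placeholders.

try (Statement s = c.createStatement()) {
    // One network roundtrip for the whole block; the statements inside
    // are placeholders operating on the post table from above.
    s.execute(
        "BEGIN\n" +
        "  UPDATE post SET archived = 1 WHERE creation_date < DATE '2018-01-01';\n" +
        "  DELETE FROM post WHERE archived = 1 AND creation_date < DATE '2010-01-01';\n" +
        "END;");
}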
Code
PL/SQL benchmark

SET SERVEROUTPUT ON
DROP TABLE post;

CREATE TABLE post (
  id INT NOT NULL PRIMARY KEY,
  text VARCHAR2(1000) NOT NULL,
  archived NUMBER(1) NOT NULL CHECK (archived IN (0, 1)),
  creation_date DATE NOT NULL
);

CREATE INDEX post_creation_date_i ON post (creation_date);

ALTER SYSTEM FLUSH SHARED_POOL;
ALTER SYSTEM FLUSH BUFFER_CACHE;

CREATE TABLE results (
  run NUMBER(2),
  stmt NUMBER(2),
  elapsed NUMBER
);

DECLARE
  v_ts TIMESTAMP WITH TIME ZONE;

  PROCEDURE reset_post IS
  BEGIN
    EXECUTE IMMEDIATE 'TRUNCATE TABLE post';
    INSERT INTO post
    SELECT
      level AS id,
      lpad('a', 1000, 'a') AS text,
      0 AS archived,
      DATE '2017-01-01' + (level / 100) AS creation_date
    FROM dual
    CONNECT BY level <= 10000;
