

The Teradata JDBC Driver and ODBC Driver allow developers to quickly build applications that interact with the Teradata Database. However, many developers are surprised when their fully functioning application suddenly hits a performance roadblock once it is deployed to the production environment, and in many of these cases the blame is unfairly placed on the JDBC and ODBC drivers. This article highlights the programming techniques available to maximize performance when interacting with the database and helps developers choose the right implementation. Many new database developers are more focused on how to create a database connection and pass a SQL statement than on performance.

A typical first implementation looks something like:

    Connection conn = DriverManager.getConnection(url, username, password);
    Statement stmt = conn.createStatement();
    String sql = "insert into Transactions(custID, transaction_date, amount, desc) values(" + custID + ", " + tran_date + ", " + amount + ", '" + desc + "')";
    stmt.executeUpdate(sql);
    stmt.close();   // Your real code should use try-finally blocks to manage resources.
    conn.close();   // Let's not even get into connection pools! That's another article.

Sure, this works for a demo, and the beginning programmer is probably pretty happy with the results. But turn on some production volume and this will quickly become a performance bottleneck, especially when your application is processing many SQL inserts, such as when batch loading. This type of database coding is pretty much like driving your sports car and staying stuck in first gear!
A much better approach is to use Prepared Statements. These provide significantly better performance by first sending the database the outline of the SQL statement, using parameter markers in place of the actual data. The database prepares the execution steps of the SQL statement to optimize performance, and the prepared statement can then be used over and over again. This avoids recalculating the execution steps for each individual request, which is what happens in the first example.

    String sql = "insert into Transactions(custID, transaction_date, amount, desc) values(?,?,?,?)";
    PreparedStatement ps = conn.prepareStatement(sql);

The statement is prepared once, and it can then be executed many times with different values.
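To make that concrete, here is a rough sketch of the per-row bind-and-execute step. The parallel input arrays and the column types are assumptions made for illustration; they are not from the original example:

    // Assumes ps is the PreparedStatement prepared above and the input rows
    // are held in parallel arrays (custIDs, tranDates, amounts, descs).
    for (int i = 0; i < custIDs.length; i++) {
        ps.setInt(1, custIDs[i]);
        ps.setDate(2, tranDates[i]);        // java.sql.Date
        ps.setBigDecimal(3, amounts[i]);    // java.math.BigDecimal
        ps.setString(4, descs[i]);
        ps.executeUpdate();                 // still one round trip per row
    }
    ps.close();

Reusing the prepared statement avoids re-optimizing the SQL for every row, but each executeUpdate() call is still a separate round trip to the database - which is exactly what batching addresses next.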

Prepared Statement batches take your performance to the next level. In addition to the benefits of reusing the Prepared Statement, batching your input values also reduces the number of round trips to the database. A batch size of roughly 5,000 to 10,000 works well for most applications.

    for ( /* loop through a subset of the input values - the desired batch size */ ) {
        ps.addBatch();      // adds the row of input values to the batch
    }
    ps.executeBatch();      // sends all the batched rows to the database

This is done once per the desired batch size: each call to executeBatch() sends one batch of rows to the database in a single round trip. Using batches can be 10 to 40 times faster than the previous approach.
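Putting it all together, a complete batching loop might look like the sketch below. The BATCH_SIZE constant and the parallel input arrays are illustrative assumptions, not part of the original example:

    // Assumes ps is the PreparedStatement from above and the inputs are in parallel arrays.
    final int BATCH_SIZE = 10000;           // roughly 5,000 to 10,000 works well
    int pending = 0;
    for (int i = 0; i < custIDs.length; i++) {
        ps.setInt(1, custIDs[i]);
        ps.setDate(2, tranDates[i]);
        ps.setBigDecimal(3, amounts[i]);
        ps.setString(4, descs[i]);
        ps.addBatch();                      // add the row of input values to the batch
        if (++pending == BATCH_SIZE) {
            ps.executeBatch();              // send the batched rows in one round trip
            pending = 0;
        }
    }
    if (pending > 0) {
        ps.executeBatch();                  // flush the final partial batch
    }
    ps.close();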
For loading truly huge amounts of data, JDBC FastLoad can provide even better performance. JDBC FastLoad can only insert data into an empty table, and it is only recommended for loading large amounts of data - at least 100,000 rows total. The nice thing is that your Java code doesn't need to change in order to use JDBC FastLoad: your application uses the exact same Prepared Statement batches as in the previous example. Just add TYPE=FASTLOAD to your connection parameters, and the Teradata JDBC Driver will use JDBC FastLoad for particular SQL requests, if it can. TYPE=FASTLOAD instructs the driver to use the FastLoad protocol for INSERT statements that are compatible with FastLoad; a regular SQL INSERT will be performed for INSERT statements that aren't compatible with FastLoad. So if your application specifies TYPE=FASTLOAD and the FastLoad protocol was used in the past but is no longer being used, something must have changed.

Note that the recommended batch size for JDBC FastLoad is much higher than for a regular SQL Prepared Statement batch, which means you may need to increase your JVM heap size. To get top-notch performance, use a batch size of roughly 50,000 to 100,000. Using JDBC FastLoad can be 3 to 10 times faster than the previous approach.
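As a sketch, the only change is in the connection parameters. The host name, database name, and credentials below are placeholders, not values from the article:

    // Same Prepared Statement batching code as before; only the connection changes.
    // "dbshost", "mydb", "myuser", and "mypassword" are placeholder values.
    String url = "jdbc:teradata://dbshost/DATABASE=mydb,TYPE=FASTLOAD";
    Connection conn = DriverManager.getConnection(url, "myuser", "mypassword");

Combine this with the larger 50,000 to 100,000 row batch size mentioned above to get the full benefit.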
JDBC and ODBC allow C/C++ and Java programmers to easily build database applications with Teradata. Planning for maximizing performance throughput should always be on your mind while you're coding; it's much better to avoid these types of issues early than to deal with a fire drill when your new application has already been rolled out to production. I hope this quick tutorial has given you a good overview of the different coding choices and their performance implications when interacting with Teradata.
