Until the production release. Then the peace was over: the application server produced one out-of-memory error after another. The change was rolled back in an emergency release. The attempt had failed, and the topic was dead.
How was that possible? How can a moderate fetch size of 80 lead to such massive memory problems?
According to the white paper "Oracle JDBC Memory Management" [1], Oracle fundamentally changed the way memory is allocated for results between 11g and 12c. This could mean that massive problems like the one just described should no longer occur, or only under significantly different conditions. But is quoting a white paper enough to convince colleagues and superiors to try again after a failure like the one above? That is far from certain. The DBA mentioned above probably stands a better chance with a tangible experiment that really puts the new driver to the test. Below, I would like to present such an experiment, so that the DBA does not have to run it himself.
But first, back to the theory. Now that we are (hopefully) convinced that this is an important parameter: what exactly is the fetch size?
When a JDBC or OCI client wants to retrieve data from the database, various steps are performed. A cursor is opened and a statement is parsed (PARSE). If the statement returns a result, output variables must be defined. If bind variables are used, values must be bound. Then the statement is executed (EXECUTE), and finally the relevant records are fetched (FETCH). Once all records have been retrieved, the cursor is closed.
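To make these steps concrete, here is a minimal JDBC sketch; the connection URL, credentials, and the emp/deptno schema are placeholder assumptions and not part of the original example. The call to setFetchSize() is what the fetch size parameter controls: the maximum number of rows transferred per FETCH round trip.

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;

public class FetchSizeDemo {
    public static void main(String[] args) throws Exception {
        // Placeholder connection data; adjust URL, user and password for your environment.
        try (Connection conn = DriverManager.getConnection(
                "jdbc:oracle:thin:@//localhost:1521/ORCLPDB1", "scott", "tiger")) {

            // PARSE: the statement text is sent to the database and parsed.
            try (PreparedStatement stmt = conn.prepareStatement(
                    "SELECT empno, ename FROM emp WHERE deptno = ?")) {

                // Fetch size: up to 80 rows per FETCH round trip.
                stmt.setFetchSize(80);

                // BIND: supply a value for the bind variable.
                stmt.setInt(1, 10);

                // EXECUTE: run the statement; the driver defines the output columns.
                try (ResultSet rs = stmt.executeQuery()) {
                    // FETCH: rows arrive from the server in batches of up to 80.
                    while (rs.next()) {
                        System.out.println(rs.getInt("empno") + " " + rs.getString("ename"));
                    }
                } // Closing the ResultSet and the statement closes the cursor.
            }
        }
    }
}
```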