5 Feb 2018

My MySQL Linux Tuning Checklist


Things I look for when optimising or debugging a Linux OS:
  • IO scheduler (noop or deadline)
  • Linux kernel > 3.18 (multi-queue block I/O)
  • irqbalance > 1.0.8
  • File System: noatime, nobarrier
    • ext4: data=ordered
    • xfs: 64k
    • logfiles in different partition (if possible)
  • Swappiness (0 or 1, depending)
  • Jemalloc (if needed)
  • Transparent hugepages - disabled
  • Ulimit (open files) >1000
  • Security
    • IPtables
    • PAM security
  • RAID controller/HBA has
    • write cache enabled
    • battery-backed cache
    • For HP servers: hpssacli controller all show (detail|status)
    • Tweak cache-ratio to 50/50 or 75/25 (and test)

4 Jan 2018

A DBA Analyses 'The Phoenix Project'

Last year, I read 'The Phoenix Project'. I liked it, and as an IT manager in the past, I did experience high blood pressure during the SEV1 scenarios in the book.



I also liked the way DevOps methodology helped solve issues with IT as well as help the company succeed overall.

As a DBA, however, there were some things about this story that didn't make sense to me.
Bear in mind that the two major incidents in the book were database related. So in this post, I would like to jot down some things I have noticed and how they could have been solved when looked at through a different lens.


Caution, Spoiler Alert



Incident No.1 - Tokenisation


In the first incident, a 3rd party supplier ran a script against the database to tokenise some personal data. This was related to an issue that information security had highlighted, but it ended up affecting HR and accounting.

In the book, there is a complaint that there was no test environment to see if this script would have any negative effects on other parts of the organisation. 

Now to me, this does make sense and at the same time, makes no sense at all.

If you mean that, back in the day, it was hard to get full environments set up to test changes on your application servers, then you would probably be right. Today, perhaps thanks to the methodology that this book introduces, you probably do have those environments set up: either virtualised or in a container. 


Testing Database


What doesn't make sense to me is not having a test database. Reading through the book, there are mentions of an Oracle database and some MS SQL databases. As a mainly MySQL DBA, I have not always worked on those databases, but I have worked next to people who have. My observation is that if you have an Oracle database, you almost certainly have other dev/test/UAT/staging/pre-prod database servers as well. 
Why do I think this? If you can afford to pay for an Oracle database, you would probably get additional testing databases under the same license - the license being the most expensive part of using Oracle.  
So a testing database, for testing things that may affect the precious and expensive production server, is almost a certainty.


DBA as a Gatekeeper


Now, it seems shocking to me that the DBA had not been involved in the process of validating this 3rd party script. Old-school Oracle DBAs are involved in everything that happens on their servers. 
Need a user on the database? Go to the DBA. 
Need a database server for a new app? Please fill in these forms in triplicate, detailing the projected usage for the next 5 years. 
In most companies, an Oracle DBA may even set up integrations between other products like Oracle HR and Finance. 
So how you could have run something that significant against the database without their knowledge is beyond me.

Assuming that a database field had in fact been messed up, Oracle DBAs have a TON of really enviable backup and restore features. 
They can query a table to view all the backups that are available to restore from and choose the point in time that is closest to what they need. A DBA could simply restore the database, fetch the specific table that had its column changed and apply it to the production database. 
Is it more than one table? Restore the database, replay the changes in the logs up to a point in time, and skip the parts the conversion script applied.


It seems to me that the authors wrote the book based on their own experiences, but those experiences occurred in companies that had no DBAs. Not having a DBA is a product of start-ups, not old-school 1,500-person car-parts manufacturers.


Incident No.2 - Conversion


There was a crippling database issue to do with a database conversion that was needed alongside a new code rollout. The issue caused a two-day downtime - break out the handheld receipt machines - for the system.


Works on My Laptop


During the initial investigation, a developer said something along the lines of 'it worked fine on my laptop' when describing the performance of the database conversion scripts. The problem was that on production, it was x1000 slower. Now, I have written before about how not to be the one that deploys a slow query to production, and this really illustrates that situation. Apparently, they still didn't have a database testing environment to test against.

However, on the topic above of 'DBA as a gatekeeper':
Why didn't the DBA review the conversion scripts, or get involved in the code review process for the SQL statements?
It could be that there wasn't one in the company.

Another point was that they couldn't cancel the conversion after they had started it and noticed how slow it was. If this was within a transaction or a single ALTER TABLE statement, why not?
If too many things had changed, could they not restore the database to a point in time before the changes were made?
Was the conversion x1000 slower, instead of maybe x10 slower, because of a foreign key check that could have been turned off?
A DBA would have given you those options.
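
On that last point, in MySQL terms (the book's databases were Oracle and MS SQL, so this is only an illustrative sketch), turning that check off for the conversion session is a one-liner:

-- disable foreign key validation for this session only, run the conversion, then re-enable it
SET SESSION foreign_key_checks = 0;
-- ... run the conversion / ALTER TABLE statements here ...
SET SESSION foreign_key_checks = 1;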


Project Unicorn


After the hero turns things around and things begin to pick up, they decide to start a separate project to add predictive features to the main project. In it, they decided to bypass seeking permission for database changes and create a new database into which they copied production data from several locations. I very much like this approach, and it falls in line with the reactive microservices pattern.

This would make the book ahead of its time. Instead of managing one main database for the website (although they did mention in the book that they had a couple of dozen database servers), they can break it up into several database servers, based on functionality. What is required is to use tools - and I believe in 2012 they meant ETL tools - to migrate the needed data into these new database servers. 

This would still need a DBA though, or at the very least a data engineer with an ops background, as you now need to:
  • Data model new environments based on data from old ones
  • Create and maintain data pipelines
  • Monitor for errors and fix data that didn't make it 
  • Fix data drift and re-sync data across servers

In addition, you now need to back up these additional database servers and monitor their availability and performance.

So while it adds complexity to the backend, and you are now moving from simple database maintenance to more of a data architecture role, it is the way forward. It is certainly the only way to have proper microservices with their own single-purpose, loosely coupled data stores.








3 Jan 2018

Top 4 Reasons Companies Won't Fix Their Database Issues

When I consult at a company, I aim to identify issues with their database and give options on how to solve them.
However, sometimes implementing those solutions may be a more lengthy process than it needs to be and sometimes they may not be implemented at all. During my career, I have observed some reasons as to why that might happen within organizations.


Obviously, the following observations will never happen at your company. I am just writing about them so that you might notice them in other places.


1. Legacy code 


People don't like to have anything to do with legacy code. It’s painful. It’s difficult. It’s risky to change. It runs business-critical functions. Worst of all, they didn’t write it. This can be a problem as, often, the most crippling database issues require changes to legacy code.

2. New Technologies or Methods


People don’t like you to introduce any new technologies they don’t want to learn and maintain. Not even different methods in technologies already being used. No fancy upgrades to the DB server, no new load balancers and certainly don’t start using SQL statements in the code over their existing ORM.

3. Old Technologies or Methods


At the complete polar opposite, people in tech organisations don’t like you to introduce boring technologies. What would be the point of introducing boring (yet tested) technologies when they could be playing around with shiny new ones? There is a caveat to this - groups prefer it when other groups they depend on (let’s say developers depend on ops) choose to use boring and tested technologies. Just not for themselves. And vice versa.

4. Management Involvement


Last, but certainly not least, no one from upper management will get involved in resolving these issues and push solutions forward. No project manager, product manager or agile coach will be assigned to chase up the issues. As far as they are concerned, this is an engineering issue and, as engineers, you need to sort it out yourselves. Only 'change requests' from the business have managers around them.


Final Thoughts


After some years of analysing database systems for performance issues, I am finally realising that I should also analyse human systems for performance issues.


27 Dec 2017

Setting Up Databases in your Development Environment

Setting up databases in development environments can be challenging.

What I usually see is some automated process for setting up empty databases with up-to-date data structures. This is helpful for integration testing, but is tricky for actual development as well as performance testing. 

For example:
  • It is difficult to conceptually get your head around writing a query when you cannot see any data in your tables
  • You cannot possibly know if your query is slow before deploying it to production without running it against 'some' data.

In addition, there can be a strict requirement to not let sensitive customer data be available outside certain secure environments and certainly not available to development environments.

Step 1


What you would need to do is go over your database and separate the elements into different criteria:
  1. Data Structure
  2. User Management
  3. Referential Tables
  4. Primary Tables
  5. Child Tables
  6. Mapping Tables
  7. Sensitive Data
(explanation below)

Data structure management and user management should be, by now, a solved problem. You have systems like Liquibase and Flyway that manage this for you. Essentially, you can use these systems to automatically generate containers which your developers can then use, or to set up empty local databases on developer machines using SQL scripts. 

For user management, MySQL has a PAM plugin to manage users via LDAP, but you can manage this through scripts as well.

Referential tables (tables that contain data such as id = 1, status = 'DONE') should be small enough to be included at this stage as well. You need to identify which tables contain this very basic data and add it to the data structure repository or SQL file.
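
As a small illustration, a referential table can simply live as an idempotent seed script next to the schema (the table and values here are hypothetical):

-- hypothetical referential/lookup table, seeded together with the schema
CREATE TABLE IF NOT EXISTS order_status (
  id   TINYINT UNSIGNED NOT NULL PRIMARY KEY,
  name VARCHAR(20) NOT NULL
);

-- INSERT IGNORE keeps the script re-runnable
INSERT IGNORE INTO order_status (id, name) VALUES
  (1, 'NEW'),
  (2, 'IN_PROGRESS'),
  (3, 'DONE');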

Step 2


This is where things get a little bit tricky: You need to identify which tables are your Primary 'feed data' tables or Object tables. You then need to identify which tables are the Child tables of those Primary tables. Lastly, you need to identify which tables Map keys across different tables - either Primary to Child (as in multi-to-multi relationships) or Primary to Primary. 

Once you have identified these tables, you can decide how much data you would like to keep in your development databases. My recommendation would be to go in one of these three directions:
  1. Specify a set number of keys in the Primary tables and then get the data from the Child and Mapping tables based on those keys.
  2. Specify a specific set of keys from the Primary tables and then get the data from the Child and Mapping tables based on those keys.
  3. Keep data by a date range for the Primary table and then use its keys to populate the Child and Mapping tables. 
Make sure that the amount of data is adequate for your needs: not too small and not too large.
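
As a rough sketch of direction 3, assuming a hypothetical customers (Primary) table and an orders (Child) table keyed by customer_id:

-- keep roughly the last 3 months of Primary rows in the dev copy
CREATE TABLE dev_customers AS
SELECT * FROM customers
 WHERE created_at >= NOW() - INTERVAL 3 MONTH;

-- pull only the Child rows that belong to the kept Primary keys
CREATE TABLE dev_orders AS
SELECT o.*
  FROM orders o
  JOIN dev_customers c ON c.id = o.customer_id;

-- note: CREATE TABLE ... AS SELECT copies the data but not the indexes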

Step 3


This separation of table types can now help us identify sensitive data. Data structure and Referential tables should not contain sensitive data. Neither should Mapping tables. What would contain sensitive data are the Primary and Child tables. 

Identify the columns in those tables where sensitive data may be kept and either:
  • Empty that data
  • Give it a default value (all emails will be test@email.com)
  • Obfuscate those values in some way

You can change the data either by outputting it with those changes into an SQL file, or by dumping it into a staging database, changing the sensitive data there and then dumping that into an SQL file with a tool.
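
For example, run against the staging copy only and never against production (hypothetical table and column names):

-- obfuscate sensitive columns before the data leaves the secure environment
UPDATE customers
   SET email     = CONCAT('test+', id, '@email.com'),
       full_name = CONCAT('Customer ', id),
       phone     = NULL;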

Ideally, this stage should go through a QA process/person, so that the company does not release sensitive data to generally available containers or to repositories that keep a history of changes.

Conclusion


By taking the time to separate the different elements of a database, you make it less complicated, and you will then be better able to automate parts of the database in your CI/CD process. 






8 Dec 2017

Data Modelling: Counter Table

A counter table is a table that keeps counts of particular items or for certain keys. This can range from page counts on your blog to keeping track of a limit a user is allowed for a particular item or service.


Usually, a counter table would be better kept in something like Memcached or Redis, as frequent increment updates are better suited to those in-memory systems.

MySQL, and InnoDB in particular, has many stringent mechanisms to make sure that your data has been reliably written to disk. Going through those mechanisms alone can make a counter table unsuitable, even before considering the time it takes to update the actual table.

However, sometimes you need certain assurances in failure scenarios that in-memory systems may not be suitable for - when they crash, the data kept in memory is cleared out.

In those cases, may I recommend what I call an 'aggregate counter table'. The idea here is to replace lots of increment updates with periodically counting the original base table you are interested in.

In short, instead of:

INSERT INTO base_table ... ;
UPDATE counter_table SET counts = counts + 1 WHERE key_id = <key>;


You would do

INSERT INTO base_table ... ;
-- then, on an interval (like 1 to 5 seconds):
INSERT INTO counter_table (key_id, counts, lastprimary_id)
SELECT key1, COUNT(1), MAX(primarykey)
  FROM base_table
 WHERE primarykey > <last_count_position>
 GROUP BY key1
    ON DUPLICATE KEY UPDATE
       counts = counts + VALUES(counts),
       lastprimary_id = VALUES(lastprimary_id);


In order to aggregate the base_table correctly, you need to keep some record of the last time or position you read from the base table. What I recommend you consider is either the primary key, assuming it is an integer, or a last_updated timestamp column.

Below is an example of a counter table that keeps the last id of the primary key it counted from the base table:


CREATE TABLE counter_table (
  key_id int(10) unsigned NOT NULL,
  counts int(10) unsigned DEFAULT '0',
  lastprimary_id int(10) unsigned DEFAULT '0',
  PRIMARY KEY (key_id),
  KEY idx_camp (lastprimary_id)
) ENGINE=InnoDB;


In order to run your 'refresh' query, you would first need to query the counter_table like this:
SELECT max(lastprimary_id) from counter_table;

Then populate the counter table by including in your above INSERT INTO SELECT statement a:
WHERE base_table.primarykey > lastprimary_id
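
Putting those two pieces together, a minimal sketch of the refresh could look like this (assuming base_table has an auto-increment primary key column primarykey and a grouping column key1, the illustrative names used in the pseudo-SQL above):

-- 1) find the highest base_table id that has already been counted
SELECT COALESCE(MAX(lastprimary_id), 0) INTO @last_id FROM counter_table;

-- 2) aggregate everything newer than that and fold it into the counters
INSERT INTO counter_table (key_id, counts, lastprimary_id)
SELECT key1, COUNT(1), MAX(primarykey)
  FROM base_table
 WHERE primarykey > @last_id
 GROUP BY key1
    ON DUPLICATE KEY UPDATE
       counts = counts + VALUES(counts),
       lastprimary_id = VALUES(lastprimary_id);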


This should be very fast and will prevent the many 'database-attacking update queries' that can become a serious bottleneck to your performance in the long run.


Downsides

This method doesn't account for rows in the base table being UPDATEd or DELETEd; it just counts rows. If that is a requirement, you can fall back to using UPDATE statements such as:
UPDATE counter_table SET counts = counts - 1 WHERE key_id = <key>;
with the understanding that this will happen infrequently.

You also now need to maintain this procedure and monitor that it runs at the intervals you need. Fortunately, MySQL has scheduled Events to help with that - see the sketch below.
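
As a sketch, the same refresh (with the illustrative base_table names from above) could be scheduled with an Event - note that the event scheduler has to be enabled, e.g. with SET GLOBAL event_scheduler = ON:

DELIMITER //
CREATE EVENT IF NOT EXISTS refresh_counter_table
ON SCHEDULE EVERY 5 SECOND
DO
BEGIN
  -- same two-step refresh as above
  SELECT COALESCE(MAX(lastprimary_id), 0) INTO @last_id FROM counter_table;

  INSERT INTO counter_table (key_id, counts, lastprimary_id)
  SELECT key1, COUNT(1), MAX(primarykey)
    FROM base_table
   WHERE primarykey > @last_id
   GROUP BY key1
      ON DUPLICATE KEY UPDATE
         counts = counts + VALUES(counts),
         lastprimary_id = VALUES(lastprimary_id);
END//
DELIMITER ;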

30 Nov 2017

Archiving for a Leaner Database

There is an idea that data is sacred and needs to be stored forever. However, if you keep your data forever, you will, sooner or later, have a very large database.

In order to keep operations running smoothly, it is useful to allocate data that is used more frequently to certain tables and to keep data that is used less frequently in archive tables.



Some examples


You have a large table that stores transactions and its size is 200GB. It is that way because your company has been around for 5 years, but in the last year your company has been successful in acquiring new users and your data has doubled.

Congratulations.

In your database, you now have a table that has 5 years' worth of data, but your application usually only needs about the last 1-3 months. There may be a use case where someone requires data about a customer for a period starting a year ago, and there may also be a reporting request to aggregate data for the last 3 years. Therefore, to play it safe, we keep everything in one table.

However, this greatly affects performance. It would be more helpful to try and separate those 3 concerns into 3 different tables:
  1. A table for a 3 month period for frequently used data
  2. An archive table that keeps all old and infrequently used data
  3. A summary table for reporting
With these, we comply with the Single-Responsibility principle and greatly improve performance for each purpose.

Having a 'main' table with only the last 3 months' worth of data greatly helps you scale.
For example, even if your data doubles every year for the next 3-5 years, you still only have to manage a subset of that data. So if those 3 months once took 20GB to store, the following year they would take 40GB and the year after 80GB: these sizes are still very manageable by today's standards.
In addition, hardware and software improves over time, so there can be a legitimate expectation that simply by upgrading and updating, you can keep humming along.

Taking the effort to identify 'hot' and 'cold' data and allocating it to the right tables, can mean that your scalability concerns will be addressed for the long term.



How to implement Archiving?




Table _archive


One way to implement archiving, is by having a table that ends with _archive.

To enable this, you will need to be able to redirect your queries (mainly from your code, or via a proxy that can do this) to the main or the archive table, based on particular criteria.

For example, if the date is less than today's date minus 30 days, then send it to the archive table, if not, then the main table.

Another example may be, if the status column equals 'inactive' send to the archive table.

You would largely need to dig through your code for that table and wrap the queries with an IF statement that sends them to the right table.

You would also need a data process that migrates data from the main table over to the archive table when it gets old or becomes cold.
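
A sketch of such a process, assuming a transactions table with a transactions_archive twin of identical structure and a dateCreated column (names here are illustrative):

-- compute the cutoff once so both statements agree
SET @cutoff = NOW() - INTERVAL 30 DAY;

-- copy cold rows across, then remove them from the main table
INSERT INTO transactions_archive
SELECT * FROM transactions WHERE dateCreated < @cutoff;

DELETE FROM transactions WHERE dateCreated < @cutoff;

-- in practice you would run this in small batches (e.g. with LIMIT) to avoid long locks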



Partitioning by Date


While this is not a different physical data model, it does help split the table into a few smaller pieces, achieving the desired purpose without application code changes.

It is very common to partition your table by date, specifying which data may be old and allocating it to the right partition.

mysql> CREATE TABLE `largetable` (
->   `id` bigint unsigned NOT NULL AUTO_INCREMENT,
->   `dateCreated` datetime NOT NULL DEFAULT CURRENT_TIMESTAMP,
->   `status` int default 1,
->   `sometext` text,
->   PRIMARY KEY (`id`,`dateCreated`)
-> ) ENGINE=InnoDB DEFAULT CHARSET=utf8;
Query OK, 0 rows affected (0.03 sec)

mysql> alter table largetable partition by RANGE(YEAR(dateCreated)) (
-> PARTITION p2016 VALUES LESS THAN (2017), 
-> PARTITION p2017 VALUES LESS THAN (2018), 
-> PARTITION p2018 VALUES LESS THAN (2019), 
-> PARTITION p2019 VALUES LESS THAN (2020), 
-> PARTITION p2020 VALUES LESS THAN (2021), 
-> PARTITION pmax VALUES LESS THAN MAXVALUE);
Query OK, 0 rows affected (0.05 sec)
Records: 0  Duplicates: 0  Warnings: 0

The above example allocates data by the year in which the row was created. Please note that after 2020, this sort of manual partitioning will require adding new year partitions to the table (see the sketch below). If you do it in advance, this can be done without disrupting operations.
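
For example, the 2021 partition could be added ahead of time by splitting pmax - a sketch against the table above:

ALTER TABLE largetable REORGANIZE PARTITION pmax INTO (
  PARTITION p2021 VALUES LESS THAN (2022),
  PARTITION pmax VALUES LESS THAN MAXVALUE
);
-- quick while pmax is still empty; much more expensive once rows have landed in it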



Partitioning by Status


You can also partition (as mentioned above) by a status column, such as active/inactive. Simply by using UPDATE to change the value, MySQL will move that row over to the right partition. REPLACE or INSERT + DELETE will work as well.


mysql> CREATE TABLE `largetable` (
->   `id` bigint unsigned NOT NULL AUTO_INCREMENT,
->   `dateCreated` datetime NOT NULL DEFAULT CURRENT_TIMESTAMP,
->   `status` int default 1, -- default active
->   `sometext` text,
->   PRIMARY KEY (`id`,`status`)
-> ) ENGINE=InnoDB DEFAULT CHARSET=utf8;
Query OK, 0 rows affected (0.02 sec)

mysql> alter table largetable partition by list(status) (
-> partition pactive values in (1), -- active 
-> partition pinactive values in (2) -- inactive
-> ); 
Query OK, 0 rows affected (0.03 sec)
Records: 0  Duplicates: 0  Warnings: 0

mysql> select * from largetable partition (pactive);
Empty set (0.00 sec)

mysql> select * from largetable partition (pinactive);
Empty set (0.00 sec)

mysql> insert into largetable(sometext) values ('hello');
Query OK, 1 row affected (0.01 sec)

mysql> select * from largetable partition (pinactive);
Empty set (0.00 sec)

mysql> select * from largetable partition (pactive);
+----+---------------------+--------+----------+
| id | dateCreated         | status | sometext |
+----+---------------------+--------+----------+
|  1 | 2017-10-30 10:04:03 |      1 | hello    |
+----+---------------------+--------+----------+
1 row in set (0.00 sec)

mysql> update largetable set status = 2 where id =1 ;
Query OK, 1 row affected (0.00 sec)
Rows matched: 1  Changed: 1  Warnings: 0

mysql> select * from largetable partition (pactive);
Empty set (0.00 sec)

mysql> select * from largetable partition (pinactive);
+----+---------------------+--------+----------+
| id | dateCreated         | status | sometext |
+----+---------------------+--------+----------+
|  1 | 2017-10-30 10:04:03 |      2 | hello    |
+----+---------------------+--------+----------+
1 row in set (0.00 sec)



Partitioning by ID


And lastly, you can partition on the sequence of your auto incrementing id key.


mysql> CREATE TABLE `largetable` (
->   `id` bigint unsigned NOT NULL AUTO_INCREMENT,
->   `dateCreated` datetime NOT NULL DEFAULT CURRENT_TIMESTAMP,
->   `status` int default 1,
->   `sometext` text,
->   PRIMARY KEY (`id`)
-> ) ENGINE=InnoDB DEFAULT CHARSET=utf8;
Query OK, 0 rows affected (0.02 sec)

mysql> alter table largetable partition by RANGE(id) (
-> PARTITION p1 VALUES LESS THAN (500000000), 
-> PARTITION p2 VALUES LESS THAN (1000000000), 
-> PARTITION p3 VALUES LESS THAN (1500000000), 
-> PARTITION p4 VALUES LESS THAN (2000000000), 
-> PARTITION p5 VALUES LESS THAN (2500000000), 
-> PARTITION pmax VALUES LESS THAN MAXVALUE);
Query OK, 0 rows affected (0.06 sec)
Records: 0  Duplicates: 0  Warnings: 0

The above example specifies which partition the row should go to based on the range its id number falls into. This is more useful if your system does a lot of primary key lookups. It also helps with distributing the partition sizes more equally than dates do, as you can have more data in recent years.



A word of caution


Partitioning on the right key is absolutely crucial and not easy. You need to analyse the queries that the application sends to that specific table and come up with a partitioning key (or keys) that works well and does not slow the table down - at least not for the top 80% of the slowest queries.

The partitioning key would need to go into the PRIMARY KEY, and in order for the optimiser to send you to the right partition, that key would ideally be included in the WHERE clause of all SELECT/UPDATE/DELETE queries. Otherwise, your query will scan every partition in that table (see the check below).
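
A quick way to check this is the partitions column of EXPLAIN for a representative query - shown by default in MySQL 5.7+ (use EXPLAIN PARTITIONS on 5.6); sketched here against the id-partitioned table above:

EXPLAIN SELECT * FROM largetable WHERE id = 123456789;
-- the partitions column should list a single partition (p1 here), not every partition in the table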

25 Nov 2017

How to Not be the One that Deploys that Slow Query to Production

Have you ever deployed code to production that slowed the database and the entire site down?
Did you get a feeling of anxiety when you just read that?

Well, it could be to do with your test environment.

Most places nowadays have test suites that will check for integration issues, but very few will check for performance.

The issue lies with how developers are set up to develop code. They have their code on their local computer, with a local copy of an empty database to develop against. That setup will not give you any useful feedback about how your code will perform once it runs against the production database.

How do you get Performance Feedback for your Queries?


Whenever you log into your database, let's say MySQL, you get 3 types of feedback:

  1. Your result set
  2. The number of rows
  3. The time it took the query to run

(Postgres, by the way, has \timing.)

In order to get a realistic time for your query, you need to run it on a database that is similar to your production database in hardware, parameters and, more importantly, database size. 

Here is an example: you take a SELECT query you wrote that has 3 JOINs and run it on a read-only slave DB server. You choose some decent sample values from the existing data and get a result in 0.3 seconds. Now, barring exceptions such as deadlocks, conflicts or a server-wide slowdown, there is a very high chance that the query will take 0.3 seconds when it runs against the production database.

Once you have an environment to test against, you can run EXPLAIN on your query and make improvements till you are happy with it. 
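
For example, against a production-sized replica rather than an empty dev database (hypothetical tables and columns):

EXPLAIN
SELECT o.id, o.total, c.email
  FROM orders o
  JOIN customers c ON c.id = o.customer_id
 WHERE o.created_at >= NOW() - INTERVAL 7 DAY;
-- look out for type = ALL (full table scans), missing keys and large rows estimates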

But what do I do if I use an ORM?


Well, if you can output the query that the ORM will use and run it against such a database, you will know how long it takes. Hopefully, you will then be able to make improvements to your query through the ORM.


Conclusion


Make sure that your company has a database to test against that is similar to production. If that is not available, see if you can get access to a read-only DB (sometimes there is one for backup purposes) and at least test your SELECT queries against it.

You can then deploy to live with relative confidence or, at the very least, with a lot less stress. 

14 Nov 2017

Top 5 Ways to Overcome Database Skill Shortages

In every organisation and in particular new ones, there seems to be a lack of experience and knowledge around databases.

Our experience shows that there is a huge shortage of skills around managing databases, database performance engineering, developing scalable backend database interactions and designing physical data models for performance.

Organisations will typically spend huge amounts of money and time to circumvent these shortages until they become simply too expensive to ignore.

StackOverFlow Developer Hiring Trends 2017


Do your users complain that your system is slow and your developers seem to deploy software releases less and less frequently?

These symptoms could be a result of your company databases becoming more difficult to manage and more cumbersome to work with, making the company spin its wheels while competitors gain ground.

Here are some suggestions to help you overcome lack of skills in this area:

  • Adopt database management best practice. Industry best practices are not easy to come by. The “not invented here” approach can lead to ignoring best practices entirely. But databases have been around for decades, so it’s likely that industry best practices exist that can solve most problems, offering your business the best and quickest route from where you are now to where you need to get to.
  • Instil a “look under the hood” culture. Nowadays, so much is hidden away from us. In most cases, we prefer this as we have too many other day-to-day problems to solve. However, learning how databases work under the hood can provide the skills to troubleshoot when things go wrong.
  • Find the ‘Top 3’ reasons that are holding your system back. People need to keep in mind that nowadays they almost always work with complex systems. Such systems rarely have just one root cause for any problem. It is better to focus on the top 3 root causes of severe performance issues - the ones that cannot be explained by looking for a single root cause.
  • Database performance monitoring that makes sense. Monitoring that doesn’t give you the information you need to help maintain the system is basically noise. You need a combination of metrics and logs to identify bottlenecks, understand how the system copes under load when your application uses it, and determine the changes that will result in faster database performance.
  • If you can’t find the answer, seek help. There are experts available to help you with your specific database issues. It would be better to consult with one, rather than look to other products which may be more expensive down the line to move to and maintain. Installing a different product, learning how to use it, discovering its quirks and how much work is involved in moving to it will be more expensive and time-consuming than bringing in an expert, who can advise on the original problem at a relatively small fixed cost.

17 Oct 2017

Top Slowest Queries and their Associated Tables in MySQL

The following query gets data from performance_schema in MySQL and attempts to match each digest, via regex, to the list of tables in the same schema.
SELECT d.*,
       (SELECT GROUP_CONCAT(DISTINCT TABLE_NAME)
          FROM information_schema.TABLES
         WHERE table_schema = d.schema_name
           AND d.digest_text REGEXP table_name) AS table_name
  FROM performance_schema.events_statements_summary_by_digest d
 WHERE d.DIGEST_TEXT REGEXP "^(SELECT|UPDATE|DELETE|REPLACE|INSERT|CREATE)"
   AND d.LAST_SEEN >= CURDATE() - INTERVAL 7 DAY
 ORDER BY d.SUM_TIMER_WAIT DESC
 LIMIT 10\G

16 Oct 2017

Top 3 Reasons Why SQL is Faster than Java

I had a discussion with a colleague the other day. He was trying to write some SQL to use for a less-than-optimal data structure and was getting frustrated that it was looking "cumbersome". He wanted some advice, but was keen to simply write it with a mix of a few light SQL statements and some Java.

I would like to explain why this option would be slower than using "ugly looking" and "cumbersome" SQL:

1) Disk I/O


If you were to use Java, you would probably need to get a larger dataset from the database, process it in some way and output the results. This means the database would need to fetch that larger dataset for you, which means more (sometimes much more) I/O.

If you were to use SQL, you are leaving the fetching to the database's optimiser, which - with the help of indexes - may not need to fetch as much as the Java approach would.

In short, you are allowing the database to reach the right data and filter what not to fetch - for you.

2) Network


For the reason above, the large dataset normally has to travel over a network, unless the Java app server is located on the same machine as the database - which is not very common nowadays.

That network overhead can become more pronounced in a virtual or containerised environment, where network issues can be a headache. (Note: I am not an expert, just observing from a distance.)

In addition, needing to pass data through a network can be an additional overhead in parallel systems where the data needs to travel to each machine before starting an operation.

3) Java's Garbage Collection


It may not be known to most people, but Java adds quite a memory overhead for objects and some data structures. You can sometimes see a x100 difference. This does not mean that you need x100 the available memory, but it does mean that the GC will work extra hard, using more CPU cycles, to clean up the extra memory churn.

Bonus: SQL takes far less code than it would do in Java


While SQL can sometimes be an ugly string in your code, doing the same work in Java can take x30-x100 more lines of code, including tests. You may also need to test that your code does what SQL already does well, such as JOINs and aggregate functions.


Caveat: When is it a good idea to use Java over SQL?


For processing a lot of data, databases have the following concept:
Row vs Chunk vs Too Big

'Row' would be the slowest way of getting and processing data - unless you need to guarantee some level of data quality which requires it.
'Chunk', or a set of rows, is just right. Usually this means querying a large table using 2 or 3 keys and getting a result set that the database can handle well.
'Too Big' is a case where the database cannot handle the number of rows well, and you would need to split your SQL into 'Chunks' using Java and process it that way (see the sketch below).
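
A minimal sketch of the 'Chunk' approach is keyset pagination driven from the application - the table and column names here are hypothetical:

-- fetch the next chunk after the last id the application has already processed
SELECT id, payload
  FROM big_table
 WHERE id > 1000000   -- last id seen in the previous chunk
 ORDER BY id
 LIMIT 10000;

The application keeps track of the highest id it has processed and feeds it into the next iteration, so each chunk is a cheap range scan on the primary key.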


Please also check out this book that I found useful in this matter: