8 Dec 2017

Data Modelling: Counter Table

A counter table is a table that keeps counts for particular items or keys. This can range from page view counts on your blog to keeping track of how much of a limit a user has consumed for a particular item or service.


Usually, a counter table is better kept in something like Memcached or Redis, since frequent increment updates are much better suited to those in-memory systems.

MySQL, and InnoDB in particular, has many stringent mechanisms to make sure your data is reliably written to disk. The overhead of going through those mechanisms alone can make a counter table a poor fit, even before considering the time it takes to update the table itself.

However, sometimes you need certain assurances in failure scenarios that in-memory systems cannot provide - when they crash, the data kept in memory is gone.

In those cases, may I recommend what I call an 'aggregate counter table'. The idea is to replace the stream of increment updates with a periodic aggregation of the base table you want counts for.

In short, instead of:

INSERT INTO base_table ...;
UPDATE counter_table SET value = value + 1 WHERE key = key1;


You would do:

INSERT INTO base_table ...;

-- on an interval (say every 1 to 5 seconds):
INSERT INTO counter_table
SELECT key1, COUNT(*), MAX(primarykey)
  FROM base_table
 WHERE primarykey > last_count_position
 GROUP BY key1
ON DUPLICATE KEY UPDATE value = value + recent_count;


In order to aggregate the base_table correctly, you need to keep a record of the last position (or time) you read up to in the base table. I recommend using either the primary key, assuming it is an auto-incrementing integer, or a last_updated timestamp column.

Below is an example of a counter table that keeps the last primary key id it counted up to in the base table:


CREATE TABLE counter_table (
  key_id int(10) unsigned NOT NULL,
  counts int(10) unsigned DEFAULT '0',
  lastprimary_id int(10) unsigned DEFAULT '0',
  PRIMARY KEY (key_id),
  KEY idx_camp (lastprimary_id)
) ENGINE=InnoDB;


In order to run your 'refresh' query, you would first need to find out where you left off:
SELECT MAX(lastprimary_id) FROM counter_table;

Then populate the counter table by adding to your INSERT INTO ... SELECT statement above a:
WHERE base_table.primarykey > lastprimary_id
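
Putting those pieces together, a minimal sketch of the full refresh might look like this (base_table, primarykey and key1 are the placeholder names used above, not a real schema):

-- 1) Find the position we counted up to last time.
SELECT COALESCE(MAX(lastprimary_id), 0) INTO @last_id FROM counter_table;

-- 2) Aggregate everything that arrived since then and fold it into the counts.
INSERT INTO counter_table (key_id, counts, lastprimary_id)
SELECT key1, COUNT(*), MAX(primarykey)
  FROM base_table
 WHERE primarykey > @last_id
 GROUP BY key1
ON DUPLICATE KEY UPDATE
   counts = counts + VALUES(counts),
   lastprimary_id = VALUES(lastprimary_id);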


This should be very fast and will prevent the flood of 'database-attacking' update queries that can become a serious performance bottleneck in the long run.


Downsides

This method does not account for rows in the base table being UPDATE'd or DELETE'd; it only counts new rows. If that is a requirement, you can fall back to using UPDATE statements such as:
UPDATE counter_table SET value=value-1
with the understanding that this should happen infrequently.

You also now need to maintain this refresh process and monitor that it runs at the interval you need. Fortunately, MySQL has scheduled Events to help with that.
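
For example, a minimal sketch of such an Event, reusing the refresh statements above on a 5-second schedule (the event name is made up, and the Event Scheduler must be enabled with SET GLOBAL event_scheduler = ON):

DELIMITER //
CREATE EVENT ev_refresh_counter_table
  ON SCHEDULE EVERY 5 SECOND
DO
BEGIN
  SELECT COALESCE(MAX(lastprimary_id), 0) INTO @last_id FROM counter_table;

  INSERT INTO counter_table (key_id, counts, lastprimary_id)
  SELECT key1, COUNT(*), MAX(primarykey)
    FROM base_table
   WHERE primarykey > @last_id
   GROUP BY key1
  ON DUPLICATE KEY UPDATE
     counts = counts + VALUES(counts),
     lastprimary_id = VALUES(lastprimary_id);
END//
DELIMITER ;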

30 Nov 2017

Archiving for a Leaner Database

There is an idea that data is sacred and needs to be stored forever. However, if you keep your data forever, you will, sooner or later, have a very large database.

In order to keep operations running smoothly, it is useful to keep frequently used data in certain tables and move less frequently used data into archive tables.



Some examples


You have a large table that stores transactions and its size is 200GB. It got that way because your company has been around for 5 years, and in the last year the company has been successful in acquiring new users, so your data has doubled.

Congratulations.

In your database, you now have a table that holds 5 years' worth of data, but your application usually only needs about the last 1-3 months. There may be a use case where someone requires data about a customer for a period starting a year ago, and there may also be a reporting request to aggregate data for the last 3 years. Therefore, to play it safe, everything is kept in one table.

However, this greatly affects performance. It would be more helpful to separate those 3 concerns into 3 different tables:
  1. A table for a 3 month period for frequently used data
  2. An archive table that keeps all old and infrequently used data
  3. A summary table for reporting
With these, we comply with the single-responsibility principle and greatly improve performance for each purpose.

Having a 'main' table with only the last 3 months' worth of data makes it much easier to scale.
For example, even if your data doubles every year for the next 3-5 years, you still only have to manage a subset of it. If those 3 months take 20GB to store today, the following year they would take 40GB and the year after 80GB: sizes that are still very manageable by today's standards.
In addition, hardware and software improve over time, so there is a legitimate expectation that simply by upgrading and updating you can keep humming along.

Taking the effort to identify 'hot' and 'cold' data and allocate each to the right tables can mean that your scalability concerns are addressed for the long term.



How to implement Archiving?




Table _archive


One way to implement archiving is to have a second table whose name ends with _archive.

To enable this, you will need to be able to redirect your queries (mainly from your code, or via a proxy that can do it) to either the main or the archive table, based on particular criteria.

For example, if the date is earlier than today's date minus 30 days, send the query to the archive table; otherwise, to the main table.

Another example: if the status column equals 'inactive', send it to the archive table.

You would largely need to dig through your code for that table and wrap each query in an IF statement that sends it to the right table.

You would also need a data process that migrates rows from the main table over to the archive table when they get old or become cold, such as the sketch below.
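
A hypothetical sketch of that migration (table and column names are placeholders; in practice you would run it in smaller batches, or use a tool such as pt-archiver):

-- Copy rows older than 30 days into the archive table...
INSERT INTO transactions_archive
SELECT * FROM transactions
 WHERE dateCreated < CURDATE() - INTERVAL 30 DAY;

-- ...then remove them from the main table.
DELETE FROM transactions
 WHERE dateCreated < CURDATE() - INTERVAL 30 DAY;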



Partitioning by Date


While this is not a different physical data model, partitioning does split the table into several smaller pieces, achieving the desired purpose without application code changes.

It is very common to partition a table by date, so that older data is allocated to the right partition.

mysql> CREATE TABLE `largetable` (
->   `id` bigint unsigned NOT NULL AUTO_INCREMENT,
->   `dateCreated` datetime NOT NULL DEFAULT CURRENT_TIMESTAMP,
->   `status` int default 1,
->   `sometext` text,
->   PRIMARY KEY (`id`,`dateCreated`)
-> ) ENGINE=InnoDB DEFAULT CHARSET=utf8;
Query OK, 0 rows affected (0.03 sec)

mysql> alter table largetable partition by RANGE(YEAR(dateCreated)) (
-> PARTITION p2016 VALUES LESS THAN (2017), 
-> PARTITION p2017 VALUES LESS THAN (2018), 
-> PARTITION p2018 VALUES LESS THAN (2019), 
-> PARTITION p2019 VALUES LESS THAN (2020), 
-> PARTITION p2020 VALUES LESS THAN (2021), 
-> PARTITION pmax VALUES LESS THAN MAXVALUE);
Query OK, 0 rows affected (0.05 sec)
Records: 0  Duplicates: 0  Warnings: 0

The above example allocates data by the year in which the row was created. Please note that after 2020, this scheme requires new year partitions to be added manually. If you do it in advance, it can be done without disrupting operations.
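
For example, a sketch of making room for 2021 ahead of time by splitting the catch-all pmax partition (reorganising it is quick while it is still empty):

ALTER TABLE largetable
  REORGANIZE PARTITION pmax INTO (
    PARTITION p2021 VALUES LESS THAN (2022),
    PARTITION pmax VALUES LESS THAN MAXVALUE
  );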



Partitioning by Status


You can also partition on a status column (as mentioned above), say active/inactive, and simply by using UPDATE to change the value, MySQL will move the row over to the right partition. REPLACE or INSERT + DELETE will work as well.


mysql> CREATE TABLE `largetable` (
->   `id` bigint unsigned NOT NULL AUTO_INCREMENT,
->   `dateCreated` datetime NOT NULL DEFAULT CURRENT_TIMESTAMP,
->   `status` int default 1, -- default active
->   `sometext` text,
->   PRIMARY KEY (`id`,`status`)
-> ) ENGINE=InnoDB DEFAULT CHARSET=utf8;
Query OK, 0 rows affected (0.02 sec)

mysql> alter table largetable partition by list(status) (
-> partition pactive values in (1), -- active 
-> partition pinactive values in (2) -- inactive
-> ); 
Query OK, 0 rows affected (0.03 sec)
Records: 0  Duplicates: 0  Warnings: 0

mysql> select * from largetable partition (pactive);
Empty set (0.00 sec)

mysql> select * from largetable partition (pinactive);
Empty set (0.00 sec)

mysql> insert into largetable(sometext) values ('hello');
Query OK, 1 row affected (0.01 sec)

mysql> select * from largetable partition (pinactive);
Empty set (0.00 sec)

mysql> select * from largetable partition (pactive);
+----+---------------------+--------+----------+
| id | dateCreated         | status | sometext |
+----+---------------------+--------+----------+
|  1 | 2017-10-30 10:04:03 |      1 | hello    |
+----+---------------------+--------+----------+
1 row in set (0.00 sec)

mysql> update largetable set status = 2 where id =1 ;
Query OK, 1 row affected (0.00 sec)
Rows matched: 1  Changed: 1  Warnings: 0

mysql> select * from largetable partition (pactive);
Empty set (0.00 sec)

mysql> select * from largetable partition (pinactive);
+----+---------------------+--------+----------+
| id | dateCreated         | status | sometext |
+----+---------------------+--------+----------+
|  1 | 2017-10-30 10:04:03 |      2 | hello    |
+----+---------------------+--------+----------+
1 row in set (0.00 sec)



Partitioning by ID


And lastly, you can partition on the sequence of your auto incrementing id key.


mysql> CREATE TABLE `largetable` (
->   `id` bigint unsigned NOT NULL AUTO_INCREMENT,
->   `dateCreated` datetime NOT NULL DEFAULT CURRENT_TIMESTAMP,
->   `status` int default 1,
->   `sometext` text,
->   PRIMARY KEY (`id`)
-> ) ENGINE=InnoDB DEFAULT CHARSET=utf8;
Query OK, 0 rows affected (0.02 sec)

mysql> alter table largetable partition by RANGE(id) (
-> PARTITION p1 VALUES LESS THAN (500000000), 
-> PARTITION p2 VALUES LESS THAN (1000000000), 
-> PARTITION p3 VALUES LESS THAN (1500000000), 
-> PARTITION p4 VALUES LESS THAN (2000000000), 
-> PARTITION p5 VALUES LESS THAN (2500000000), 
-> PARTITION pmax VALUES LESS THAN MAXVALUE);
Query OK, 0 rows affected (0.06 sec)
Records: 0  Duplicates: 0  Warnings: 0

The above example specifies which partition a row should go to based on the range its id falls into. This approach is more useful if your system does a lot of primary key lookups. It also distributes the partition sizes more evenly than partitioning by date, since recent years tend to hold more data.



A word of caution


Partitioning on the right key is absolutely crucial and not easy. You need to analyse the queries that the application sends to that specific table and come up with a partitioning key (or keys) that works well and does not slow the table down - at least not for the top 80% of its slowest queries.

The partitioning key needs to be part of the PRIMARY KEY, and in order for the optimiser to send the query to the right partition, that key should ideally be included in the WHERE clause of every SELECT/UPDATE/DELETE. Otherwise, your query will scan each partition in the table in turn. You can check which partitions a query will actually touch with EXPLAIN:
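
For instance, against the date-partitioned table from earlier (in MySQL 5.7, EXPLAIN shows a partitions column by default):

EXPLAIN SELECT *
  FROM largetable
 WHERE dateCreated >= '2018-01-01'
   AND dateCreated <  '2019-01-01';
-- With a predicate on the partitioning column, the partitions column should
-- shrink to the relevant partition(s); without one, every partition is listed.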

25 Nov 2017

How to Not be the One that Deploys that Slow Query to Production

Have you ever deployed code to production that slowed the database and the entire site down?
Did you get a feeling of anxiety when you just read that?

Well, it could be to do with your test environment.

Most places nowadays have test suites that check for integration issues, but very few check for performance.

The issue lies in how developers are set up to work. They have their code on their local computer and develop against a local copy of an empty database. That setup will not give you any useful feedback about how your code will perform once it runs against the production database.

How do you get Performance Feedback for your Queries?


Whenever you log into your database, let's say MySQL, you get 3 types of feedback:

  1. Your result set
  2. The number of rows
  3. The time it took the query to run

(Postgres, by the way, has \timing.)

In order to get an accurate time for your query, you need to run it on a database that is similar to your production database in hardware, configuration parameters and, most importantly, size.

Here is an example: you take a SELECT query you wrote that has 3 JOINs and run it on a read-only slave DB server, choosing some decent sample values from the existing data, and you get a result in 0.3 seconds. Now, barring exceptions such as deadlocks, contention or a server-wide slowdown, there is a very high chance that the query will take about 0.3 seconds when it runs against the production database.

Once you have an environment to test against, you can run EXPLAIN on your query and make improvements until you are happy with it. For example:
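
Here is a hypothetical three-JOIN report query (the table names are illustrative, not a real schema):

EXPLAIN
SELECT o.id, c.name, SUM(oi.amount) AS total
  FROM orders o
  JOIN customers c ON c.id = o.customer_id
  JOIN order_items oi ON oi.order_id = o.id
 WHERE o.dateCreated >= CURDATE() - INTERVAL 90 DAY
 GROUP BY o.id, c.name;
-- Warning signs to look for: type=ALL (full table scans), NULL in the key
-- column, and large 'rows' estimates that multiply up across the joins.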

But what do I do if I use an ORM?


Well, if you can output the query that the ORM generates and run it against such a database, you will know how long it takes. Hopefully, you will then be able to improve the query through the ORM.


Conclusion


Make sure that in your company you have a database to test against that is similar to production. If that is not available, see if you can get access to a read-only DB (sometimes there is one for backup purposes) and at least test your SELECT queries against it.

You can then deploy to live with relative confidence - or at the very least, with a lot less stress.

14 Nov 2017

How CTOs can tame their databases and protect their business from chaos

In every organisation, and in particular newer ones, there seems to be a lack of experience and knowledge around databases.

Our experience shows that there is a huge shortage of skills in managing databases, database performance engineering, developing scalable backend database interactions and designing physical data models for performance.

Organisations will typically spend huge amounts of money and time to circumvent these shortages until they become simply too expensive to ignore.

StackOverFlow Developer Hiring Trends 2017


Do your users complain that your system is slow and your developers seem to deploy software releases less and less frequently?

These symptoms could be a result of your company databases becoming more difficult to manage and more cumbersome to work with, making the company spin its wheels while competitors gain ground.

Here are some suggestions to help you overcome lack of skills in this area:

  • Adopt database management best practices. Industry best practices are not easy to come by, and the “not invented here” approach can lead to ignoring them entirely. But databases have been around for decades, so it’s likely that industry best practices exist that can solve most of your problems, offering your business the best and quickest route from where you are now to where you need to get to.
  • Instil a “look under the hood” culture. Nowadays, so much is hidden away from us. In most cases, we prefer this as we have too many other day-to-day problems to solve. However, learning how databases work under the hood can provide the skills to troubleshoot when things go wrong.
  • Find the ‘Top 3’ reasons that are holding your system back. People need to keep in mind that nowadays they almost always work with complex systems. Such systems rarely have just one root cause for any problem. It would be better to focus on the top 3 root causes that may cause severe performance issues and which cannot be explained when looking for a single root cause.
  • Database performance monitoring that makes sense. Monitoring that doesn’t give you the information you need to maintain the system is basically noise. You need a combination of metrics and logs to understand how the system copes under load from your application, identify bottlenecks and determine the changes that will result in faster database performance.
  • If you can’t find the answer, seek help. There are experts available to help you with your specific database issues. It is better to consult one of them than to move to another product, which may prove more expensive to adopt and maintain down the line. Installing a different product, learning how to use it, discovering its quirks and doing the work involved in moving to it will be more expensive and time-consuming than bringing in an expert who can advise on the original problem at a relatively small fixed cost.

17 Oct 2017

Top Slowest Queries and their Associated Tables in MySQL

The following query reads from performance_schema in MySQL and attempts to match (via a regular expression) each statement digest to the tables in the same schema:
SELECT d.*,
       (SELECT GROUP_CONCAT(DISTINCT TABLE_NAME)
          FROM information_schema.TABLES
         WHERE table_schema = d.schema_name
           AND d.digest_text REGEXP table_name) AS table_name
  FROM performance_schema.events_statements_summary_by_digest d
 WHERE d.DIGEST_TEXT REGEXP "^(SELECT|UPDATE|DELETE|REPLACE|INSERT|CREATE)"
   AND d.LAST_SEEN >= CURDATE() - INTERVAL 7 DAY
 ORDER BY d.SUM_TIMER_WAIT DESC
 LIMIT 10\G

16 Oct 2017

Top 3 Reasons Why SQL is Faster than Java

I had a discussion with a colleague the other day. He was trying to write some SQL against a less-than-optimal data structure and was getting frustrated that it was looking "cumbersome". He wanted some advice, but was keen to simply write it as a mix of a few light SQL statements and some Java.

I would like to explain why this option would be slower than using "ugly looking" and "cumbersome" SQL:

1) Disk I/O


If you were to use Java, you would probably need to pull a larger dataset from the database, process it in some way and output the results. This means the database has to fetch that larger dataset for you, which means more (sometimes much more) disk I/O.

If you were to use SQL, you leave the fetching to the database's optimiser, which - with the help of indexes - may not need to fetch nearly as much.

In short, you let the database reach the right data, and filter out what not to fetch, for you.

2) Network


For the reason above, that larger dataset normally has to travel over the network - unless the Java app server happens to be on the same machine as the database, which is not very common nowadays.

That network overhead can become more pronounced in a virtual or containerised environment, where networking issues can be a headache. (Note: I am not an expert here, just observing from a distance.)

In addition, passing data over the network adds further overhead in parallel systems, where the data needs to travel to each machine before an operation can start.

3) Java's Garbage Collection


It may not be widely known, but Java adds quite a memory overhead for objects and some data structures - you can sometimes see a 100x difference. This does not mean that you need 100x the available memory, but it does mean the GC has to work extra hard, spending more CPU cycles to clean up the extra memory churn.

Bonus: SQL takes far less code than it would do in Java


While SQL can sometimes be an ugly string in your code, doing the same work in Java can take 30-100x more lines of code, including tests. You would also need to test that your code correctly does what SQL already does well, such as JOINs and aggregate functions.


Caveat: When is it a good idea to use Java over SQL?


For processing a lot of data, databases have the following concept:
Row vs Chunk vs Too Big

'Row' is the slowest way of getting and processing data - unless you need to guarantee a level of data quality that requires it.
'Chunk', or a set of rows, is just right. Usually this means querying a large table by 2 or 3 keys and getting back a result set that the database can handle well (see the sketch below).
'Too Big' is the case where the database cannot handle the number of rows well, and you would need to split your SQL into 'chunks' using Java and process them that way.
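
As a sketch of a 'Chunk' query (hypothetical table and keys): two or three indexed predicates keep the result set at a size the database handles comfortably, and looping over such chunks from Java is how you deal with the 'Too Big' case.

-- One 'chunk': constrained by an indexed customer and a one-month date range.
SELECT customer_id, SUM(amount) AS total
  FROM transactions
 WHERE customer_id = 42
   AND dateCreated >= '2017-01-01'
   AND dateCreated <  '2017-02-01'
 GROUP BY customer_id;
-- For 'Too Big', the application iterates: the same shape of query for the
-- next customer_id or the next date range, until the whole set is covered.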


Please also check out this book that I found useful in this matter:


6 Oct 2017

What is a Good Data Model

This is an excerpt from something I am working on about physical data modelling.

A well-modelled table should

  1. Be able to retrieve data quickly 
  2. Be able to store data quickly
  3. Be clear and easy to work with

A well-modelled table should not

  1. Store unneeded data
  2. Need to change its rows very often **
  3. Need too many JOINs to get you the data that you need

Purposes of a Data Model

A good data model should serve a specific and narrow set of purposes.
The more purposes the table serves, the:
  1. More indexes it will need.
  2. More cumbersome it will be to store and keep in memory.
  3. More overhead there will be when writing to it.
  4. More likely it is to become a single point of failure.
  5. More likely it is to suffer from locks and deadlocks.
  6. More likely it is to accumulate unneeded data.
  7. More difficult it will be to change your application when you need to change the table.
If you notice a pattern here, it is that a high degree of reusability can hinder the performance of a database. For data models to be effective, there needs to be a balance between reusability and single responsibility.

** A table whose data changes often and is transient may be better suited to a cache. If the data needs to be durable and transactional, consider a smaller table that records the current state of certain keys or values, combined with a log table that stores how it got that way, if that history is needed - as sketched below.
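
A rough sketch of that last point (all names here are hypothetical): a compact state table that is updated in place, plus an append-only log that explains how the state came about.

-- Current state only: one small row per key, updated in place.
CREATE TABLE account_state (
  account_id int unsigned NOT NULL,
  balance decimal(12,2) NOT NULL DEFAULT 0,
  updated_at timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP,
  PRIMARY KEY (account_id)
) ENGINE=InnoDB;

-- Append-only history: records how each state was reached, if it is ever needed.
CREATE TABLE account_log (
  id bigint unsigned NOT NULL AUTO_INCREMENT,
  account_id int unsigned NOT NULL,
  delta decimal(12,2) NOT NULL,
  created_at timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP,
  PRIMARY KEY (id),
  KEY idx_account (account_id)
) ENGINE=InnoDB;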

30 Dec 2016

MariaDB's Columnar Store

I have been keeping an eye on MariaDB's Columnar store progress for a bit longer than half a year.

MariaDB chose to take the InfiniDB code after Calpont closed shop about two years ago and implemented it in their product. I was a bit wary of InfiniDB, as it was a columnar store without compression that had mixed reviews on Hacker News.

However, it seems like MariaDB have pulled it off. They have added the InfiniDB engine to MariaDB with all its quirks and extra commands, and they have added Snappy compression as well. This is truly a huge win for them and their users, specifically in the area of reporting and analytics.

Here are two real-life examples, currently happening in the wild, of getting data ready for reporting:
1) MySQL -> Sqoop -> Hadoop - where you would need a) 5-6 additional servers, b) someone to set those servers up as a Hadoop cluster and then c) monitoring of the daily data transfer.
2) MySQL -> CDC -> Kafka -> Hadoop - a) more technologies to master, b) a few more servers and c) some more monitoring. But this time, it's streaming.

To set all of this up could take from a couple of months to a year.

Now with MariaDB, you have:
1) MariaDB + some SQL scripts -  such as INSERT INTO datawarehouse.facttable SELECT ... FROM site.table1 WHERE date >= curdate() - interval 1 day;
2) MariaDB -> Pentaho/Talend -> MariaDB - Could be a bit slower, but with a GUI and really a lot of monitoring out of the box.

As you can see, there are a lot fewer technologies, a lot fewer complexities and it is a lot more straightforward to develop.

It is also very important to add that no one other than MariaDB is doing this. The closest you have is TokuDB, which is great and can also last you a while, but a columnar store is definitely more suited to this type of task.

So once again, congratulations to MariaDB for offering such a huge benefit to its users.

22 Dec 2016

Using a Generated Column to help with date lookups

I have a table that has two columns: year and month.
While it's OK to search on exact dates or by year, it is harder to search between two dates.

Let's see if we can solve this issue by using a generated column.
(Table taken from https://github.com/jonathanvx/uk_land_registry_paid_dataset)
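
A minimal sketch of the idea, assuming the table stores the period as integer year and month columns (names here are illustrative; the real schema lives in the repository above). A stored generated column combines them into a proper DATE, which can then be indexed and used for range lookups:

ALTER TABLE land_registry
  ADD COLUMN period date
    GENERATED ALWAYS AS (CAST(CONCAT(`year`, '-', LPAD(`month`, 2, '0'), '-01') AS DATE)) STORED,
  ADD KEY idx_period (period);

-- Searching between two dates is now straightforward and can use the index:
SELECT COUNT(*)
  FROM land_registry
 WHERE period BETWEEN '2015-06-01' AND '2016-03-01';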


21 Dec 2016

JSON and MySQL Stored Procedures

You probably heard that MySQL 5.7 supports JSON.
But did you know that you can also use JSON with MySQL Stored Procedures - making them very flexible?

Less talk, more code:
(The data used is from the UK Land Registry dataset that I worked on.)
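
As a sketch of the idea (the table and column names are illustrative, not the original schema): a stored procedure that takes a single JSON document and extracts the fields it needs, so the caller's payload can stay flexible.

DELIMITER //
CREATE PROCEDURE insert_sale(IN payload JSON)
BEGIN
  -- Pull individual fields out of the JSON document (MySQL 5.7+).
  INSERT INTO land_registry_sales (price, postcode, property_type)
  VALUES (
    JSON_EXTRACT(payload, '$.price'),
    JSON_UNQUOTE(JSON_EXTRACT(payload, '$.postcode')),
    JSON_UNQUOTE(JSON_EXTRACT(payload, '$.property_type'))
  );
END//
DELIMITER ;

-- The caller always passes one argument, however many fields end up being used:
CALL insert_sale('{"price": 250000, "postcode": "SW1A 1AA", "property_type": "F"}');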