1 May 2018

How to Implement Technical Change in an Organisation

Your time is short and valuable, so I will not waste it on fluff. If you like and agree with these points, however, you will need to take them on and do more research on them yourself.

You are in a company and you see a glaring issue that you would like to solve. You have an idea, or some experience suggesting, that your solution might be helpful. However, it is frequently difficult to implement change in companies (there are entire books and MBA courses on it). Sticking to IT/technical problems, here are 3 points that can help you implement technical change in an organisation.


Social Capital


This could mean 'understanding politics', or 'being friendly', or 'being consistently reliable and hardworking'. Either way, you can implement change by spending your social capital. Please note: you will not get this social capital back, even if your idea works amazingly well.


Expertise


If you are the expert in the area and you want to implement technical change, the resistance to your idea may be greatly reduced. Personally, I like giving presentations to people inside the company. Then, if I want to make a change, the conversation usually goes this way: "Hey, do you remember that presentation I did a while back? I was thinking of implementing one of the points I had in there. It won't take too long and I will make sure it works." "What presentation? Oh, that one.. yeah, yeah.. sure. Just let me know when X is done and Y is finished."


De-Risk


Usually, a lot of the resistance to change is due to the risk of something going wrong. If your idea carries unacceptable or high risk, try to reduce the scope of the change or remove some of the moving parts. The idea is to still implement the 'core' of the technical suggestion, or to break it down into steps - where if the first step succeeds, the second step becomes less risky.


4 Apr 2018

MySQL Compression Olympics

And the results are in:


InnoDB (no compression/original) - 660GB
RocksDB - 209GB
TokuDB (snappy) - 144GB
TokuDB (LZMA) - 67GB
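One rough way to compare footprints like these (an approximation only - reported lengths are engine-dependent, so the size of the data directory on disk is the ultimate source of truth) is to sum the reported data and index lengths per schema and engine:

-- Approximate on-disk footprint per schema and storage engine
SELECT table_schema,
       engine,
       ROUND(SUM(data_length + index_length) / 1024 / 1024 / 1024, 1) AS size_gb
FROM information_schema.tables
WHERE table_schema NOT IN ('mysql', 'information_schema', 'performance_schema')
GROUP BY table_schema, engine;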


Benchmark performance with mysqlslap on production sample queries:
(8-9 very quick SELECTs + 1-2 medium SELECTs)

InnoDB (original)
Benchmark
Avg: 0.100 seconds
Min: 0.091 seconds
Max: 0.182 seconds
Total: 5.101s

TokuDB (snappy)
Benchmark
Avg: 0.100 seconds
Min: 0.089 seconds
Max: 0.183 seconds
Total: 5.106s

RocksDB
Benchmark
Avg: 0.113 seconds
Min: 0.104 seconds
Max: 0.164 seconds
Total: 5.730s

TokuDB (LZMA)
Benchmark
Avg: 0.099 seconds
Min: 0.090 seconds
Max: 0.155 seconds
Total: 5.037s


Testing Platform:
    Platform | Linux
     Release | CentOS release 6.8 (Final)
      Kernel | 2.6.32-642.11.1.el6.x86_64
Architecture | CPU = 64-bit, OS = 64-bit
   Threading | NPTL 2.12
    Compiler | GNU CC version 4.4.7 20120313 (Red Hat 4.4.7-17)
     SELinux | Disabled
 Virtualized | VMWare
  Processors | physical = 2, cores = 4, virtual = 4, hyperthreading = no
      Speeds | 4x2299.998
      Models | 4xIntel(R) Xeon(R) CPU E5-2699 v3 @ 2.30GHz
      Caches | 4x46080 KB
      Memory | 15.6G


Observations - Load during conversion:
TokuDB (snappy) - Load 1.07, I/O throughput around 30MB/s
RocksDB - Load 1.09, I/O throughput around 50-70MB/s
(there seems to be a data load round and then a second round of compression afterwards)
TokuDB (LZMA) - Load 3-4, I/O throughput around 7MB/s

20 Feb 2018

CST/Cogs Framework - IT Organisational Principles for Craftsmanship and Innovation

Introduction


My name is Jonathan. I have been working for 11 years, trying to improve the performance of systems that use databases. Through that experience (and by observing leading people in my industry), I have developed a knack for viewing everything as a system and then identifying bottlenecks within that system.

As of the middle of last year, I have started to apply this knack to human systems at work. I have also intensively studied concepts from psychology, philosophy, political theory, social systems, economics and business strategy.

After noticing some shortcomings that began to increasingly frustrate me at work, and in the spirit of 'don't just complain, try to fix it', I have come up with a system of organising work in IT organisations that I have given a lot of thought to.

In this post (or white paper), I plan to explain some shortcomings of our current way of working in IT and a possible future improvement to those systems.


In the Beginning 




IT organisations, or the IT department within organisations, typically used to look like the diagram above. You would have Developers, QA, Database Administrators, System Administrators and Network Administrators. Some companies still have this same structure with slightly different divisions.

There were problems with this structure over time. The main one, I would say, is that the objectives of the different teams diverged from those of the overall company to the priorities of each team. In other words, they became fiefdoms or tribes and started warring with each other.

Not physically warring with each other. More like a sort of

  • Territorial protectionism: "This falls into our areas and we will decide whether to do it or not" 
  • Resource allocation: "Team X needs us to do Y. It will take a lot of work and I can't be bothered with it now. I'll just tell them to write me a ticket and I'll put it in the backlog for a while" 
  • Communication process creep: "I know that the ticket was sent 2 months ago, but I have not received the detailed documentation of what to do, nor do I have written authorisation from manager X and head of Y"




If you look at the above chart as a hierarchy or a social system, it would look like Feudalism.

Story: A Java consultant once joined a company with a similar feudalistic structure for a 6-month contract. He asked the DBA team to give him an Oracle dev database so that he could develop what was asked of him. He wrote up a ticket and waited. After a while of not getting the database, he continued with other things and tried to compensate with what he had available. There was some back and forth between the heads of his department and he did mention the lack of a dev database in meetings.
However, the contract finished at the end of the 6 months and he left the company. One month later, he received an email that his Oracle dev database was now ready for him to use.


And Then What Happened


Around the start of the first dot-com boom, small start-ups started to rise. In those start-ups, developers were expected to set up the entire system - what we call full-stack developers today. As those companies succeeded and grew, some chose not to split responsibilities off into the feudalist model, but instead decided to add more multi-skilled developers.
This produced the following, and arguably the current, model for small to mid-sized companies:




What you have now is what I call a developer-centric IT company and, if I were to pick a hierarchical structure for it, I would say Monarchy.
Two phenomena happened to get us here: job compression and automation.

Job compression means that a company restructures its processes to have fewer stages, which reduces the wait time between stages.


The example above shows a mortgage approval process. There are 4 stages. Each stage is a person with different expertise and different authority. Between each stage, the 'work request' sits in that person's inbox until they can get to it. The combined processing time and queuing time is 18 days.

Job compression would give 1 person enough authority and expertise to make a decision on the approval process. 


You have now reduced the time it takes to approve a mortgage from 18 days to 7 days. Note that this was largely accomplished by reducing the overall queue time. 

As developers needed to take care of more areas of expertise, they did so by applying developer philosophies to those problems - in this case, automation. This brought about innovations like Puppet, Chef and Ansible, alongside previous SysAdmin innovations like virtualisation and, later, cloud computing.
You can now, using code, boot up a container of a web server with all the files, scripts and images and run a slew of black-box tests against it to see if it fully works.

Accordingly, developers now take on several roles in the IT organisation:
  • Development
  • Business Analysis
  • Quality Assurance
  • Database Administration
  • System Administration (now DevOps)
  • Security
  • Data Engineering

However, it is difficult to hold all that information inside one's head, and developers are using these automations as a crutch to progress with their original work. For example, you can download a few Puppet modules and install and begin monitoring a new high-availability database, but you have now lost the expertise (in the company) of what is going on under the hood and how to fix issues when they occur.

Very few innovations have been made in the areas outside the realm of pure development, as there are fewer experts in companies to make those innovations.
For example, while we have automated processes for storing and managing database schema changes, we have not had any innovations in deploying dev/test/staging databases that contain actual data to test against. Nor can we use existing automated systems for managing schema changes when our production databases become too big.

There is a general uneasy feeling when needing to make changes to systems we don't fully understand. This negates the 'safe to fail' environments which we use today to make innovations. We also tend to apply 'philosophies' that work in one area to another. This is sometimes helpful, but other times detrimental.

Story 1: I was involved in a data batching process that required roughly 200 million items to be processed through an existing API. Had that process gone through the usual way, it would have taken 64 days, with a fair chance of crashing.
The idea to improve this process was to add more web servers and parallelise the work into as many threads as possible. This is a common philosophy that developers have picked up due to limitations in the speed of CPU cores. As core speeds have not improved in 7 years, the only option to improve performance would be to split the work across a number of threads.
I identified that the API spent the majority of its time making database calls and that, ultimately, the bottleneck would be the hard disk IO and certain mutexes.
I recommended offloading part of the work to the database: this involved loading 200 million items into a temporary table in the database, which took 7.5 minutes using a single thread. The rest of the work still needed to go through the API and took 8 hours to complete. Had the whole process been applied against the database in an efficient manner, I would assume it would take up to 45 minutes.

Story 2: A company had a batch process that took around 2 hours and had a detrimental effect on the website during that time. I configured the database to handle such loads better and brought the time down to 30 minutes using 6 application servers. I then rewrote the batch process to be more 'database friendly' (pushing work down to the database) and reduced the time to 3 minutes on 1 application server.


Future?


So far, we have had a Feudalistic hierarchy with warring fiefdoms, and we have had a Monarchy with lost expertise and reduced innovation in those areas.
How can we leverage more advanced social systems, such as a Capitalistic/Democracy?

Let's look at Capitalism for a moment. Capitalism says that most things in nature follow the Pareto principle: 20% of the people produce 80% of the products or services. The opposite would be Communism, where you would say: "I need 500 people to start farming wheat. You there! 500 random people! Start farming wheat". The idea with Capitalism is to encourage innovation and progress by rewarding those people who produce more of the products or services. At the same time, if you are not one of those top 20% of people, you can move to another market of products or services and become part of the top 20% there.
So the idea of Capitalism is to create many markets, or areas of skill, where the top people can then innovate and drive progress.

What could happen in the future is that IT companies structure their teams based on competency-based hierarchies. Meaning, areas of specific expertise and philosophies which are exclusive to one particular domain over another, thus maximising results in the domain they are suited to.

I have a list of what I believe are those specialist domains, but before I present it, I would like to go over two concepts: 'economies of scope' and 'complexity = mess'.


Economies of Scope is a term from the world of business. You have probably heard of economies of scale, where you have a few products and you use bigger factories and bigger machines to pump out the same product in large quantities, which means cheaper costs.
For example, you can have a factory that makes 3 types of sandwiches. You purchase bigger machines and improve your processes as much as possible to make those 3 sandwiches as fast as possible and remove all possible waste.
Economies of scope, on the other hand, is a system where you try to produce different and varied products at a cheaper price. Take Subway, for example. You can go into one and order a high variety of sandwiches at a slightly higher price than if you bought a prepackaged sandwich in a shop.



The idea with economies of scope is to break down the process of creating new products into sub-processes that have a very defined scope and then set up communication systems to co-ordinate between those defined processes as well as have some synergy between them.


Complexity = Mess means that a complex system is difficult to work with. It is also difficult to work in a mess. Now, complexity doesn't exactly equal a mess, but neither of them is an ordered and organised system. So (complexity or mess) is Chaos and not Order, in this context.



For us to get to order, we need to simplify the system by organising the mess with rules. Too many rules lead to complexity, so once there, we need to either remove unneeded rules or find patterns or philosophies in the rules and use those to simplify the system.

CST/Cogs Framework


CST/Cogs Framework


The idea with this framework is to build on what we have discussed so far. It would be a more Capitalistic system where the right expertise is managing the correctly defined scope. That way, innovation and progress can occur in an optimal environment. 
The currency that is traded in this environment is skill. The idea is that skill leads to revenue, similar to how, currently, money leads to progress.

The theme of this framework is: Craftsmen making individual cogs in a large machine. Craftsmen, meaning experts in their area, make the best possible cogs, but they are part of a system that needs all its cogs to work together to achieve its objectives.

We have divided the areas, but we might risk falling back to Feudalism. How can we prevent that?

The framework needs to focus on three philosophies: Competency, Simplification and Transparency. The more we have of each, the better for the overall system.

Transparency is, ultimately, the best way to prevent fiefdoms from occurring. A fiefdom usually silos information and presents it selectively to other parts of the company to benefit itself.
For example, let's say an unethical manager would like a talented individual to stay in their division. That manager can simply not promote that individual and even give negative reviews to keep them where they are.
If, however, HR had access to objective metrics about all the employees, they could see that that person produced good work and has been in their position for some years. They would promote that person before they move to another company.

Some metrics that can help with Transparency:
  • Time until first 100 lines of code (gitprime.com)
  • Complexity rating of class (PMD)
  • 95% API response time
  • Average time for SEV2 tickets resolution
  • Orders per week
  • Website feature usage (clicks) per week
  • Usefulness of App feature - survey




Let's take three measures of the output of a system to see how these philosophies could work: Speed, Control and Quality.

Speed

  • Competency: If we have experts, then we can make the best choices to build the products instead of trying out many choices until we reach the right one. 
  • Simplification: If we simplify the system as much as we can, we can both integrate new systems faster as well as produce easy to use systems. In a lot of ways, simplifying equals business agility as it helps you change the business faster to meet the needs of the marketplace. 
  • Transparency: If we have metrics that show us where bottlenecks are in the system, we can make those systems as fast as possible.

Control

  • Competency: If we have a high degree of competency for a defined scope and area, then we have a high degree of control over the system. 
  • Simplification: If the system is simplified, it is easy to use.
  • Transparency: If the movement of work is transparent, we can monitor the time it takes to exchange communication and complete work in the system. Another way of looking at it: we can see when one cog is moving slower and slowing the system down. Ultimately, this is where a manager would need to step in.

Quality

  • Competency: If we have craftsmen, the cogs they produce are of high quality. 
  • Simplification: If the products we deliver have been simplified, it provides an easy to use product for the customer (perceived quality).
  • Transparency: If we have metrics to see how popular the new product is and how it is used, we can improve the quality of that product. Ultimately, this will need direction from 'the business' and would require interaction with Technical Business Analysts (TBAs in the diagram).



New Roles

This framework keeps a definition for an old role, Managers, and adds a new role which I felt should be included, which I call Technical Business Analyst. Both are very important for the framework, so I will explain them now.

Managers

I would like to start off by saying that managers do not equal team leaders. In developer-centric companies, there are very few managers; there are mainly team leaders: developers who have been promoted to lead other developers.

Dilbert.com

It is no secret that people do not like managers who have no idea about their technical role. In addition, there was a study that determined that 65% of managers actually produced negative value for the company. On the other side, good managers produce huge value (Pareto Principle) for the company, and that is not something we should write off.

Currently, with the lack of managers in IT companies, there is a reliance on hiring someone who 'is the right fit', which basically outsources the need to manage to the individual. If they don't work out, then there must be something wrong with them.

In the context of a Capitalistic/Democracy, what role would managers play?
Well, in a Democracy, there is a need for law-makers to create systems for people to interact in ways that are helpful to society. There is also a need for courts for dispute resolution.

Managers should think of systems inside the company that promote honesty, tolerance and freedom of speech. Managers should also resolve disputes in the company and look out for workplace complications before they grow into full-blown warring tribes. Bear in mind that this framework encourages experts, and experts usually have opinions.


Following the values of the framework, let's go over what a manager should do:

  • Competency: The manager should be competent at coming up with social systems that are effective for that specific company culture. The idea is that the cogs turn smoothly.
  • Simplification: The manager should set out rules in those systems, but set out very few rules and then enforce them. With regards to communication, less is more. The manager should make sure that a group can handle things within their own expertise and scope and try to reduce communication dependencies.
  • Transparency: The manager should implement metrics gathering, both to know how the IT company is performing and to be transparent to stakeholders outside IT and build up trust with them.


Technical Business Analyst


Business Analysts seem to be something that only large companies have. That is a shame, as there have been some huge innovations in documenting and expressing business knowledge in the last 5 years.
I have been looking into Business Process Model and Notation (BPMN) 2.0 and Decision Model and Notation (DMN) and I have found them very useful in bridging the understanding gap between business and IT.

Story 1: I was trying out decision tables to document requirements. I talked with the Product Manager and asked her to give it a try. She took a ticket that a developer had quoted as taking 5-8 days to implement. She went over the requirements and built a decision table in Excel. She then showed it to the original developer, who said: "If this is all that is required, then it should take 1-2 days to implement".

Story 2: I was working on a way to document technical processes. I went over some code and found an if-then-else "pyramid of doom" in it. I then tried to put the conditions from the code into a decision table. After I was finished, I showed it to the original developer and he instantly understood it and made a correction to the table. I then told the business analysts in the company, who were extremely impressed that that developer understood it so quickly. Apparently, they had had difficulties communicating business requirements to him before.


In the old way, BPMN 1.0, mapping a process would look something like this:



I am sure everyone has run into something like this glued to a wall in an office. It's not very clear what is going on.

What happens in BPMN 2.0 and DMN, is as follows:

Decision Table - Discount Decision
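To give a feel for the format (the values below are purely illustrative, not taken from the original table), a discount decision table might look like this:

CustomerStatus | OrderTotal  | Discount
---------------+-------------+---------
Active         | < 100       | 0%
Active         | 100 - 500   | 5%
Active         | > 500       | 10%
Suspended      | any         | 0%

Each row is a scenario: given the inputs on the left, the output on the right applies.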

And then, the process mapping is simplified:

BPMN 2.0 - Notice the small square/hash icon in the discount decision

The magic happens in three different ways:
  1. The business logic is captured in a way that is easy for the business user to understand (notice, it's in Excel)
  2. That same decision table is understood by the developer
  3. The process mapping is now easy to understand, which in turn makes it easier to understand more parts of the system.

We've gone over the business side, but we can go a bit further and apply this same process mapping to the technical side:

DMN for a Technical Process


So when you go into the 'Process Order' task from the diagram above, you would go to a technical process diagram like the one listed below:

DMN for a Technical Process


Technical Business Analysts should be the ones to go over both sides and create both of these types of diagrams and tables. This should achieve a few things:
  1. Provide a counter-balance and due diligence to new business requirements: "I understand you would like this new feature. Could you please explain to me in detail what it is that you need?"
  2. Reduce the time groups of developers spend next to whiteboards.
  3. Reduce risk by using decision tables to notice scenarios that were not considered: "We have Active for CustomerStatus, but I don't see a scenario where the OrderStatus is suspended."
  4. Reduce the meetings between developers and business users.
  5. Reduce the scope that developers need to work on and increase focus on a specific task.
  6. Create a system of business and technical documentation. 

TBAs should spend time going over the backlog of tickets. This should increase the velocity of the team if the tickets are very well defined.




When a new ticket is taken on by the team, a developer and a QA engineer should pick up the same ticket: The QA should start writing functionality tests based on the scenarios in the decision table and the developer should write the code and test it against those tests.

This role should cover the following points from 'Boehm's Top 10 Software Defect Reduction list':
  1. Finding and fixing a software problem after delivery is often 100 times more expensive than finding and fixing it during the requirements and design phase
  2. Current software projects spend about 40 to 50 percent of their effort on avoidable rework.
  3. About 80 percent of avoidable rework comes from 20 percent of the defects


In addition, this role should also prevent or at least greatly reduce cancelled projects or priority changes. I understand that these are extremely demoralising for developers.

Let us finish up by going over the framework values with this role:
  • Competency: This is a new role for most small-to-medium companies. It should streamline the development process by adding an expert into the right area and reducing the scope of work for other people in the company.
  • Simplification: Having easy to understand diagrams and documentation simplifies development work. TBAs should also identify parts of the system that could be simplified (value stream mapping) and suggest very specific and narrow work for technical debt.
  • Transparency: TBAs should make the whole system easy to understand, both for IT and for the business users outside of it.



F.A.Qs


  • Is this system a replacement for Agile? 
    • No, it's completely complementary to it and would probably better serve the principle of having 'multi-disciplinary teams'.
  •  How do you prioritise or expedite work in this system?
    • That would be up to the manager. Technically, if you would like the option of expediting, you would need to leave some spare capacity in the teams.
  • What if there is not enough skill in house?
    • If you don't have the skills you need in the company, then consider bringing in an outside consultant - even if it's for a few days. You will not gain new innovations, but you will gain from other companies' experience.
  • What would happen if there isn't enough work to justify a new field?
    • It could be very possible to let one person in the company have a dual-role and still have time to try and innovate in this new field. 
  • How can I split up an area of expertise without it leading to a huge overhead of communication?
    • That would really depend on you and your needs. You need to find a balance of 'less is more' with regards to communication, but also have enough work concentrated in front of an expert for them to recognise patterns and generate innovation.


5 Feb 2018

My MySQL Linux Tuning Checklist


Things I look for when optimising or debugging a Linux OS:
  • IO scheduler (noop or deadline)
  • Linux kernel > 3.18 (multi-queuing)
  • irqbalance > 1.0.8
  • File System: noatime, nobarrier
    • ext4: data=ordered
    • xfs: 64k
    • logfiles in different partition (if possible)
  • Swappiness (0 or 1, depending)
  • Jemalloc (if needed)
  • Transparent hugepages - disabled
  • Ulimit (open files) >1000
  • Security
    • IPtables
    • PAM security
  • Raid Controller/Smart HBA
    • write-cache enabled
    • battery backed
    • For HP servers: hpssacli controller all show (detail|status)
    • Tweak cache-ratio to 50/50 or 75/25 (and test)

4 Jan 2018

A DBA Analyses 'The Phoenix Project'

Last year, I read 'The Phoenix Project'. I liked it and, having been an IT manager in the past, I did experience high blood pressure during the SEV1 scenarios in the book.



I also liked the way DevOps methodology helped solve issues with IT as well as help the company succeed overall.

As a DBA, however, there were some things about this story that didn't make sense to me.
Bear in mind that the two major incidents in the book were database related. So in this post, I would like to jot down some things I noticed and how they could have been solved when looked at through a different lens.


Caution, Spoiler Alert



Incident No.1 - Tokenisation


In the first incident, a 3rd party supplier ran a script against the database to tokenise some personal data. This was related to an issue that information security had highlighted, but it ended up affecting HR and accounting.

In the book, there is a complaint that there was no test environment to see if this script would have any negative effects on other parts of the organisation. 

Now to me, this does make sense and at the same time, makes no sense at all.

If you mean that, back in the day, it was hard to get full environments set up to test changes on your application servers, then you would probably be right. Today, perhaps based on the methodology that this book introduces, you probably do have those environments set up: either virtualised or in a container.


Testing Database


What doesn't make sense to me is not having a test database. Reading through the book, there are mentions of an Oracle database and some MS SQL databases. As a mainly MySQL DBA, I have not always worked on those databases, but I have worked next to people who have. My observation is: if you have an Oracle database, you almost certainly have other dev/test/UAT/staging/pre-prod database servers as well.
Why do I think this? If you can afford to pay for an Oracle database, you would probably get more testing databases under the same license - the license being the most expensive part when using Oracle.
So a testing database, to test things that may affect the precious and expensive database server, is almost a certainty.


DBA as a Gatekeeper


Now, it seems shocking to me that the DBA had not been involved in the process of validating this 3rd party script. Old school Oracle DBAs are involved in everything that happens on their servers.
Need a user on the database? Go to the DBA.
Need a database server for a new app? Please fill in these forms in triplicate, detailing the projected usage for the next 5 years.
In most companies, an Oracle DBA may even set up integration between other products like Oracle HR and finance.
So how you could have run something that significant against the database without their knowledge is beyond me.

Assuming that a database field had in fact been messed up, Oracle DBAs have a TON of really enviable backup and restore features.
They can query a table to view all the backups that are available to restore from and choose the point in time that is closest to what they need. A DBA could simply restore the database, fetch the specific table that had its column changed and apply it to the production database.
Is it more than one table? Restore the database, go over the changes in the logs up to a point in time and skip the parts the conversion script applied.


It seems to me that the authors wrote the book based on their own experiences, but those experiences occurred in companies that had no DBAs. Not having a DBA is a product of start-ups, not of old school 1500-person car-parts manufacturers.


Incident No.2 - Conversion


There was a crippling database issue to do with a database conversion that was needed alongside a new code rollout. The issue caused a 2-day - break out the handheld receipt machines - downtime to the system.


Works on My Laptop


During the initial investigation, a developer said something along the lines of 'it worked fine on my laptop' when describing the performance of the database conversion scripts. The problem was that on production, it was x1000 slower. Now, I have written before about how to not be the one that deploys that slow query to production, and this really illustrates that situation. Apparently, they still didn't have a database testing environment to test it against.

However, on the topic above of 'DBA as a gatekeeper':
Why wasn't the DBA reviewing the conversion scripts, or involved in the code review process for SQL statements?
It could be that there weren't any DBAs in the company.

Another point was that they couldn't cancel the conversion after they had started it and noticed how slow it was. If it was within a transaction or a single ALTER TABLE statement, why not?
If too many things had changed, could they not restore the database to a point in time before the changes were made?
Was the conversion x1000 slower instead of maybe x10 slower because of a foreign key check that could have been turned off?
A DBA would have given you those options.
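In MySQL, for example (the book's stack is Oracle and MS SQL, so this is only an illustration of the principle), a DBA might de-risk a data conversion roughly like this - the table and tokenisation function names are hypothetical:

-- Illustrative sketch only
SET SESSION foreign_key_checks = 0;  -- skip FK validation during the bulk change

-- run the conversion in small, committable batches
UPDATE customer_data
   SET card_number = fn_tokenise(card_number)  -- hypothetical tokenisation routine
 WHERE id BETWEEN 1 AND 10000;

SET SESSION foreign_key_checks = 1;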


Project Unicorn


After the hero turns things around and things begin to pick up, they decide to start a separate project to add predictive features to the main project. In it, they decide to bypass seeking permission for database changes and create a new database into which they copy production data from several locations. I very much like this approach and it falls in line with the reactive microservices pattern.

This makes the book ahead of its time. Instead of managing one main database for the website (although they did mention in the book that they had a couple of dozen database servers), they can break it up into several database servers, based on functionality. What is required is to use tools - and I believe in 2012 they meant ETL tools - to migrate the needed data into these new database servers.

This would still need a DBA though, or at the very least a data engineer with an ops background, as you now need to:
  • Data model new environments based on data from old ones
  • Create and maintain data pipelines
  • Monitor for errors and fix data that didn't make it 
  • Fix data drift and re-sync data across servers

In addition, you now need to back up these additional database servers and monitor their availability and performance.

So while it adds complexity to the backend and you are now moving from simple database maintenance to more of a data architecture role, it is the way forward. It is certainly the only way to have proper microservices with their own single-purpose and loosely coupled data stores.








3 Jan 2018

Top 4 Reasons Companies Won't Fix Their Database Issues

When I consult at a company, I aim to identify issues with their database and give options on how to solve them.
However, implementing those solutions can sometimes be a lengthier process than it needs to be, and sometimes they may not be implemented at all. During my career, I have observed some reasons why that might happen within organisations.


Obviously, the following observations will never happen at your company. I am just writing about them so that you might notice them in other places.


1. Legacy code 


People don't like to have anything to do with legacy code. It's painful. It's difficult. It's risky to change. It runs business-critical functions. Worst of all, they didn't write it. This can be a problem as, often, the most crippling database issues require changes to legacy code.

2. New Technologies or Methods


People don’t like you to introduce any new technologies they don’t want to learn and maintain. Not even different methods in technologies already being used. No fancy upgrades to the DB server, no new load balancers and certainly don’t start using SQL statements in the code over their existing ORM.

3. Old Technologies or Methods


At the complete polar opposite, people in tech organisations don't like you to introduce boring technologies. What would be the point of introducing boring (yet tested) technologies when they could be playing around with shiny new ones? There is a caveat to this - groups prefer it when other groups they depend on (let's say developers depend on ops) choose to use boring and tested technologies. Just not for themselves. And vice versa.

4. Management Involvement


Last, but certainly not least, no one from upper management will get involved in resolving these issues and push solutions forward. No project/product manager/agile coach will be assigned to chase up issues. As far as they are concerned, this is an engineering issue and, as engineers, you need to sort it out yourselves. Only 'change requests' from the business have managers around them.


Final Thoughts


After some years of analysing database systems for performance issues, I am finally realising that I should also analyse human systems for performance issues.


27 Dec 2017

Setting Up Databases in your Development Environment

Setting up databases in development environments can be challenging.

What I usually see is some automated process for setting up empty databases with up-to-date data structures. This is helpful for integration testing, but is tricky for actual development as well as performance testing.

For example:
  • It is difficult to conceptually get your head around writing a query when you cannot see any data in your tables
  • You cannot possibly know if your query is slow before deploying it to production without running it against 'some' data.

In addition, there can be a strict requirement to not let sensitive customer data be available outside certain secure environments and certainly not available to development environments.

Step 1


What you would need to do is go over your database and separate the elements into different criteria:
  1. Data Structure
  2. User Management
  3. Referential Tables
  4. Primary Tables
  5. Child Tables
  6. Mapping Tables
  7. Sensitive Data
(explanation below)

Data structure management and user management should be, by now, a solved problem. You have systems like Liquibase and Flyway that manage this for you. Essentially, you can use these systems to automatically generate containers which your developers can then use, or to set up empty local databases on developer machines using SQL scripts.

For user management, MySQL has a PAM plugin to manage users via LDAP, but you can manage this through scripts as well.

Referential tables (tables that contain data such as id = 1, status = 'DONE') should be small enough to be included in this stage as well. You need to identify which tables contain this very basic data and add it to the data structure repository or SQL file.
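As a rough starting point for that identification (the schema name is a placeholder, the row-count threshold is arbitrary, and row counts in information_schema are approximate for InnoDB), you can list small tables as candidates:

-- Candidate referential tables: small row counts, small footprint
SELECT table_name,
       table_rows,
       ROUND((data_length + index_length) / 1024) AS size_kb
FROM information_schema.tables
WHERE table_schema = 'your_app_db'   -- placeholder schema name
  AND table_rows < 1000              -- arbitrary threshold
ORDER BY table_rows;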

Step 2


This is where things get a little bit tricky: you need to identify which tables are your Primary 'feed data' tables, or Object tables. You then need to identify which tables are the Child tables of those Primary tables. Lastly, you need to identify which tables Map keys across different tables - either Primary to Child (as in many-to-many relationships) or Primary to Primary.

Once you have identified these tables, you can decide how much data you would like to keep in your development databases. My recommendation would be to go in one of these three directions (a sketch of the third follows the list):
  1. Specify a set number of keys in the Primary tables and then get the data from the Child and Mapping tables based on those keys.
  2. Specify a specific set of keys from the Primary tables and then get the data from the Child and Mapping tables based on those keys.
  3. Keep data by a date range for the primary table and then use its keys to populate the Child and Mapping tables. 
Make sure that the amount of data is adequate for your needs: not too small and not too large.
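A minimal sketch of the third direction, with hypothetical table and column names (orders as the Primary table, order_items as a Child table, order_tags as a Mapping table):

-- Keep 3 months of Primary data, then pull Child and Mapping rows by key
CREATE TABLE dev_orders AS
SELECT * FROM orders
WHERE created_at >= NOW() - INTERVAL 3 MONTH;

CREATE TABLE dev_order_items AS
SELECT oi.*
FROM order_items oi
JOIN dev_orders o ON o.id = oi.order_id;

CREATE TABLE dev_order_tags AS
SELECT ot.*
FROM order_tags ot
JOIN dev_orders o ON o.id = ot.order_id;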

Step 3


This separation of table types can now help us identify sensitive data. Data structure and Referential tables should not contain sensitive data. Neither should Mapping tables. What would contain sensitive data are the Primary and Child tables.

Identify the columns in those tables where sensitive data may be kept and either:
  • Empty that data
  • Give it a default value (all emails will be test@email.com)
  • Obfuscate those values in some way

You can change the data either by outputting it with those changes into an SQL file, or by dumping it into a staging database, changing the sensitive data there and then dumping it into an SQL file with a tool.
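A minimal sketch of the default-value and obfuscation options, run against a staging copy (table and column names are hypothetical):

-- Default values and obfuscation in a staging copy before the final dump
UPDATE dev_customers
   SET email     = CONCAT('user', id, '@example.com'),  -- default value per row
       full_name = CONCAT('Customer ', id),             -- obfuscated
       phone     = NULL;                                 -- emptied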

Ideally, this stage should go through a QA process/person before the company releases the data to generally available containers or repositories which keep a history of changes.

Conclusion


By taking the time to separate the different elements in a database, you can make it less complicated, and you will then be more able to automate parts of the database into your CI/CD process.






8 Dec 2017

Data Modelling: Counter Table

A counter table is a table that keeps counts of particular items or for certain keys. This can range from page counts on your blog to keeping track of a limit the user is allowed for a particular item or service.


Usually, a counter table would be better kept in something like Memcached or Redis as frequent increment updates would be better suited to those in-memory systems.

MySQL, and InnoDB in particular, has many stringent systems to make sure that your data has been reliably written to disk. Going through those systems alone can make a counter table unsuitable, even before considering the speed of updating the actual table.

However, sometimes there is a need for certain assurances in failure scenarios that in-memory systems are not suitable for - when they crash, the data kept in memory is cleared out.

In those cases, may I recommend what I call an 'aggregate counter table'. The idea here is to replace lots of increment updates with periodically counting the original base table you are interested in having counts for.

In short, instead of:

INSERT INTO base_table;
UPDATE counter_table set value=value+1 where key=key1;


You would do

INSERT INTO base_table;
On interval (like 1 to 5 seconds):
- INSERT INTO counter_table 
- SELECT key1, count(1), max(primarykey) FROM base_table 
- WHERE last_count_position
- GROUP BY key1
- ON DUPLICATE KEY UPDATE value=value+recent_count


In order to aggregate the base_table correctly, you need to keep some sort of record of the last time or position you read from the base table. What I recommend you consider is either the primary key (assuming it's an integer) or a last_updated timestamp column.

Below is an example of a counter table that keeps the last id of the primary key it counted from the base table:


CREATE TABLE counter_table (
  key_id int(10) unsigned NOT NULL,
  counts int(10) unsigned DEFAULT '0',
  lastprimary_id int(10) unsigned DEFAULT '0',
  PRIMARY KEY (key_id),
  KEY idx_camp (lastprimary_id)
) ENGINE=InnoDB;


In order to run your 'refresh' query, you would first need to query the counter_table like this:
SELECT max(lastprimary_id) from counter_table;

Then populate the counter table by including in your above INSERT INTO SELECT statement a:
WHERE base_table.primarykey > lastprimary_id
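Putting the pieces together, one run of the refresh could look like this (counter_table is as defined above; base_table, key1 and primarykey are the placeholders from the earlier sketch):

SELECT COALESCE(MAX(lastprimary_id), 0) INTO @last_id FROM counter_table;

INSERT INTO counter_table (key_id, counts, lastprimary_id)
SELECT key1, COUNT(*), MAX(primarykey)
FROM base_table
WHERE primarykey > @last_id
GROUP BY key1
ON DUPLICATE KEY UPDATE
  counts = counts + VALUES(counts),
  lastprimary_id = VALUES(lastprimary_id);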


This should be very fast and will prevent the many 'database-attacking update queries' that can become a serious bottleneck to your performance in the long run.


Downsides

This method doesn't factor in whether rows in the base table were UPDATE'd or DELETE'd. It just counts rows. If that is a requirement, you can revert to using UPDATE statements such as:
UPDATE counter_table SET value=value-1
with the understanding that this should happen infrequently.

You also now need to maintain a procedure and monitor that it is running at the intervals you need. Fortunately, MySQL has scheduled Events to help with that.

30 Nov 2017

Archiving for a Leaner Database

There is an idea that data is sacred and needs to be stored forever. However, if you keep your data forever, you will, sooner or later, have a very large database.

In order to keep operations running smoothly, it is useful to allocate data that is used more frequently to certain tables and keep data that is used less frequently in archive tables.



Some examples


You have a large table that stores transactions and its size is 200GB. It is that way because your company has been around for 5 years, but in the last year your company has been successful in acquiring new users and your data has doubled.

Congratulations.

In your database, you now have a table that has 5 years worth of data, but your application usually only needs about the last 1-3 months. There may be a use case where someone might require data about a customer for a period starting a year ago and there may also be a reporting request to aggregate data for the last 3 years. Therefore, to play it safe, we need everything in one table.

However, this greatly affects performance. It would be more helpful to try and separate those 3 concerns into 3 different tables:
  1. A table for a 3 month period for frequently used data
  2. An archive table that keeps all old and infrequently used data
  3. A summary table for reporting
With these, we comply with the Single Responsibility principle and greatly improve performance for each purpose.

Having a 'main' table with only the last 3 months' worth of data greatly helps you scale.
For example, even if your data doubles every year for the next 3-5 years, you still only have to manage a subset of that data. So if those 3 months once took 20GB to store, the following year they would take 40GB and the year after 80GB: these sizes are still very manageable by today's standards.
In addition, hardware and software improve over time, so there can be a legitimate expectation that simply by upgrading and updating, you can keep humming along.

Taking the effort to identify 'hot' and 'cold' data and allocating it to the right tables can mean that your scalability concerns are addressed for the long term.



How to implement Archiving?




Table _archive


One way to implement archiving is by having a table that ends with _archive.

To enable this, you will need to be able to redirect your queries (mainly from your code, or by a proxy that can do that) to the main or the archive table, based on particular criteria.

For example, if the date is less than today's date minus 30 days, then send it to the archive table, if not, then the main table.

Another example may be, if the status column equals 'inactive' send to the archive table.

You would largely need to dig through your code for that table and wrap the queries with an IF statement that sends them to the right table.

You would also need a data process that migrates data from the main table over to the archive table when it gets old or becomes cold.
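A minimal sketch of such a migration job (the table and column names are illustrative; tools like pt-archiver do the same thing in controlled batches):

-- Capture one cutoff so the INSERT and DELETE cover exactly the same rows
SET @cutoff = NOW() - INTERVAL 30 DAY;

START TRANSACTION;

INSERT INTO orders_archive
SELECT * FROM orders
WHERE dateCreated < @cutoff;

DELETE FROM orders
WHERE dateCreated < @cutoff;

COMMIT;

-- For very large tables, run this over smaller date or id ranges
-- instead of one big transaction.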



Partitioning by Date


While this is not a different physical data model, this does help split the table into a few underlying tables, achieving the desired purpose without application code changes.

It is very common to partition your table by date, specifying which data is old and allocating it to the right partition.

mysql> CREATE TABLE `largetable` (
->   `id` bigint unsigned NOT NULL AUTO_INCREMENT,
->   `dateCreated` datetime NOT NULL DEFAULT CURRENT_TIMESTAMP,
->   `status` int default 1,
->   `sometext` text,
->   PRIMARY KEY (`id`,`dateCreated`)
-> ) ENGINE=InnoDB DEFAULT CHARSET=utf8;
Query OK, 0 rows affected (0.03 sec)

mysql> alter table largetable partition by RANGE(YEAR(dateCreated)) (
-> PARTITION p2016 VALUES LESS THAN (2017), 
-> PARTITION p2017 VALUES LESS THAN (2018), 
-> PARTITION p2018 VALUES LESS THAN (2019), 
-> PARTITION p2019 VALUES LESS THAN (2020), 
-> PARTITION p2020 VALUES LESS THAN (2021), 
-> PARTITION pmax VALUES LESS THAN MAXVALUE);
Query OK, 0 rows affected (0.05 sec)
Records: 0  Duplicates: 0  Warnings: 0

The above example allocates data by the year in which the row was created. Please note that after 2020, this sort of manual partitioning will require adding new yearly partitions to this table. If you do it in advance, this can be done without disrupting operations.
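A sketch of adding the next year in advance, against the table above (as long as the pmax catch-all partition is still empty, only that partition is rebuilt, so it is quick):

ALTER TABLE largetable
  REORGANIZE PARTITION pmax INTO (
    PARTITION p2021 VALUES LESS THAN (2022),
    PARTITION pmax VALUES LESS THAN MAXVALUE
  );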



Partitioning by Status


You can also partition (as mentioned above) on a status column, such as active/inactive, and simply by using UPDATE to change the value, MySQL will move that row over to the right partition. REPLACE or INSERT + DELETE will work as well.


mysql> CREATE TABLE `largetable` (
->   `id` bigint unsigned NOT NULL AUTO_INCREMENT,
->   `dateCreated` datetime NOT NULL DEFAULT CURRENT_TIMESTAMP,
->   `status` int default 1, -- default active
->   `sometext` text,
->   PRIMARY KEY (`id`,`status`)
-> ) ENGINE=InnoDB DEFAULT CHARSET=utf8;
Query OK, 0 rows affected (0.02 sec)

mysql> alter table largetable partition by list(status) (
-> partition pactive values in (1), -- active 
-> partition pinactive values in (2) -- inactive
-> ); 
Query OK, 0 rows affected (0.03 sec)
Records: 0  Duplicates: 0  Warnings: 0

mysql> select * from largetable partition (pactive);
Empty set (0.00 sec)

mysql> select * from largetable partition (pinactive);
Empty set (0.00 sec)

mysql> insert into largetable(sometext) values ('hello');
Query OK, 1 row affected (0.01 sec)

mysql> select * from largetable partition (pinactive);
Empty set (0.00 sec)

mysql> select * from largetable partition (pactive);
+----+---------------------+--------+----------+
| id | dateCreated         | status | sometext |
+----+---------------------+--------+----------+
|  1 | 2017-10-30 10:04:03 |      1 | hello    |
+----+---------------------+--------+----------+
1 row in set (0.00 sec)

mysql> update largetable set status = 2 where id =1 ;
Query OK, 1 row affected (0.00 sec)
Rows matched: 1  Changed: 1  Warnings: 0

mysql> select * from largetable partition (pactive);
Empty set (0.00 sec)

mysql> select * from largetable partition (pinactive);
+----+---------------------+--------+----------+
| id | dateCreated         | status | sometext |
+----+---------------------+--------+----------+
|  1 | 2017-10-30 10:04:03 |      2 | hello    |
+----+---------------------+--------+----------+
1 row in set (0.00 sec)



Partitioning by ID


And lastly, you can partition on the sequence of your auto incrementing id key.


mysql> CREATE TABLE `largetable` (
->   `id` bigint unsigned NOT NULL AUTO_INCREMENT,
->   `dateCreated` datetime NOT NULL DEFAULT CURRENT_TIMESTAMP,
->   `status` int default 1,
->   `sometext` text,
->   PRIMARY KEY (`id`)
-> ) ENGINE=InnoDB DEFAULT CHARSET=utf8;
Query OK, 0 rows affected (0.02 sec)

mysql> alter table largetable partition by RANGE(id) (
-> PARTITION p1 VALUES LESS THAN (500000000), 
-> PARTITION p2 VALUES LESS THAN (1000000000), 
-> PARTITION p3 VALUES LESS THAN (1500000000), 
-> PARTITION p4 VALUES LESS THAN (2000000000), 
-> PARTITION p5 VALUES LESS THAN (2500000000), 
-> PARTITION pmax VALUES LESS THAN MAXVALUE);
Query OK, 0 rows affected (0.06 sec)
Records: 0  Duplicates: 0  Warnings: 0

The above example specifies which partition a row should go to based on the range its id number falls into. This example is more useful if your system does a lot of primary key lookups. It also helps with distributing the table sizes more equally than partitioning by date does, as you can have more data in recent years.



A word of caution


Partitioning on the right key is absolutely crucial and not easy. You need to analyse the queries that the application sends to that specific table and come up with a partitioning key that works well and does not slow the table down - at least not for the top 80% of the slowest queries.

The partitioning key would need to go into the PRIMARY KEY and, in order for the optimiser to send you to the right partition, that key would ideally be included in the WHERE clause of all SELECT/UPDATE/DELETE queries. Otherwise, your query would run sequentially through each partition in that table.
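A quick way to check whether a query actually prunes partitions, using the date-partitioned table from above (on MySQL 5.7+ the partitions column is part of plain EXPLAIN output; on 5.6 and older, use EXPLAIN PARTITIONS):

-- The partitions column should list only p2018 if pruning kicks in
EXPLAIN
SELECT * FROM largetable
WHERE dateCreated >= '2018-01-01'
  AND dateCreated <  '2019-01-01';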

25 Nov 2017

How to Not be the One that Deploys that Slow Query to Production

Have you ever deployed code to production that slowed the database and the entire site down?
Did you get a feeling of anxiety when you just read that?

Well, it could be to do with your test environment.

Most places nowadays have test suites that will check for integration issues. But in very few will check for performance.

The issue lies with how developers are set up to develop code. They have their code on their local computer with a local copy of an empty database where they develop against. That setup will not give you any useful feedback about how your code will performs once its run against the production database.

How do you get Performance Feedback for your Queries?


Whenever you log into your database, let's say MySQL, you get 3 types of feedback:

  1. Your result set
  2. The number of rows
  3. The time it took the query to run

(Postgres, by the way, has \timing.)

In order to get the right timing for your query, you need to run it on a database that is similar in hardware, parameters and, more importantly, database size to your production database.

Here is an example: take a SELECT query that you wrote that has 3 JOINs and run it on a read-only slave DB server. You choose some decent sample values from the existing data and you get a result of 0.3 seconds. Now, barring exceptions such as deadlocking, conflicts or a server-wide slowdown, there is a very high chance that the query will take 0.3 seconds when it's run against the production database.

Once you have an environment to test against, you can run EXPLAIN on your query and make improvements till you are happy with it. 
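A minimal example of that loop (table and column names are hypothetical): time the candidate query on the production-sized copy, then inspect its plan.

-- Time the query itself first, then look at the plan
EXPLAIN
SELECT o.id, o.total, c.email
FROM orders o
JOIN customers c ON c.id = o.customer_id
WHERE o.created_at >= '2017-11-01'
ORDER BY o.created_at DESC
LIMIT 50;

-- Red flags in the output: type = ALL (full table scan),
-- possible_keys = NULL (no usable index) and very large 'rows' estimates.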

But what do I do if I use an ORM?


Well, if you can output the query that the ORM will use and run that against a database, you will know how long it takes. Hopefully, you will be able to make improvements to your query through the ORM.


Conclusion


Make sure that your company has a database to test against that is similar to production. If that is not available, see if you can get access to a read-only DB (sometimes there is one for backup purposes) and at least test your SELECT queries against it.

You can then deploy to live relatively confidently - or at the very least, with a lot less stress.