In today's world, where everything moves fast, there is no place for anything slow. Maybe that is the reason for the tremendous success of the Node.js platform, since performance is the main problem Node.js promised to solve.

But with the introduction of PHP 7 and everyone praising its performance, is there still any real difference between the two platforms? Maybe, maybe not, but that is not the topic we are going to discuss here. What I am going to discuss will definitely help those who are thinking of switching platforms for better website performance.

Through my experience in web development, I have found that frameworks and languages really do not matter much if your website serves fewer than 50,000 users on a daily basis.

I have written this blog to help those who are not able to get the best out of their framework and deliver a better user experience.

Before I begin, I would like to say that I have skipped the basics like proper table partitioning and other fundamental good practices, as I expect you have taken care of those already.

1.) Enable indexing of frequently used columns:

If your table holds a lot of records, it is very important to index the columns that are queried frequently. If you are performing a join on the table, do not forget to index the columns used in the join condition.
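As a minimal sketch (the table and column names here are purely illustrative, not taken from my test database), adding such indexes looks like this:

    -- Index a column that appears frequently in WHERE clauses
    ALTER TABLE product ADD INDEX idx_product_name (name);

    -- Index the column used on the price side of the join
    ALTER TABLE price ADD INDEX idx_price_product_code (product_code);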

To demonstrate, I performed a join on two tables, intentionally joining on columns that are not indexed, and measured the execution time.
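The query I timed was along these lines (again a sketch with illustrative table and column names, joining on columns that have no index):

    SELECT *
    FROM product
    JOIN price ON product.product_code = price.product_code;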


2.) The EXPLAIN command:

If you could just find out why a query responds slowly, it would be much easier to make it fast. The SQL EXPLAIN command does exactly that for you. Whenever you do not quite understand why a query is running slowly, simply put EXPLAIN before it and study the execution plan that comes back.
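For example, with the illustrative join from above (a sketch, not the exact query from my tests):

    EXPLAIN SELECT *
    FROM product
    JOIN price ON product.product_code = price.product_code;

For each table, the output shows which indexes could be used (possible_keys) and which one was actually chosen (key); a NULL in the key column means no usable index was found.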

Running EXPLAIN on the non-indexed join I performed earlier showed the key used for the price table, but for the product table the key column was NULL. This is because we had not indexed the column used for joining the tables.
After indexing that join column, the difference in execution time was dramatic: from 1.4560 seconds, the query now takes just 0.0562 seconds, roughly 4% of what it used to take. That is an execution time reduction of about 96%, simply by indexing the columns used for the join. So never forget to index your columns when handling large databases.
3.) Avoid using * in SQL statements:
Let us go back to the non-indexed join I used earlier. I ran it a second time with one difference: instead of selecting all the columns with *, I selected only the required columns.
The difference in execution time is clear: the SELECT * query took 1.4560 seconds, while the query selecting only the required columns took just 0.5262 seconds. That is about 36% of the original time, which means roughly a 64% reduction in execution time just by selecting the columns you actually need.
So make it a point to almost never use the * sign in SQL queries.
To prove the point further, I selected only the required fields from the indexed tables, and the difference there was huge as well: the query dropped from 0.0562 seconds to 0.0095 seconds. Compared with the original non-indexed SELECT * query, that is a drop from 1.4560 seconds to 0.0095 seconds, not even 1% of what it used to take.
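A sketch of the change (the column names are illustrative, assuming the page only needs a product name and its price):

    -- Instead of pulling every column:
    SELECT *
    FROM product
    JOIN price ON product.product_code = price.product_code;

    -- select only what the page actually needs:
    SELECT product.name, price.amount
    FROM product
    JOIN price ON product.product_code = price.product_code;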
4.) Using limits in your query:
What we have achieved so far looks impressive, doesn't it? But what if I told you this query is still not optimized, because we have not put a limit on the number of records the database has to return? We generally have only a few results to show on a page, and when we do not add a limit, the query is run over the entire table.
Not to forget, joins involve more than one table. So I added a limit of 100, and again the difference in execution time was impressive.
From 0.0095 seconds we moved further down to 0.0019 seconds, which is just 20% of what it used to take, an 80% reduction from our already optimized query. So always use limits so that the query is not run over the entire table.
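Continuing the illustrative example, the limited query would look like this:

    SELECT product.name, price.amount
    FROM product
    JOIN price ON product.product_code = price.product_code
    LIMIT 100;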
5.) Check if query caching is enabled:
By default, most MySQL servers have query caching enabled. As you know, anything stored in the cache can be served very quickly. If MySQL query caching is enabled on your server, your clients' most frequently accessed content will also be stored in the cache, which means there is no overhead of executing a new query every time another user requests the same content the previous user did.
This adds up to a great user experience for your clients, so always check whether your query cache is enabled. If you have command-line access to your server, checking is as simple as echoing out the values of the relevant server variables.
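From the MySQL prompt this looks like the following (these commands apply to MySQL versions that still ship the query cache; it was removed in MySQL 8.0):

    -- Is the query cache compiled in and turned on?
    SHOW VARIABLES LIKE 'have_query_cache';
    SHOW VARIABLES LIKE 'query_cache%';

    -- How well is it being used?
    SHOW STATUS LIKE 'Qcache%';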
6.) Propose table structure in phpMyAdmin:
If I tried to cover all the amazing features phpMyAdmin provides out of the box, this blog would never end, so I will keep to what is needed for now.
When you go to the Structure view of your table, at the very bottom you will see a "Propose table structure" option, which asks phpMyAdmin to suggest the most optimized structure for your table.
Once you click it, you will see the table structure MySQL thinks would be best for your table, with the most optimized type for each field listed under the Optimal_fieldtype column.
Now you just need to verify whether the structure proposed by MySQL fulfills your needs; if it does, simply apply the types suggested in the Optimal_fieldtype column.
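As far as I know, this proposal is generated by MySQL's PROCEDURE ANALYSE(), so you can get the same suggestion from the command line as well (a sketch; the product table name is illustrative, and PROCEDURE ANALYSE() only exists in MySQL versions before 8.0):

    -- Returns min/max values and an Optimal_fieldtype suggestion per column
    SELECT * FROM product PROCEDURE ANALYSE();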
7.) Choose the optimal storage engine:
While creating a table, it is very important to select the right storage engine. MySQL ships with several storage engines, but the two most popular by far are InnoDB and MyISAM.

I will not go very deep into the details of either, but the short version is this: if you need a transaction-based architecture, go with InnoDB; if your database does not require transactions and just needs fast query execution, you are better off sticking with MyISAM.
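The engine is chosen at table-creation time; a minimal sketch (the table definitions are illustrative):

    -- Transactional data: InnoDB
    CREATE TABLE orders (
        id INT UNSIGNED NOT NULL AUTO_INCREMENT PRIMARY KEY,
        total DECIMAL(10,2) NOT NULL
    ) ENGINE=InnoDB;

    -- Read-heavy, non-transactional data: MyISAM
    CREATE TABLE page_views (
        id INT UNSIGNED NOT NULL AUTO_INCREMENT PRIMARY KEY,
        url VARCHAR(255) NOT NULL
    ) ENGINE=MyISAM;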
8.) Use a third-party memory cache:
Using a third-party memory cache like Redis or Memcached to store the most frequently accessed data will help you reduce your website's response time. A third-party caching solution may incur additional cost depending on usage, and smaller websites may not need this kind of caching mechanism at all. Head to https://redis.io/ or https://memcached.org/ for more information and documentation.
I think I have covered most of the best practices for reducing your website's response time. By following them, I am sure you can bring that time down significantly. So if you are thinking about switching your development stack, I recommend first making sure your current stack is not as bad as you think.