Optimizing MySQL or table design
NickName: Peter Guan    Ask DateTime: 2013-11-04T15:19:22

I'm puzzled by my SQL homework. I'm not an expert in databases. I'm using MySQL.

I have a table named 'news':

create table news (
    title text not null,
    time DATE,
    ip varchar(15),
    reporter varchar(25),
    tag varchar(20),
    id int not null auto_increment,
    primary key(id)
) engine=innodb;

I also have a text file, 'news2.txt', with 1,000,000 lines of data.

I want to optimize my SQL for speed.

Here is my SQL statement:

SET UNIQUE_CHECKS=0;
SET AUTOCOMMIT=0;

load data local infile "e:/news2.txt" into table news fields terminated by ',' LINES TERMINATED BY '\n' (title,time,ip,reporter,tag); 

SET UNIQUE_CHECKS=1;
SET AUTOCOMMIT=1;
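
(Side note: with AUTOCOMMIT=0 the LOAD DATA statement runs inside a single open transaction, and setting AUTOCOMMIT back to 1 commits it implicitly. A minimal sketch of the same session with that commit written out explicitly, using the same table, file path, and column list:)

SET UNIQUE_CHECKS=0;
SET AUTOCOMMIT=0;

load data local infile "e:/news2.txt" into table news fields terminated by ',' LINES TERMINATED BY '\n' (title,time,ip,reporter,tag);

COMMIT; -- explicit commit instead of relying on the implicit commit triggered by SET AUTOCOMMIT=1

SET UNIQUE_CHECKS=1;
SET AUTOCOMMIT=1;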

Each line of 'news2.txt' is a row like this: 'Vivamus dictum ultrices Curae aliquet vel nunc Aenean,2013/10/8,10.0.0.252,Reed Wyatt,Science'.

The execution time is 29.188s. Are there any problems with my table design? What can I do to make it faster?

Copyright Notice: Content author 「Peter Guan」, reproduced under the CC 4.0 BY-SA copyright license with a link to the original source and this disclaimer.
Link to original article: https://stackoverflow.com/questions/19763232/optimizing-mysql-or-table-design

Answers
Mihai Stancu 2013-11-04T09:15:20

Data load

One million entries take some time to write to disk. Check the size of the table and keep in mind that the server is not just copying a file: it does some computation in order to store each line as a DB record.

It's not something to be concerned about unless the homework clearly states that speed is of the essence (which I doubt it does).

In a normal application, inserting one million entries into the news table in 26s is exceedingly good performance... not to mention "where did they find all of those news articles?" In real life the application layer will most likely slow things down.

Transactions

You could remove the "autocommit=0" from there (if you don't need it), because it means the entire load operation is run as one transaction, so the DB engine needs to write everything to a temporary place and check that every insert (of the one million you have) was successful before committing the data.

Running with "autocommit=1" lets the DB engine handle each insert as its own transaction.

Usually one transaction per statement creates some degree of "transactional" overhead, and it's good practice to run batches of statements (50-100 statements per transaction) to minimize it. But with very large transactions the overhead builds up and cannot be held in memory, so it has to be written to disk, which slows things down -- in that case it's better to use one transaction per statement (which can run from memory) than one transaction for the entire batch.
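
To make the batching concrete, here is a minimal sketch (not part of the original answer) of committing every 50-100 statements instead of once for the whole load; the literal values are placeholders taken from the sample row in the question:

SET AUTOCOMMIT = 0;

START TRANSACTION;
INSERT INTO news (title, time, ip, reporter, tag)
VALUES ('Vivamus dictum ultrices ...', '2013-10-08', '10.0.0.252', 'Reed Wyatt', 'Science');
-- ... roughly 50-100 INSERT statements per batch ...
COMMIT;

START TRANSACTION;
-- ... next batch of 50-100 INSERT statements ...
COMMIT;

SET AUTOCOMMIT = 1;

Each COMMIT keeps the open transaction small enough to stay in memory, which is the trade-off the answer describes.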
