On 10/01/12 12:27, Ian Leonard wrote:
> Hi All,
> I'm benchmarking a MySQL based application. Basically I have hundreds of
> processes doing inserts and updates concurrently. I find that as I
> increase the number it gets to a point where queries are being queued.
> This isn't a surprise, but what is confusing me is that when MySQL is
> maxing out, there is minimal effect on other processes. top doesn't rate
> my test programs as high cpu or memory users and there doesn't seem to
> be much impact on general i/o.
> Why is this? I would guess it's something to do with locking contention.
> Any advice on how I can find out? It would seem as though I could get
> much more performance out of the system.
In PostgreSQL, you can query system tables to see what kind of locks
are being held - I'd assume that's something you can also do in MySQL.
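For example, in PostgreSQL the `pg_locks` system view exposes current lock activity; MySQL has no identical view, but `SHOW FULL PROCESSLIST` and `SHOW ENGINE INNODB STATUS` give a comparable picture. A rough sketch (the exact output columns vary by server version):

```sql
-- PostgreSQL: list lock requests that have not been granted,
-- i.e. sessions currently blocked waiting on a lock
SELECT locktype, relation::regclass, mode, pid
FROM pg_locks
WHERE NOT granted;

-- MySQL: show what every connection is doing; queued queries
-- typically appear in a "Locked" or waiting state
SHOW FULL PROCESSLIST;

-- MySQL/InnoDB: detailed transaction and lock-wait information
SHOW ENGINE INNODB STATUS;
```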
As to mitigating this, if you're using MyISAM tables, then they only
support table-level locking, so performance with many concurrent
changes is going to be awful. I recall that InnoDB tables support row-level
locking, but I don't know how sophisticated it is. If possible, I
would strongly recommend you move to PostgreSQL, which has very
sophisticated locking that also plays well with concurrent transactions.
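If switching databases isn't an option, checking the storage engine and converting to InnoDB is a much smaller change. A sketch, where `mydb` and `mytable` are placeholder names:

```sql
-- Check which storage engine each table uses (see the Engine column)
SHOW TABLE STATUS FROM mydb;

-- Convert a MyISAM table to InnoDB to get row-level locking;
-- note this rebuilds the table, so it can take a while on large tables
ALTER TABLE mytable ENGINE=InnoDB;
```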