CVS commit: pkgsrc/databases/mysql-cluster



Module Name:    pkgsrc
Committed By:   jnemeth
Date:           Mon Sep  7 04:33:06 UTC 2015

Modified Files:
        pkgsrc/databases/mysql-cluster: Makefile Makefile.common PLIST distinfo
Removed Files:
        pkgsrc/databases/mysql-cluster/patches: patch-vio_viossl.c

Log Message:
Update to MySQL Cluster 7.4.7:  this is mainly a bug fix release.

pkgsrc change: delete one patch that has been upstreamed

Changes in MySQL Cluster NDB 7.4.7 (5.6.25-ndb-7.4.7) (2015-07-13)

MySQL Cluster NDB 7.4.7 is a new release of MySQL Cluster 7.4,
based on MySQL Server 5.6 and including features in version 7.4 of
the NDB storage engine, as well as fixing recently discovered bugs
in previous MySQL Cluster releases.

This release also incorporates all bugfixes and changes made in
previous MySQL Cluster releases, as well as all bugfixes and feature
changes which were added in mainline MySQL 5.6 through MySQL 5.6.25
(see Changes in MySQL 5.6.25 (2015-05-29)).

Functionality Added or Changed
- Deprecated MySQL Cluster node configuration parameters are now
  indicated as such by ndb_config --configinfo --xml. For each
  parameter currently deprecated, the corresponding <param/> tag
  in the XML output now includes the attribute deprecated="true".
  (Bug #21127135)
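
  A hedged sketch of the new output for illustration; the parameter
  shown and its other attributes are placeholders, not verbatim
  ndb_config output:

      ndb_config --configinfo --xml
      ...
      <param name="NoOfDiskPagesToDiskAfterRestartTUP"
             comment="..." type="unsigned" deprecated="true" />
      ...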

Bugs Fixed
- Important Change; Cluster API: The Ndb::getHighestQueuedEpoch()
  method returned the greatest epoch in the event queue instead of
  the greatest epoch found after calling pollEvents2().  (Bug
  #20700220)
- Important Change; Cluster API: Ndb::pollEvents() is now compatible
  with the TE_EMPTY, TE_INCONSISTENT, and TE_OUT_OF_MEMORY event
  types introduced in MySQL Cluster NDB 7.4.3.  For detailed
  information about this change, see the description of this method
  in the MySQL Cluster API Developer Guide. (Bug #20646496)
- Important Change; Cluster API: Added the method
  Ndb::isExpectingHigherQueuedEpochs() to the NDB API to detect
  when additional, newer event epochs were detected by pollEvents2().
  The behavior of Ndb::pollEvents() has also been modified such
  that it now returns NDB_FAILURE_GCI (equal to ~(Uint64)0) when
  a cluster failure has been detected. (Bug #18753887) A combined
  usage sketch of these epoch-related API changes follows this
  list.
- After restoring the database metadata (but not any data) by
  running ndb_restore --restore_meta (or -m), SQL nodes would hang
  while trying to SELECT from a table in the database to which the
  metadata was restored. In such cases the attempt to query the
  table now fails as expected, since the table does not actually
  exist until ndb_restore is executed with --restore_data (-r).
  (Bug #21184102) References: See also Bug #16890703. A command
  sketch of this two-step restore follows this list.
- When a great many threads opened and closed blocks in the NDB
  API in rapid succession, the internal close_clnt() function
  synchronizing the closing of the blocks waited an insufficiently
  long time for a self-signal indicating potential additional
  signals needing to be processed. This led to excessive CPU usage
  by ndb_mgmd, and prevented other threads from opening or closing
  other blocks.  This issue is fixed by changing the function
  polling call to wait on a specific condition to be woken up (that
  is, when a signal has in fact been executed). (Bug #21141495)
- Previously, multiple send threads could be invoked for handling
  sends to the same node; these threads then competed for the same
  send lock. While the send lock blocked the additional send threads,
  work threads could be passed to other nodes.  This issue is fixed
  by ensuring that new send threads are not activated while there
  is already an active send thread assigned to the same node. In
  addition, a node already having an active send thread assigned
  to it is no longer visible to other, already active, send threads;
  that is, such a node is no longer added to the node list when a send
  thread is currently assigned to it. (Bug #20954804, Bug #76821)
- Queueing of pending operations when the redo log was overloaded
  (DefaultOperationRedoProblemAction API node configuration parameter)
  could lead to timeouts when data nodes ran out of redo log space
  (P_TAIL_PROBLEM errors). Now when the redo log is full, the node
  aborts requests instead of queuing them. (Bug #20782580) References:
  See also Bug #20481140. A hedged config.ini fragment for this
  parameter follows this list.
- An NDB event buffer can be used with an Ndb object to subscribe
  to table-level row change event streams. Users subscribe to an
  existing event; this causes the data nodes to start sending event
  data signals (SUB_TABLE_DATA) and epoch completion signals
  (SUB_GCP_COMPLETE) to the Ndb object. SUB_GCP_COMPLETE_REP signals
  can arrive for execution in a concurrent receiver thread before
  the internal method call used to start a subscription has completed.
  Execution of SUB_GCP_COMPLETE_REP signals depends on the total
  number of SUMA buckets (sub data streams), but this number may not
  yet have been set at that point, leaving the counter used for
  tracking SUB_GCP_COMPLETE_REP signals (TOTAL_BUCKETS_INIT) set to
  an erroneous value. Now TOTAL_BUCKETS_INIT
  is tested to be sure it has been set correctly before it is used.
  (Bug #20575424) References: See also Bug #20561446, Bug #21616263.
  A subscription sketch follows this list.
- NDB statistics queries could be delayed by the error delay set
  for ndb_index_stat_option (default 60 seconds) when the index
  that was queried had been marked with internal error. The same
  underlying issue could also cause ANALYZE TABLE to hang when
  executed against an NDB table having multiple indexes where an
  internal error occurred on one or more but not all indexes.  Now
  in such cases, any existing statistics are returned immediately,
  without waiting for any additional statistics to be discovered.
  (Bug #20553313, Bug #20707694, Bug #76325)
- The multi-threaded scheduler sends to remote nodes either directly
  from each worker thread or from dedicated send threads, depending
  on the cluster's configuration. This send might transmit all,
  part, or none of the available data from the send buffers. While
  there remained pending send data, the worker or send threads
  continued trying to send in a loop. The actual size of the data
  sent in the most recent attempt to perform a send is now tracked,
  and used to detect lack of send progress by the send or worker
  threads. When no progress has been made, and there is no other
  work outstanding, the scheduler takes a 1 millisecond pause to
  free up the CPU for use by other threads. (Bug #18390321)
  References: See also Bug #20929176, Bug #20954804.
- In some cases, attempting to restore a table that was previously
  backed up failed with a File Not Found error due to a missing
  table fragment file. This occurred as a result of the NDB kernel
  BACKUP block receiving a Busy error while trying to obtain the
  table description, due to other traffic from external clients,
  and not retrying the operation.  The fix for this issue creates
  two separate queues for such requests:  one for internal clients
  such as the BACKUP block or ndb_restore, and one for external
  clients such as API nodes, with the internal queue given priority.
  Note that it has always been the case that external client
  applications using the NDB API (including MySQL applications
  running against an SQL node) are expected to handle Busy errors
  by retrying transactions at a later time (a minimal retry sketch
  follows this list); this expectation is not changed by the fix
  for this issue.  (Bug #17878183) References:
  See also Bug #17916243.
- On startup, API nodes (including mysqld processes running as SQL
  nodes) waited to connect with data nodes that had not yet joined
  the cluster. Now they wait only for data nodes that have actually
  already joined the cluster.  In the case of a new data node
  joining an existing cluster, API nodes still try to connect with
  the new data node within HeartbeatIntervalDbApi milliseconds.
  (Bug #17312761)
- In some cases, the DBDICT block failed to handle repeated
  GET_TABINFOREQ signals after the first one, leading to possible
  node failures and restarts. This could be observed after setting
  a sufficiently high value for MaxNoOfExecutionThreads and low
  value for LcpScanProgressTimeout. (Bug #77433, Bug #21297221)
- Client lookup for delivery of API signals to the correct client
  by the internal TransporterFacade::deliver_signal() function had
  no mutex protection, which could cause issues such as timeouts
  encountered during testing, when other clients connected to the
  same TransporterFacade. (Bug #77225, Bug #21185585)
- It was possible to end up with a deadlock on the send buffer mutex
  when send buffers became a limiting resource, due either to
  insufficient send buffer resource configuration, problems with
  slow or failing communications such that all send buffers became
  exhausted, or slow receivers failing to consume what was sent.
  In this situation worker threads failed to allocate send buffer
  memory for signals, and attempted to force a send in order to
  free up space, while at the same time the send thread was busy
  trying to send to the same node or nodes. All of these threads
  competed for the send buffer mutex, resulting in the deadlock
  already described, which the watchdog reported as Stuck in Send.
  The fix is made in two parts:
  1. The send thread no longer holds the global send thread mutex
  while getting the send buffer mutex; it now releases the global
  mutex prior to locking the send buffer mutex. This keeps worker
  threads from getting stuck in send in such cases.
  2. Locking of the send buffer mutex by the send threads now uses
  a try-lock. If the try-lock fails, the target node is reinserted
  at the end of the list of send nodes so that it can be retried
  later. This removes the Stuck in Send condition for the send
  threads.  (Bug #77081, Bug #21109605) A generic illustration of
  this try-lock pattern follows this list.
- Cluster API: The pollEvents2() method now waits indefinitely for
  events when a negative value is used for the time argument. (Bug
  #20762291)
- Cluster API: NdbEventOperation::isErrorEpoch() incorrectly returned
  false for the TE_INCONSISTENT table event type (see The
  Event::TableEvent Type). This caused a subsequent call to
  getEventType() to fail. (Bug #20729091) A short error-epoch check
  sketch follows this list.
- Cluster API: Creation and destruction of Ndb_cluster_connection
  objects by multiple threads could make use of the same application
  lock, which in some cases led to failures in the global dictionary
  cache. To alleviate this problem, the creation and destruction
  of several internal NDB API objects have been serialized. (Bug
  #20636124)
- Cluster API: A number of timeouts were not handled correctly in
  the NDB API.  (Bug #20617891)
- Cluster API: When an Ndb object created prior to a failure of
  the cluster was reused, the event queue of this object could
  still contain data node events originating from before the failure.
  These events could reference old epochs (from before the failure
  occurred), which in turn could violate the assumption made by
  the nextEvent() method that epoch numbers always increase. This
  issue is addressed by explicitly clearing the event queue in such
  cases. (Bug #18411034) References: See also Bug #20888668.
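
Illustrative sketches for several of the items above follow; none
of this code is from the upstream release.

A minimal C++ sketch of the epoch-related API changes (the
corrected Ndb::getHighestQueuedEpoch(), the new
Ndb::isExpectingHigherQueuedEpochs(), the NDB_FAILURE_GCI result,
and the negative-wait behavior of pollEvents2()); it assumes an
initialized Ndb object "ndb" with an executing event operation,
and elides all error handling:

    int res = ndb.pollEvents2(1000);    // wait up to 1 s; a negative
                                        // wait time blocks indefinitely
    if (res > 0) {
      // greatest epoch found by the preceding pollEvents2() call
      Uint64 epoch = ndb.getHighestQueuedEpoch();
      while (NdbEventOperation *ev = ndb.nextEvent2()) {
        // consume queued row change events up to 'epoch'
      }
      if (ndb.isExpectingHigherQueuedEpochs()) {
        // newer epochs were detected; poll again
      }
    }

    Uint64 latestGCI = 0;
    (void) ndb.pollEvents(1000, &latestGCI);
    if (latestGCI == NDB_FAILURE_GCI)   // ~(Uint64)0
      ;                                 // cluster failure detected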
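The two-step restore from the ndb_restore item, as a hedged command
sketch; backup ID, node ID, and backup path are placeholders:

    ndb_restore -b 1 -n 1 --restore_meta --backup_path=/backups/BACKUP-1
    # metadata only: per the fix above, SELECT against such a table
    # now fails instead of hanging
    ndb_restore -b 1 -n 1 --restore_data --backup_path=/backups/BACKUP-1
    # data restored; tables are queryable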
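The redo-overload queueing described above is controlled by the
DefaultOperationRedoProblemAction parameter of API nodes. A hedged
config.ini fragment (section placement and value reflect typical
usage, not this cluster's configuration):

    [api]
    # QUEUE (the usual default) queues operations on redo overload;
    # ABORT rejects them immediately. With a full redo log the node
    # now aborts requests regardless, per Bug #20782580.
    DefaultOperationRedoProblemAction=QUEUE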
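A hedged sketch of the subscription flow from the event buffer item;
the event "my_event" is assumed to have been created beforehand, and
attribute binding plus error handling are elided:

    // subscribing starts the SUB_TABLE_DATA / SUB_GCP_COMPLETE
    // signal streams described above
    NdbEventOperation *op = ndb.createEventOperation("my_event");
    // ... bind columns with op->getValue() / op->getPreValue() ...
    op->execute();
    while (ndb.pollEvents2(1000) > 0) {
      while (NdbEventOperation *ev = ndb.nextEvent2()) {
        // handle row change events, epoch by epoch
      }
    }
    ndb.dropEventOperation(op);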
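The Busy-error expectation restated in the backup item is the usual
NDB API temporary-error retry loop; a minimal hedged sketch with
setup and back-off elided:

    for (int attempt = 0; attempt < 10; ++attempt) {
      NdbTransaction *trans = ndb.startTransaction();
      // ... define read/write operations on 'trans' ...
      int rc = trans->execute(NdbTransaction::Commit);
      NdbError err = trans->getNdbError();
      ndb.closeTransaction(trans);
      if (rc == 0)
        break;                          // committed
      if (err.status != NdbError::TemporaryError)
        break;                          // permanent error: give up
      // temporary error (e.g. Busy): sleep briefly, then retry
    }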
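Part 2 of the send-thread fix describes a try-lock-and-requeue
pattern; a generic, hedged illustration in portable C++ (invented
names, not the actual NDB kernel code):

    #include <deque>
    #include <mutex>

    struct SendNode { std::mutex send_buffer_mutex; /* ... */ };

    void service_one(std::deque<SendNode*> &send_nodes) {
      SendNode *node = send_nodes.front();
      send_nodes.pop_front();
      if (node->send_buffer_mutex.try_lock()) {
        // perform the send while holding the buffer mutex
        node->send_buffer_mutex.unlock();
      } else {
        // mutex busy: requeue the node for a later retry
        // instead of blocking (avoids Stuck in Send)
        send_nodes.push_back(node);
      }
    }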
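Finally, a hedged sketch of the error-epoch check from the
isErrorEpoch() item; "ev" is an NdbEventOperation returned by
nextEvent2(), setup elided:

    NdbDictionary::Event::TableEvent type;
    if (ev->isErrorEpoch(&type)) {
      // with the fix, TE_INCONSISTENT is also reported here, so
      // getEventType() is no longer reached for error epochs
      if (type == NdbDictionary::Event::TE_INCONSISTENT)
        ;  // event data may have been lost for this epoch
    }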


To generate a diff of this commit:
cvs rdiff -u -r1.8 -r1.9 pkgsrc/databases/mysql-cluster/Makefile
cvs rdiff -u -r1.7 -r1.8 pkgsrc/databases/mysql-cluster/Makefile.common
cvs rdiff -u -r1.5 -r1.6 pkgsrc/databases/mysql-cluster/PLIST
cvs rdiff -u -r1.4 -r1.5 pkgsrc/databases/mysql-cluster/distinfo
cvs rdiff -u -r1.1.1.1 -r0 \
    pkgsrc/databases/mysql-cluster/patches/patch-vio_viossl.c

Please note that diffs are not public domain; they are subject to the
copyright notices on the relevant files.



